Columns: labels (float32, 1 to 4.24k); Title (string, length 1 to 91); Text (string, length 1 to 61.1M)
9
In 8 US states, Apple will begin storing driver’s licenses on the iPhone
Apple is rolling out the ability to add driver's licenses and state IDs to the Wallet app on the iPhone and Apple Watch in select US states, the company announced this week. The first states to introduce this functionality will be Arizona and Georgia, with Connecticut, Iowa, Kentucky, Maryland, Oklahoma, and Utah to follow. However, neither the states nor Apple have said exactly when the rollouts will begin other than giving a general fall 2021 target. Wallet is an app that comes pre-installed on iPhones and Apple Watch wearables. The app stores credit cards, boarding passes, student IDs, and other items you might normally put in a physical wallet. Wallet sometimes uses wireless communication to transmit relevant information (for example, credit card information at a point of sale), or a user can hold up the phone to show information to someone. Often, a bar code or something similar is used to allow the user to authenticate at a location without handing over the phone—like with airline boarding passes. Much of Apple's newsroom post announcing the new driver's license or state ID feature focuses on one specific use case: airports. The post highlights quotes from the Transportation Security Administration (TSA) supporting the move and a commitment from the TSA to accept digital IDs at airports in participating states. The idea is that users going through airport security could present both their boarding passes and state IDs in the app to avoid fumbling with their wallets while in line. After the announcement, many on Twitter and elsewhere remarked that they would not feel comfortable handing an unlocked phone to TSA or police officers, but at least as far as airport security is concerned, that's not how it works. Users store data related to their driver's licenses or state IDs on their phones, but they won't show the ID on their phone screen to authorities. Rather, the information will be delivered digitally, so users present their IDs "by simply tapping their iPhone or Apple Watch at the identity reader," according to Apple. The process for adding a driver's license or state ID to Wallet seems a little more involved than with some other supported document and ID types. Here's what Apple says users need to do: Similar to how customers add new credit cards and transit passes to Wallet today, they can simply tap the + button at the top of the screen in Wallet on their iPhone to begin adding their license or ID. If the user has an Apple Watch paired to their iPhone, they will be prompted to also add their ID or driver's license to their Wallet app on their Apple Watch. The customer will then be asked to use their iPhone to scan their physical driver's license or state ID card and take a selfie, which will be securely provided to the issuing state for verification. As an additional security step, users will also be prompted to complete a series of facial and head movements during the setup process. Once verified by the issuing state, the customer's ID or driver's license will be added to Wallet. And here are some details from Apple about how the TSA check-in process will work in participating states: Once added to Wallet, customers can present their driver's license or state ID to the TSA by simply tapping their iPhone or Apple Watch at the identity reader. Upon tapping their iPhone or Apple Watch, customers will see a prompt on their device displaying the specific information being requested by the TSA. 
Only after authorizing with Face ID or Touch ID is the requested identity information released from their device, which ensures that just the required information is shared and only the person who added the driver's license or state ID to the device can present it. Users do not need to unlock, show, or hand over their device to present their ID. After the announcement, Apple told blogger John Gruber that users will only be able to associate one fingerprint with the IDs on Touch ID devices to ensure that the IDs can be used only by the actual ID holder. (Many people store additional Touch ID fingerprints on their phones for family members or partners to use.) Apple has not yet shared the status of talks with other states to bring the feature to additional locations.
2
Spanish Inflation Surges to Highest in Nearly 30 Years on Food
1
Hashtagstack.com – Hashtag generator, FREE, collection creation, stats and more
The Best Hashtag Generator. HashtagStack is the best Instagram and TikTok hashtag generator, with advanced analytics, hashtag collections, and more, and it's free.
24
Linux Mint introduces its own take on the Chromium web browser
Linux Mint is a very popular Linux desktop distribution. I use the latest version, Mint 20, on my production desktops. That's partly because, while it's based on Debian Linux and Ubuntu, it takes its own path. The best example of that is Mint's excellent homebrew desktop interface, Cinnamon. Now, Mint's programmers, led by lead developer Clement "Clem" Lefebvre, have built their own take on Google's open-source Chromium web browser. Some of you may be saying, "Wait, haven't they offered Chromium for years?" Well, yes and no. For years, Mint used Ubuntu's Chromium build. But then Canonical, Ubuntu's parent company, moved from releasing Chromium as an APT-compatible DEB package to a Snap. The Ubuntu Snap software packaging system, along with its rivals Flatpak and AppImage, is a new, container-oriented way of installing Linux applications. The older ways of installing Linux apps, such as the DEB and RPM package management systems for the Debian and Red Hat Linux families, incorporate the source code and hard-coded paths for each program. While tried and true, these traditional packages are troublesome for developers. They require programmers to hand-craft Linux programs to work with each specific distro and its various releases. They must ensure that each program has access to specific library versions. That's a lot of work and painful programming, which led to the process being given the name "dependency hell." Snap avoids this problem by incorporating the application and its libraries into a single package. It's then installed and mounted on a SquashFS virtual file system. When you run a Snap, you're running it inside a secured container of its own. For Chromium in particular, Canonical felt using Snaps was the best way to handle this program. That's because, as Alan Pope, Canonical's community manager for Ubuntu engineering services, explained: "Maintaining a single release of Chromium is a significant time investment for the Ubuntu Desktop Team working with the Ubuntu Security team to deliver updates to each stable release. As the teams support numerous stable releases of Ubuntu, the amount of work is compounded. Comparing this workload to other Linux distributions which have a single supported rolling release misses the nuance of supporting multiple Long Term Support (LTS) and non-LTS releases. Google releases a new major version of Chromium every six weeks, with typically several minor versions to address security vulnerabilities in between. Every new stable version has to be built for each supported Ubuntu release − 16.04, 18.04, 19.04, and the upcoming 19.10 − and for all supported architectures (amd64, i386, arm, arm64). Additionally, ensuring Chromium even builds (let alone runs) on older releases such as 16.04 can be challenging, as the upstream project often uses new compiler features that are not available on older releases. In contrast, a Snap needs to be built only once per architecture and will run on all systems that support Snapd. This covers all supported Ubuntu releases including 14.04 with Extended Security Maintenance (ESM), as well as other distributions like Debian, Fedora, Mint, and Manjaro."
That's all well and good, but Lefebvre strongly disliked the result. As he put it: "In the Ubuntu 20.04 package base, the Chromium package is indeed empty and acting, without your consent, as a backdoor by connecting your computer to the Ubuntu Store. Applications in this store cannot be patched or pinned. You can't audit them, hold them, modify them, or even point Snap to a different store. You've as much empowerment with this as if you were using proprietary software, i.e. none. This is in effect similar to a commercial proprietary solution, but with two major differences: It runs as root, and it installs itself without asking you." So, on June 1, 2020, Mint cut Snap, and the Snap-based Chromium, out of its Linux distro. Now, though, Chromium's back. Lefebvre wrote, "The Chromium browser is now available in the official repositories for both Linux Mint and LMDE. If you've been waiting for this I'd like to thank you for your patience." Part of the reason was, well, Canonical was right. Building Chromium from source code is one really slow process. He explained, "To guarantee reactivity and timely updates we had to automate the process of detecting, packaging and compiling new versions of Chromium. This is an application which can require more than 6 hours per build on a fast computer. We allocated a new build server with high specifications (Ryzen 9 3900, 128GB RAM, NVMe) and reduced the time it took to build Chromium to a little more than an hour." That's a lot of power! Still, for those who love it, up-to-date builds of Chromium are now available for Mint users. Lefebvre has also started work on an IPTV player. This is a program you can use to watch video streams from streaming services such as Mobdro, Pluto TV, and Locast. Mint already supports such open-source IPTV players as Kodi, but as Lefebvre noted, there's a "lack of good IPTV solutions on the Linux desktop but we're not sure how many people actually do use it." So, Lefebvre has built an alpha prototype, Hypnotix. If there's sufficient interest, there may eventually be an official Mint Hypnotix IPTV player, but that's a long way off. Much closer are some speed and compatibility tune-ups to the Cinnamon interface. Another nice new feature, the ability to add favorites in the Nemo file manager, has also been added. So it is that Mint keeps improving, which is one of the big reasons I keep using it year after year.
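If you want to check which of those two packaging routes Chromium took on your own machine, here is a minimal, hypothetical sketch (not from the article) that shells out to dpkg and snap. The package names chromium and chromium-browser are assumptions that vary by distro, and keep in mind that on Ubuntu 20.04 a dpkg hit may just be the empty transitional package Lefebvre objects to, which pulls in the Snap behind the scenes.

```python
# Rough sketch: report whether Chromium appears to be installed as a native
# .deb package or as a Snap. Package names are assumptions and vary by distro.
import shutil
import subprocess

def cmd_succeeds(cmd):
    """Return True if the command exists and exits with status 0."""
    try:
        return subprocess.run(cmd, capture_output=True).returncode == 0
    except FileNotFoundError:  # dpkg or snap is not present on this system
        return False

deb_hit = any(cmd_succeeds(["dpkg", "-s", name])
              for name in ("chromium", "chromium-browser"))
snap_hit = shutil.which("snap") is not None and cmd_succeeds(["snap", "list", "chromium"])

print(f"Chromium via dpkg (.deb): {deb_hit}")
print(f"Chromium via Snap       : {snap_hit}")
```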
1
Microsoft's beefed-up take on Linux server security has hit general availability
After a few months in preview, Microsoft has made Defender Endpoint Detection and Response (EDR) generally available for Linux servers. Microsoft has extended its Defender product over multiple platforms throughout the last year or so, having shaved the "Windows" prefix from the system. Android, macOS, and iOS have all joined the party and Microsoft Defender for Endpoint turned up for Linux around six months ago. The theory goes that administrators with a mixed network can onboard devices via the same portal and view alerts in what Microsoft describes as a "single pane of glass experience". The EDR support enriches the capability with extra timeline features and enhancements to the advanced hunting tool. "Customers can use this capability," according to Microsoft, "to search for threats across Linux servers, exploring up to 30 days of raw data." It's handy stuff for admins already familiar with the Windows experience and keeps procedures consistent. Users can include elements such as process and file creation in their investigations as well as gather insight into where a threat or malicious activity came from. Six Linux distributions are supported at present: RHEL 7.2+, CentOS Linux 7.2+, Ubuntu 16 LTS (or higher LTS), SLES 12+, Debian 9+, and Oracle Linux 7.2. The platform can be deployed and configured with Puppet, Ansible, "or using your existing Linux configuration management tool." There remains no love for a standalone Linux desktop at this stage; this is aimed squarely at servers, although there is no shortage of alternatives from vendors such as Sophos or F-Secure. Users already running Microsoft Defender for Endpoint (Linux) will get the EDR capability with an agent update. Those who opted into the preview programme last year will also need to update the agent. And, of course, Microsoft Defender for Endpoint (Linux) will require the Servers licence. ®
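As a rough sketch of what that 30-day hunt can look like in practice, here is a minimal, hypothetical example against the Microsoft Defender for Endpoint advanced hunting REST API using an Azure AD app registration. The endpoint, permission name, and KQL column names are written from memory of the public API and should be checked against Microsoft's current documentation rather than treated as authoritative.

```python
# Hypothetical sketch: run a 30-day advanced hunting (KQL) query against the
# Microsoft Defender for Endpoint API. The placeholders must be replaced with a
# real Azure AD app registration that has the AdvancedQuery.Read.All permission.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<app-client-secret>"

# 1. Client-credentials token for the Defender for Endpoint resource.
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://api.securitycenter.microsoft.com/.default",
        "grant_type": "client_credentials",
    },
).json()["access_token"]

# 2. KQL: the binaries most frequently launched by root over the last 30 days.
query = """
DeviceProcessEvents
| where Timestamp > ago(30d)
| where InitiatingProcessAccountName == "root"
| summarize Executions = count() by DeviceName, FileName
| top 20 by Executions
"""

results = requests.post(
    "https://api.securitycenter.microsoft.com/api/advancedqueries/run",
    headers={"Authorization": f"Bearer {token}"},
    json={"Query": query},
).json()

for row in results.get("Results", []):
    print(row["DeviceName"], row["FileName"], row["Executions"])
```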
2
German climate protection act largely incompatible with fundamental rights
Ruling on the Climate Protection Act: Karlsruhe for Future. The Federal Constitutional Court declares the German climate law unconstitutional and demands "pressure to develop" climate-neutral solutions. KARLSRUHE/BERLIN taz | Of all people, Federal Economics Minister Peter Altmaier (CDU) was enthusiastic after this slap in the face for the federal government: the Federal Constitutional Court had handed down a "great & important ruling," he tweeted on Thursday morning, right after the decision was published. "It is epochal for climate protection & the rights of young people. And it provides planning certainty for the economy." Yet the court had in fact demanded that the Climate Protection Act be revised. The reduction of greenhouse gases after 2030 must be specified now, so that society can prepare better and faster for the required climate neutrality. Only in this way, the court held, can disproportionate encroachments on the freedom of future generations be avoided. The Climate Protection Act had been pushed through the grand coalition in 2019 only with great difficulty. It not only stipulates that German emissions must reach zero by 2050 and sets minus 55 percent as the binding target for 2030; on the way there, it also defines a reduction target for every year. The CDU and CSU in particular had repeatedly objected that these annual targets were a "planned economy" and unacceptable. Now the Federal Constitutional Court finds that at least the rules for extending the reduction path after 2031 "are not sufficient." The coming generations face a "radical reduction burden," the ruling says. Specifically, the Karlsruhe court had to decide on four constitutional complaints backed by large parts of the environmental movement: Greenpeace, BUND, Deutsche Umwelthilfe, and Protect the Planet. As plaintiffs, however, the court admitted only real persons, for example Luisa Neubauer, the best-known Fridays for Future activist, and young people from the North Sea island of Pellworm. Fifteen people from Bangladesh and Nepal were also recognized as entitled to file complaints. The Basic Law obliges: The court found that the Basic Law, above all the state objective of environmental protection in Article 20a, also gives rise to a duty to protect the climate. The state may not simply watch global warming happen and rely on adaptation measures such as dike building; the goal must instead be Germany's climate neutrality. The goals of the Paris Agreement, limiting the global temperature rise to well below 2 degrees and if possible to 1.5 degrees, are effectively elevated to constitutional rank. The further climate change progresses, the more weight this climate protection mandate carries against other interests. The judges, however, see the danger that if too little is done now, the young generation will be burdened quite disproportionately from 2030 onward. 
One generation must not be "allowed to consume large portions of the CO2 budget under a comparatively mild reduction burden if this would at the same time leave subsequent generations with a radical reduction burden." That creates a major risk for civil liberties, because almost every exercise of freedom, such as travelling or shopping, is currently still tied to the production of greenhouse gases. With this reasoning, the judges set two important courses. First, it is now clear who can sue against German climate policy, and why: everyone whose civil liberties will be restricted by later measures. The issue is not the restrictions caused by climate change itself, but those caused by the strict state climate protection measures that will later be unavoidable. The second course concerns the "CO2 budgets." The judges cite the global CO2 budget calculated by the Intergovernmental Panel on Climate Change (IPCC) and the national CO2 budget presented by the German Advisory Council on the Environment. The environment ministry had always refused to adopt this way of calculating, arguing that it does not correspond to the rules of the Paris Agreement. In that respect it is a great success for the environmental movement that the court now bases its decision on the budget concept. A unanimous decision: Karlsruhe does not, however, go so far as to demand an immediate radical reduction of greenhouse gas emissions to relieve the younger generation. In the unanimous decision of the First Senate, a different path is sketched as the minimum requirement for the legislature. Lawmakers should already define now the requirements for transport, industry, agriculture, and the energy sector from 2030 onward, so that the path to climate neutrality succeeds faster and better. The judges demand "pressure to develop" climate-neutral solutions and, above all, "planning certainty." The transition to climate neutrality must be initiated "in good time"; only then can the reduction burdens looming after 2030 be managed "gently." The Climate Protection Act provides that the federal government will not say until 2025 how things are to proceed after 2030. That is not enough for the constitutional judges; they demand that the act be amended by the end of 2022. In principle, the details may still be left to the federal government, but the Bundestag must settle all essential questions in the law itself and then regularly update the targets for the future. The plaintiffs were delighted after the decision was published. With it, the Constitutional Court has recognized a "right to a future," said lawyer Remo Klinger. Luisa Neubauer spoke of a "fundamental right to climate protection." Klinger conceded that the court's concrete demands are not very radical, but he expects its findings to help build political pressure nonetheless: "If, under current planning, the CO2 budget is already used up by 2030, the obvious conclusion is to cut emissions significantly before then." Cautious approval from business: Lawyer Roda Verheyen expects the Karlsruhe decision to give the environmental movement a tailwind in all fields of climate policy, for instance the coal phase-out or the promotion of renewable energies. 
Law professor Felix Ekardt saw a mandate for Germany to play a different role in the EU: "Germany must go from being the brake to being the driver." The environmental policy spokeswoman of the CDU/CSU parliamentary group, Marie-Luise Dött, was less euphoric than her party colleague Peter Altmaier. The Karlsruhe judges' decision was "to be accepted," she said, and the next Bundestag would have had to amend the law anyway to meet the higher EU climate targets. Nor does she share the judges' enthusiasm for planning: it is "almost impossible for today's legislature to decide on sector-specific emission reductions and climate protection measures ten years in advance." Federal Environment Minister Svenja Schulze (SPD) sees the ruling as "a strengthening of climate protection." When the Climate Protection Act was drafted she would have liked an interim target for the 2030s in the law, "but there was no majority for that." Her ministry will now present key points for a tightening of the law this summer. In any case, she said, the higher EU climate target will require a tightening of emissions trading, which will lead to "significantly more climate protection in Germany as well, already in the 2020s." "This federal government is not capable of real climate protection," said Lisa Badum, climate policy spokeswoman of the Green parliamentary group. The law must be changed, she argued, to set concrete reduction targets for the entire period up to climate neutrality and to raise the 2030 climate target to minus 70 percent. From business came cautious approval. "Politics must transparently show viable climate paths up to 2050," said the Federation of German Industries (BDI); that would create planning certainty for industry. The Federal Association of the Energy and Water Industry (BDEW) said the ruling "could be an opportunity for a longer-term energy policy in the spirit of the Paris Agreement," with more renewables, hydrogen, and climate-neutral buildings and transport.
2
How Covid-19 helped to break the record for longest commercial flight in 2020?
Maybe your travel plan is postponed for now, maybe you are actively seeking your next destination, or maybe you are planning the details of your upcoming trip. Whatever your current travel status, the travel facts you will find here will spice up your knowledge or perhaps inspire you to look for lesser-known destinations. At the very least you will have a fun time as you go through this travel world of extremes, opposites, and interesting facts. According to Airfarewatchdog, the Air Tahiti Nui flight from Papeete (French Polynesia) to Paris, covering 9,775 miles in 15 hours and 45 minutes, was the longest commercial flight in 2020. It broke the record held by Singapore Airlines' 9,534-mile flight from Singapore to Newark (New Jersey). The aircraft used was a Boeing 787 Dreamliner. Normally the airline makes a stopover in Los Angeles, but Covid restrictions made that impossible. With a reduced number of passengers and less weight, the aircraft burns less fuel, which made it possible to fly such a long distance nonstop. The flight was an exception, and the airline has since discontinued the record-breaking direct route in favor of a stopover in a different city. Travel fact animation: Longest commercial flight in 2020. According to Business Insider, the shortest commercial flight is between the islands of Westray and Papa Westray in Scotland. The airports are 2 miles apart, the flight takes 90 seconds, and according to The Points Guy it is a much better alternative than the choppy 20-minute boat trip between the two islands. A scientific study has shown that merely planning or intending to go on holiday makes vacationers almost as happy as the trip itself. The study compared vacationers with non-vacationers, and vacationers had an 8% higher pre-trip happiness score. Other studies have confirmed this psychological phenomenon. Maybe the language you speak affects you more than you think. A study of the top 10 most used languages globally (English, Mandarin, French, Spanish, Portuguese, Indonesian, German, Korean, Russian, and Arabic) showed that Spanish is the happiest language, followed by Portuguese, English, German, and French. Possibly this is one of the reasons Spain ranks among the top five or ten countries globally by life expectancy, and some media report that it is well placed to overtake Japan within two decades. The FICUS countries (France, Italy, China, the USA, and Spain) repeatedly get the most visitors; they are like irresistible travel magnets. France is the world's most visited country with 89 million arrivals, according to the latest UNWTO data, covering 2018 travel flows. The next most visited countries are Spain, the United States, China, and Italy, which together with France make up the top five most visited countries globally. Travel fact: Top 5 countries by tourist arrivals and share of tourism in the GDP of the countries. When it comes to cities, Bangkok is the global leader with 22 million international overnight visitors in 2018, followed by Paris, London, Dubai, and Singapore, which together make up the top five most visited cities globally according to Mastercard's Global Destination Cities Index. The Eiffel Tower grows about 15 cm (6 inches) taller in summer because the iron structure expands at higher temperatures. A little reminder of physics 😊 Crossing a river or an otherwise unwalkable stretch is less of a problem in some cities. 
Hamburg has more than 2,300 bridges, the largest number of any city in the world. Next are Amsterdam with 1,281 bridges, New York City with 789, Pittsburgh with 446, and Venice with 391. So many bridges can sometimes be a problem. Costa Rica is one of the few countries in the world to have abolished its army; it has lived without military forces for more than 70 years. Egypt is the country with the cheapest taxi rides: a three-mile ride costs less than a dollar. That's a handy travel fact to know when you travel there. At the other end is Switzerland, the country with the most expensive taxi rides, where a three-mile ride costs $25, according to USA Today and Taxi2Airport. Based on research by Deutsche Bank, the chart below gives an overview of taxi fares in selected cities globally. Price of a 5-mile (8 km) taxi ride in selected cities globally. According to The Spruce, the oldest tree in the world is nicknamed Methuselah, a bristlecone pine located in California's White Mountain range with an estimated age of over 4,700 years. It is a non-clonal tree, which means it cannot reproduce itself, unlike clonal trees. Big Pharma dreams of replicating the anti-aging secret of living organisms like this. Among clonal trees, the oldest is Old Tjikko in Sweden: roughly 10,000 years old, its trunk survives about 600 years, after which a new one sprouts. The tallest tree in the world is in Redwood National Park in California. Called Hyperion, it stands 380 ft (115 m) tall, about one third of the height of the Eiffel Tower. Giza and Egypt probably come to mind first when you think about pyramids. But the biggest pyramid in the world by volume is not in Giza; it is located in Cholula de Rivadavia, Mexico. Over time, nature camouflaged it with soil, grass, and other vegetation, so it was not easy to spot what lies beneath the hill. The Great Pyramid of Cholula is only 54 m (177 ft) tall, but its base is four times larger than that of the Great Pyramid at Giza, giving it a volume of about 3.3 million m³ (117 million ft³) compared with the 2.4 million m³ (85 million ft³) of the Pyramid of Khufu. According to the BBC, it is also the largest monument on Earth ever built by humans. According to the Independent, Sudan has more than twice as many pyramids as Egypt, making it the country with the most pyramids in the world: it has 255, while Egypt has between 118 and 138. Going on a date out of town is not a bad idea; just collect some travel facts before you do. According to Deutsche Bank research covering 54 cities globally, Zurich and Oslo are the most expensive cities for a night out with a date, at $202 and $163 respectively. Cairo, Bangalore, and Buenos Aires are the cheapest; a date there requires roughly $42 in your wallet. The cost of going out on a date in 54 cities globally. Have you thought about how much your love of coffee may cost you in different cities around the world? A cappuccino in an expat area is most expensive in Copenhagen ($6.30) and cheapest in Milan ($1.70), according to the same Deutsche Bank research. Wondering whether to pack mosquito repellent for your trip? Iceland is one of the few countries in the world with no mosquitoes, a convenient advantage for such an interesting destination. 
You will often read that France is the country with the most time zones, 12 in total. The reason is that France still administers territories spread far across the world. Russia and the USA each cover 11 time zones. Machu Picchu is built in a way that protects it from earthquakes. The stones are precisely cut and laid without mortar, which, according to USRA, allows them to move and then settle back into their original position after an earthquake. It is one of the advanced architectural features of Machu Picchu. For the slightly lazy, or for those eager to get fit by walking a steeper slope, the steepest street in the world is Baldwin Street in Dunedin, New Zealand, with a gradient of 34.8%. It is easier to remember surnames in China than in the USA. According to CNN, China has about 6,000 surnames in use, and 100 of them cover 86% of the population. In the United States there are 6.3 million surnames, most of which appear only once. So if you have trouble remembering names, maybe it's not your brain, but the country. Guangzhou Baiyun International Airport was the busiest in the world in 2020, with 43 million passengers. The next four busiest were Hartsfield-Jackson Atlanta International Airport with 42 million passengers, Chengdu Shuangliu International Airport with 40 million, Dallas/Fort Worth International Airport with 39 million, and Shenzhen Bao'an International Airport with 37 million. The pandemic brought drastic changes to the 2020 rankings: because of the steep drop in global travel, Hartsfield-Jackson Atlanta, in the United States, was dethroned from the top spot it had held for 22 years in a row. According to CNN Travel, the change is temporary and the airport will return to the top as the pandemic dissipates. AirHelp also ranked the 10 worst airports in the world for 2019, based on on-time performance, service quality, and food and shops. Flight delays and poor on-time performance are not just numbers. Being busy does not stop some airports from ranking high in performance metrics: Tokyo Haneda Airport (Tokyo International Airport) is an example, taking 2nd spot globally in AirHelp's ranking of the best airports of 2019. If you are in search of quality restaurants, France, according to FineDiningLovers, has the highest number of Michelin-starred restaurants, 628 in total. Next are Japan with 577, Italy with 374, Germany with 307, and the USA with 169. Top 5 countries by number of Michelin-starred restaurants. If you need excitement and a rush of adrenaline and dopamine, the "Falcon's Flight" roller coaster in Saudi Arabia will deliver. According to CNN Travel, its construction should be finished in 2023, and it will have a record-breaking speed of more than 155 miles per hour (250+ km/h) and a record-breaking height of 525 feet (160 meters). Although the name suggests that the Spanish Flu, which took the lives of 20 to 50 million people, spread to the world from Spain, according to History.com it more likely started in France, China, or Britain. The name "Spanish Flu" got traction because the pandemic overlapped with the First World War. While most countries joined one of the two warring sides, Spain was one of the few that remained neutral. 
This allowed the country to keep free speech and journalism unrestricted by censorship, unlike the warring countries. Spain provided detailed reports on the devastation caused by the influenza virus, and the general public around the world got the impression that Spain was where the virus originated. That's how it got the label. Most people associate Big Ben with the clock on St. Stephen's Tower in London. In fact, the name belongs to the thirteen-ton bell at the top of the tower, according to Hotels.com. Naples, Italy, is the birthplace of some of the most popular pizzas we know. There is a big chance that you have eaten a Margherita pizza, or will eat one at some point, especially if you travel to Italy. Have you ever wondered how the Margherita pizza got its name? Diego Zancani, emeritus professor of medieval and modern languages at Oxford University, explains that after the unification of northern and southern Italy, King Umberto I and Queen Margherita visited a well-known pizza maker in Naples in 1889. They were offered three pizza choices, and Queen Margherita ordered the one with basil, mozzarella, and tomato because it matched the colors of the Italian flag, a very convenient choice at that moment in time. According to HuffPost, a second theory says the pizza got its name because the basil and mozzarella on top resemble the daisy flower, which is called margherita in Italian. South Africa is exceptional: it is the only country with three capital cities. Cape Town holds the legislative function and is the seat of South Africa's parliament, Pretoria is the home of the executive government, and Bloemfontein has the judicial function. The reasons for this setup are mainly political and historical. And it is not only the administrative setup that is varied; South Africa's nature and landscapes are very diverse too and can match your preferences for different types of holiday. According to CNN Travel, the dirtiest place on airplanes is the tray table, followed by drinking fountain buttons, overhead air vents, lavatory flush buttons, seatbelt buckles, and bathroom stall locks. The number of colony-forming units per square inch (the concentration of microorganisms) is shown in the chart below. Dirtiest places on airplanes and airports. The rest of the list of the top 10 happiest countries in the world comprises Denmark, Switzerland, Iceland, Norway, the Netherlands, Sweden, New Zealand, Austria, and Luxembourg. The hotel will have the shape of a wheel, and its rotation should create artificial gravity. The gravity will be weaker, closer in strength to that on the Moon. Apart from the spectacular view and design, the reduced gravity will create a totally new experience, feeling, and sense. The 18th-century baroque Trevi Fountain in Rome took about 30 years to build. According to DW, $1.7 million is collected annually from the coins tossed into the fountain, and the coins are usually donated to the charitable organization Caritas. Because of the substantial amount of money collected from this monument, the Italian government recently proposed directing part of the funds into local infrastructure development projects, a proposal that has heated the debate over how the money should properly be spent. 
Despite this ongoing discussion, and a probably reduced number of tossed and collected coins because of Covid, the fountain remains a promising revenue generator for the city of Rome in normal times. Tradition says that if you toss one coin you will return to Rome, if you toss two coins you will get married, and if you toss three you will marry a Roman. Why toss three coins? Besides the desire to marry a Roman :), perhaps the fountain's name has something to do with it: Trevi means "three roads," after the three roads that once intersected at the fountain's location. Can you imagine the ROI if the Trevi Fountain had sat at the intersection of four roads? Some of the travel facts above are fun, some practical, some remarkable, and some curious. If you ended up with a little more travel-related knowledge, let us know in the comments section, or share your favorite travel fact on your blog or social media. For additional interesting travel facts you can go to our 106 travel innovations article. Stick around for more from us.
1
What Is Offshore Outsourcing: Definition and Benefits
Nowadays, it's as if the world has become one big village. Without much hassle, you can communicate and work with people from around the globe. Some say it's a good thing. Others condemn it. But whether we like it or not, offshore outsourcing is here to stay. Should you be considering it for your business? We look at the topic – what it is, what it entails, and why so many companies are adopting offshore outsourcing as part of their business model. Offshore outsourcing can be defined as the practice of collaborating with an external organization abroad and assigning it some of your business functions. Usually, the outsourced product or service is not sold in the offshoring location; it is marketed only in the outsourcer's own country. Offshore outsourcing gives organizations access to high-quality services at lower operating costs. There are basically three main categories of offshore outsourcing: 1. Business Process Outsourcing (BPO), 2. Infrastructure and Technology Outsourcing, and 3. Offshore Software Development. Organizations usually outsource parts of their process that are important, but not vital to where the organization stands. Processes like customer support and payroll processing are outsourced to save on costs. BPOs can be further sub-divided: back office outsourcing covers the organization's internal roles, while front office outsourcing covers the organization's call center and customer support services. Call centers, help desks, and finance and accounting services for the organization's internal operations are all examples of offshore outsourcing. Infrastructure and technology outsourcing generally covers services that support an organization, such as networking, technology services, and support. Offshore software development covers, as the name says, software development itself. India, China, and Russia are the three leading providers of software development services; India in particular can handle huge projects and deliver them on time. Every department and every employee in your organization implies a capital cost. You need facilities, equipment, and of course, the staff that occupies that space and uses that equipment. Outsourcing makes the provision of workspace, equipment, and human capital somebody else's problem. When an activity falls outside your core business, you may find yourself or your employees trying to multitask. For example, a manager becomes a marketer, but just what does he or she know about marketing? You and your employee face a learning curve, and learning curves are inefficient. Outsourcing gets you access to expertise – and since the task you pass on to them is the core business of the organizations to which you outsource, they're always on top of the game. There's no wait time or down-prioritizing. They want to get the job done even more than you do! If you prefer, we can call this "focusing on your core business," but it amounts to the same thing. Let's suppose you're a manufacturer of ball bearings. Do you really want to get distracted by IT management? All you want is its benefits! The obvious solution is to outsource. Big companies can afford to have dedicated departments for things like accounting, marketing, HR, and IT. Can yours? But if you outsource, you have all the benefits of a dedicated department – without the overheads. As soon as you rely on a technology, there's a risk that it might become outdated. And in today's world, that can happen really fast. 
Meanwhile, you’re continuing with business as usual, and the first time you know there’s a risk is when it bites! But if you outsource to a company that specializes in a technology, it will be very aware of developments and changes, and it will know how to keep you ahead of the curve. If it doesn’t, it risks going out of business completely!
9
NASA to deflect asteroid in test of 'planetary defense'
November 4, 2021 by Chris Lefkow This artist's illustration obtained from NASA shows the DART spacecraft prior to impact with the asteroid Dimorphos. In the 1998 Hollywood blockbuster "Armageddon," Bruce Willis and Ben Affleck race to save the Earth from being pulverized by an asteroid. While the Earth faces no such immediate danger, NASA plans to crash a spacecraft traveling at a speed of 15,000 miles per hour (24,000 kph) into an asteroid next year in a test of "planetary defense." The Double Asteroid Redirection Test (DART) is to determine whether this is an effective way to deflect the course of an asteroid should one threaten the Earth in the future. NASA provided details of the DART mission, which carries a price tag of $330 million, in a briefing for reporters on Thursday. "Although there isn't a currently known asteroid that's on an impact course with the Earth, we do know that there is a large population of near-Earth asteroids out there," said Lindley Johnson, NASA's Planetary Defense Officer. "The key to planetary defence is finding them well before they are an impact threat," Johnson said. "We don't want to be in a situation where an asteroid is headed towards Earth and then have to test this capability." The DART spacecraft is scheduled to be launched aboard a SpaceX Falcon 9 rocket at 10:20 pm Pacific time on November 23 from Vandenberg Space Force Base in California. If the launch takes place at or around that time, impact with the asteroid some 6.8 million miles from Earth would occur between September 26 and October 1 of next year. The target asteroid, Dimorphos, which means "two forms" in Greek, is about 525 feet in diameter and orbits around a larger asteroid named Didymos, "twin" in Greek. Johnson said that while neither asteroid poses a threat to Earth they are ideal candidates for the test because of the ability to observe them with ground-based telescopes. Images will also be collected by a miniature camera-equipped satellite contributed by the Italian Space Agency that will be ejected by the DART spacecraft 10 days before impact. Nancy Chabot of the Johns Hopkins Applied Physics Laboratory, which built the DART spacecraft, said Dimorphos completes an orbit around Didymos every 11 hours and 55 minutes "just like clockwork." The DART spacecraft, which will weigh 1,210 pounds at the time of impact, will not "destroy" the asteroid, Chabot said. "It's just going to give it a small nudge," she said. "It's going to deflect its path around the larger asteroid." "It's only going to be a change of about one percent in that orbital period," Chabot said, "so what was 11 hours and 55 minutes before might be like 11 hours and 45 minutes." The test is designed to help scientists understand how much momentum is needed to deflect an asteroid in the event one is headed towards Earth one day. "We are targeting to be as nearly head on as possible to cause the biggest deflection," Chabot said. The amount of deflection will depend to a certain extent on the composition of Dimorphos and scientists are not entirely certain how porous the asteroid is. Dimorphos is the most common type of asteroid in space and is some 4.5 billion years old, Chabot said. "It's like ordinary chondrite meteorites," she said. "It's a fine grain mixture of rock and metal together." Johnson, NASA's Planetary Defense Officer, said more than 27,000 near-Earth asteroids have been catalogued but none currently pose a danger to the planet. 
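To put that "small nudge" into rough numbers: using the spacecraft mass and impact speed quoted above (about 550 kg at roughly 6.6 km/s), and assuming a Dimorphos mass of roughly 5 billion kg and an orbital speed around Didymos of about 0.17 m/s (neither figure is given in the article), a simple momentum-conservation sketch looks like this:

$$\Delta v \;\approx\; \frac{m_{\mathrm{DART}}\, v_{\mathrm{impact}}}{M_{\mathrm{Dimorphos}}} \;\approx\; \frac{550 \times 6.6\times 10^{3}}{5\times 10^{9}}\ \mathrm{m/s} \;\approx\; 0.7\ \mathrm{mm/s}$$

Relative to an orbital speed of about 0.17 m/s that is a change of a few tenths of a percent, and since a small along-track kick changes the period by roughly $\Delta T/T \approx 3\,\Delta v/v$, this lands in the same ballpark as the "about one percent" (roughly ten-minute) period change Chabot describes; any ejecta thrown off by the impact would add to the push.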
An asteroid known as Bennu, discovered in 1999 and 1,650 feet wide, will pass within half the distance from the Earth to the Moon in the year 2135, but the probability of an impact is considered very slight. © 2021 AFP
5
A Layman’s Guide to Recreational Mathematics Videos
1
A Full Circle Journey: Introducing Cloudflare Canada
To view this post in French, please click here. Today Cloudflare announced that Toronto will be home to Cloudflare's first Canadian office and team. While I currently live in San Francisco, I was born and raised in Saskatchewan, and as a proud Canadian, I feel like today is a homecoming. Canada has always been an important part of our history and customer base, and I am thrilled to see Cloudflare make a further commitment by expanding officially in the country and opening an office in Toronto. We are hiring a team locally to help our current and future customers, and to support even more great Canadian organizations. I wanted to share more about how Cloudflare works with Canadian businesses, what today's announcement means, and some personal reflections. Cloudflare helps ensure anything connected to the Internet is fast, safe, and reliable. We do this by running a distributed global cloud platform that delivers a broad range of network services to businesses of all sizes—making them more secure, enhancing the performance of anything connected online, and eliminating costs and complexity. We help approximately 25M Internet properties around the world—whether you're a Canadian entrepreneur trying to spin up your next idea, a healthcare company trying to speed up vaccine distribution, a Global 2000 company, or a non-profit. Today we work with thousands of customers in Canada, including Canadian entrepreneurs, universities, non-profits, large enterprises, and small businesses. All of their services need to be fast online, protected from cyber attacks, reliable, and available around the world. Cloudflare helps make that happen—and we're really good at it. Each day we block an average of 70 billion cyber attacks on behalf of our customers; more than 3.2 billion of those daily attacks originate in Canada, a number that has increased by 26% since the end of 2020. We have also seen online usage increasing—Internet traffic in Canada is up about 60% compared with one year ago, when the world was first shifting to a virtual lifestyle. Canadians are spending a lot more time online in 2021 than they did in 2020. I'm especially proud of how we've offered our technology to organizations that may not have the resources to keep up with high traffic and to protect themselves from cyberattacks, like BullyingCanada, Canada's largest anti-bullying charity serving Canadian youth, helping keep its website reliable and secure for children and families seeking support, especially when they need it most. It's also fulfilling to know that we help power COVID-19 vaccine distributors like Verto Health and Jane, which are helping with vaccine distribution in British Columbia. We help keep these registration sites accessible and able to withstand scheduling demands, and to efficiently queue and facilitate the distribution of the COVID-19 vaccine. It turns out that whether you are a developer working on a hobby project or a large Canadian organization, every business needs to deliver its service more quickly, more securely, and more reliably. That's exactly why we started Cloudflare. There's no question that the world has relied on the Internet more than ever before this past year—and that isn't going away. The Internet has reinvented the way we live and survive. We've relied on the Internet to access public information, visit the doctor, get our work done, stay in touch with friends and loved ones, educate our children, order groceries, and so many other things. 
Canada has done a great job at fostering digital citizenship, and is continuing to take this ahead of the curve as one of the most Internet connected countries in the world (#7!). Also, the depth and quality of Canada’s tech talent pool is undeniable, with more than 2.8 million STEM graduates and the world’s highest educated workforce. There's a strong growing technology ecosystem and entrepreneurship in Canada. This isn’t just a moment, it’s a movement. And it’s gaining steam. Since 2013, Toronto has added more tech jobs than any other place in North America, including Silicon Valley. There are numerous communities helping propel this. As a Charter Member of The C100 it’s great to see how this community of Canadians in tech are supporting, inspiring, and connecting with Canadian entrepreneurs across the globe. There are plenty of other amazing communities and resources including Next 36, Co.Labs, Elevate and events like Collision Conference bringing together the vast technology industry—I’ll be speaking about Canadian entrepreneurship tomorrow alongside Ariel Garten. Canada is also a strong research hub that’s progressing global standards. We’ve worked with a number of research teams and academic communities, such as with the University of Waterloo, to evolve global cybersecurity, cryptography, and privacy. Now having our team on the ground presents an even stronger opportunity to deepen this work. I grew up in Prince Albert, Saskatchewan, and my journey from the North to Silicon Valley included stops in Montreal and Toronto, before I headed to Boston for business school and, eventually California. Saskatchewan is all about community and hard work, which gave me the foundation to be an entrepreneur and really help build what Cloudflare is today. Toronto is a special place for me because it was where I fell in love with startups. I worked for an early-stage startup, and that’s where I learned about the power of what a small group of motivated people can accomplish together. It’s also where I got my first experience working in technology. I soon saw how pragmatic and actionable it was working in technology, and I was instantly drawn to how it could make an impact on the world. I went to Harvard Business School to pursue my MBA. It was there that I met a super smart serial entrepreneur Matthew Prince. We were classmates and we started to work on Cloudflare together. We eventually moved to San Francisco to join our third co-founder, Lee Holloway and to grow Cloudflare. Fast forward to almost 11 years later, I’m excited for Cloudflare to expand our team to Canada. We are here to help build a better Internet—for Canadian organizations and their online users. I’m thrilled that we are doubling down on hiring local talent to further support local customers and partner with more businesses in the region. As new neighbors in the region, don’t hesitate to reach out if we can help:
227
Show HN: Neko – Self hosted virtual browser that runs in Docker and uses WebRTC
m1k1o/neko
2
JO-SQL Database
Welcome to JO-SQL, a versatile multiuser database with reusable parts. A presentation about the architecture and main features is here. In short, the database: The full tarball including all sources is at JOSQL-full. A smaller version without the server and network library can be found at JOSQL. As detailed on the main page, a Windows version or a merged version is alternatively available on demand; just drop me an email. Any questions, bugs, or improvements: mail to my-first-name AT die-Schoens DOT de
1
Google I/O 2021 preview
Google I/O 2021 is actually happening this year. But due to a certain worldwide pandemic, it will be all online instead of outside in the sun of Mountain View. Google skipped the 2020 edition entirely, but the company is finally ready to deliver its first ever virtual Google I/O. For us onlookers, that means we're officially entering unknown territory. Google I/O starts Tuesday, May 18 at 1 pm EDT, when Google/Alphabet CEO Sundar Pichai will take the stage and presumably show off what Google has been working on all year. We've been prepping for the show ourselves, and the shift to an all-virtual event hasn't lessened the amount of tea leaves to read. We're expecting to see quite a few things over the next week. Well, first, let's talk about what we're probably not going to see: the Pixel 5a. At Google I/O 2019, we saw the launch of the Pixel 3a in May of that year. But with I/O 2020 canceled, the Pixel 4a didn't hit the market until much later in the year, on August 20, 2020. Normally we would call the launch timeframe for the 5a a toss-up between mirroring the 3a or 4a launch dates, but Google has already set us straight. Back in April, the company said the Pixel 5a would be "announced in line with when last year's a-series phone was introduced." So that's August, not May, and not at Google I/O. We're up to several releases of the Android 12 Developer Preview by now, but Google I/O will mark the release of the first "beta" version. Android 12 definitely has a big redesign coming—we've already seen leaks of the new design, and it looks like a significant departure from previous versions. There's a wild new color-changing UI that shifts to match your wallpaper. All the buttons, sliders, and every other UI widget have been reshaped and rearranged. It has a new scroll list design that, like a Samsung phone, works better on bigger displays by initially starting with a big title and pushing the top of the list content further down the screen, where it can be easily reached. There's a new privacy UI, which alerts you when your camera, microphone, or location is in use. There's also a new look for widgets, mirroring iOS's recent widget revamp. There is so much "Android redesign" evidence out there that we don't actually know what Android 12 looks like out of the box. We just keep seeing screenshot after screenshot of wildly different UI bits, and having features that change color based on user settings also really doesn't help when trying to visualize the entire package. We know all that is coming. The question is, will it be officially unveiled at Google I/O? The previous developer previews have been perfectly fine shipping new functionality while stripping out all the interesting UI changes. Google might want to blast out the new design from the I/O virtual stage, or it might want to save it for closer to launch. One good sign we just recently got was a leak of what looks to be a Material Design sizzle reel from YouTuber Jon Prosser. It's still not very enlightening as to what Android 12 will look like, but it seems like the kick-off video for unveiling the next version of Material Design. The Google I/O schedule says at least a few things will be talked about. The new widgets are something that will need developer uptake, so those are getting disclosed at Google I/O during the "Refreshing widgets" talk. The talk promises to show off "useful, discoverable, and beautiful widgets on Android and Assistant." 
There's an interesting curveball at the end there—what do Android homescreen widgets have to do with the Google Assistant? Sharing code between Android home screen widgets and the Google Assistant is something Google actually started working on before—it was called the "Slices" API. For some reason, though, it never took off. In one of our many interviews with Dave Burke, Android's head of engineering, we asked him point blank, "Whatever happened to the Slices API?" Displaying remote app content in multiple places sounded like a good idea to us. "I still think it's a great idea, but I don't think we found the fit for it just yet," Burke said. "We actually built it out, and right now we're working with the Google Assistant team to see if we can figure out something that makes sense." The Google Assistant team, you say? That sounds suspiciously like the new widget API. So we'll be on the lookout for displaying widget content in other, remote places.
1
Apple subpoenas Valve as part of its legal battle with Epic
A new court filing has revealed that, as part of the ongoing legal battle between Apple and Epic Games, Apple subpoenaed Valve Software in November 2020, demanding it provide huge amounts of commercial data about Steam sales and operations spanning multiple years. A brief primer: the dispute between Epic and Apple began in August 2020, when Epic added a new payment system to Fortnite's iOS version that bypassed Apple's 30 percent fee. Apple retaliated almost immediately by removing Fortnite from the App Store. Epic replied in kind by rolling out a Nineteen Eighty-Fortnite promo, based on Apple's famous 1984 Macintosh ad, and then a few minutes after that—this all happened on the same day—filed a lawsuit against Apple over Fortnite's removal. Since then there has been plenty of legal back-and-forth between the two parties. Apple subpoenaed Valve under the basic argument that certain Steam information would be crucial to building its case against Epic, which is all about competitive practices. Yesterday a joint discovery letter relating to the subpoena was filed with the district court in Northern California; it contains a summary of the behind-the-scenes tussles thus far, and both sides' arguments about where to go from here. Apple's argument is made by the law firm McDermott, Will and Lowery, and states that Valve is relevant to the case against Epic because "Valve's digital distribution service, Steam, is the dominant digital game distributor on the PC platform and is a direct competitor to the Epic Game Store." Since the subpoena in November, goes Apple's argument, "Apple and Valve have engaged in several meet and confers, but Valve has refused to produce information responsive to Requests 2 and 32." Requests 2 and 32 show some chutzpah, to say the least. Rarely will you see a finer example of lawyer-ese than this: "Apple's Request 2 is very narrow. It simply requests documents sufficient to show Valve's: (a) total yearly sales of apps and in-app products; (b) annual advertising revenues from Steam; (c) annual sales of external products attributable to Steam; (d) annual revenues from Steam; and (e) annual earnings (whether gross or net) from Steam. Apple has gone as far as requesting this information in any readily accessible format, but Valve refuses to produce it." Apple's reasoning for this request boils down to a desire to show the extent of the market that the Epic Store is competing in. Which got short shrift from Valve: "Valve has admitted to Apple's counsel that the information requested exists in the normal course of business, but Valve simply refuses to produce it in any of the formats Apple suggested." Now if you thought that demand from Apple was ballsy (and bear in mind Steam is a non-party to this main dispute), hold on to your hat, because Request 32 piles on, demanding documents showing: "(a) the name of each App on Steam; (b) the date range when the App was available on Steam; and (c) the price of the App and any in-app product available on Steam." That is, Apple wants Valve to provide the names, prices, configurations and dates of every product on Steam, as well as detailed accounts of exactly how much money Steam makes and how it is all divvied up. Apple argues that this information is necessary for its case against Epic, is not available elsewhere, and "does not raise risk of any competitive harm." 
(Embedded tweet, February 17, 2021: "Apple still claim its rules 'apply equally to every developer'. Paraphrasing Orwell, some are more equal than others! 0% - Google and Facebook ads; 0% - Netflix; 0% - Amazon video direct payment; 15% - Amazon video paid thru Apple; 30% - Fortnite" https://t.co/Rx1mFV1cY2) Needless to say, Valve does not agree. Its counter-argument to the above says that Valve has co-operated to what it believes to be a reasonable extent—"Valve already produced documents regarding its revenue share, competition with Epic, Steam distribution contracts, and other documents"—before going on to outline the nature of Apple's requests: "that Valve (i) recreate six years' worth of PC game and item sales for hundreds of third party video games, then (ii) produce a massive amount of confidential information about these games and Valve's revenues." In a masterpiece of understatement, Valve's legal counsel writes: "Apple wrongly claims those requests are narrow. They are not." Apple apparently demanded data on 30,000+ games initially, before narrowing its focus to around 600. Request 32 gets incredibly granular, Valve explains: Apple is demanding information about every version of a given product, all digital content and items, sale dates and every price change from 2015 to the present day, the gross revenues for each version, broken down individually, and all of Valve's revenues from it. Valve says it does not "in the ordinary course of business keep the information Apple seeks for a simple reason: Valve doesn't need it." Valve's argument goes on to explain to the court that it is not a competitor in the mobile space (this is, after all, a dispute that began with Fortnite on iOS), and makes the point that "Valve is not Epic, and Fortnite is not available on Steam." It further says that Apple is using Valve as a shortcut to a huge amount of third party data that rightfully belongs to those third parties. The conclusion of Valve's argument calls for the court to throw Apple's subpoena out. "Somehow, in a dispute over mobile apps, a maker of PC games that does not compete in the mobile market or sell 'apps' is being portrayed as a key figure. It's not. The extensive and highly confidential information Apple demands about a subset of the PC games available on Steam does not show the size or parameters of the relevant market and would be massively burdensome to pull together. Apple's demands for further production should be rejected." PC Gamer has contacted both Apple and Valve for comment, and will update with any response.
1
How to Run a PowerShell Script?
PowerShell is an interactive shell developed by Microsoft mainly for system administrators, and intended to replace CMD.EXE, a great tool in its time. CMD is too limited for the current needs of system administrators, and that limitation was the main reason behind the birth of PowerShell. When you start exploring PowerShell in depth, it quickly becomes evident that it is far more than an interactive shell. System administrators can manage almost anything with PowerShell: server operating systems, Exchange Server, Active Directory, Office 365, Azure, and much more. Windows PowerShell and PowerShell Core: Soon after the success of Windows PowerShell, Microsoft realized it needed PowerShell to be cross-platform, able to run on macOS, Linux, and Windows. So it stopped developing Windows PowerShell, which runs only on Windows, and started building a cross-platform version called PowerShell Core. There are therefore two flavors of PowerShell: Windows PowerShell, whose latest version is 5.1, and PowerShell Core, whose latest version is 7.2. How to check the PowerShell version: There are many ways to check the PowerShell version installed on your operating system, but the easiest one works in both Windows PowerShell and PowerShell Core: run $PSVersionTable (a built-in automatic variable) in any PowerShell console to see which version and edition of PowerShell is running on your computer. How to open PowerShell: Getting started with PowerShell is a straightforward matter of running PowerShell.EXE instead of CMD.EXE. To launch the PowerShell interactive shell, click the search bar, type PowerShell, and click the PowerShell app, as shown in the image below. You can also launch PowerShell from the Command Prompt by typing PowerShell and pressing Enter. How to create a PowerShell script: Before I show you how to run a PowerShell script, we need to create one first, and for that you should have a little knowledge of basic PowerShell commands (cmdlets). To create a PowerShell script, you can use any text editor, such as Notepad, or an IDE (integrated development environment) such as PowerShell ISE or Visual Studio Code. In my case, I will use Notepad. If you want to follow along, open your text editor or IDE and type Get-Service, a basic PowerShell command that lists the services on a computer. PowerShell script example code: After that, save the file with the .ps1 extension to create the PowerShell script. How to run a PowerShell script from the PowerShell console: Running a PowerShell script from the PowerShell console is very easy. Launch your PowerShell console, navigate to the location where you saved your .ps1 file, add ".\" before the file name, and hit Enter to execute the script, as shown in the image below. Even though this is the right way to execute a PowerShell script, you will most probably get the error "running scripts is disabled on this system," as shown in the image below. Running scripts is disabled on this system: PowerShell is a security-conscious scripting environment, developed with many kinds of security threats in mind. 
That's the reason you cannot run any PowerShell script until you understand PowerShell script execution policies. PowerShell script execution policies: Restricted: scripts will not run (the default policy). RemoteSigned: locally created scripts will run, but scripts created on other machines will not run unless a trusted publisher signs them. AllSigned: scripts will only run if signed by a trusted publisher (including locally created scripts). Unrestricted: all scripts will execute, regardless of who developed them and whether or not they are signed. In our case, the RemoteSigned policy will work well, so let's set the execution policy to RemoteSigned by entering Set-ExecutionPolicy RemoteSigned in your PowerShell console (run the console as administrator, or add -Scope CurrentUser to change the policy only for your own user account). After changing the PowerShell script execution policy, the script will run without an issue, and you will not get the error "running scripts is disabled on this system," as shown in the screenshot below. To learn more about PowerShell script execution policies, execute Get-Help about_Execution_Policies in your PowerShell console. The about_Execution_Policies help topic provides complete, in-depth detail about PowerShell script execution policies. Run a PowerShell script from the command line: To run a PowerShell script from the command line, call PowerShell.exe with an execution-policy parameter and provide the script file path, as shown in the image below. Open PowerShell from cmd (Command Prompt): You can also open PowerShell from cmd (Command Prompt) by typing PowerShell.exe in the command prompt, as shown in the image below. Run a PowerShell script as administrator: If you want to run a PowerShell script as administrator, launch the PowerShell console as administrator before running the script, and the script will run with administrative privileges. You can check the image below for reference. If you want to learn more, don't forget to bookmark and share MCSAGURU.
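As a closing aside, the command-line form described above (PowerShell.exe with an execution-policy parameter and the path to the .ps1 file) can also be driven from other programs or automation jobs. Here is a minimal sketch in Python; the script path is hypothetical, and the sketch assumes you are on Windows with Windows PowerShell installed (for PowerShell Core the executable is pwsh.exe instead):

import subprocess

# Hypothetical script path, used for illustration only.
script_path = r"C:\Scripts\Get-RunningServices.ps1"

# Same shape as the command line described above: an execution-policy
# parameter plus -File pointing at the script.
result = subprocess.run(
    ["powershell.exe", "-NoProfile", "-ExecutionPolicy", "Bypass", "-File", script_path],
    capture_output=True,
    text=True,
)

print(result.returncode)  # 0 normally means the script finished without a terminating error
print(result.stdout)      # whatever the script wrote to standard output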
9
Event Immutability and Dealing with Change
One of the first things that people hear when they start working with Event Sourcing is regarding the immutability of events, and how this is a useful attribute for data which drives business logic and decisions in software. Very often, however, little more is presented as an argument for immutability other than 'auditing', and while auditing is a valid reason to adopt immutability for your write path data, there are also other reasons to embrace it, some probably more notable than auditing. TL;DR: While immutability may seem on the surface to be problematic as a write-path database trait, it is an enabling constraint. It guides us towards an approach in dealing with changes and discoveries that has a lot of positive characteristics. In the first part of this article, I will try to present why we should consider immutability carefully, even if you don't have auditing requirements. Then in the second part, we'll go over what you can do when you discover you need to change something. In this article, I will assume that you're familiar with the basic concepts of event sourcing. For an introduction, see the What is event sourcing? blog post or one of the many talks Greg Young has on YouTube on the subject. People more commonly encounter the term immutability when they start working with functional languages. In that context, immutability is an approach to data where the state of an object cannot mutate (or change) after its creation. The term has a semantically similar meaning in event sourcing: it describes the practice and policy of not updating, or in any way altering, the event after it has been persisted in our database. If you don't have any experience with functional languages, and your background is with normal form databases or document stores, the above concept may seem entirely alien. It is, however, natural to event sourcing, where we persist entities in the form of domain-specific and domain-recognised events. Whenever we need to change any property of that entity, we simply append an event in the stream that holds the data for that entity. Two things that I have very regularly heard from people who first try to adopt an immutable store are: I'll try to answer both questions in this article. While on the surface it may seem that immutability involves more work, in practice the amount of effort involved is usually similar. Immutability represents a trade-off: you surrender a potentially familiar tech approach with mature tooling to gain: Below, I will be using an event store as an example of an immutable store to save write-path data. While there are many types of immutable databases (event stores, streaming databases, and time-series databases, to name a few), an event store is a great choice to store business-logic data. The benefits of using immutable stores would apply in all cases. Sometimes in a lesser extent, but the benefits mentioned above derive directly from the fact that we are not replacing older data, but instead we append the changes needed. Without further delay, let's visit each in a bit more detail. From a particular perspective, immutability is inherent to event sourcing itself. This is because, to make any change in an event-sourced system (rather than mutate a row in a table) we would emit a domain-specific event that clearly and specifically describes the change that has occurred, in as much detail as we need. The persistence of context is the root cause for some of the other benefits in this list. It is, however, also an essential benefit in and of itself. 
By preserving context, we're now able to have significant insight into a potentially complex system and be able to get answers to questions historically. For example, we can find out how many goods had prices changed within a certain period. We can find out how many people had a negative balance for more than two consecutive days. We can report on the number of new daily sign-ups or the accounts that became active per day. The ability to answer these questions is built-in when you're storing information using an immutable store, and importantly we have this historically: you will very often be able to answer questions similar to the above examples, even for pre-existing data in your system, even if you didn't prepare beforehand. With mutable systems, when any of the above requests came in, we would have to add new features to our store (new columns or tables), further work to write to those rows, and we quite possibly wouldn't be able to extract the information. I mention this point because, while it is the root cause of other essential benefits, context on its own is of extreme importance to a competitive, lean business, who needs to outmanoeuvre the competition. Being able to ask questions, and get answers, from existing data immediately can be an immense competitive advantage. Assume for a minute that you just got woken up at 3:00 AM because something or other is failing in production. What kind of information would you prefer to have available? Obviously, having a complete history of changes for your entities and all dependencies makes debugging and support much more manageable, especially when you're under pressure. Or any other time you need to debug for that matter. To achieve this we need to use metadata and follow a few basic principles, but using event sourcing and immutable events is the enabling factor, and it works really well with observability principles and tools. Moreover, since data never change, tracing what caused a piece of data to be like it is, is often straightforward. Having causal analysis of changes be easy to do allows you to focus on fixing the problem, instead of trying to find it. No matter how strict you are with decoupling, you can't have a system with absolutely zero coupling. Nor should you. Systems of any complexity naturally have upstream pieces of logic that directly or indirectly cause changes to downstream parts. But what happens when the upstream component made a mistake, and that cascades to consumers? As is often in these cases, fixing the source of the problem and its local data, is by far the more straightforward piece of work. How should dependent components react to the correction? Should they deal with the change? Should they roll back and re-run the new request? What about if any intermediate actions happened in the meantime? Should this latest information simply overwrite the existing one? These are tough questions to answer when operations aren't commutative. And the longer it takes to discover the problem, the more difficult it is to fix. You may not be aware of them, but when dealing with non-trivial mistakes and bugs (think issuing a wrong invoice, not entered an incorrect value in the UI), chances are that established processes already exist to fix them. This shouldn't come as a surprise; people have been making mistakes since long before automation came along, and they had to deal with them. You should definitely ask your domain experts and find these out, as these are tried and tested processes that form a core part of your domain. 
But why is this mentioned as a benefit of immutable stores? Don't the same processes exist when working with mutable stores? In fact, with the tooling available in most modern databases, changing some values could be as easy as running a very short SQL update statement. There are two primary reasons why correcting such errors is a much better experience with immutable stores: To expand a bit on the second point, quite frankly, I find doing any form of destructive transformation of live data in a public-facing environment terrifying. Unfortunately, I also have a couple of awful horror stories that I prefer to think of as "experience". Honestly, with an immutable store, fear of deployments is vastly reduced (well at least from the data perspective). I am sure for many people who went from SQL server migrations to using immutable stores, this is the thing they love the most. If anything goes wrong on a stream, the previous data still exist. Depending on the change, getting access to that data again may not even need any change in the store itself. Moreover, it's easy to keep around data that was created between deploying and rolling back, if that is useful. TIP: I very strongly recommend that you ask your domain experts about corrective processes during collaborative workshops (like event storming and event modelling) or interviews, as these not only lead to new insights and valuable business processes but also allow you to have a pre-approved way of dealing with some classes of errors. An audit log does one thing, and that is to answer the question: why did we make the decisions we made? Either to respond to requests from your users, or from legal obligations (which often exist in financial domains). Note: In some domains it is important to keep in the audit log information that you would normally keep as part of your event metadata (time of the action, user taking the action and so forth). In the past my preference has been to "promote" these to event properties, as it guarantees that these will not be modified or ignored as part of the normal processing that happens to event metadata withing a complex system. It is easy to add a table to store the actions and decisions when business logic makes these. If auditing is the only reason you were considering to adopt event sourcing, you'll probably be OK with that additional table. However, using immutable events in the context of event sourcing to store information, gives you one guarantee which is essential in some domains and to legal requirements: you use the same data you use for auditing to make all subsequent decisions, and this guarantees that the auditing data are accurate and correct. With an immutable store, no one can tamper with the data to alter it. If a piece of data is missing, it cannot affect the decision we make. With a separate audit log, you could emit more or less information due to bugs, which could put you into trouble. A lot of legal bodies are aware of domain-specific methods of storing data that behave like this, and this will help you immensely if the need arises for auditing. The first principle from Jakob Nielsen's principles for interaction design is visibility of system status. Some parts of this rule are: As an example, let's consider this: multiple users added credits to their accounts using coupons. However, due to a bug, we added twice as many credits in their accounts as we ought to. With mutable stores, we would update the value. 
With immutable stores, the old value is still there available, with a record of why and how we changed their credit amount. Which of these two provides a better experience? Immutable data help you in all three points: As I mentioned above, we will inevitably make discoveries that lead us to want to change the existing system and potentially existing data. Previously I have outlined why immutability is a desired trait, but I wrote little about the 'how' of dealing with changes. Below I'll go over the most common reasons to change data in a system, and how to deal with them. TIP: Buy and read Greg Young's book Versioning in an Event Sourced System. A lot of what I suggest below is covered in this book in greater detail. Below, I mostly focus on dealing with changes locally. However, there is significant complexity in dealing with downstream consumers that depend on that data. You have to put careful thought about how you deal with dependents to your data. NOTE: To help illustrate the point, I'll be using the bank account example, and also try to go through the possible changes we may need to apply in a sequencial manner. If you worked in banks, you may be aware that the stereotypical example of the event sourced bank account is, in fact, entirely wrong. This is for multiple reasons, which aren't important here. I am however going to be using this example because it is familiar to a lot of people, and will allow people to focus on the aspect of making changes. All systems need to change some values due to external stimuli (a user changing some data, we receive new information and need to update an entry, amongst other reasons). As a typical example of immutable stores, in an event store you would update a value by emitting a domain-specific event. Let's take the stereotypical example of a bank account. We could model the opening of a bank account using an AccountOpened event. We add this event to a new stream that will enforce concurrency on operations for that account, and we 'replay' this event, by projecting the data represented by the event to an in-memory structure to use in domain logic. We can see this below: NOTE: I am showing the DepositMade event and event applier in the above diagram, but only because I assume that a minimum viable product won't be viable without at least the ability to make a deposit. In future examples, changes will be introduced only when needed to demonstrate the point. When the account holder needs to make a deposit and we need to update the account balance, we could append a DepositMade event to the stream that includes information about how much money was deposited, and what was the balance of the account at the end of the deposit. When we're loading the account data from that stream, we would be projecting both events, again in memory to reflect our most recent view. This would then look as follows: Our bank account software works fine for a while, but we (very) soon discover that a bank account needs to be aware of the currency of the money it holds. So we need to add it. As with the bank account, as a system's behaviour expands, it is only natural that we would want to capture more data than before. Alternatively, we may find that we no longer desire to keep some data or property in our system. 
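To make the replay mechanics above concrete, here is a minimal sketch in Python (the original article is language-agnostic and illustrates this with diagrams; the class and field names below are assumptions, not the article's exact schema). It shows an AccountOpened and a DepositMade event, and a projection that folds a stream of events into an in-memory account state without ever modifying the events themselves.

from dataclasses import dataclass

# Illustrative event types; field names are assumptions.
@dataclass(frozen=True)
class AccountOpened:
    account_id: str
    holder_name: str

@dataclass(frozen=True)
class DepositMade:
    account_id: str
    amount: int   # minor units (e.g. pence) to avoid floating-point money
    balance: int  # balance after the deposit, as described in the article

@dataclass
class AccountState:
    account_id: str = ""
    holder_name: str = ""
    balance: int = 0

def apply(state: AccountState, event) -> AccountState:
    # 'Replaying' a stream is just a left fold of appliers over the events.
    if isinstance(event, AccountOpened):
        return AccountState(event.account_id, event.holder_name, 0)
    if isinstance(event, DepositMade):
        return AccountState(state.account_id, state.holder_name, event.balance)
    return state

stream = [AccountOpened("acc-1", "Ada"), DepositMade("acc-1", 5000, 5000)]
state = AccountState()
for e in stream:
    state = apply(state, e)
# state.balance == 5000; the events themselves are never modified.

With that baseline in place, consider the two kinds of change just mentioned: starting to capture new data and dropping data we no longer want.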
Both changes are much easier to do using an immutable event store compared to mutating data: Neither of the above requires us to change any existing data in the store at all, with all changes done in application code where we can test it conveniently and thoroughly. Perhaps more valuable is that, assuming reasonable defaults exist, we can apply these changes historically, and quickly and safely revert them if needed. The only thing required is to use weak schema formats (like JSON). To do this in our bank account software, we'd add a property to the events that need it, providing the default in some reasonable way for the language we may be using. We can see this below: And once the account owner makes another deposit, it may look like the below: As you can see, all of these changes have been made purely outside of our database, in a very safe, backwards compatible, and (perhaps more importantly) testable manner. Our system seems to work really well, and more people start using it, with more accounts being opened. But then we find out that some people have problems signing up. What happened in our case is that our system doesn't cater particularly well to people whose name doesn't adhere to the name\surname structure (like mononymous people for example). Our domain experts decide that the best, and more inclusive way forward, is to instead ask for a full name. This means we will no longer have name and surname in our events (or accounts), but instead a single full name property. The change I just described is one example of a change for which deserialisation (even using weak schemas) isn't enough. From experience, this type of change typically occurs when our understanding of the domain evolves, or when new requirements introduce significant changes in our domain. In this case, the shape of the event changes, but remains the same event semantically. You can do this change by introducing a new version of the event and rely on upcasting or parsing when projecting old versions of the event; as you load an old version of the event, you upconvert (or upcast) the old event to the new version through a small piece of code, before passing it to the projection logic. Upcasting is much easier than it sounds, frequently involving simple mapping. So after making this change in our account software, it would look like this: All the above happens exclusively in application code. Again, for emphasis: no DB migrations, no modifications to tables, in fact no destructive changes to any data at all, hooray! Finally, keep in mind that in some cases, a parser may be a better option than upcasting. The parser would receive the raw serialised format of any version of an event and directly parse it to an in-memory structure before replaying the event. In some cases, you will find you need to make some semantic changes to the events. These come in three types: Note: while these may seem to be event-store-specific, they still manifest in other types of databases in the form of moving or renaming columns, or restructuring tables. You have a couple of options here: Keeping the old events and introducing new ones means that you still keep the projection logic that projects those (now obsolete) events alongside the logic for the new ones. Business logic should stop raising the old types of events, and instead only raise the new ones. It is important to note that retaining the old events carries the benefits we already visited in earlier points, including safety, and is therefore recommended. 
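As a rough illustration of the upcasting step described above (again a sketch, with assumed names and an assumed default currency), an old AccountOpened payload carrying name and surname can be mapped to the current shape before it reaches the projection logic:

# Serialised v1 event as it might sit in the store (weak schema, JSON-like dict).
account_opened_v1 = {"name": "Freddie", "surname": "Mercury"}

# The current in-memory shape expects a single full_name and a currency with a default.
def upcast_account_opened_v1(payload: dict) -> dict:
    # Simple mapping: combine the old fields, default anything that did not exist yet.
    return {
        "full_name": f"{payload['name']} {payload['surname']}".strip(),
        "currency": payload.get("currency", "GBP"),  # assumed default
    }

current = upcast_account_opened_v1(account_opened_v1)
# {'full_name': 'Freddie Mercury', 'currency': 'GBP'}

The old v1 events stay in the store untouched; only the in-memory view changes.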
However, keeping those events means that you also need to keep the code that deals with them; if such changes happen enough, you may end up with noise in your code, both for projection logic and the obsolete events themselves. When this happens, a clean-up is in order. That can happen during a copy-replace. A copy-replace is a process where we copy events from one event store to a new event store, at the same time making all the necessary changes. For an example, we'll leave our bank account for a bit, and look at a loan application. We'll start with a stream that has events indicating that the loan has been requested, underwritten, with its scheduled repayments made, and with the interest and principle paid in full. After some time working on this product, we have now come to a better understanding of our domain. More specifically, our understanding of LoanRequested has evolved such that we now realise this as two separate and distinct domain events: LoanRequested and a separate ScheduleCalculated that represents the proposed repayment schedule, which used to be part of the old LoanRequested. We can use event migration as seen below to read this event from one stream and emit two different events to an output stream with the same name in a new instance of an event store (similar to a blue-green deployment process): Since we are already doing an event migration process, it makes sense to also update some of the events we have in our stream to the latest version that we understand. This will improve the performance of rehydration, and also allow us to drop upcasting logic from our solution. Below, event migration reads one event at a time, and emits an event for each read one on the output stream: Finally, we have realised that it makes little sense for our product to record separate InterestRepaid and PrincipleRepaid events as these are almost always repaid in full together with the last repayment. To do this, we have logic in the event migration project that recognises those events and reads ahead to find the next one, before projecting both onto a single new LoanRepaid event: In the above example, we have an event for LoanRequested still on V1, but the current version of our system uses V3. We're currently dealing with this using the method described above, but now we have decided that we want to split some semantics out into their own event. Specifically, there are processes that require the schedule of payments to be made (which is something we calculate based on the loan request), but aren't interested in anything else from the loan request, so we decided to have it separate from the loan request itself. Also, we decided to upconvert the LoanUnderwritten event and store the upconverted event, because we want to be free of the logic to upconvert events we no longer emit, and improve performance. Finally, we decided to merge the PrincipleRepaid and InterestRepaid events into a single LoanRepaid event. The process has some nuances if we have a lot of events, and we want to do a zero-downtime deployment, but the fundamental logic is the same. Note: Splitting, merging or moving properties of events using a copy-replace is not destructive (we're not supposed to remove data during this time), but it is more dangerous than the ones we've seen previously. If you have a bug in the code that does the copy-replace, you may end up with corrupted or missing data. 
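At its core, the copy-replace just described is a read, transform, and append loop from the old store into a new one. The following is a deliberately simplified sketch (the store access functions, event names and version suffixes are assumptions; a real migration would stream events, preserve metadata, and handle ordering and idempotency) covering the three transformations from the loan example: splitting LoanRequested, storing upconverted versions, and merging the two repayment events.

def upcast_loan_requested(payload: dict) -> dict:
    # Placeholder mapping from the old to the current LoanRequested shape.
    return {k: v for k, v in payload.items() if k != "schedule"}

def extract_schedule(payload: dict) -> dict:
    # Placeholder: pull the proposed repayment schedule out of the old event.
    return {"schedule": payload.get("schedule", [])}

def migrate_stream(read_events, append_to_new_store):
    # read_events: iterable of (event_type, payload) from the old store, in order.
    # append_to_new_store: appends one (event_type, payload) to the new store instance.
    events = list(read_events)
    i = 0
    while i < len(events):
        etype, payload = events[i]
        if etype == "LoanRequested.v1":
            # Split: emit the current LoanRequested plus a separate ScheduleCalculated.
            append_to_new_store(("LoanRequested.v3", upcast_loan_requested(payload)))
            append_to_new_store(("ScheduleCalculated", extract_schedule(payload)))
        elif etype == "LoanUnderwritten.v1":
            # Store the upconverted shape so the old upcasting logic can be retired.
            append_to_new_store(("LoanUnderwritten.v2", dict(payload)))
        elif (etype in ("PrincipleRepaid", "InterestRepaid")
              and i + 1 < len(events)
              and {etype, events[i + 1][0]} == {"PrincipleRepaid", "InterestRepaid"}):
            # Merge: read ahead and fold the pair into a single LoanRepaid event.
            append_to_new_store(("LoanRepaid", {**payload, **events[i + 1][1]}))
            i += 1  # the neighbouring event has been folded in, so skip it
        else:
            append_to_new_store((etype, payload))  # copy everything else verbatim
        i += 1

Because everything is appended to a fresh store instance, the original data is never touched during the migration.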
However, the process requires you to move data from one server to the other in a green-blue approach, so you have a live backup you can failover if things go wrong. Note: Even though this is non-destructive from data aspect, in some industries, in order to still adhere to auditing regulations you may need to keep around the old events, or even not be able to use copy-replace at all. Please check with someone who knows your industry laws. Errors in data will manifest in raising the wrong events, or the events raised contained the erroneous data. Commonly you'd choose one of three ways to fix this type of issue: It is important to note that in all the cases mentioned below, the effort involved in actually changing the data is either similar or more straightforward when we take an immutable approach. With immutability, we have to emit one or two new events, but using mutability, we could need anything from updating a cell in some rows, to deleting and changing multiple rows in multiple tables. Any processes needed in downstream consumers will still need to be kicked off and allow any external side-effects to happen, and signalling this needs to occur with both mutable and immutable data stores. One option is to emit a compensating event, sometimes referred to as a partial reversal event. A compensating event is what it sounds like: you emit an additional event which has the opposite effect as the wrong one with just the right values to balance out any errors. To demonstrate this, let's go back to the bank account example. Let's assume we have introduced some bug in our domain logic that makes a mistake during a deposit that ignores decimals in a way that it considers £10.00 to be £1000: Using a compensation event, we would emit the logical opposite of a deposit, in this case a withdrawal seen on the right, to compensate and bring our state to the correct one. Downstream consumers could observe the withdrawal (made more visible through metadata), and correct their local state. As long as your domain allows for this to happen (an opposite action exists to fix the wrong one, and it's permitted legally) a compensating event can fix your error relatively quickly. Always make sure you add metadata to the event to indicate it is a compensating event for your audit trail if you need it. Bear in mind that compensating events may be a quick and easy way to deal with errors, but your history no longer represents reality, unless you inspect metadata, and it could be a problem with downstream consumers. This is quite important in some domains. In fact, I am willing to bet good money that this is not something banks would allow you to do. However, this method still has it's uses in a lot of domains, or generic services that don't have strong audit requirements. Sometimes a bug may have caused us to raise an event when we shouldn't, or the event had the wrong data in it. In both of these cases, we need to declare to downstream consumers that we made a mistake and let them deal with it accordingly. To do this, we can: Let's again go back to our decimal problem from before:: With this method, fixing our deposit bug described earlier, could be visualised as below: Emitting a full reversal event, aside from having the benefit of being informative, also explicitly notifies all downstream consumers about the mistake, and allows them to act accordingly to correct their state. 
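To keep the two correction styles apart, here is a small sketch of what the corrective events might carry (event names, amounts and metadata fields are assumptions): a compensating WithdrawalMade that balances out only the erroneous part of the deposit, versus a full reversal that declares the original event undone and is followed by the correct deposit.

# The erroneous event: £10.00 was recorded as £1000 (amounts in pence).
bad_deposit = {"type": "DepositMade", "id": "evt-42", "amount": 100_000}

# Option 1: a compensating event with the opposite effect, sized to balance the error.
compensation = {
    "type": "WithdrawalMade",
    "amount": 100_000 - 1_000,  # remove the £990 that should never have been added
    "metadata": {"compensates": "evt-42", "reason": "decimal-handling bug"},
}

# Option 2: a full reversal that undoes the wrong event, followed by the correct one.
reversal = {"type": "DepositReversed", "metadata": {"reverses": "evt-42"}}
corrected = {"type": "DepositMade", "amount": 1_000, "metadata": {"redoes": "evt-42"}}

In both cases the metadata is what lets auditors and downstream consumers tell a correction apart from ordinary business activity.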
In a pull-based system, where downstream systems process events using a pull-based model, it is even possible to automate some of this corrective process: for example, on receiving the reversal event, a query projection could drop the existing data it holds and rebuild its state from the beginning, skipping the 'undone' one. While important errors in data are usually caught early, sometimes they remain unnoticed for a while, enough for subsequent actions to be taken. This is not a problem for commutative operations, but even for non-commutative operations, my experience has shown that a revert-redo process will most often still work. If not, another of the options presented will work. In some cases, customer-facing errors are quite complex to fix. For example, if we decided wrongly that a loan has been fully repaid, we cannot just remove the flag and wipe our hands clean. This does not mean we just lost the business money, of course, but it does mean that the process to correct these mistakes is a more complex one than updating a value. I have quite a lot of examples by now (I made my fair share of mistakes) of error recovery workflows, because these have to exist for legal reasons, or because people working on the domain developed and established strategies to fix complex mistakes. Whenever you discover errors in core workflows of your business and domain, I'm willing to bet that your domain experts know of a procedure to follow to fix it. Don't emit an additional event with the correct values without an undo event, relying on idempotency to fix your projections. Not having to raise an undo event may seem less of an effort, but your projections (read or write path) may not stay idempotent while the events exist. You may face some nasty surprises in the future when you take this approach. Moreover, this way, the events in the stream don't represent reality. Some event stores allow you to delete events selectively (delete the third event leaving the first two in place). Deleting events, even wrong ones, is also something I would advise you to avoid. Deleting a wrong event and adding a correct one is the closest you can get to mutating data in an immutable store and will cause you to lose most of the benefits of using one. Moreover, if you delete one event while newer ones exist in front of it, you may end up with insidious concurrency issues that would be challenging to fix now that you deleted the original event. Changing the transactional boundaries is the one change that is commonly more difficult to address with an event store. While the transactional boundary may not have a representation in software, it does have one in the store: the event stream. Note: Only some mutable stores allow for fine-grained control of transactional boundaries, and this operation is easier only in those cases. Typically these would be normal form databases, where such control is built-in. Reshaping documents in document stores, on the other hand, to accommodate new transactional requirements carries at least as much effort as reshaping streams. In normal form databases, the only thing needed is to change the query to act on or return a different view of the data stored. In an event store, when we need to change our transactional boundaries, or significantly reshape the domain we have, the only option is moving events to new streams or already existing ones. 
We can do this using the copy-replace method described above, but the logic that copies events over will probably need to be more complicated than what we explained before, to deal with multiple streams. TIP: If you find you also need to split, merge, or move a property from one event to another in addition to reshaping transactional boundaries, I strongly recommend that you: split events before reshaping streams; merge events after reshaping streams; move a property to another event only when the events are in the same stream; and do the above during a separate deployment from reshaping the streams, allowing time to make sure it works before proceeding with the reshaping. TIP: From experience, this type of problem happens more often when the domain is new for the company and happens quite rarely when the domain is mature with considerable in-house expertise. I would suggest you either avoid event sourcing if the domain is new with significant uncertainty and you need to experiment, or simply be prepared to make these changes as you discover them so that your model reflects your understanding of the domain. I hope you now see that using immutability for storing data that supports business decisions has advantages compared to mutating data. And I also hope I answered some of your questions about how to deal with domain discoveries, errors, and new requirements that require changes to your data. If you have any questions or comments, feel free to reach out to me at @skleanthous on Twitter.
1
Watch Virgin Orbit launch a rocket from a 747 [video]
2
Hans Zimmer created an extended version of Netflix’s ‘ta-dum’ sound for theaters
Netflix’s “dun dun” sound that plays before an original movie is pretty familiar, but in order to spice it up a little for films that receive theatrical releases, the streamer teamed up with composer Hans Zimmer. The sound, which can be heard in the video below, has little in common with the short “ta-dum” sound that I’ve become used to hearing. It’s, well, incredibly Hans Zimmer; orchestral, intense, loud. The “ta-dum” as it existed on Netflix was too short for theaters, and the company knew it needed something longer to play in theaters. Netflix’s brand design lead Tanya Kumar told Dallas Taylor, creator and host of the Twenty Thousand Hertz podcast, that Netflix knew it wanted to work with someone who had deep ties to cinema but also worked with Netflix in the past. Enter Zimmer. Zimmer worked with Netflix on The Crown, and the score has a “simplicity and elegancy to it that we thought was perfect for bringing into our brand as well,” Kumar said. The challenge was finding a way to keep Netflix’s “ta-dum” audio but make it bigger and far more cinematic, Taylor explains. The goal was to make Zimmer’s version feel better, more immersive — something people might expect to hear in a theater. Think of the iconic THX siren or 20th Century’s fanfare. All of this had to be done in a way that felt explicitly Netflix, and, in some ways, opposite of the team’s intention with the original “ta-dum,” which had to be short. “First off, and arguably most important, it had to be really short,” Todd Yellin, vice president of product at Netflix, said. “In our age of click and play, you get to Netflix, you want to be able to click, and there’s no patience, you just want to get to what you’re watching.” Considering Netflix’s “ta-dum” launched just five years ago in 2015, it’s kind of wild to see just how much it’s changed as Netflix adapts to the industry it’s in. Netflix movies a few years ago didn’t even go to theaters, really, but now the studio spends time each year ensuring its Oscar hopefuls get some time to play on the big screen.
994
I want a computer that I own
I want a Computer that I Own 2-26-21 I have in my mind an idea that though simple in concept may be impossible to achieve today. I want a computer that can be completely autonomous when I want it to be, but which can also be used to communicate securely with anyone on the planet without being observed by a third party. I don't want to be spied on by Microsoft or Google. I don't want the NSA intercepting my conversations or even their metadata. I want complete autonomy and privacy without having to resort to workarounds that have been invented to give me back some of the control I should have had in the first place. In other words, I want a computer that I own completely. I want a computer that does what I want it to do, not one that has a hidden agenda programmed into it at the factory. And, I want to have these capabilities regardless of what anyone has done to the Internet to prevent me from having them. I don't want to be dependent on the whims of a government or the good will of a giant corporation. Perhaps I am looking for something like the x286 DOS computer I had in the early 1990's, but 10,000 times as fast with a built-in solution for total online privacy and the ability to run modern software while blocking spyware. Instead, I have a computer that is designed largely to maximize the profits of the computer industry. Except for a handful of very over-priced models that I can't afford to buy, our computers are increasingly designed to be little more than advertising platforms and vehicles for maximizing the cloud revenues of their true owners: online data gatherers, advertisers, and cloud companies. Our computers have numerous hardware and software back doors that are designed to allow governments and corporations to spy on and track us around the Internet. I must rely on encryption algorithms that are designed with subtle flaws that can take years, if not decades, to come to light. Even open source encryption algorithms that some claim are above reproach are repeatedly being shown to have major flaws, and the fixes to those flaws have their own major flaws. And this often appears to be intentional, because governments cannot stand for a single instant for anyone anywhere to hear, say, or see anything they don't know about. Governments seem to be universally terrified of even the slightest possibility of anyone in the world having a private conversation. So, they do everything they can to goad software companies and computer manufacturers into creating back doors and flaws that they can exploit to take away our privacy and make us afraid to speak freely. If that doesn't work, they pass laws to destroy online free speech while waving their flags and proclaiming how lucky we are to be living in their countries. Will this ever end? Will I ever have a computer that I own? --Tie
1
Tesla releases impressive video of production at Gigafactory Shanghai
Tesla has released an impressive new time-lapse video of its production at Gigafactory Shanghai, giving us a glimpse of what Elon Musk has been referring to as Tesla's "Alien dreadnought." A few years ago, Elon Musk decided to have Tesla focus on manufacturing first. He wants Tesla to have exciting products, but he also wants the company to view the factory as a product. The machine that builds the machine, he calls it. The CEO said that his goal is for the factory to look more "alien" than a factory. A machine that outputs new electric vehicles with high automation and at a speed unprecedented in the auto industry. He first introduced this idea with the production of Model 3 vehicles at Fremont factory. Musk emphasized that the first version of the Model 3 production line will only be a "version 0.5" of the "alien dreadnought," but the line will get updated with more automation and he envisions a "version 3" in a few more years: By version 3, it won't look like anything else. You can't have people in the production line itself, otherwise you drop to people speed. So there will be no people in production process itself. People will maintain the machines, upgrade them, and deal with anomalies. Gigafactory Shanghai has the latest Model 3 production line deployed by Tesla and the automaker has made some progress toward its "alien dreadnought" with it. This weekend, Tesla China released a new video showing the production at Gigafactory Shanghai via its official Weibo account: While it's not exactly the completely automated line that Musk described, it is getting much closer to it. There are sections of the production where you can see eight robots working simultaneously on a single car. As for the output, the time-lapse effect is obviously making it look more impressive than it actually is, but the actual production capacity at the factory has been ramping up at a staggering pace. At the end of last quarter, Tesla had an annual production capacity of 200,000 vehicles at Gigafactory Shanghai — an impressive ramp-up in just about seven months after starting production. That's 4,000 vehicles per week and the pace may have increased throughout the third quarter. What is interesting with Tesla right now is the pace at which they are deploying new production lines. New lines are coming up in Fremont still, but it is also deploying new lines in Shanghai and soon in Berlin and Austin. Every time Tesla deploys a new line, it learns from it and makes improvements that move it toward Elon's vision of an "alien dreadnought" outputting cars at a high speed. I think the Model Y line in Shanghai will take another step toward that, and then it will be interesting to see how Gigafactory Berlin and Gigafactory Texas look next year. What do you think? Let us know in the comment section below.
1
Information for Translators
Contribute Free Software Foundation Europe is an international organisation. Our goal is to reach as many people as possible and include them in our activities to promote, help and support the Free Software movement. To achieve this, we want to make our published texts and website available in several languages. A major part of the translation effort applies to the web pages, especially the frequently updated pages like the front page, the news page and the events page. But not only web pages have to be translated: Press releases, newsletters, brochures and leaflets and other texts also become more wide-spread with every additional language they are available in. This page gives you a rough overview of our translation processes. More detailed information how to contribute translations can be found on our translators' wiki pages. As FSFE is active in many different countries, texts are also often written in different languages. However, we generally use English as a starting point for sharing these texts with other parts of the organisation and for further translations. Therefore, we especially need help to Experience shows that the best translation results are achieved when people translate from a foreign language to their mother tongue. Having a good idea about what the FSFE is and about our concepts and values is a good idea before starting translating. To help you out with difficult words and phrases, we maintain a wordlist in over 15 languages that you can rely on when facing technical terminology or just standard ways of expressing oneself when speaking on behalf of FSFE. Translations are generally coordinated on the translators mailing list, and everyone wanting to contribute to translations efforts can subscribe to this list. It is also a place to seek help when in doubt or cooperating with other translators on larger projects. As we have material available in over 30 languages already, the chance is that you will always find somebody in the organization willing to help you. Texts to be translated or proofread are sent to this list along with a reference to the desired languages. Whoever starts with a translation sends an answer to the list to avoid duplicate work. Finished translations are also sent to the list to allow others to proofread the translation and propose possible improvements. Both original texts and translations are usually sent around as plain text file attachments to minimize copy and paste efforts. Ideally, the translation team for a specific language has several members that relieve and support each other, so translations to a language would not depend on a single person. Many of our translators are already active in other Free Software projects in addition to FSFE. Helping with translation efforts in the FSFE strengthens the Free Software community at large and gives people, no matter the language or nationality, a chance of learning more about Free Software. Translating and proofreading texts is a precious contribution to the work of the FSFE and an excellent chance to spontaneously take part in the activities of the FSFE without long-term obligations. Translating for web How to translate fsfe.org, freeyourandroid.org, pdfreaders.org, and drm.info. Wordlist Words and expressions commonly used in the FSFE. Mailing list Coordination and help on translation efforts. If you require more information regarding our translation activities and do not feel confident in public, you are welcome to contact the translation coordinators. 
Information about the various language-specific teams can be found on the translators' wiki pages. Amandine "Cryptie" Jambert, cryptie@fsfe.org, Deputy Coordinator Translations (French); Andrés Diz, pd@fsfe.org, Deputy Coordinator Translations (Spanish); Bonnie Mehring, bonnie@fsfe.org 🐾 🔑, Coordinator Translations, part-time employee; André Ockers, ao@fsfe.org 🐾 🔑, Coordinator Translations (Dutch), Deputy Coordinator Netherlands.
1
Community–academic partnerships helped Flint through its water crisis
COMMENT, 15 June 2021. Community–academic partnerships helped Flint through its water crisis. A city that faced a public-health emergency shows how collaborations with neighbourhood advocates can advance health equity. By E. Yvonne Lewis and Richard C. Sadler. E. Yvonne Lewis is founder and chief executive of the National Center for African American Health Consciousness, Flint; co-community principal investigator at the Flint Center for Health Equity Solutions; co-director of the Healthy Flint Research Coordinating Center Community Core; and director of outreach, Genesee Health Plan, Flint, Michigan, USA. Richard C. Sadler is associate professor of public health at Michigan State University, Flint, Michigan, USA. Nature 594, 326-329 (2021). doi: https://doi.org/10.1038/d41586-021-01586-8. Competing interests: the authors declare no competing interests. Subjects: Society, Water resources, Environmental sciences.
1
Shortcuts: How Neural Networks Love to Cheat
Will Artificial Intelligence soon replace radiologists? Recently, researchers trained a deep neural network to classify breast cancer, achieving a performance of 85%. When used in combination with three other neural network models, the resulting ensemble method reached an outstanding 99% classification accuracy, rivaling expert radiologists with years of training. The result described above is true, with one little twist: instead of using state-of-the-art artificial deep neural networks, researchers trained “natural” neural networks - more precisely, a flock of four pigeons - to diagnose breast cancer. Somehow surprisingly, however, pigeons were never regarded as the future of medical imaging and major companies have yet to invest billions in the creation of large-scale pigeon farms: Our expectations for pigeons somehow pale in comparison to our expectations for deep neural networks (DNNs). And in many ways, DNNs have indeed lived up to the hypes and the hopes: their success story across society, industry and science is undeniable, and new breakthroughs still happen in a matter of months, sometimes weeks. Slowly but steadily, however, seemingly disconnected failure cases have been accumulating: DNNs achieve superhuman performance in recognizing objects, but even small invisible changes or a different background context can completely derail predictions. DNNs can generate a plausible caption for an image, but—worryingly—they can do so without really looking at that image. DNNs can accurately recognize faces, but they show high error rates for faces from minority groups. DNNs can predict hiring decisions on the basis of résumés, but the algorithm’s decisions are biased towards selecting men. How can this discrepancy between super-human performance on one hand and astonishing failures on the other hand be reconciled? As we argue in the paper “Shortcut Learning in Deep Neural Networks” and elaborate in this piece, we believe that many failure cases are not independent phenomena, but are instead connected in the sense that DNNs follow unintended “shortcut” strategies. While superficially successful, these strategies typically fail under slightly different circumstances. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions. Shortcut opportunities come in many flavors and are ubiquitous across datasets and application domains. A few examples are visualized here: At a principal level, shortcut learning is not a novel phenomenon: variants are known under different terms such as “learning under covariate shift”, “anti-causal learning”, “dataset bias”, the “tank legend” and the “Clever Hans effect”. We here discuss how shortcut learning unifies many of deep learning’s problems and what we can do to better understand and mitigate shortcut learning. What is a shortcut? In machine learning, the solutions that a model can learn are constrained by data, model architecture, optimizer and objective function. However, these constraints often don’t just allow for one single solution: there are typically many different ways to solve a problem. Shortcuts are solutions that perform well on a typical test set but fail under different circumstances, revealing a mismatch with our intentions. To give an example, when trained on a simple dataset of stars and moons (top row), a standard neural network (three layers, fully connected) can easily categorise novel similar examples (mathematically termed i.i.d. test set). 
However, testing it on a slightly different dataset (o.o.d. test set, bottom row) reveals a shortcut strategy: The network has learned to associate object location with a category. During training, stars were always shown in the top right or bottom left of an image; moons in the top left or bottom right. This pattern is still present in samples from the i.i.d. test set (middle row) but not in o.o.d. test images (bottom row), exposing the shortcut. The most important insight here is that both location and shape are valid solutions under the training setup constraints, so there is no reason to expect the neural network to prefer one over the other. Humans, however, have a strong intuition to use object shape. As contrived as this example may seem, adversarial examples, biased machine learning models, lack of domain generalization, and failures on slightly changed inputs can all be understood as instances of the same underlying phenomenon: shortcut learning. For example, researchers developed a machine classifier able to successfully detect pneumonia from X-ray scans from a number of hospitals, but its performance was surprisingly low for scans from novel hospitals: the model had unexpectedly learned to identify particular hospital systems with near-perfect accuracy (e.g. by detecting a hospital-specific metal token on the scan). Together with each hospital’s pneumonia prevalence rate, it was able to achieve a reasonably good prediction during training - without learning much about pneumonia at all. Instead of learning to “understand” pneumonia, the classifier chose the easiest solution and only looked at token types. Shortcut learning beyond deep learning Often such failures serve as examples of why machine learning algorithms are untrustworthy. However, biological learners suffer from very similar failure modes as well. In an experiment in a lab at the University of Oxford, researchers observed that rats learned to navigate a complex maze apparently based on subtle colour differences - very surprising given that the rat retina has only rudimentary machinery to support at best somewhat crude colour vision. Intensive investigation into this curious finding revealed that the rats had tricked the researchers: They did not use their visual system at all in the experiment and instead simply discriminated the colours by the odour of the colour paint used on the walls of the maze. Once smell was controlled for, the remarkable colour discrimination ability disappeared. Animals often trick experimenters by solving an experimental paradigm (i.e., dataset) in an unintended way without using the underlying ability one is actually interested in. This highlights how incredibly difficult it can be for humans to imagine solving a tough challenge in any other way than the human way: Surely, at Marr’s implementational level there may be differences between rat and human colour discrimination. But at the algorithmic level there is often a tacit assumption that human-like performance implies human-like strategy (or algorithm). This “same strategy assumption” is paralleled by deep learning: even if DNN units are different from biological neurons, if DNNs successfully recognise objects, it seems natural to assume that they are using object shape like humans do. 
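To make the star-and-moon example above concrete, here is a minimal, self-contained sketch of the same effect on synthetic data. This is not code from the paper: the feature names, dataset sizes and the choice of a plain logistic-regression classifier are illustrative assumptions, but any standard classifier shows the same pattern. A noisy “shape” feature carries the intended signal, while a “location” feature acts as a shortcut that perfectly tracks the label in the training distribution and is uninformative out of distribution.

```python
# Minimal sketch of shortcut learning on synthetic data (illustrative only).
# "shape" is the intended but noisy feature; "location" is a shortcut that
# mirrors the label only in the training distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_intact=True):
    y = rng.integers(0, 2, n)                           # 0 = star, 1 = moon
    shape = y + rng.normal(0.0, 0.8, n)                 # intended feature, noisy
    if shortcut_intact:
        location = y.astype(float)                      # shortcut perfectly tracks the label
    else:
        location = rng.integers(0, 2, n).astype(float)  # o.o.d.: correlation broken
    return np.column_stack([shape, location]), y

X_train, y_train = make_data(5000, shortcut_intact=True)
X_iid, y_iid = make_data(1000, shortcut_intact=True)      # i.i.d. test set
X_ood, y_ood = make_data(1000, shortcut_intact=False)     # o.o.d. test set

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("i.i.d. accuracy:", clf.score(X_iid, y_iid))    # typically close to 1.0
print("o.o.d. accuracy:", clf.score(X_ood, y_ood))    # typically drops towards chance
print("learned weights (shape, location):", clf.coef_[0])
```

The learned weights make the shortcut visible: most of the decision rests on the location feature, so the model looks excellent under i.i.d. testing and degrades sharply once the spurious correlation is removed, which is exactly the pattern described above.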
As a consequence, we need to distinguish between performance on a dataset and acquiring an ability, and exercise great care before attributing high-level abilities like “object recognition” or “language understanding” to machines, since there is often a much simpler explanation: Never attribute to high-level abilities that which can be adequately explained by shortcut learning. Shortcut learning requires changes in the way we measure progress Historically, machine learning research has been strongly driven by benchmarks that make algorithms comparable by evaluating them on fixed combinations of tasks and datasets. This model has led the field to enormous progress within very short timespans. But it is not without drawbacks. One effect is that it creates strong incentives for researchers to focus more on the development of novel algorithms that improve upon existing benchmarks rather than understanding their algorithms or the benchmarks. This neglect of understanding is part of the reason why shortcut learning has become such a widespread problem within deep learning. Let’s look at a prominent example: The ImageNet dataset and challenge were created in 2009 as a new way of measuring progress in object recognition, the ability of algorithms to identify and classify objects. Due to its enormous size, ImageNet presented itself as an unsolved problem on a scale that nobody dared to tackle before. It was its variety and scale that paved the way for the current deep learning revolution. With their 2012 paper and challenge contribution, Krizhevsky et al. demonstrated that deep neural networks with learned weights were uniquely adapted to handle this complexity (in contrast to the prevalent methods of that time, which used handcrafted features for image analysis). In the following few years, ImageNet became a driving force for progress, and performance on the ImageNet benchmark became synonymous with progress in computer vision. Only in the last few years has this slowly started to change, as more and more failure cases of DNNs have emerged. One main reason behind all these failure cases is that, despite its scale and variety, ImageNet does not require true object recognition in the sense of correctly identifying and classifying the foreground object we use as a label. Instead, in many cases, objects can be equally well identified by their background, their texture or some other shortcut less obvious to humans. If it is easier to identify the background than the main object in the scene, the network will often learn to exploit this for classification. The consequences of this behaviour are striking failures in generalization. Have a look at the figure in the original article. On the left side there are a few directions in which humans would expect a model to generalize. A five is a five whether it is hand-drawn and black and white or a house number photographed in color. Similarly, slight distortions or changes in pose, texture or background don’t influence our prediction about the main object in the image. In contrast, a DNN can easily be fooled by all of them. Interestingly, this does not mean that DNNs can’t generalize at all: In fact, they generalize perfectly well, albeit in directions that hardly make sense to humans. The right side of that figure shows some examples that range from the somewhat comprehensible - scrambling the image to keep only its texture - to the completely incomprehensible. 
The key problem that leads to shortcut learning and the subsequent generalization failures is the discrepancy between our perception of a task and what it actually incentivises models to learn. How can we mitigate this issue and provide insight into shortcut learning? A central shortcoming of most current benchmarks is that they test on images from the same data distribution that is used during training (i.i.d. testing). This type of evaluation requires only a weak form of generalization. What we want, however, are strong generalization capabilities that are roughly aligned with our intuition. To test these, we need good out-of-distribution tests (o.o.d. tests) that have a clear distribution shift and a well-defined intended solution, and that expose where models learn a shortcut. But it doesn’t stop there: as models get better and better, they will learn to exploit ever subtler shortcuts, so we envision o.o.d. benchmarks evolving over time as well, towards stronger and stronger tests. This type of “rolling benchmark” could ensure that we don’t lose track of our original goals during model development but constantly refocus our efforts to solve the underlying problems we actually care about, while increasing our understanding of the interplay between the modeling pipeline and shortcut learning. Favoring the Road to Understanding over Shortcuts, but how? Science aims for understanding. While deep learning as an engineering discipline has seen tremendous progress over the last few years, deep learning as a scientific discipline is still lagging behind in terms of understanding the principles and limitations that govern how machines learn to extract patterns from data. A deeper understanding of how to mitigate shortcut learning is of relevance beyond the current application domains of machine learning, and there might be interesting future opportunities for cross-fertilisation with other disciplines such as Economics (designing management incentives that do not jeopardise long-term success by rewarding unintended “shortcut” behaviour) or Law (creating laws without “loophole” shortcut opportunities). However, it is important to point out that we will likely never fully solve shortcut learning. Models always base their decisions on reduced information, and thus generalization failures should be expected: Failures through shortcut learning are the norm, not the exception. To increase our understanding of shortcut learning and potentially even mitigate instances of it, we offer the following five recommendations: (1) Connecting the dots: shortcut learning is ubiquitous Shortcut learning appears to be a ubiquitous characteristic of learning systems, biological and artificial alike. Many of deep learning's problems are connected through shortcut learning - models exploit dataset shortcut opportunities, select only a few predictive features instead of carefully considering all available evidence, and consequently suffer from unexpected generalization failures. “Connecting the dots” between affected areas is likely to facilitate progress, and making progress can generate highly valuable impact across various application domains. (2) Interpreting results carefully Discovering a shortcut often reveals the existence of an easy solution to a seemingly complex dataset. We argue that we will need to exercise great care before attributing high-level abilities like “object recognition” or “language understanding” to machines, since there is often a much simpler explanation. (3) Testing o.o.d. 
generalization Assessing model performance on i.i.d. test data (as the majority of current benchmarks do) is insufficient to distinguish between intended and unintended (shortcut) solutions. Consequently, o.o.d. generalization tests will need to become the rule rather than the exception. (4) Understanding what makes a solution easy to learn DNNs always learn the easiest possible solution to a problem, but understanding which solutions are easy (and thus likely to be learned) requires disentangling the influence of structure (architecture), experience (training data), goal (loss function) and learning (optimisation), as well as a thorough understanding of the interactions between these factors. (5) Asking whether a task should be solved in the first place The existence of shortcuts implies that DNNs will often find solutions no matter whether the task is well-substantiated. For instance, they might try to find a shortcut to assess credit-scores from sensitive demographics (e.g. skin color or ethnicity) or gender from superficial appearance. This is concerning as it may reinforce incorrect assumptions and problematic power relations when applying machine learning to ill-defined or harmful tasks. Shortcuts can make such questionable tasks appear perfectly solvable. However, the ability of DNNs to tackle a task or benchmark with high performance can never justify the task's existence or underlying assumptions. Thus, when assessing whether a task is solvable, we first need to ask: should it be solved? And if so, should it be solved by AI? Shortcut learning accounts for some of the most iconic differences between current ML models and human intelligence - but ironically, it is exactly this preference for “cheating” that can make neural networks seem almost human: Who has never cut corners by memorizing exam material, instead of investing time to truly understand? Who has never tried to find a loophole in a regulation, instead of adhering to its spirit? In the end, neural networks perhaps aren’t that different from (lazy) humans after all ... This perspective is based on the following paper: Geirhos, R., Jacobsen, J. H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., & Wichmann, F. A. (2020). Shortcut Learning in Deep Neural Networks. arXiv preprint arXiv:2004.07780. Author Bio Dr. Jörn-Henrik Jacobsen has conducted the research this article is based on as a PostDoc at the Vector Institute and University of Toronto. Previously he was a PostDoc at the University of Tübingen and did his PhD at the University of Amsterdam. His research is broadly concerned with what it means to learn useful and general representations of the world, with particular focus on out-of-distribution generalization, unsupervised representation learning, stability guarantees and algorithmic bias. Robert Geirhos is a PhD student at the International Max Planck Research School for Intelligent Systems, Germany. His PhD project is jointly advised by Felix Wichmann (Neural Information Processing) and Matthias Bethge (Computational Vision and Neuroscience). He received a M.Sc. in Computer Science with distinction and a B.Sc. in Cognitive Science from the University of Tübingen, and has been fascinated by the intersection of human and computer vision ever since. Claudio Michaelis is a PhD student at the International Max Planck Research School for Intelligent Systems in Tübingen, Germany. His PhD project is jointly advised by Alexander S. Ecker (Neural Data Science) and Matthias Bethge (Computational Vision and Neuroscience). 
He previously received an M.Sc. in Physics from the University of Konstanz, after which he changed his field of interest from stimulating neurons with lasers to understanding artificial neural networks. Acknowledgements We would like to thank Rich Zemel, Wieland Brendel, Matthias Bethge and Felix Wichmann for collaboration on the article that led to this piece. We would also like to thank Arif Kornweitz for insightful discussions. If not stated otherwise, figures are from this paper. Feature image is from https://www.pikist.com/free-photo-sjgei. Citation For attribution in academic contexts or books, please cite this work as Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., & Wichmann, F. A. (2020). Shortcut Learning in Deep Neural Networks. arXiv preprint arXiv:2004.07780. Jörn-Henrik Jacobsen et al., "Shortcuts: Neural Networks Love to Cheat", The Gradient, 2020. @article{jacobsen2020shortcuts, author = {Jacobsen, Jörn-Henrik and Geirhos, Robert and Michaelis, Claudio}, title = {Shortcuts: Neural Networks Love To Cheat}, journal = {The Gradient}, year = {2020}, howpublished = {\url{https://thegradient.pub/shortcuts-neural-networks-love-to-cheat/}}, } If you enjoyed this piece and want to hear more, subscribe to the Gradient and follow us on Twitter.
6
Science Denied: The Biden Vaccine Mandate ⋆ Brownstone Institute
President Biden has decided to go hard on the virus. No more Mr. Nice Guy. Sadly for him, those tiny little pathogens don’t pay taxes, don’t vote, don’t have Social Security numbers, can’t be drafted, and don’t answer phone calls from poll takers, which is to say that he and his agencies cannot really control them. That must be frustrating, poor man. Instead, his plan is to control what he can control: people, and, most immediately, federal workers and the employees of large regulated companies. For him, the key to crushing the virus is the vaccine. Not enough people are obeying his demand for near-universal vaccination. In a maniacal move of wild desperation – or as an excuse to try out the most extreme powers of his office – he is using every weapon that he believes he has to assure compliance with his dream of injecting as many arms as possible. Only then will we crush the virus, all thanks to his leadership, all the complaints about “freedom” be damned – and never mind that the realization of his dream did not work in Israel or the UK. What are the immediate problems here? At least five: 1. The Biden mandate pretends that the only immunity is injected, not natural. And so it has been from the beginning of this pandemic, even though all science for at least a year – actually you can say centuries – contradicts that. Indeed, we’ve known about natural immunity since 400 B.C., when Thucydides first wrote of the great Athens plague, observing of survivors that “they knew the course of the disease and were themselves free from apprehension.” Biden’s mandate could affect 80 million people, but far more than that have likely been exposed and gained robust immunity regardless of vaccination status. 2. This natural immunity is long-lasting and broad, and we’ve known that since last year, when the first studies revealed it. You can say that the addition of a vaccine provides even more, but it’s new and untested relative to most drugs approved by regulators, and many people are concerned about possible side effects of this vaccine that was approved much faster than any drug in my lifetime – and there is not one living human being in a position to say with certainty that these skeptics are wrong. 3. The mandate presumes that everyone is equally susceptible to severe outcomes from getting exposed to the virus, which we’ve known is not true since at least February 2020. In this entire 18-month fiasco, we’ve not seen any serious high-level communication about the huge range of demographic gradients in infection based on both age and overall health. This ignorance is a consequence of poor public-health messaging, and is grossly irresponsible. The aggregated mandate from the Biden administration ignores this completely, as did the models from the spring of 2020 that suggested lockdowns in response to the virus. 4. Biden still seems to believe that vaccines stop infection and spread (he has claimed this many times), but we know with certainty that this is not the case, and even the CDC admits it. The best guess at this point is that the vaccine can help in preventing hospitalization and death, but this experiment is still in its early stages, and the relationship between cause and effect in human affairs is not as easy as throwing around two data sets and saying one caused the other. Most cases in the developed world now are occurring among the vaccinated – and we all know this because we have vaccinated friends who got Covid anyway. Some have died. 
We are not idiots, contrary to what the Biden administration believes. Nor do any of us have all the knowledge and answers. And it is precisely because science is uncertain that the decisions surrounding it need to be decentralized, depoliticized, and open to correction rather than being imposed by top-down mandates. 5. Biden’s order flies in the face of basic human freedoms and rights. There is no other way to put it. And it is this fact that is the most pressing for the multitudes who are right now seething in anger that one man who happens to hold power can make health decisions for the whole population regardless of their perfectly rational judgements. When the needle filled with liquid is forced into the arms of people who either have natural immunities or do not fear exposure to the pathogen, it gets personal, and people get really mad, especially when they are still forced into masks and denied other essential rights. Truth is that my phone has been blowing up all evening since Biden’s speech. People are demoralized, panicked, furious, and even at the point of losing it completely over this despotic moment in which we are living. Most of us believed that we lived in a scientific age in which information would be broadly disseminated to the world and this technology would somehow prevent us as a society from falling prey to charlatans, mob mysticism, and brutal methods of population control, not to mention the deployment of superstitious talismans and quackery. That turns out not to be true, and this is perhaps the greatest shock of all. Scientists worked for many hundreds of years to understand pathogens. They worked to understand their effect on the body, the range of susceptibility to both infection and severe outcomes, the demographics of vulnerability, the means by which we come to be protected from them, and the opportunities and limits available to people to protect themselves and others. After all this, humanity put together institutions that protected human freedom, individual rights, and public health, while preserving peace and prosperity in the best of times. In the last 18 months, all that hard work and knowledge seems to have been shredded, replaced by superstition masquerading as some kind of new science of social and pathogenic control. In this year and a half, we’ve observed no clear successes and unrelenting flops. One year ago, humanity had the opportunity to embrace the wisdom of the Great Barrington Declaration to protect the vulnerable while letting society otherwise function. Governments instead chose the path of ignorance and violence. The list is long, but it includes: travel restrictions, capacity limits, business closures, school shutdowns, mask mandates, forced human separation (“social distancing”), and now mandates of vaccination that, quite apparently, vast numbers do not want. It’s all designed so that governments can prove to the world that they are powerful enough, smart enough, educated enough to outsmart and manage any living organism, even an invisible one that has been part of the human experience since humans had experiences. In this, they have completely failed – in more ways than it is possible to count. We keep thinking that surely, surely, we will come to the end of this madness. I personally believed it would end the second week of March 2020. Instead, it gets worse and worse, the illusion of control having seized the barely functioning brains of the ruling classes of the world’s richest nations. 
If this doesn’t prove the astonishing stupidity of the world’s most powerful and educated, nothing else in history does. The great myth that has clouded our vision and our expectations has been that we as a people had progressed beyond the kind of statist shibboleths and fanatic brutality that define our age. The truth is that we have not. This very day, a Karen attacked me for being maskless. I looked at her and thought only of the poor people in Colonial America who dared to wear buckled shoes and thereby ran afoul of the sumptuary laws, or of the religious minorities in Medieval Europe who were scapegoated for every plague (look up the origins of the phrase “poisoning the well”), or the demonization of rebels in the ancient Roman empire or the disapprobation of heretics in the hundreds of years that followed the fall of Rome. It is a mark of a primitive society to attribute to political compliance or noncompliance what rational science shows is a feature of the natural world. Why? Ignorance, maybe. Power ambitions, more likely. Scapegoating is apparently an eternal feature of the human experience. Governments seem particularly good at it, even when it is less believable than ever.
1
Boeing agrees to pay $2.5B to settle charges it defrauded FAA on 737 Max
Boeing reached a $2.5 billion settlement with the Justice Department on Thursday to resolve criminal charges that the company defrauded the Federal Aviation Administration when it first won approval for the flawed 737 Max jet. The plane was grounded by the FAA in March 2019 following two fatal crashes that killed 346 people. It was approved by the FAA to fly passengers again in November, after Boeing made numerous changes to the flawed safety system that caused the crashes, a system at the center of this fraud case. “The misleading statements, half-truths, and omissions communicated by Boeing employees to the FAA impeded the government’s ability to ensure the safety of the flying public,” said US Attorney Erin Nealy Cox for the Northern District of Texas. “This case sends a clear message: The Department of Justice will hold manufacturers like Boeing accountable for defrauding regulators — especially in industries where the stakes are this high.” The settlement immediately came under fire from critics who said it amounted to a slap on the wrist. About 70% of the $2.5 billion figure represents payments Boeing had already agreed to make to its airline customers as compensation. “Not only is the dollar amount of the settlement a mere fraction of Boeing’s annual revenue, the settlement sidesteps any real accountability in terms of criminal charges,” said Rep. Peter DeFazio, the Oregon Democrat who chairs the House committee on transportation and infrastructure. He said the agreement was “an insult to the 346 victims who died as a result of corporate greed.” “My committee’s investigation revealed numerous opportunities for Boeing to correct course during the development of the 737 Max but each time the company failed to do so, instead choosing to take a gamble with the safety of the flying public in hopes it wouldn’t catch up with them in the end,” said DeFazio. The government’s filing against the company said that at least two Boeing employees, who were not identified, engaged in the fraud from late 2016, in the final stages of the jet’s approval, through late 2018, when the plane was already in use and after the first crash had occurred. According to the filing, at least one of the two employees left Boeing in July 2018 to work for an airline. The employment status of the other employee was not specified in the filing. Boeing has agreed to cooperate with any individual prosecutions arising out of this case. The settlement includes a $243.6 million criminal fine, compensation payments of $1.77 billion to Boeing’s airline customers and an additional $500 million to a fund to compensate family members of crash victims. Boeing had already set aside money to pay airlines and $100 million for victims’ families. It said it will take an additional charge of $743.6 million against earnings as a result of the settlement. Under the deal, the Justice Department will defer any criminal prosecution of Boeing for three years, and the charges will be dismissed if it sees no more misdeeds by the company. “I firmly believe that entering into this resolution is the right thing for us to do — a step that appropriately acknowledges how we fell short of our values and expectations,” said Boeing CEO Dave Calhoun. “This resolution is a serious reminder to all of us of how critical our obligation of transparency to regulators is, and the consequences that our company can face if any one of us falls short of those expectations.” But several family members of the crash victims attacked the settlement. 
“This is a Boeing protection agreement,” said Michael Stumo, father of Samya Rose Stumo, who died in the second crash in March 2019. He said that the families of crash victims had urged the Justice Department not to reach a settlement with Boeing. “The Boeing persons who committed fraudulent acts will not be held accountable. The government continues to protect them despite recognizing their criminal acts. The settlement dollar amounts are merely rounding errors in Boeing corporate finances. This is fake justice agreed to by insiders while excluding victims’ families.” “May this serve as a reminder that Boeing’s and the FAA’s current leadership are not fit to be entrusted with human life,” said Zipporah Kuria, a UK resident who lost her father in the second crash. “Their priority is corporate interest over human life.” She said the settlement “doesn’t even scratch the surface of justice.” The settlement payments are modest compared with what the scandal has cost Boeing over the last two years. Boeing has already detailed $20.7 billion in direct costs from compensation to airlines, increased production costs, storage fees and victim compensation, even before these latest charges. Lost revenue from canceled, delayed or renegotiated sales could cost tens of billions more, according to experts. That could make the 737 Max debacle one of the most expensive corporate mistakes of all time, in terms of both financial cost and lives lost.
8
Why I Quit My Job as a Software Engineer to Join the Mafia
I know what you’re thinking, “Why abandon a stable career in the midst of a pandemic for an illegal venture that could end in death?” Well, let me explain to you a phenomenon men of great distinction often face. We get bored of earning several multiples of the median salary, having great benefits, working from home, and providing a good life for our families. We feel the call to be great, to escape the drudgery of the mundane, and to enter a more visceral world with high risk and real skin in the game. It was at one of these moments that it hit me: what I had been longing for since I was a child was to join the Mafia. Mafia men live a grand life. Crime, sex, booze, dumping piles of garbage in the parking lot of a New Jersey deli, you name it: if it’s cool, mafia guys are all about it. Compare this with a sedentary life plagued by eye strain and neck aches. Blue light glasses and standing desks are unnecessary when your work is in the streets. While I may get whacked in my new line of work, I will never have to review another intern’s pull request. The real men, the big winners, the lions among sheep, are out there in the real world taking risks. Clearly, a life of crime is much richer than a life of code, but let us examine its impact on others. Hopefully, by now you sympathize with my reasoning, but I am not so selfish as to do this strictly for my own benefit. I subscribe to mafia values like commitment and loyalty, rejecting the optionality and transience of the tech world. In accordance with these values, I feel that making this career change is the best option for my family. I can already hear the objection, “What about your wife and kids?” It is with the utmost confidence that I can assure you, they too will be better off. While my engineering salary may be enough to support a comfortable, upper-middle-class lifestyle, they will never know true abundance unless I switch to a more lucrative line of work. I figure, if I put the same effort into catching bodies and pushing weight as I put into Leetcode, I’ll be a made man in no time. Once I’m made, I will be a king in my community, and my family: royalty. Yet, naysayers think my kids will be better off going to a public school in the suburbs. Nonsense. By now you may be convinced to do the same. If not, do not fret. Only 20% of men can be among the top 20% of men. For those of you not bold enough to embark on this journey, I have a great gift: from time to time you may read the posts here and glimpse into a life of grandeur. Thank you for reading. If you enjoyed this post, feel free to share it. To stay up to date on future posts, subscribe below or follow me on Twitter.
23
Actifio squashes employee shareholders ‘like cockroaches’
Actifio, the venture-backed data management startup, has used a reverse stock split to torch the shares of up to 1,000 staff and former employees. The company last week implemented a reverse stock split ratio of 100,000-to-one, which “decreases the aggregate number of shares of Common Stock issuable thereunder to 9,728,360 shares of Common Stock,” priced at 24c “fair value” each. That $0.24 “fair value” means Actifio’s 9,728,360 shares of Common Stock are valued at $2.335m. Yet the company was classed as a unicorn in August 2018, valued at more than $1bn by VCs, and has taken in $311.5m of VC investment since it was founded in 2009. The reverse stock split ratio indicates that an incredible 972 billion shares were previously issued. What’s going on here? Peter Godman, the co-founder and former CEO of startup Qumulo, has a plausible explanation. “If I were guessing I’d say they took new investment at near-zero valuation,” he writes on Twitter. “The level of outstanding stock imputed a very low per-share valuation. So, they printed a trillion shares, awarded it to new investors and loyal employees, and then reverse split to fix it all. … Whatever the reason, a sad day for early employees particularly.” Let’s see how this affects Actifio’s small stockholders. Jeff Greene Jeff Greene was a Professional Services Engineer at Actifio from July 2012 to December 2014. “I joined in July 2012 and was employee number 90,” he told us. “I was offered 5,000 shares and when I left in 2014 I purchased 2,333 vested shares for approximately $5k. I viewed it as Vegas money and rolled the dice.” He is now receiving a payout of $0.005599, about five thousandths of a dollar. Yes, you read that correctly. Greene received the news via a letter from Actifio’s CFO Ed Durkin, dated October 5, 2020. Two manoeuvres by Actifio turned Greene’s shares, priced at $5,202.59, to dust. First was the recapitalisation and 100,000:1 reverse stock split. That converted 2,333 shares of old Actifio Common Stock into a fractional share, comprising 0.02333 of a share of the new Voting Common Stock. The second event is detailed in the letter, which states: “In lieu of maintaining as outstanding the fractional shares of Voting Common Stock that resulted from the Reverse Stock Split, the company has opted to pay in cash the fair value of such fractional shares, which is based on the fair market value of a single share of Voting Common Stock of $0.24, as determined by the Company’s Board of Directors.” Jason Axne was a Professional Services Engineer and then Principal Systems Architect at Actifio from August 2012 to March 2018. He said on LinkedIn: “If you are a minority stockholder, a reverse split could extinguish your position and force you out. Unfortunately, there is not much you can do as long as the reverse split follows legal procedures and you receive the correct number of new shares. “Your chance of prevailing in a lawsuit brought against the board of directors is slim. The courts have held that, absent fraud, misrepresentation or misconduct, a corporation has the right to eliminate minority stockholders through a reverse split. “Incredible. So many of us poured our heart and souls into that company only for them to squash us like cockroaches… so disappointing.”
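The arithmetic behind those figures can be checked in a few lines. The sketch below is only an illustrative back-of-the-envelope calculation using the numbers quoted in the article (Greene's 2,333 shares, the 100,000-to-one ratio, the $0.24 "fair value" and the 9,728,360 post-split shares); it is not Actifio's own accounting.

```python
# Back-of-the-envelope check of the reverse-split figures quoted above (illustrative only).
shares_owned = 2_333           # Greene's vested common shares
split_ratio = 100_000          # 100,000-to-one reverse stock split
fair_value = 0.24              # board-determined "fair value" per post-split share, USD
post_split_shares = 9_728_360  # shares of Common Stock issuable after the split

fractional_share = shares_owned / split_ratio       # 0.02333 of one new share
cash_payout = fractional_share * fair_value         # ~$0.0056, matching the CFO's letter

implied_common_value = post_split_shares * fair_value       # ~$2.335m
implied_pre_split_shares = post_split_shares * split_ratio  # ~972.8 billion shares

print(f"fractional share:               {fractional_share:.5f}")
print(f"cash payout:                    ${cash_payout:.6f}")
print(f"implied value of common stock:  ${implied_common_value:,.0f}")
print(f"implied pre-split share count:  {implied_pre_split_shares:,}")
```

The output lines up with the $0.005599 payout, the roughly $2.335m total common-stock valuation and the roughly 972 billion pre-split shares reported above.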
2
Sane-airscan – driverless scanning with eSCL (Apple AirScan) and WSD
alexpevzner/sane-airscan
1
How Lovely It Is to Be Small (On the Writer Robert Walser)
Everyone dies—except writers, who do it up to three times. Once when they choose to or are forced to retire; once when their bodies no longer support consciousness; and once when they are forgotten, which often enough occurs first in the sequence. Fortunately, for those writers whose achievements or notoriety during their lifetimes weren’t enough to warrant a posthumous existence, there are other ways to be rescued from oblivion. To this day, a corpse cannot be revived—but a corpus? For that, there is mythology. To see how this works in practice, look no further than the case of the Swiss writer Robert Walser, whose life appears at first glance to furnish a checklist of romantic tropes about the neglected outsider artist. Walser was the seventh of eight children born to Elisa and Adolf Walser, prosperous general store owners in Biel, Switzerland. The young Walser had a happy childhood, but his happiness was short-lived. His eldest brother died of a sudden illness in 1884, and his father’s lackadaisical management style, combined with a recession, decimated the family fortune, forcing them to move farther and farther from the center of Biel. These events took a toll on Walser’s parents, especially the stalwart Elisa, who died in 1894, after a long struggle with depression, when Walser was 16. The state of the family business meant that Walser had to leave school. He took up an apprenticeship as a bank clerk in Biel and then as a bookkeeper at an insurance company in Zurich. By then, he’d caught the literary bug and began the habit of quitting a job to focus on writing only to take up another when his money ran out. He placed several poems in local newspapers and published a short book before joining his brother Karl, an up-and-coming painter and set designer in Berlin, the capital of the German-language literary scene. In Berlin, he and Karl acquired reputations as dandies and enfants terribles. Walser wrote hundreds of short prose pieces for literary journals and for the feuilleton sections of newspapers. He also published three novels: The Tanners (1907), The Assistant (1908), and Jakob von Gunten (1909), a “poetic fantasy,” as Walser called it, based on his experiences at a butler’s school in Berlin and his short stint as a castle servant in Upper Silesia. Hermann Hesse and Thomas Mann praised these works—which frequently featured schoolchildren, pageboys, junior clerks, and journeymen poets—as examples of naïve or outsider art. Walser was also an important and acknowledged influence on Kafka. Robert Musil, whose own schoolboy novel, The Confusions of Young Törless (1906) , had been a hit, dismissed Kafka’s debut collection as nothing more than a “special case of the Walser Type.” But aside from his novella The Walk (1917), which remains Walser’s best-known piece of writing, none of his novels or collections achieved commercial success. By the time The Walk was published, Walser’s career was beyond repair. He had already returned to Switzerland broke and burned out. The political and economic chaos in Germany that followed the First World War crushed any lingering hopes for literary recognition. Still, Walser remained productive, even as his opportunities for publishing dwindled. The end of the 1920s found him living in one cheap boardinghouse after another, on the edge of poverty and homelessness, all but estranged from his siblings, his former friends, and his publishing contacts. He drank frequently, and his behavior became increasingly erratic and sometimes violent. 
In 1929, he was institutionalized at Waldau Asylum in Bern and diagnosed with schizophrenia. Not long after, he was transferred to a facility in Herisau, where he spent the next 23 years, living, as a late poem put it, “like a child, enraptured / by the idea that I have been forgotten” (“Agreeableness of Lament”). Walser died of heart failure on Christmas Day 1956 during his daily walk outside the hospital grounds. The last photo of him, taken by a police coroner, is laden with pathos: the footprints leading to his sprawled corpse are already being covered by snow, as though the image were quoting the death of the poet Sebastian in The Tanners or a frozen version of the melancholy lines on Keats’s grave in Rome (“Here lies one whose name was writ in water.”) For Walser, it was the third and final death. His funeral was attended by a pastor; some hospital staff; a family friend named Freida Mermet, with whom he had kept up a long, one-sided, epistolary romance; and Carl Seelig, the young critic who had taken to visiting Walser in Herisau and had over time become his legal guardian and literary executor. After the funeral, Seelig came into possession of shoeboxes that contained more than 500 miscellaneous scraps of paper—business cards, pay slips, and rejection letters, among them—all covered in pencil markings no more than a few millimeters tall. These manuscripts were all that remained of the last period of Walser’s life as a writer. They constitute around 1,300 distinct texts: around 80 short dramatic scenes, 470 poems, and 750 prose pieces, including a 150-page novel, The Robbers, which Walser managed to fit onto just 24 manuscript sheets. (They appeared in six volumes published between 1986 and 2000 as Aus dem Bleistiftgebiet: Mikrogramme aus den Jahren 1924–1933, translated into English as From the Pencil Territory: Microscripts, 1924–1933.) Looking at the tiny letters with the knowledge that they were written during a period of psychic turmoil leading to institutionalization, it is difficult not to imagine that their author was inscribing the asymptotic approach to his own disappearance. As it happens, they had the opposite effect. More than any other fact of his life, the microscripts form the kernel of what his translator Susan Bernofsky, in Clairvoyant of the Small (Yale University Press, 2021), the first biography of the writer to appear in English, calls the “Walser mystique.” *** If successful, a myth generates a revival of interest in a writer’s work, which is elaborated across the introductions to, and reviews of, posthumous publications, reissues, and new translations. At a certain point of reputational consolidation, the baton is passed to biographers, who are then responsible for paring back the stories that have accrued to the writer by comparing them against documentary evidence and contrasting testimony from witnesses, by qualifying them with additional context and hindsight, or by debunking them, as necessary, to send readers back to the work. Bernofsky performs such tasks admirably, especially with regard to Walser’s mental breakdown in 1929. The myth of the mad writer was attached to Walser during his lifetime. For example, in a 1929 article, Walter Benjamin wrote that Walser’s characters came from “insanity and nowhere else,” even though he was probably unaware of Walser’s personal circumstances, let alone the existence of the microscripts. 
Walser’s drinking and his precarious circumstances would have been enough to occasion intervention; although Walser admitted to depression, suicidal ideation, and even to experiencing auditory hallucinations, Bernofsky argues there are good reasons not to take the initial diagnosis of schizophrenia at face value. (Not least because there seems to have been a clerical error in the processing of the notes from Walser’s intake interview in Waldau: the box labeled “provisional diagnosis” was left blank, and someone had typed schizophrenia into the box labeled “definitive diagnosis.”) From the point of view of present-day psychiatry, Walser was “certainly ill,” she writes, “even if a patient presenting with his symptoms today might be treated with a combination of medication and psychotherapy after a relatively short hospital stay.” For his part, Walser seemed capable of viewing his institutionalization with a degree of ironic detachment, at least if Seelig’s reports can be trusted. When Seelig asked Walser if he knew why he had been institutionalized, he answered, “ Because I’m not a good essayist.” To Seelig’s follow-up question, he responded, “I am not here to write but to be mad.” As for the microscripts, they are unusual, yes, but that Walser’s tiny handwriting was the result of mental illness can almost certainly be ruled out. Trained as a clerk and a bookkeeper, Walser demonstrated meticulous concern for the graphic presentation of his handwriting from a young age: a 1902 letter to his younger sister, Fanny, shows him writing with a smaller hand than usual to “produce a perfectly symmetrical block of writing,” and the manuscript of his first published book contains 20 sections of almost exactly equal length. Later, during his last years in Berlin, during a particularly severe case of writer’s block, he turned to drafting his texts with pencil “before inking [them] into definitiveness,” as he put it in a 1926 prose piece. The so-called “pencil method” of 1913 was not fundamentally different from the technique that produced the microscripts, except that in the later period, Walser often did not bother to make fair copies of the texts. The miscellany of the paper he wrote on probably has more to do with postwar paper shortages and Walser’s poverty than with any impulse toward hoarding. Filling up as much of these writing surfaces as possible was in no small part making a virtue of necessity. *** Although the appearance of Bernofsky’s biography suggests that Walser’s position as an important early-20th-century modernist is now on solid ground in the English-speaking world, one part of his oeuvre remains marginal: his poetry. His poetic activity is clustered between the years 1898 to 1900 and again between 1925 to 1933—that is, at the outset of his career and then in the Pencil Territory texts at its conclusion. This is not an atypical distribution for writers known mainly for their prose and for whom poetry functions as a gateway into professional publishing in youth, which they pass through again in old age, chastened by worldly failure and in search of more spiritual modes of expression. “I myself don’t quite know how I started writing poetry,” admits the Berlin-bound narrator of “The Poems,” one of many Walser stories that features a young poet-protagonist. 
“I wrote poems out of a mixture of bright-golden prospects and anxious prospectlessness, constantly half-fearful, half almost bubbling over with exultation.” It is a familiar sentiment, but so is this one, recorded in “My Fiftieth Birthday”: For about seven years I then lived in Berlin as a hardworking prose writer and, when publishers were no longer willing to grant me an advance, I returned to Switzerland… …to persist undauntedly in my poetic efforts. Yet in Christopher Middleton’s foreword to Thirty Poems (2012), his translation of published and unpublished Walser poems from the 1920s, Middleton claims that Walser “was essentially a poet.” Middleton can have meant this only in the honorific sense because he immediately qualifies his statement with the observation that much of Walser’s prosody, especially in the earlier period, is “singularly conventional, if not jejune.” Deeply indebted to the German Lied tradition of “whispy singable delicacies,” Walser’s poems are “throttled,” in Middleton’s view, “by the insipidity of their many rhymes, by their perfunctory character.” “Trite” sound play—which neither Middleton nor Daniele Pantano (the other translator of Walser’s poetry, whose compilation Oppressive Light appeared in 2012) attempts to reproduce in English—dictates the development of many of Walser’s poems at the expense of nearly every other aesthetic consideration, including that of sense. Both translators largely omit these poems in favor of those that are unrhymed or irregularly rhymed, especially those on grander themes, such as politics, art, and nature. (These latter selections are a useful corrective to the view of Walser as a “naïve” writer, childishly oblivious to the monumental events taking place around him, but they aren’t particularly representative.) Middleton explains away his obvious embarrassment with some of the published poems by reminding readers that they were “consumer poems” written for newspaper audiences, and some of the unpublished poems remained only at the draft stage. Bernofsky concurs with this assessment. “Admirers of his early poetry emphasized that these works show talent while being interesting, quirky, and a little awkward,” she writes. “[N]o one considered them great, and in fact they aren’t. If such poetry was all Robert Walser ever wrote, he’d be forgotten.” The writings from the Pencil Territory present difficulties of interpretation beyond the labor of deciphering and transcribing them, which required scholars Bernhard Echte and Werner Morlang to peer through the magnifying lenses of thread counters at letters no larger than 2 millimeters long for the better part of two decades. The first difficulty has to do with classification. How is one to tell which texts are poetry and which are prose? Some manuscript sheets contain both lineated and unlineated texts, and these are sometimes separated from one another by folds in the paper or by decorative drawn borders. However, if the distinction between poetry and prose has to do partly with their respective visual forms, isn’t this complicated by the unorthodox visual presentation of all of Walser’s pencil texts? The second, no less thorny issue has to do with completion status. Can one infer from the fact that Walser did make fair copies of some of the microscripts, altered their content, and submitted them for publication that all the writings from the pencil territory are drafts? 
Or did he decide at some point, perhaps after his opportunities for publication had dead-ended, that the miniature form was by no means incidental to his artistic vision? These questions may be unanswerable, but granting Walser’s microscripts the presumption of authorial intent not only enables us to link his writing to a recognized, if little-known, publishing tradition and to contemporaneous modernist literary experiments, it also illuminates the relationship between his life and work. *** Bernofsky takes the title of her biography from “Le Promeneur Solitaire,” a 1998 essay by the late German writer W.G. Sebald. Smallness is the dominant motif of Walser’s oeuvre, from his status as a minor writer to the youthful characters who populate his work, from his preference for short prose forms such as the feuilleton to his frequent use of the modifier little in his titles. The motif finds its purest expression in the miniscule letters of the microscripts. If there is one exception to this rule, as Bernofsky observes, it is that Walser is not at all a minimalist. Rather, he is a maximalist miniaturist whose late style bears comparison to the word play, neologisms, syntactic complexities, and sheer verbosity of James Joyce and Gertrude Stein. In the decades after Walser’s death, these writers, along with Germanophone luminaries such as Kafka, Sebald, and Thomas Bernhard, created the taste by which Walser’s late style could be appreciated. “The playful—and sometimes obsessive—working in with a fine brush of the most abstruse details is one of the most striking characteristics of Walser’s idiom,” Sebald writes of The Robbers, Walser’s late masterpiece of “detours and digressions.” Instead of “fine brush,” though, he need only have said pencil stub. The medium of the microscripts is not parallel to Walser’s stylistic maximalism; it is part and parcel of it. In her chapter on narratives of the miniature in On Longing (1984), the poet and critic Susan Stewart argues that maximalist miniaturism is hardly the oxymoron it might appear. Size is, above all, a matter of scale, and scale, in turn, is relative to the size of the perceiver, in this case, the human body. When a body encounters the miniature, it stands in “transcendent” spatial relationship to it; access to the miniature and the tiny details that comprise it come to that body only via fine-grained visual perception, or—if you prefer—clairvoyance. Depictions of small things therefore tend to be verbose, as the temporal hierarchy that structures narrative gives way to “an infinity of descriptive gestures.” “It is difficult for much to happen in such depiction,” Stewart concludes, “since each scene of action multiplies in spatial significance in such a way as to fill the page with contextual information.” What results is either a series of static tableaus—such as Walser’s early Fritz Kocher’s Essays (1904) or the late “Felix Scenes”—or what amounts to the same thing, an endless digressiveness, as in The Walk and Walser’s other walk stories. “Just as syntactical embedding is a matter not just of additional information but of a restructuring of information,” as Stewart argues, digression “[toys] with the hierarchy of narrative events.” A proliferation of detail frustrates readers’ desire for closure, distracts the attention, makes it difficult to determine what is significant and what is not, and “nearly erases the landmark[s]” described with them. 
All of this accounts for Sebald’s sense that Walser’s “ prose tends to dissolve upon reading, so that only a few hours later one can barely remember the ephemeral figures, events and things of which it spoke.” These difficulties disappear, however, as soon as one recognizes that, as Bernofsky’s term maximalist miniaturism suggests, the primary mode of Walser’s late prose is lyrical, not narrative. Appreciating Walser’s late prose means adjusting generic expectations accordingly. Though it is true that, as Sontag puts it, Walser “spent much of his life obsessively turning time into space” through his legendary prowess as a walker, Bernofsky cautions against treating Walser as an instance of the flâneur encountered in the writings of Baudelaire, Benjamin, or Debord. Walser came from a long line of “tremendous walkers” and acquired the habit in his youth. If Walser “thought nothing of walking all night, traversing, say, the eighteen miles from Bern to Thun and then climbing a mountain in the morning,” the practice owes more to the tradition of Swiss Wandern than French psychogeography. If writing about the small reveals the important truth that, as Stewart puts it, “every narrative is a miniature and every book a microcosm,” small writing, whether in the form of miniature books or micrographia, reveals the “disjunction between the book as object and the book as idea” and, by extension, the disjunction between the material and the abstract nature of the sign. There is no semantic difference between a printed sentence and a handwritten one or between a sentence written in normal-sized letters and one written in microscopic letters, but it does not follow that these changes to the materiality of the sign are insignificant. On the contrary, as the labor involved in the production of micrographia “multiplies, so does the significance of the total object,” as Stewart writes. On the reception side, labor also increases to the point where the physical difficulty of reading micrographia impedes the reception of the information presented in it. Removed from the realm of mechanical production and commodity exchange, minute writing returns the aura—to use Benjamin’s term—to the book as object and the sign as a physical mark. By placing both prose and poetry on the same page and making each equally difficult to read, Walser’s microscripts call attention to what is typically obscured by print, namely that lineation no longer serves as a marker for the poetic as such but is a vestige of canons of visual presentation and layout. Something similar might be said of the childish, unrestrained rhyming in Walser’s poetry that so put off Middleton. Rhyme, after all, is also a material rather than a semantic feature of the sign, in this case of the verbal sign, and by pushing phonetic effects beyond the limits of good taste, Walser shows that here too the means of communication can be used to frustrate the ends of communication. In both respects, the writers with whom Walser had the most in common were the Italian and Russian futurists and Dadaists such as Hugo Ball and Raoul Hausmann, the latter of whom performed their experiments in sound poetry, typography, and optophonetics in Zurich and Berlin in the late teens and early ’20s. (It is unclear whether Walser was familiar with them, and Bernofsky does not mention it.) 
If reading Walser’s poetry in print gives the impression that his work is a cloying relic of 19th-century German Lied, reading it in manuscript—that is, in its original form—shows that, along with the futurists and Dadaists, he is the forerunner of the handwriting experiments of contemporary poets such as Robert Grenier, Susan Howe, and Anne Waldman. Here, rather than in any particular accomplishment as a writer of the short lyric, lies Walser’s real and as yet unacknowledged significance for the subsequent history of poetry and poetics. *** A final biographical mystery remains, however, especially once one accepts Bernofsky’s debunking of the myth that the microscripts are the product of mental illness. Given that Walser’s pencil method significantly increased the labor of production and significantly decreased the likelihood of its reception—some of his microscripts went unpublished because editors simply could not read them, and there is even doubt as to whether Walser could at a few days’ remove—and given that, unlike the Dadaists, for example, there is no record of his interest in philosophical questions about the nature of the sign, why did he write this way? A clue can be found in the history of miniature books and micrographia. The invention of printing coincided with the cultural invention of childhood, Stewart reminds us, and “from the fifteenth century on, miniature books were mainly books for children, and in the development of children’s literature the depiction of the miniature is a recurring device.” In writing his minuscule texts, what was Walser doing? Perhaps he was making toys. If so, the most significant detail of Bernofsky’s biography would be the very first one: “Robert Otto Walser was born at three in the afternoon on April 15, 1878, in a back room above a general store that specialized in toys but also sold all sorts of sewing and stationery items, leather goods, music and umbrella stands, costume jewelry, and mirrors.” Most toys are miniatures: wind-up toys, toy soldiers, dolls, dollhouses, tea sets, building blocks, model trains, and so on. Viewed from the perspective of his nursery, the pervasive motif of smallness in Walser’s writing is the symptom of a powerful nostalgia—a word, let’s not forget, coined by a doctor from Switzerland—for the lost happiness of his childhood in Biel. In the “Felix Scenes,” a series of prose tableaux from the Pencil Territory, the eponymous narrator, a boy of “four or six years old,” is standing in front of his father’s shop, just as Walser must have at that age. “How lovely it is to be small,” Felix remarks. “You’re not responsible for a thing…All the beautiful goods in the shop window…it almost worries me that I have no worries.” The memory of material comfort, associated here with the presence of commodities designed for the consumption of children like Felix—whose name is derived from the Latin word for happiness—is particularly evident in this passage. No less important is the shop window that mediates Felix’s view of these commodities (and Walser’s memory of them). Again, Stewart: “The glass eliminates the possibility of contagion,” namely, the contagion of linear time, which brings responsibilities and worries, disappointments and reversals of fortune, aging and death. 
“[A]t the same time…it maximizes the possibilities of transcendent vision” characteristic of the body’s relation to the miniature, which in turn is “linked to nostalgic versions of childhood and history.” Objects and scenes viewed through glass appear frequently in Walser’s writing, in poems such as “At the Window I,” “At the Window II,” “The Woman at the Window,” “How the Small Hills Smiled,” and “Spring (I)”; in microscript texts such as “The Prodigal Son” and “Childhood”; and in published prose pieces such as The Walk, “Shop Windows,” and “Three Stories,” in which what is described are the covers of imaginary books. Whether he was conscious of it or not, Walser’s miniature handwriting ensured that the way his most significant work would first be encountered would be under glass—as it so happens, under the glass of Echte and Morlang’s thread counters—in the same way shoppers first encountered the toys in his father’s shop. The toy unites all the strands of Walser’s work—form, subject matter, medium—into a single image. “[T]oday I want to turn / poetry into a children’s game,” Walser wrote in “Parade,” a line that could function as his ars poetica. Anyone whose profession is to invent stories and play with language remains connected to the fantasy worlds of childhood, even if few major writers have made childhood their major theme, as Walser did. But perhaps Walser’s life and work are special instances of a general case and not just among writers. As Benjamin put it: “When a modern poet says that everyone has a picture for which he would be willing to give the whole world, who would not look for it in an old box of toys?”
1
Show HN: I made coding flashcards to help retain CS fundamentals
JavaScript & Data Structures Flashcards Master Essential Coding Concepts with Syntax and Examples. Buy Physical cards Buy Digital Cards Designed to accelerate your tech skills The JavaScript and Data Structure flashcards are great to reinforce fundamental concepts through different programming languages as a whole. If you are already a developer and know the core principle then these flashcards are great for retention. Best way to prepare for a coding interview Contains actual code snippets and concept explanation asked in FAANG interviews. Inside this deck are some of the most popular Data Structures and core JavaScript Algorithms. If you're preparing for a coding interview and need help in hammering down a concept, this is all you'll need to get the nuts and bolts and be productive. The topic ranges from · Linked List Quick Sort Breadth First Search B+ Trees · Hashmaps Recursion Javascript keywords and functions These are great arsenal at your disposal for a quick refresher before your coding interview. Color Coded Snippet Concise Chunks of information with color coded snippet that present a straight-forward explanation of the concept. Each card features an illustration and is filled with unique content to help you retain tricky concepts like recursion, heap sort, and more. Whenever necessary, the comments are invoked inside the code to further clarify the concept. QR Code Embedded A QR code is embedded on each card at the top. At any point in time, you need further clarification on a topic, simply scan the QR code with your smartphone. The QR code takes you to a Youtube video or an article that gives a detailed explanation of the topic. Sample Card Check out what our early customers have to say These cards were really helpful for last minute preparation to ace in my DS Class. Concise information about data structure to review a day before your test. Naime Saves time and effort when preparing to learn the essential coding concepts needed for interviews. It gave me the edge needed to clarify complex algorithms. Sultan They are quick, high-quality concept summations. I often find myself getting lost when learning online. These cards really help me focus on one concept at a time. Elvin
1
What Do GDP Growth Curves Mean?
What Do GDP Growth Curves Really Mean?
3
Robotic hand augmentation drives changes to neural body representation
1
Delete Cargo Integration Tests
Feb 27, 2021

Click bait title! We'll actually look into how integration and unit tests are implemented in Cargo. A few guidelines for organizing test suites in large Cargo projects naturally arise out of these implementation differences. And, yes, one of those guidelines will turn out to be: "delete all integration tests but one".

Keep in mind that this post is explicitly only about Cargo concepts. It doesn't discuss relative merits of integration or unit styles of testing. I'd love to, but that's going to be a loooong article some other day!

When you use Cargo, you can put #[test] functions directly next to code, in files inside the src/ directory. Alternatively, you can put them into dedicated files inside tests/:

awesomeness-rs/
  Cargo.toml
  src/              # unit tests go here
    lib.rs
    submodule.rs
    submodule/
      tests.rs
  tests/            # integration tests go here
    is_awesome.rs

I stress that unit/integration terminology is based purely on the location of the #[test] functions, and not on what those functions actually do.

To build unit tests, Cargo runs

  rustc --test src/lib.rs

Rustc then compiles the library with --cfg test. It also injects a generated fn main(), which invokes all functions annotated with #[test]. The result is an executable file which, when run subsequently by Cargo, executes the tests.

Integration tests are built differently. First, Cargo uses rustc to compile the library as usual, without --cfg test:

  rustc --crate-type=rlib src/lib.rs

This produces an .rlib file — a compiled library. Then, for each file in the tests directory, Cargo runs the equivalent of

  rustc --test --extern awesomeness=path/to/awesomeness.rlib \
      ./tests/is_awesome.rs

That is, each integration test is compiled into a separate binary. Running those binaries executes the test functions.

Note that rustc needs to repeatedly re-link the library crate with each of the integration tests. This can add up to a significant compilation time blow-up for tests. That is why I recommend that large projects should have only one integration test crate with several modules. That is, don't do this:

tests/
  foo.rs
  bar.rs

Do this instead:

tests/
  integration/
    main.rs
    foo.rs
    bar.rs

When a refactoring along these lines was applied to Cargo itself, the effects were substantial. The time to compile the test suite decreased 3x. The size of on-disk artifacts decreased 5x.

It can't get better than this, right? Wrong! Rust tests by default are run in parallel. The main function that is generated by rustc spawns several threads to saturate all of the CPU cores. However, Cargo itself runs test binaries sequentially. This makes sense — otherwise, concurrently executing test binaries oversubscribe the CPU. But this means that multiple integration tests leave performance on the table. The critical path is the sum of the longest tests in each binary. The more binaries, the longer the path. For one of my projects, consolidating several integration tests into one reduced the time to run the test suite from 20 seconds to just 13.

A nice side effect of a single modularized integration test is that sharing code between separate tests becomes trivial: you just pull it into a submodule. There's no need to awkwardly repeat mod common; for each integration test.

If the project I am working with is small, I don't worry about test organization. There's no need to make tests twice as fast if they are already nearly instant. Conversely, if the project is large (a workspace with many crates) I worry about test organization a lot. Slow tests are a boiling frog kind of problem.
If you do not proactively fix it, everything is fine up until the moment you realize you need to sink a week into untangling the mess.

For a library with a public API which is published to crates.io, I avoid unit tests. Instead, I use a single integration test, called it (for integration test):

tests/
  it.rs

# Or, for larger crates
tests/
  it/
    main.rs
    foo.rs
    bar.rs

Integration tests use the library as an external crate. This forces the usage of the same public API that consumers use, resulting in better design feedback.

For an internal library, I avoid integration tests altogether. Instead, I use Cargo unit tests for the "integration" bits:

src/
  lib.rs
  tests.rs
  tests/
    foo.rs
    bar.rs

That way, I avoid linking a separate integration tests binary altogether. I also have access to the non-pub API of the crate, which is often useful.

First, documentation tests are extremely slow. Each doc test is linked as a separate binary. For this reason, avoid doc tests in internal libraries for big projects and add this to Cargo.toml:

[lib]
doctest = false

Second, keep unit tests in a dedicated tests.rs file (as in the src/ layout above) rather than inline in lib.rs. This way, when you modify just the tests, Cargo is smart enough not to recompile the library crate: it knows that the contents of tests.rs only affect compilation when --test is passed to rustc. Learned this one from , thanks!

Third, even if you stick to unit tests, the library is recompiled twice: once with and once without --test. For this reason, some folks go even further. They adjust Cargo.toml, make all the APIs they want to unit test public, and have a single test crate for the whole workspace. This crate links everything and contains all the unit tests.
14
Poland′s top court rules against primacy of EU law
Poland's constitutional court said on Thursday that Polish law can take precedence over EU law amid an ongoing dispute between the European bloc and the eastern European member state. The decision by the Constitutional Tribunal came after Polish Prime Minister Mateusz Morawiecki requested a review of a decision by the EU's Court of Justice (ECJ) that gave the bloc's law primacy. Two out of 14 judges on the panel dissented from the majority opinion. "The attempt by the European Court of Justice to involve itself with Polish legal mechanisms violates ... the rules that give priority to the constitution and rules that respect sovereignty amid the process of European integration," the ruling said. Brussels considers the Constitutional Tribunal illegitimate due to the political influence imposed upon Poland's judiciary by the ruling Law and Justice party (PiS). The court looked specifically at the compatibility of provisions from EU treaties — which are used by the European Commission to justify having a saw in the rule of law in member states — with Poland's constitution. A ruling by the ECJ in March said that the EU can force member states to disregard certain provisions in national law, including constitutional law. The ECJ says that Poland's recently implemented procedure for appointing members of its Supreme Court amounts to a violation of EU law. The ruling from the ECJ could potentially force Poland to repeal parts of the controversial judicial reform. The EU is withholding billions of euros of aid for post-pandemic rebuilding in Poland over concerns that the rule of law is being degraded in the country. EU, Poland row over judicial committee To view this video please enable JavaScript, and consider upgrading to a web browser that supports HTML5 video "The primacy of constitutional law over other sources of law results directly from the Constitution of the Republic of Poland," PiS government spokesman Piotr Muller wrote on Twitter after the court's decision. "Today (once again) this has been clearly confirmed by the Constitutional Tribunal." Michal Wawrykiewicz, a pro-EU lawyer critical of the government, called it a "black day" in Poland's history. "It's a confederation of anti-democratic forces against Poland's membership in the European Union," he tweeted. "Non-recognition of ECJ rulings is de facto the path to Polexit," wrote Borys Budka of the liberal-conservative opposition alliance Civic Coalition. A group of pro-EU protesters outside the Warsaw constitutional court with banners reading "We are Europeans" reflected 80% of the population's desire to remain in the EU. EU leaders and institutions reacted angrily to the court ruling with the president of the European Parliament, David Sassoli, calling on the European Commission to "take the necessary action." "Today's verdict in Poland cannot remain without consequences. The primacy of EU law must be undisputed. Violating it means challenging one of the founding principles of our union," Sassoli said. The European Commission flatly rejected the Polish court ruling in a statement. "EU law has primacy over national law, including constitutional provisions,'' the statement said. "All rulings by the European Court of Justice are binding on all member states' authorities, including national courts... The (EU) Commission will not hesitate to make use of its powers under the treaties to safeguard the uniform application and integrity of Union law," the European Commission added. 
Terry Reintke, shadow rapporteur on Poland for the Greens-European Free Alliance Group said Thursday's ruling "flies in the face of what the Polish government has signed up to as a member of the European Union." "Fellow EU governments must no longer stand by and do nothing as the Polish government attempts to rewrite the rules of democracy to suit their own interests," she said. The EPP group, the center-right bloc in the European Parliament to which PiS belongs, came out strongly against the court's ruling. "It’s hard to believe the Polish authorities and the PiS Party when they claim that they don’t want to put an end to Poland’s membership of the EU. Their actions go in the opposite direction. Enough is enough," Jeroen Lenaers, MEP and spokesperson for the group, said. "The Polish Government has lost its credibility. This is an attack on the EU as a whole," he added. The European Parliament called on Morawiecki to cancel the court case in a resolution passed last month. It stressed the "fundamental nature of primacy of EU law as a cornerstone principle of EU law." Poland has come under repeated fire from the EU including over issues to do with LGBTQ rights and women's rights and media freedom. The judiciary reforms by the PiS government have been seen as a threat to Poland's membership within the 27-member bloc as well as to the stability of the EU as a whole. The court's decision on Thursday came as little surprise. The presiding judge, Julia Przylebska, is a government loyalist who was appointed by the ruling party. Jack Parrock, DW's correspondent in Brussels, highlighted the importance the decision could have on Poland's role in the EU. "One of the cornerstones of EU membership is that EU law has primacy over all other laws and that the European Court of Justice is the top court within the European Union and what these judges are saying is that in some aspects they don't believe that that is the case," he told DW. "This all started because the European Court of Justice essentially ruled that certain aspects of judicial tampering that the government was doing in Poland's judiciary were not in line with EU law," Parrock explained. "This has been an ongoing saga, and this is a pretty major issue now for the EU. We've already seen some pretty strong reactions coming from European parliamentarians and I'm sure we're going to see some harsh criticism of this ruling coming from the European Commission," he added.
1
Jeff Bezos to step down as Amazon CEO
The Amazon CEO's wealth has now reached an estimated $202 billion, according to the Bloomberg Billionaires Index as of 2020. Besides Amazon, Bezos owns the space travel company Blue Origin, which he founded in 2000, as well as The Washington Post newspaper, which he acquired in 2013. Source: Africafeeds.com
1
Show HN: AVATARZ – library of 8000+ 3D avatars (free tier included)
AVATARZ 2: a lovely pack of 400+ million combinations of 3D avatars plus a Blender generator of avatars. Money-back guarantee. Free sample. Fully compatible with most common designer tools.

More than 400 million combinations are possible:
• male and female
• 54 types of clothes (25 for a male, 29 for a female)
• 16 types of accessories (7 for a male, 9 for a female)
• 36 types of hair (24 for a man, 12 for a woman)
• 8 types of facial expressions
• 3 skin colors
• lipsync
= 400,000,000+ combinations (8,000+ rendered out of the box) 🦄

Check what is inside (video)

High quality: 8,000 PNG files prepared for you ❤️ Every PNG is trimmed, so it doesn't have extra white space. Images are high quality. Fun fact: it took us weeks to render them all.

Blender generator of avatars: we created a Blender generator for even easier customization. You don't need to know Blender at all. It's that easy 😎 Are you not a master of Blender? No problemo! 🤔 We created a short tutorial for beginners which will give you the opportunity to customize avatars even more. Believe us, you can do it. By the way, Blender is free to use.

Be creative, Picasso! 🎨 Use your favorite design tools to play with some extra colors, patterns, etc. You can even combine our different libraries together. "Everything you can imagine is real." Pablo Picasso

In total, there are 54 different pieces of clothing; here is just a sample. So many combinations that everyone is represented.

What others say: "Easy and practical to use; if you are tired of stock images or illustrations, this kit is a great choice." Tom Koliba, Senior UX Manager @ Oracle
2
Twitter's Dorsey leads $29 bln buyout of lending pioneer Afterpay
Summary Companies Square offers 30% premium in all-scrip bid for Afterpay Afterpay board unanimously recommends deal Afterpay U.S. sales soar in fiscal 2021 SYDNEY, Aug 2 (Reuters) - Square Inc (SQ.N), the payments firm of Twitter Inc (TWTR.N) co-founder Jack Dorsey, will purchase buy now, pay later pioneer Afterpay Ltd (APT.AX) for $29 billion, creating a transactions giant that will battle banks and tech firms in the financial sector's fastest-growing business. The takeover, Square's biggest deal to date and the largest buyout ever of an Australian firm, underscores the popularity of a business model that has upended consumer credit by charging merchants a fee to offer small point-of-sale loans which shoppers repay in interest-free instalments, bypassing credit checks. The buy now, pay later (BNPL) market has boomed in the past year as homebound consumers used it to borrow and spend online during the pandemic, and Apple Inc (AAPL.O) and Goldman Sachs (GS.N) were the latest heavyweights reported last month to be readying a version of the service. read more Square's buyout could pave the way for more acquisitions, with Mastercard Inc (MA.N), Visa Inc (V.N), PayPal Holdings Inc (PYPL.O) and others showing interest, said Christopher Brendler, an analyst with brokerage group D.A. Davidson. "(BNPL) is more mainstream now and (this deal) is going to raise attention," he said. Shares in Square surged 11%, while those in peer Affirm Holdings Inc (AFRM.O) rose as much as 17%. Afterpay shareholders will get 0.375 of Square class A stock for every share they own, implying a price of A$126.21 per share based on Square's Friday close, the companies said. Afterpay shares closed at A$114.80, up 19%. The buyout delivers a payday of almost A$2.5 billion ($1.8 billion) each for founders Anthony Eisen and Nick Molnar. China's Tencent Holdings Ltd (0700.HK), which paid A$300 million for 5% of Afterpay in 2020, will pocket A$1.7 billion. The deal, which eclipses the previous record for a completed Australian buyout, locks in a remarkable run for Afterpay, whose stock was worth just A$10 in early 2020. The Melbourne-based company has signed up millions of users in the United States in the past year, making it one of the fastest-growing markets for BNPL and spurring broad interest in the sector. "Acquiring Afterpay is a 'proof of concept' moment for buy now, pay later," Truist Securities analysts said, adding Square would now be a "formidable" competitor for Paypal, unlisted Swedish startup Klarna Inc and others. Klarna was worth $46 billion in its last fundraising in June. Shares in Australian BNPL peers Zip Co Ltd (Z1P.AX) and Sezzle Inc also closed higher on Monday. "Not surprised those stocks are increasing on future consolidation speculations," D.A. Davidson's Brendler said. "Competition is heating up and they also have platforms that are very attractive." [1/8] The Afterpay app is seen on the screen of a mobile phone in a picture illustration taken August 2, 2021. REUTERS/Loren Elliott/Illustration Talks between the two companies began more than a year ago and Square was confident there was no rival offer, a person with direct knowledge of the deal told Reuters. Credit Suisse analysts said the tie-up seemed to be an "obvious fit" with "strategic merit" based on cross-selling payment products, and agreed a competing bid was unlikely. 
The Australian Competition and Consumer Commission, which would need to approve the transaction, said it had been notified of the plan and "will consider it carefully once we see the details". "Few other suitors are as well-suited as Square," said Wilsons Advisory and Stockbroking analysts in a research note. "With ... PayPal already achieving early success in their native BNPL, other than major U.S. tech-titans (Amazon.com Inc (AMZN.O), Apple Inc (AAPL.O)) lobbying an 11-th hour bid, we expect a competing proposal from a new party to be low-risk." The deal includes a break clause worth A$385 million triggered by certain circumstances such as if Square investors do not approve the takeover. BNPL firms lend shoppers instant funds, typically up to a few thousand dollars, which can be paid off interest-free. As they generally make money from merchant commission and late fees - and not interest payments - they sidestep the legal definition of credit and therefore credit laws. That means BNPL providers are not required to run background checks on new accounts, unlike credit card companies, and normally request just an applicant's name, address and birth date. Critics say that makes the system an easier fraud target. For Afterpay, the deal with Square delivers a large customer base in the United States, where its fiscal 2021 sales have already nearly tripled to A$11.1 billion in constant currency terms. Square said it will undertake a secondary listing on the Australian Securities Exchange to allow Afterpay shareholders to trade in shares via CHESS depositary interests (CDIs). Morgan Stanley advised Square on the deal, while Goldman Sachs and Highbury Partnership consulted for Afterpay and its board. ($1 = 1.3622 Australian dollars) Reporting by Byron Kaye and Paulina Duran in Sydney, Shashwat Awasthi in Bengaluru and Scott Murdoch in Hong Kong; additional reporting by Niket Nishant and Sohini Podder in Bengaluru and Supantha Mukherjee in Stockholm; Editing by Chris Reese; Christopher Cushing and Saumyadeb Chakrabarty Our Standards: The Thomson Reuters Trust Principles.
1
In-Depth Guide: How Recommender Systems Work
2
Mattermost integrations: Requesting data with slash commands
(Originally published at controlaltdeliet.be)

In the first two installments in this series, you learned how to send alerts with incoming webhooks and request data with outgoing webhooks. In this article, you will learn how to set up a slash command. Slash commands are very similar to outgoing webhooks and even a little more powerful. To show their power in action, let's find out how to use slash commands to request the temperature of a specific refrigerator.

Follow these four steps to create a slash command:
1. Go to the Menu and choose Integrations.
2. At the following screen, choose Slash Commands.
3. Next, click the Add Slash Command button on the right-hand side of the screen.
4. Now it's time to set up the slash command. Here's how you can do that:

At this point, you'll get confirmation that the slash command has been created. You will also get a very important token that we will be needing in our next step. As we are using our slash command to retrieve the temperature from our fridges (see the first part of this series), we named our slash command Ice Ice Baby, described it as the coolest command, and configured it so it gets triggered with /vanilla_ice.

You made a slash command—congratulations! Now, we'll write some code that can listen to your requests. First things first: we will send encrypted HTTPS requests. It's a good habit. Next, let's make a self-signed certificate! Then comes the magical code itself (the original post shows a Python version and a NodeJS version; a rough Python sketch follows below).

The slash command sends the token you generated. Check if the token matches your generated token, otherwise anyone can activate your slash command. The variables that the sender adds to the slash command are sent in a variable named text and are whitespace separated. Important note: your response has to be JSON-formatted.

It's worth noting that your response isn't limited to just text; you have some options. If you want to use another username to appear as the sender, you can add the username variable. This setting is disabled by default and must be enabled in the Management Console. If you want to change the profile picture, add icon_url to the request. If you want to add an emoticon to the text, add its shortcode, like :tada:, in the text. If you are not familiar with Markdown layout, you can find an introduction in the Mattermost documentation and in this blog post. For example, a table can be returned like this:

'{ "text" : "| Left-Aligned | Center Aligned | Right Aligned |
| :------------ | :-------------: | ------------: |
| Left column 1 | this text | $100 |
| Left column 2 | is | $10 |
| Left column 3 | centered | $1 |" }'

If you want to send your requests to your internal network, you have to specify which hosts you want to send to. Mattermost can only send to the specified internal hosts or hostnames! You can adjust this setting by navigating to System Console -> Environment -> Developer. If you are using self-signed certificates like in our example, you need to allow outgoing HTTPS connections with self-signed unverified certificates. You can find this setting here: System Console -> Environment -> Web Server.

Is something not working as expected? Go to the System Console; under Environment you will find the Logging section. There, enable Webhook Debugging and set Log Level to Debug. You'll find all the relevant information in the logfiles (or in the console if you enabled logging to the console). My most common mistake is a typo in the request. It gives me an Unable to parse incoming data error.
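To make the above concrete, here is a minimal sketch of such a listener. It assumes Flask; the endpoint path, port, certificate file names, and the fridge lookup are illustrative placeholders, not taken from the original post. Only the token check, the whitespace-separated text variable, and the JSON response shape follow what the article describes.

# Minimal sketch of a Mattermost slash-command listener (Flask assumed).
# Endpoint path, port, certificate names, and fridge data are placeholders.
from flask import Flask, jsonify, request

app = Flask(__name__)

EXPECTED_TOKEN = "paste-the-token-mattermost-generated-here"

# Hypothetical data source standing in for the real fridge sensors.
FRIDGE_TEMPERATURES = {"kitchen": 4.2, "basement": 6.8}


@app.route("/vanilla_ice", methods=["POST"])
def vanilla_ice():
    # The slash command sends its payload as form data, including the token.
    token = request.form.get("token", "")
    if token != EXPECTED_TOKEN:
        # Reject anything that does not carry the token we generated,
        # otherwise anyone could trigger the command.
        return jsonify({"text": "Invalid token."}), 401

    # Everything typed after the trigger word arrives in "text",
    # whitespace separated; here we expect a fridge name.
    args = request.form.get("text", "").split()
    fridge = args[0] if args else "kitchen"
    temperature = FRIDGE_TEMPERATURES.get(fridge)

    if temperature is None:
        return jsonify({"text": f"Unknown fridge: {fridge}"})

    # The response must be JSON; username and icon_url are optional
    # (overriding the username has to be enabled server-side).
    return jsonify({
        "text": f"Fridge *{fridge}* is at {temperature} °C :tada:",
        "username": "Ice Ice Baby",
    })


if __name__ == "__main__":
    # Serve over HTTPS with the self-signed certificate created earlier;
    # the file names are placeholders.
    app.run(host="0.0.0.0", port=8443, ssl_context=("cert.pem", "key.pem"))

With something like this running, pointing the slash command's request URL at https://your-host:8443/vanilla_ice and typing /vanilla_ice kitchen in a channel should return the temperature message.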
For more information on how to use slash commands in Mattermost, check out the docs. You can also always ask for help on the Mattermost forum.
1
Trade-Offs and Triumphs 34
👋 Hello, friends! Thank you for subscribing! Thank you for reading! Welcome to issue 34 of Trade-offs and Triumphs  -  a newsletter of resources and thoughts about how to balance trade-offs in life to find and celebrate the small triumph; every decision point requires thinking through trade-offs and not just immediately aiming for the “solution.” How was your week? What were your trade-offs and triumphs? This week we will hit on: 👉 The Lessons of Amelia Bedelia: Concreteness vs Ambiguity in Communication 👉 Resources for the Week 👉 Closing Thoughts: The Power of a Conversation, Following Your Heart, A Visual of 2020, and Art Deco in Chicago Are you in an alternate universe or are they? Have you ever left a meeting thinking that everyone had agreed to the same objectives and action items, only to realize two weeks later that you were on different planets? That you interpreted the same words differently? This is what I call an “ Amelia Bedelia ” moment. Amelia Bedelia is a series of children’s books where the main character, Amelia Bedelia, is an irrepressible and lovable housekeeper, who tends to take her instructions literally. For example, when Amelia Bedelia receives instructions to “change the towels in the green bathroom,” she takes some scissors and snips “a little here and a little there” to change the towels. But another person may have interpreted this to mean that these original towels needed to be exchanged for newly laundered towels. https://images.app.goo.gl/KiEb3Mx77X6huKQx7 Whether meetings, customer service, or personal relationships, how do we improve communication? How do we reduce ambiguity, lost time, and frustration? In his weekly LinkedIn Newsletter, “ Leadership and Decision Making ,” Xinjin Zhao explains that it may take a “simple shift in language” to improve customer service, because how we use words (pitch, inflection, cadence, and tone) can shape attitudes and behaviors. Xinjin provides examples from Spanish, Korean, Hindi, Japanese, and Chinese languages where the vocabulary and phrases do not provide the entire context. Although direct translation may imply clarity, depending on context, body language, and inflection, you and your counterparty may be interpreting the next step very differently. As you read Xinjin’s essay, take a moment to reflect on when you experienced your “Amelia Bedelia” moments in the English language. And, then think about how to ensure that your intent is actually understand as the agreed upon intent, instead of assuming other parties understand. This applies to oral and written communication. And remember, if you are unsure, ASK. Don’t be afraid of asking questions - it is better to seem foolish than to waste everyone else’s time. I came across a posting from Professor  Jonah Berger  at Wharton that quantitatively demonstrated that a simple shift in language could help improve customer satisfaction. The paper suggests that linguistic concreteness—the tangibility, specificity, or imaginability of words employees use when speaking to customers—can shape consumer attitudes and behaviors. This reminds me of the implication of linguistic ambiguity, the opposite of concreteness, in the context of cultural agility. There was a story from BBC last year by a person who was visiting Mexico for the first time. When she asked a local ice-cream street vendor when he expected a new delivery of chocolate ice cream, she was told “ahorita”, which directly translates to “right now”. 
After waiting for half an hour with no sign of the delivery, she went back and asked again and got the same answer, or almost the same answer, “Ahoriiiiita” with an obvious expression of confusion from the vendor. It turns out that when a Mexican says ‘ahorita’, it can either refer to the present moment, or a vague reference to some point in the future, or never. The stretch in the ‘i’ sound in the word ‘ahorita’ can be a demonstration of the stretching of time. I have to say that sometimes there is something special of the beauty in the ambiguity. In a different example, Korean language has a unique and versatile phrase ‘우리 [uri],’ which means ‘we/our’ in English. Apparently it is not always as all-inclusive as the English ‘we’—that is, it might not include the listeners, and it might not even be plural, especially when they talk about their family or country. Scholars believe the language use style is a reflection of the Korean culture’s emphasis on the whole rather than the individual. I just learned from a colleague this week that “kal” in Hindi could mean either yesterday or tomorrow. According to an explanation I found on the internet, “kal” is the word derived from “KAAL”. Kaal is the time taken from the time of sunrise to sunrise in India. As such kal can be the time from today's sunrise to yesterday's or tomorrows sunrise. The exact meaning in a sentence would all be dependent on the context. While English language is very logical, Japanese language is highly contextual. This means that there is an emphasis on implicit, indirect and ambiguous communication. For example, “Yes” in Japanese, “hai,” is often ambiguous depending on the situation. It is used in much the same way an English speaker say “I see” without actually agreeing or disagreeing what you just said. It’s always good to double-check with someone if you think they have a question mark on their face. Likewise, there are many ambiguous expressions in Chinese which are prone to misinterpretation. For example, a phrase you hear often in business meetings is “Let’s talk about it later.” It could mean “let’s discuss later, possibly privately” but also could mean “I am hoping we’ll both forget and it never comes up again.” It’s up to you to figure out the true intent based on the context or the body language. With today’s global business environment, it is not only important to have the linguistic concreteness to ensure clarity, but more importantly have the awareness of different cultural context of the people you work with. This is especially true when you have team members who work around the world and many may have anxiety about communicating in English outside of the specific business or technical context, even if they seem to speak English fluently. Even in the same culture, people with different background or in different professions may interpret the same English word very differently. Going back to the customer satisfaction study, I wonder how the results would have looked in a different culture? Would every culture prefer the concreteness of the answers for customer service or would it be better off with some intentional ambiguity in certain culture? I would love to hear your experience. 1,700 Free Online Courses from Top Universities. Learn what you want, when you want. 
openculture.com/freeonlinecour… p lu.ma Film Your First Video Workshop - Zoom In a world where we’re all buried in emails and text...Videos are a fantastic tool to stand out, get noticed, and form a connection with your audience.With a video, you can virtually reach anyone everywhere, unlock opportunities for yourself, and build your personal brand (video is pretty much the… The best way to learn video is to make one. @cahouser and I are running a free 'Film Your First Video' Workshop on Tuesday.With 61 people registered so far 🚀You'll create a 1-2 minute video 🎬 with your smartphone (no other equipment needed) and our guidance 👇 p Your time is limited, so do not waste it living someone else’s life. Do not let the noise of others’ opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. —Steve Jobs But. Like. For real. Art: @fosslien WSJ: “In 2000, Ron Chez purchased a home overlooking Chicago’s Lincoln Park for $2.2 million and began an ambitious renovation of the U-shaped interior.” WSJ: “The large entryway has limestone flooring and Noir St. Laurent marble inlay. ‘The configuration of the house was a lot of small rooms on the third floor. There were like five bedrooms,’ said Mr. Chez. ‘I like bigger, open as much as possible.’ The renovation took between three and four years and cost roughly $6 million dollars.” Please leave me your questions or thoughts in the  Comments section below  👇👇👇 Follow me on Twitter  or DM me there, Follow me on YouTube ; or Email me at jennykimwop@gmail.com. If you are enjoying Trade-offs and Triumphs, I would love it if you shared it with a friend or two by forwarding it or sharing it with the button below! Be conscious of your trade-offs. Before settling on any one “solution,” run your fingers through all the trade-offs and decide intentionally and specifically. And then celebrate your triumphs, no matter how small.
2
Shein, a Chinese fast-fashion retailer, is a hit with US teens
The A.V. Club Deadspin Gizmodo Jalopnik Jezebel Kotaku Quartz The Root The Takeout The Onion The Inventory p THE KIDS KNOW US teens are flocking to a Chinese e-commerce site you’ve never heard of Founded in China, Shein has become popular with teens in the US. Image: Reuters/Dado Ruvic/Illustration By PublishedApril 8, 2021 We may earn a commission from links on this page. If you’re not familiar with Shein, an online fast-fashion retailer based in China, odds are you’re not a teenage girl. The company has attracted a young and growing fanbase with the constant deluge of trendy, inexpensive new clothes it releases online and through its app, as well as with its aggressive social-media marketing. It advertises heavily on platforms such as Facebook, maintains a network of influencers who promote it on TikTok and Instagram, and regularly reposts photos from customers to keep them sharing its clothes online. In the past year, celebrities such as Katy Perry and Lil Nas X have performed at events Shein streams to shoppers. Whatever tensions exist between the US and Chinese governments, American shoppers are not shying away from the retailer. Shein has twice ranked second only to Amazon as the favorite shopping site of upper-income US teens in a biannual survey of US teens by Piper Sandler, an investment firm. In the most recent installment, which included some 7,000 American teenagers, 7% of upper-income teens picked Shein as their favorite website for shopping. That’s well behind the 56% who chose Amazon. But Shein came in ahead of household names such as Nike and Urban Outfitters, and its share keeps growing. For the first time it also broke into the top-10 favorite clothing brands listed by teens. A Chinese company with global ambitions Shein claims to release hundreds to thousands of new items daily, leveraging China’s fast manufacturing to offer a huge variety of styles at low prices. It sells clothes like halter tops and cropped tees for less than $10 and floral-print dresses for under $30. It makes clothes for men and kids too, but women are its core audience. Its Instagram feed is almost exclusively women, frequently with bare midriffs. While customers sometimes complain online about late deliveries and poor quality, the issues haven’t scared shoppers away. Shein was founded in 2008 by Chris Xu, described as an American-born-Chinese graduate of Washington University in a 2013 press release , when Shein was still going by its original name, Sheinside. (It changed the name in 2015.) A recent profile of Shein (paywall) in trade outlet Business of Fashion noted he also goes by Sky Xu or Yangtian Xu. The privately owned company is notoriously opaque and provides little insight into its sales. But in a 2019 WeChat post , it said sales for the year reached 20 billion yuan (about $2.8 billion), the South China Morning Post reported . Last year, Chinese tech news site LatePost said Shein revealed in an internal meeting that sales had surpassed 40 billion yuan (link in Chinese). It also closed a funding round that valued it at more than $15 billion. Despite being based in China, Shein’s focus is outside the country. It “mainly targets Europe, America, Australia, and the Middle East along with other consumer markets,” according to its site . In October, Reuters reported that Shein was now the biggest online-only fashion company in the world measured by sales of self-branded products, citing data from market-research firm Euromonitor. 
The company’s lack of stores has perhaps been an advantage in the past year, when the pandemic shuttered shops around the world. While many fashion companies with large brick-and-mortar businesses suffered, sales at online retailers soared . Shein is among those that look set to capitalize on the pandemic’s supercharging of e-commerce, especially if US teens are any indication. 📬 Sign up for the Daily Brief Our free, fast, and fun briefing on the global economy, delivered every weekday morning.
3
Active learning made simple using Flash and BaaL
The CIFAR-10 dataset consists of 60k 32x32 color images in 10 classes, with approximately 6k images per class. The dataset is divided into 50k training images and 10k test images. (Example images: https://www.cs.toronto.edu/~kriz/cifar.html)

In this experiment, we aim to reproduce the results from the paper `Bayesian active learning for production, a systematic study and a reusable library` by Parmida Atighehchian, Frederic Branchaud-Charron, and Alexandre Lacoste. We won't request the data to be labeled by an annotator but will use the ground-truth labels of the training dataset instead, i.e., we mask the labels and un-mask them when the heuristic determines that the associated unlabelled sample should be labeled. This is a common trick used by Active Learning researchers to test out their ideas. In a real-world scenario, the data would be labeled by a human. The informativeness or uncertainty estimation in this experiment is done either randomly or using the BALD heuristic from BaaL. A heuristic is a method that derives the informativeness of a given unlabelled sample from the model predictions.

For this experiment, we create a CIFAR10 dataset using torchvision. After loading the data, we apply minimal augmentation to the training dataset, simply a random horizontal flip and a 30-degree rotation. Finally, we use ImageClassificationData to load the datasets we defined above.

In the paper, the head classifier is created as a sequence of linear layers. The final layer has a dimension of 10, equal to the number of classes in the CIFAR10 dataset. Using Lightning Flash, we can easily create an ImageClassifier with a pretrained VGG16 backbone and an SGD optimizer.

Lightning Flash provides components to utilize your data and model to experiment with Active Learning, and the best of all is that you get all of this with only a few lines of code. By using an ActiveLearningDataModule, you can wrap your existing data module and emulate the above active learning scenario. For this experiment, we start training with only 1024 labeled data samples out of the 50k and request 100 new samples to be labeled at every cycle. Finally, we create a Flash Trainer alongside an ActiveLearningLoop and use the loop to replace the base fit_loop of the Trainer. We perform 2500 cycles where the model is trained from scratch for 20 epochs each time. We choose to use the BALD and random heuristics; BALD provides a good trade-off between efficiency and performance. More heuristics are available within BaaL, and you can find more advanced benchmarks from the BaaL team in their papers.

Using the ImageClassifier with the BaaL BALD heuristic, we observe that it takes 3.3 times less data to achieve the same loss, which means the model predictions can be used to estimate the uncertainty of each sample and identify harder ones. Find the full example code here. To show how good active learning is, we shuffled λ% of the labels and ran the same experiment. Even with 10% of the labels corrupted, BALD is still stronger than random sampling with no noise! These kinds of experiments are incredibly easy to run using the Lightning Flash integration, as we only changed the dataset composition!
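As a rough illustration of the pieces described above (the wrapped data module, the BALD heuristic, and the active-learning loop replacing the trainer's fit loop), here is a condensed sketch. It assumes Flash's BaaL integration around version 0.5; the exact import paths, class names, and argument names may differ between releases, so treat it as a starting point rather than a verbatim recipe.

# Condensed sketch of the Flash + BaaL active-learning setup described above.
# Import paths and argument names are assumed from Flash's BaaL integration
# circa v0.5 and may differ in your installed version.
import flash
from flash.image import ImageClassificationData, ImageClassifier
from flash.image.classification.integrations.baal import (
    ActiveLearningDataModule,
    ActiveLearningLoop,
)
from baal.active.heuristics import BALD

# Regular Flash data module built from CIFAR10 image folders
# (the paths are placeholders).
datamodule = ImageClassificationData.from_folders(
    train_folder="data/cifar10/train",
    batch_size=64,
)

# Wrap it so that only a small labelled pool is exposed at the start and
# 100 new samples are queried for labelling at every cycle.
active_datamodule = ActiveLearningDataModule(
    datamodule,
    heuristic=BALD(),          # uncertainty estimation; swap for random sampling to get the baseline
    initial_num_labels=1024,   # argument names assumed, check your Flash version
    query_size=100,
)

# Backbone name assumed to be registered in Flash's backbone registry.
model = ImageClassifier(backbone="vgg16", num_classes=10, pretrained=True)

trainer = flash.Trainer(max_epochs=20)

# Replace the default fit loop with the active-learning loop, which retrains
# the model and queries new labels at each cycle.
active_learning_loop = ActiveLearningLoop(label_epoch_frequency=1)
active_learning_loop.connect(trainer.fit_loop)
trainer.fit_loop = active_learning_loop

trainer.fit(model, datamodule=active_datamodule)

Running the same sketch with a random heuristic instead of BALD() gives the uniform-sampling baseline that the comparison above refers to.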
In this tutorial, we used Lightning Flash Active learning integration using BaaL to run an experiment on CIFAR10. Using only ~50 lines of code, we created a complete experiment and we were able to demonstrate that BALD evaluation results in better results than uniform sampling. BaaL and Lightning Flash Team are working closely to provide seamless integration for more data modalities and tasks. Stay tuned! Built by the PyTorch Lightning creators, let us introduce you to Grid.ai. Our platform enables you to scale your model training without worrying about infrastructure, similarly as Lightning automates the training. You can get started with Grid.ai for free with just a GitHub or Google Account.
2
Hacking third-party APIs on the JVM
The JVM ecosystem is mature and offers plenty of libraries, so you don't need to reinvent the wheel. Basic - and not so basic - functionalities are just a dependency away. Sometimes, however, the dependency and your use case are slightly misaligned. The correct way to fix this would be to create a Pull Request. But your deadline is tomorrow: you need to make it work now! It's time to hack the provided API. In this post, we are going through some alternatives that allow you to make third-party APIs behave in a way that their designers didn't intend.

Reflection

Imagine that the API has been designed to follow the open-closed principle:

In object-oriented programming, the open–closed principle states "software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification"; that is, such an entity can allow its behaviour to be extended without modifying its source code.
— Open-closed principle

Imagine that the dependency's public API does not fit your use case. You need to extend it, but that's not possible because the design disallows it - on purpose. To cope with that, the oldest trick in the book on the JVM is probably reflection.

Reflection is a feature in the Java programming language. It allows an executing Java program to examine or "introspect" upon itself, and manipulate internal properties of the program. For example, it's possible for a Java class to obtain the names of all its members and display them.
— Using Java Reflection

In our scope, reflection allows you to access state that was not meant to be accessed, or call methods that were not meant to be called.

public class Private {

    private String attribute = "My private attribute";

    private String getAttribute() {
        return attribute;
    }
}

public class ReflectionTest {

    private Private priv;

    @BeforeEach
    protected void setUp() {
        priv = new Private();
    }

    @Test
    public void should_access_private_members() throws Exception {
        var clazz = priv.getClass();
        var field = clazz.getDeclaredField("attribute");                               (1)
        var method = clazz.getDeclaredMethod("getAttribute");                          (2)
        AccessibleObject.setAccessible(new AccessibleObject[]{ field, method }, true); (3)
        field.set(priv, "A private attribute whose value has been updated");           (4)
        var value = method.invoke(priv);                                               (5)
        assertThat(value).isEqualTo("A private attribute whose value has been updated");
    }
}

(1) Get a reference to a private field of the Private class
(2) Get a reference to a private method of the Private class
(3) Allow the use of private members
(4) Set the value of the private field
(5) Invoke the private method

Yet, reflection has some limitations:

- The "magic" happens with AccessibleObject.setAccessible. One can disallow this at runtime with an adequately configured Security Manager. I admit that during my career, I've never seen the Security Manager in use.
- The module system restricts the usage of the Reflection API. For example, both the caller and the target classes must be in the same module, the target member must be public, etc. Note that many libraries do not use the module system.

Reflection is good if you directly use the class that has private members. But it's no use if you need to change the behavior of a dependent class: if your class uses a third-party class A that itself requires a class B, and you need to change B.

Classpath shadowing

A long post could be dedicated solely to Java's class loading mechanism. For this post, we will narrow it down to the classpath. Let's start with the following architecture:
The classpath is an ordered list of folders and JARs that the JVM will look into to load previously unloaded classes. The simplest command to launch the application is the following:

java -cp .:thirdparty.jar Main

For whatever reason, imagine we need to change the behavior of class B. Its design doesn't allow for that. Regardless of this design, we could hack it anyway by:

1. Getting the source code of class B
2. Changing it according to our requirements
3. Compiling it
4. Putting the compiled class before the JAR that contains the original class on the classpath

When launching the same command as above, the class loading will occur in the following order: Main, B from the filesystem, and A from the JAR; B in the JAR will be skipped.

This approach also has some limitations:

- You need the source code of B - or at least a way to get it from the compiled code.
- You need to be able to compile B from source. That means you need to re-create all necessary dependencies of B.

Those are technical requirements. Whether it's legally possible is an entirely different concern and outside of the scope of this post.

Aspect-Oriented Programming

Contrary to C++, the Java language offers single inheritance: a class can inherit from a single superclass. In some cases, however, multiple inheritance is a must. For example, we would like to have logging methods for different log levels across the class hierarchy. Some languages adhere to the single inheritance principle but offer an alternative for cross-cutting concerns such as logging: Scala provides traits, while Java's and Kotlin's interfaces can have properties. "Back in the days", AOP was quite popular to add cross-cutting features to classes that were not part of the same hierarchy.

In computing, aspect-oriented programming (AOP) is a programming paradigm that aims to increase modularity by allowing the separation of cross-cutting concerns. It does so by adding additional behavior to existing code (an advice) without modifying the code itself, instead separately specifying which code is modified via a "pointcut" specification, such as "log all function calls when the function's name begins with 'set'". This allows behaviors that are not central to the business logic (such as logging) to be added to a program without cluttering the code core to the functionality. AOP forms a basis for aspect-oriented software development.
— Aspect-oriented programming

In Java, AspectJ is the AOP library of choice. It relies on the following core concepts:

- A join point defines a certain well-defined point in the execution of the program, e.g., the execution of methods
- A pointcut picks out specific join points in the program flow, e.g., the execution of any method annotated with @Loggable
- An advice brings together a pointcut (to pick out join points) and a body of code (to run at each of those join points)

Here are two classes: one represents the public API and delegates its implementation to the other.

public class Public {

    private final Private priv;

    public Public() {
        this.priv = new Private();
    }

    public String entryPoint() {
        return priv.implementation();
    }
}

final class Private {

    final String implementation() {
        return "Private internal implementation";
    }
}

Imagine we need to change the private implementation.

Hack.aj

public aspect Hack {

    pointcut privateImplementation():
        execution(String Private.implementation());    (1)

    String around(): privateImplementation() {          (2)
        return "Hacked private implementation!";
    }
}

(1) Pointcut that intercepts the execution of Private.implementation()
(2) Advice that wraps the above execution and replaces the original method body with its own

AspectJ offers different implementations:

- Compile-time: the bytecode is updated during the build
- Post-compile time: the bytecode is updated just after the build. It allows updating not only project classes but also dependent JARs.
- Load-time: the bytecode is updated at runtime when classes are loaded

You can set up the first option in Maven along these lines:

pom.xml

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.22.2</version>
</plugin>
<plugin>
    <groupId>com.nickwongdev</groupId>
    <artifactId>aspectj-maven-plugin</artifactId>
    <version>1.12.6</version>
    <configuration>
        <complianceLevel>${java.version}</complianceLevel>
        <source>${java.version}</source>
        <target>${java.version}</target>
        <encoding>${project.encoding}</encoding>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjrt</artifactId>
            <version>1.9.5</version>
        </dependency>
    </dependencies>
</plugin>

AOP in general and AspectJ in particular represent the nuclear option. They practically have no limits, though I must admit I didn't check how it works with Java modules. However, the official AspectJ Maven plugin from Codehaus handles the JDK only up to version 8 (included), as nobody has updated it since 2018. Somebody has forked the code on GitHub to handle later versions. The fork can handle the JDK up to version 13 and the AspectJ library up to 1.9.5.

Java agent

AOP offers a high-level abstraction when you want to hack. But if you want to change the code in a fine-grained way, there's no other way than to change the bytecode itself. Interestingly enough, the JVM provides us with a standard mechanism to change bytecode when a class is loaded. You've probably already encountered that feature in your career: they are called Java agents. Java agents can be set statically on the command line when you start the JVM or attached dynamically to an already running JVM afterward. For more information on Java agents, please check this post (section "Quick Introduction to Java Agents").

Here's the code of a simple Java agent:

public class Agent {

    public static void premain(                       (1)
            String args,                              (2)
            Instrumentation instrumentation) {        (3)
        var transformer = new HackTransformer();
        instrumentation.addTransformer(transformer);  (4)
    }
}

(1) premain is the entry-point for statically-set Java agents, just like main for regular applications
(2) We get arguments too, just like with main
(3) Instrumentation is the "magic" class
(4) Set a transformer that can change the bytecode before the JVM loads it

A Java agent works at the bytecode level. An agent provides you with the byte array that stores the definition of a class according to the JVM specification and, more precisely, to the class file format. Having to change bytes in a byte array is not fun. The good news is that others have had this requirement before. Hence, the ecosystem provides ready-to-use libraries that offer a higher-level abstraction. In the following snippet, the transformer uses Javassist:

public class HackTransformer implements ClassFileTransformer {

    @Override
    public byte[] transform(ClassLoader loader,
                            String name,
                            Class clazz,
                            ProtectionDomain domain,
                            byte[] bytes) {                                         (1)
        if ("ch/frankel/blog/agent/Private".equals(name)) {
            var pool = ClassPool.getDefault();                                      (2)
            try {
                var cc = pool.get("ch.frankel.blog.agent.Private");                 (3)
                var method = cc.getDeclaredMethod("implementation");                (4)
                method.setBody("{ return \"Agent-hacked implementation!\"; }");     (5)
                bytes = cc.toBytecode();                                            (6)
            } catch (NotFoundException | CannotCompileException | IOException e) {
                e.printStackTrace();
            }
        }
        return bytes;                                                               (7)
    }
}

(1) Byte array of the class
(2) Entry-point into the Javassist API
(3) Get the class from the pool
(4) Get the method from the class
(5) Replace the method body by setting a new one
(6) Replace the original byte array with the updated one
(7) Return the updated byte array for the JVM to load

Conclusion

In this post, we have listed four different methods to hack the behavior of third-party libraries: reflection, classpath shadowing, Aspect-Oriented Programming, and Java agents. With those, you should be able to solve any problem you encounter. Just remember that libraries - and the JVM - have been designed this way for a good reason: to prevent you from making mistakes. You can disregard those guard rails, but I'd suggest that you keep those hacks in place for the shortest period required and not one moment longer.

The complete source code for this post can be found online in Maven format.

To go further:
- Using Java Reflection
- Java Platform Module System: setAccessible is broken?
- Starting AspectJ
- Intro to AspectJ
12
Slack Is Down
Bring your team together At the heart of Slack are channels: organized spaces for everyone and everything you need for work. In channels, it’s easier to connect across departments, offices, time zones and even other companies. Learn more about channels
2
Using lightweight formal methods to validate a K/V storage node in Amazon S3
Intern - Economics US, WA, Bellevue We are looking for detail-oriented, organized, and responsible individuals who are eager to learn how to work with large and complicated data sets. Knowledge of econometrics, (Bayesian) time series, macroeconomic, as well as basic familiarity with Matlab, R, or Python is necessary, and experience with SQL would be a plus. These are full-time positions at 40 hours per week, with compensation being awarded on an hourly basis. You will learn how to build data sets and perform applied econometric analysis at Internet speed collaborating with economists, scientists, and product managers. These skills will translate well into writing applied chapters in your dissertation and provide you with work experience that may help you with placement. Roughly 85% of previous cohorts have converted to full time economics employment at Amazon. If you are interested, please send your CV to our mailing list at econ-internship@amazon.com. Senior Manager, Applied Science, swami Team US, WA, Seattle The AWS AI Labs team has a world-leading team of researchers and academics, and we are looking for world-class colleagues to join us and make the AI revolution happen. Our team of scientists have developed the algorithms and models that power AWS computer vision services such as Amazon Rekognition and Amazon Textract. As part of the team, we expect that you will develop innovative solutions to hard problems, and publish your findings at peer reviewed conferences and workshops. AWS is the world-leading provider of cloud services, has fostered the creation and growth of countless new businesses, and is a positive force for good. Our customers bring problems which will give Applied Scientists like you endless opportunities to see your research have a positive and immediate impact in the world. You will have the opportunity to partner with technology and business teams to solve real-world problems, have access to virtually endless data and computational resources, and to world-class engineers and developers that can help bring your ideas into the world. Our research themes include, but are not limited to: few-shot learning, transfer learning, unsupervised and semi-supervised methods, active learning and semi-automated data annotation, large scale image and video detection and recognition, face detection and recognition, OCR and scene text recognition, document understanding, 3D scene and layout understanding, and geometric computer vision. For this role, we are looking for scientist who have experience working in the intersection of vision and language. We are located in Seattle, Pasadena, Palo Alto (USA) and in Haifa and Tel Aviv (Israel). Data Scientist I, SDO Privacy - PDC team RO, Iasi Amazon’s mission is to be earth’s most customer-centric company and our team is the guardian of our customer’s privacy. Amazon SDO Privacy engineering operates in Austin – TX, US and Iasi, Bucharest – Romania. Our mission is to develop services which will enable every Amazon service operating with personal data to satisfy the privacy rights of Amazon customers. We are working backwards from the customers and world-wide privacy regulations, think long term, and propose solutions which will assure Amazon Privacy compliance. Our external customers are world-wide customers of Amazon Retail Website, Amazon B2B services (e.g. Seller central, App / Skill Developers), and Amazon Subsidiaries. 
Our internal customers are services within Amazon who operate with personal data, Legal Representatives, and Customer Service Agents. You can opt-in for being part of one of the existing or newly formed engineering teams who will contribute to Amazon mission to meet external customers’ privacy rights: Personal Data Classification, The Right to be forgotten, The right of access, or Digital Markets Act – The Right of Portability. The ideal candidate has a great passion for data and an insatiable desire to learn and innovate. A commitment to team work, hustle and strong communication skills (to both business and technical partners) are absolute requirements. Creating reliable, scalable, and high-performance products requires a sound understanding of the fundamentals of Computer Science and practical experience building large-scale distributed systems. Your solutions will apply to all of Amazon’s consumer and digital businesses including but not limited to Amazon.com, Alexa, Kindle, Amazon Go, Prime Video and more. Key job responsibilities As an data scientist on our team, you will apply the appropriate technologies and best practices to autonomously solve difficult problems. You'll contribute to the science solution design, run experiments, research new algorithms, and find new ways of optimizing customer experience. Besides theoretical analysis and innovation, you will work closely with talented engineers and ML scientists to put your algorithms and models into practice. You will collaborate with partner teams including engineering, PMs, data annotators, and other scientists to discuss data quality, policy, and model development. Your work will directly impact the trust customers place in Amazon Privacy, globally. Economist (JP), Japan Consumer Innovation JP, 13, Tokyo The JP Economics team is a central science team working across a variety of topics in the JP Retail business and beyond. We work closely with JP business leaders to drive change at Amazon. We focus on solving long-term, ambiguous and challenging problems, while providing advisory support to help solve short-term business pain points. Key topics include pricing, product selection, delivery speed, profitability, and customer experience. We tackle these issues by building novel economic/econometric models, machine learning systems, and high-impact experiments which we integrate into business, financial, and system-level decision making. Our work is highly collaborative and we regularly partner with JP- EU- and US-based interdisciplinary teams. In this role, you will build ground-breaking, state-of-the-art causal inference models to guide multi-billion-dollar investment decisions around the global Amazon marketplaces. You will own, execute, and expand a research roadmap that connects science, business, and engineering and contributes to Amazon's long term success. As one of the first economists outside North America/EU, you will make an outsized impact to our international marketplaces and pioneer in expanding Amazon’s economist community in Asia. The ideal candidate will be an experienced economist in empirical industrial organization, labour economics, econometrics, or related structural/reduced-form causal inference fields. You are a self-starter who enjoys ambiguity in a fast-paced and ever-changing environment. You think big on the next game-changing opportunity but also dive deep into every detail that matters. You insist on the highest standards and are consistent in delivering results. 
Key job responsibilities Work with Product, Finance, Data Science, and Data Engineering teams across the globe to deliver data-driven insights and products for regional and world-wide launches. Innovate on how Amazon can leverage data analytics to better serve our customers through selection and pricing. Contribute to building a strong data science community in Amazon Asia. Principal Applied Scientist, SPeXSci US, WA, Seattle Do you want to join an innovative team of scientists who use machine learning to help Amazon provide the best experience to our Selling Partners by automatically understanding and addressing their challenges, needs and opportunities? Do you want to build advanced algorithmic systems that are powered by state-of-art ML, such as Natural Language Processing, Large Language Models, Deep Learning, Computer Vision and Causal Modeling, to seamlessly engage with Sellers? Are you excited by the prospect of analyzing and modeling terabytes of data and creating cutting edge algorithms to solve real world problems? Do you like to build end-to-end business solutions and directly impact the profitability of the company and experience of our customers? Do you like to innovate and simplify? If yes, then you may be a great fit to join the Selling Partner Experience Science team. Key job responsibilities Use statistical and machine learning techniques to create the next generation of the tools that empower Amazon's Selling Partners to succeed. Design, develop and deploy highly innovative models to interact with Sellers and delight them with solutions. Work closely with teams of scientists and software engineers to drive real-time model implementations and deliver novel and highly impactful features. Establish scalable, efficient, automated processes for large scale data analyses, model development, model validation and model implementation. Research and implement novel machine learning and statistical approaches. Lead strategic initiatives to employ the most recent advances in ML in a fast-paced, experimental environment. Drive the vision and roadmap for how ML can continually improve Selling Partner experience. About the team Selling Partner Experience Science (SPeXSci) is a growing team of scientists, engineers and product leaders engaged in the research and development of the next generation of ML-driven technology to empower Amazon's Selling Partners to succeed. We draw from many science domains, from Natural Language Processing to Computer Vision to Optimization to Economics, to create solutions that seamlessly and automatically engage with Sellers, solve their problems, and help them grow. Focused on collaboration, innovation and strategic impact, we work closely with other science and technology teams, product and operations organizations, and with senior leadership, to transform the Selling Partner experience. Economist - long-term internship (10 months), Economic Decision Science GB, London Are you excited about applying economic models and methods using large data sets to solve real world business problems? Then join the Economic Decision Science (EDS) team. EDS is an economic science team based in the EU Stores business. The teams goal is to optimize and automate business decision making in the EU business and beyond. An internship at Amazon is an opportunity to work with leading economic researchers on influencing needle-moving business decisions using incomparable datasets and tools. 
It is an opportunity for PhD students and recent PhD graduates in Economics or related fields. We are looking for detail-oriented, organized, and responsible individuals who are eager to learn how to work with large and complicated data sets. Knowledge of econometrics, as well as basic familiarity with Stata, R, or Python is necessary. Experience with SQL would be a plus. As an Economics Intern, you will be working in a fast-paced, cross-disciplinary team of researchers who are pioneers in the field. You will take on complex problems, and work on solutions that either leverage existing academic and industrial research, or utilize your own out-of-the-box pragmatic thinking. In addition to coming up with novel solutions and prototypes, you may even need to deliver these to production in customer facing products. Roughly 85% of previous intern cohorts have converted to full time economics employment at Amazon. Applied Scientist, Automated Reasoning Group for Healthcare and Payments Security US, CA, Cupertino We're looking for an Applied Scientist to help us secure Amazon's most critical data. In this role, you'll work closely with internal security teams to design and build AR-powered systems that protect our customers' data. You will build on top of existing formal verification tools developed by AWS and develop new methods to apply those tools at scale. You will need to be innovative, entrepreneurial, and adaptable. We move fast, experiment, iterate and then scale quickly, thoughtfully balancing speed and quality. Inclusive Team Culture Here at AWS, we embrace our differences. We are committed to furthering our culture of inclusion. We have ten employee-led affinity groups, reaching 40,000 employees in over 190 chapters globally. We have innovative benefit offerings, and host annual and ongoing learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences. Work/Life Balance Our team puts a high value on work-life balance. It isn’t about how many hours you spend at home or at work; it’s about the flow you establish that brings energy to both parts of your life. We believe striking the right balance between your personal and professional life is critical to life-long happiness and fulfillment. We offer flexibility in working hours and encourage you to find your own balance between your work and personal lives. Mentorship & Career Growth Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge sharing and mentorship. Our senior members enjoy one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects based on what will help each team member develop into a better-rounded engineer and enable them to take on more complex tasks in the future. Key job responsibilities Deeply understand AR techniques for analyzing programs and other systems, and keep up with emerging ideas from the research community. Engage with our customers to develop understanding of their needs. Propose and develop solutions that leverage symbolic reasoning services and concepts from programming languages, theorem proving, formal verification and constraint solving. Implement these solutions as services and work with others to deploy them at scale across Payments and Healthcare. Author papers and present your work internally and externally. 
Train new teammates, mentor others, participate in recruiting and interviewing, and participate in our tactical and strategic planning. About the team Our small team of applied scientists works within a larger security group, supporting thousands of engineers who are developing Amazon's payments and healthcare services. Security is a rich area for automated reasoning. Most other approaches are quite ad-hoc and take a lot of human effort. AR can help us to reason deliberately and systematically, and the dream of provable security is incredibly compelling. We are working to make this happen at scale. We partner closely with our larger security group and with other automated reasoning teams in AWS that develop core reasoning services. Senior Manager, Applied Science, Sponsored Products US, NY, New York Search Thematic Ad Experience (STAX) team within Sponsored Products is looking for a leader to lead a team of talented applied scientists working on cutting-edge science to innovate on ad experiences for Amazon shoppers!. You will manage a team of scientists, engineers, and PMs to innovate new widgets on Amazon Search page to improve shopper experience using state-of-the-art NLP and computer vision models. You will be leading some industry first experiences that has the potential to revolutionize how shopping looks and feels like on Amazon, and e-commerce marketplaces in general. You will have the opportunity to design the vision on how ad experiences look on Amazon search page, and use the combination of advanced techniques and continuous experimentation to realize this vision. Your work will be core to Amazon’s advertising business. You will be a significant contributor in building the future of sponsored advertising, directly impacting the shopper experience for our hundreds of millions of shoppers worldwide, while delivering significant value for hundreds of thousands of advertisers across the purchase journey with ads on Amazon. Key job responsibilities * Be the technical leader in Machine Learning; lead efforts within the team, and collaborate and influence across the organization. * Be a critic, visionary, and execution leader. Invent and test new product ideas that are powered by science that addresses key product gaps or shopper needs. * Set, plan, and execute on a roadmap that strikes the optimal balance between short term delivery and long term exploration. You will influence what we invest in today and tomorrow. * Evangelize the team’s science innovation within the organization, company, and in key conferences (internal and external). * Be ruthless with prioritization. You will be managing a team which is highly sought after. But not all can be done. Have a deep understanding of the tradeoffs involved and be fierce in prioritizing. * Bring clarity, direction, and guidance to help teams navigate through unsolved problems with the goal to elevate the shopper experience. We work on ambiguous problems and the right approach is often unknown. You will bring your rich experience to help guide the team through these ambiguities, while working with product and engineering in crisply defining the science scope and opportunities. * Have strong product and business acumen to drive both shopper improvements and business outcomes. 
A day in the life * Lead a multidisciplinary team that embodies “customer obsessed science”: inventing brand new approaches to solve Amazon’s unique problems, and using those inventions in software that affects hundreds of millions of customers * Dive deep into our metrics, ongoing experiments to understand how and why they are benefitting our shoppers (or not) * Design, prototype and validate new widgets, techniques, and ideas. Take end-to-end ownership of moving from prototype to final implementation. * Be an advocate and expert for STAX science to leaders and stakeholders inside and outside advertising. About the team We are the Search thematic ads experience team within Sponsored products - a fast growing team of customer-obsessed engineers, technologists, product leaders, and scientists. We are focused on continuous exploration of contexts and creatives to drive value for both our customers and advertisers, through continuous innovation. We focus on new ads experiences globally to help shoppers make the most informed purchase decision while helping shortcut the time to discovery that shoppers are highly likely to engage with. We also harvest rich contextual and behavioral signals that are used to optimize our backend models to continually improve the shopper experience. We obsess about our customers and are continuously seeking opportunities to delight them. Sr Principal Scientist, Search, Search Science US, CA, Palo Alto Amazon is the 4th most popular site in the US. Our product search engine, one of the most heavily used services in the world, indexes billions of products and serves hundreds of millions of customers world-wide. We are working on a new initiative to transform our search engine into a shopping engine that assists customers with their shopping missions. We look at all aspects of search CX, query understanding, Ranking, Indexing and ask how we can make big step improvements by applying advanced Machine Learning (ML) and Deep Learning (DL) techniques. We’re seeking a thought leader to direct science initiatives for the Search Relevance and Ranking at Amazon. This person will also be a deep learning practitioner/thinker and guide the research in these three areas. They’ll also have the ability to drive cutting edge, product oriented research and should have a notable publication record. This intellectual thought leader will help enhance the science in addition to developing the thinking of our team. This leader will direct and shape the science philosophy, planning and strategy for the team, as we explore multi-modal, multi lingual search through the use of deep learning . We’re seeking an individual that can enhance the science thinking of our team: The org is made of 60+ applied scientists, (2 Principal scientists and 5 Senior ASMs). This person will lead and shape the science philosophy, planning and strategy for the team, as we push into Deep Learning to solve problems like cold start, discovery and personalization in the Search domain. Joining this team, you’ll experience the benefits of working in a dynamic, entrepreneurial environment, while leveraging the resources of Amazon [Earth's most customer-centric internet company]. We provide a highly customer-centric, team-oriented environment in our offices located in Palo Alto, California. Applied Science Manager, Japan Retail Science JP, 13, Tokyo Our mission is to help every vendor drive the most significant impact selling on Amazon. 
Our team invent, test and launch some of the most innovative services, technology, processes for our global vendors. Our new AVS Professional Services (ProServe) team will go deep with our largest and most sophisticated vendor customers, combining elite client-service skills with cutting edge applied science techniques, backed up by Amazon’s 20+ years of experience in Japan. We start from the customer’s problem and work backwards to apply distinctive results that “only Amazon” can deliver. Amazon is looking for a talented and passionate Applied Science Manager to manage our growing team of Applied Scientists and Business Intelligence Engineers to build world class statistical and machine learning models to be delivered directly to our largest vendors, and working closely with the vendors' senior leaders. The Applied Science Manager will set the strategy for the services to invent, collaborating with the AVS business consultants team to determine customer needs and translating them to a science and development roadmap, and finally coordinating its execution through the team. In this position, you will be part of a larger team touching all areas of science-based development in the vendor domain, not limited to Japan-only products, but collaborating with worldwide science and business leaders. Our current projects touch on the areas of causal inference, reinforcement learning, representation learning, anomaly detection, NLP and forecasting. As the AVS ProServe Applied Science Manager, you will be empowered to expand your scope of influence, and use ProServe as an incubator for products that can be expanded to all Amazon vendors, worldwide. We place strong emphasis on talent growth. As the Applied Science Manager, you will be expected in actively growing future Amazon science leaders, and providing mentoring inside and outside of your team. Key job responsibilities The Applied Science Manager is accountable for: (1) Creating a vision, a strategy, and a roadmap tackling the most challenging business questions from our leading vendors, assess quantitatively their feasibility and entitlement, and scale their scope beyond the ProServe team. (2) Coordinate execution of the roadmap, through direct reports, consisting of scientists and business intelligence engineers. (3) Grow and manage a technical team, actively mentoring, developing, and promoting team members. (4) Work closely with other science managers, program/product managers, and business leadership worldwide to scope new areas of growth, creating new partnerships, and proposing new business initiatives. (5) Act as a technical supervisor, able to assess scientific direction, technical design documents, and steer development efforts to maximize project delivery.
2
How to Translate a Web or Mobile Application: 9 Best Practices
After receiving so much feedback and inquiries on our last post about website translation, we came up with this follow up. Here are some of the best practices we’ve used to achieve excellent results for our clients needing web or mobile application translation and localization. Are you planning to expand your startup abroad? Wondering how you can translate your web or mobile application and reach a wider audience? You’re in the right place because we’re going to share some of the best practices for localization in this article. Translating your app allows you to enter a new market, improve brand awareness, and increase revenue. Going global unlocks huge potential; international customers prefer to use their native language online. To have greater success, you need to provide your apps in their language. Web and mobile app translation and localization go beyond just content adaptation. Your goal should be to create a custom made application suitable for your target market. Here are nine best practices of translating a web or mobile application. One of the first steps in app localization is to identify your target market and language. Analyze your existing users to find if there’s any untapped potential. Before you start expanding internationally, ask yourself if there’s a need for your app in the market you’re planning to target. Ensure your app is localization-friendly beforehand. Separate the source code from the actual content that’s going to be translated later. Prior app internationalization will save you a lot of time and money throughout the project. When adapting content from one language to another, it’s essential to ensure a seamless user experience. That’s why you should hire a professional translation company to translate, optimize, and localize your app. Professional translation agencies perform app localization, which includes formatting, editing, and cultural adaptation. Some languages change word length or writing direction when translated; with professionals’ help, you won’t need to worry about these alterations. Use a Translator that works with the file format you provide Professional translators can work with many file formats and make sure they can work with the files as you provide them. This way, you can integrate their work directly into your app and revision control system, without further work. Also, this will streamline the process and reduce involuntary human errors. For instance, at Language Buró, we will deliver the translated copy in the same format as the original. Isolate all text and messages to translate in resource bundles; these are special files in your application that contain the original text to be translated. All popular web and app development frameworks provide libraries to achieve this from the get-go. For example, if developing for Apple devices, the first step to localize your app is to export the development language or base localization for translation. For Android, you should start creating the locale directories and resource files. Be sure to provide instructions on the application interface so that the translation agency can understand your message’s context. Take your time to help translators understand what your strings should achieve. Give them access to the UI, provide screenshots, and share your notes. Having context will allow your professional partner to translate the app clearly and accurately. Some languages require more space and characters for the text, while some even change text direction. 
That's why app localization starts with development and design. Your interface should be localization-friendly from the visual side as well. Also, if any labels must not exceed a particular length, you should inform the translators in advance so they can work around the limitation. If your app uses company-specific terminology, be sure to provide a glossary for the translators. A glossary will help keep the text consistent throughout the application and ensure a better user experience. You will face the need to introduce additional logic to cover the grammatical differences of the target languages. Make sure this is correctly implemented in your application, and provide instructions and guidelines to your translation partner to apply those rules in the translated version. If you want the project to run smoothly, you should get rid of text on images. You will save a lot of time and trouble! Instead, use filler images and overlay the text programmatically. Avoiding images with text in another language shows users that you built the app for them. Although translating and localizing a web or mobile application can be challenging, with a professional translation company you'll be able to offer a custom-made experience to audiences around the globe. At Language Buró, web and mobile application translation services go beyond just translating. We provide localization and content optimization to ensure a seamless user experience transition from one language to the other. We're eager to get your project started; please book a free consultation with us here.
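To make the resource-bundle isolation described earlier concrete, here is a minimal sketch of how it might look on a Java-based stack. The bundle name, key, and locale are illustrative; the translatable strings themselves would live in files such as messages.properties and messages_es.properties on the classpath:

import java.util.Locale;
import java.util.ResourceBundle;

public class Greeter {

    public static void main(String[] args) {
        // Resolves messages_es.properties, falling back to messages.properties
        ResourceBundle bundle = ResourceBundle.getBundle("messages", Locale.forLanguageTag("es"));
        // The UI only ever references keys; translators work on the .properties files
        System.out.println(bundle.getString("welcome.title"));
    }
}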
2
Fight for your Right to Fileshare (2004) [video]
Rasmus Fleischer and Volker Grassmuck

Current copyright law has only one answer to p2p filesharers: sue them. A much better model, not only for users but for authors as well, is to permit what can't be prevented anyway, and in turn collect a flat levy on Internet access. Thus, the Content Flatrate achieves compensation for creators without control of users.
1
A Guide to Skills Measured for the Azure Fundamentals Exam (AZ-900)
Taking the AZ-900: Microsoft Azure Fundamentals Certification Exam is one of the great first steps in becoming proficient in the Azure Cloud. Those who take this exam will need a foundational understanding of the cloud in order to pass it. Who takes the AZ-900? Well, let's be honest, there's no "one type" of person that takes this exam because our world has more than one type of person, period. People in all different roles can take this exam in order to really show what they have been learning. For example, if Terri is currently working as a JavaScript Developer but wants to go "full stack," she may want to consider ways to improve her knowledge of how her code may be deployed into the cloud. Ray may be working as part of a large Enterprise IT team that has decided that the cloud is the next step for their infrastructure. By educating himself on the information in the AZ-900 exam, he can be better prepared to begin the process of migrating the company from a datacenter to the cloud. Certifications aren't the end-all and be-all of your skills, but they provide your current or potential employers a way to verify you have a certain baseline of skills that will allow you to execute your job. Like any exam, you'll need to prepare as well. Whether it's books, practice tests, or self-guided education, it's going to be important to be ready for your test day! This blog post will take a look at the specific skills measured in the AZ-900 exam and show you how to start training for all of these subjects for free on Microsoft Learn. The Azure Fundamentals Exam (AZ-900) skills measured guide provides you with a list of the skills measured. Let's take a look at them and discuss some important things you'll want to consider. As I mention the content below, I will link to key sections of the Azure Fundamentals Learning Path on Microsoft Learn. Describe Cloud Concepts (15-20% of your grade) This is the section where you're going to start defining the different benefits and considerations for using the cloud. You'll want to fully understand what cloud computing is, economies of scale, capex vs opex, and more. By following along with the Cloud Concepts - Principles of cloud computing module, you'll learn why companies have trusted Azure to build and secure applications for their customers. This section will also want you to understand and define the following key concepts: As we close this first section, we'll note that this is highly conceptual. We simply need to begin understanding the basics before we can really dive in. Describe Core Azure Services (30-35% of your grade) This next section takes that next step in assuring you understand the components that make up the Azure Cloud. What do we do with those big IaaS and PaaS concepts we learned in the last module? How will we ensure the applications we build will remain durable in any situation? We'll need to determine what different core services Azure has and how they can be implemented. If you want to build reliably with Azure, you're going to need to know how it's built from the ground up. Being prepared to explain what components like geographies, regions, availability zones, and Azure Resource Manager do, and how they work with your application, is a critical portion of this section. Here you'll learn what compute services you can run on Azure such as Containers, Virtual Machines, Azure SQL, disk storage and much more.
You'll also get an idea of some other solutions you can integrate into your applications, such as IoT with Azure IoT Hub, or big data analytics. You will also begin your first steps into Azure DevOps to start automating your deploys. You can learn how Azure App Service can integrate with Azure DevOps so you'll always trigger a new build and deployment of your application whenever new changes are pushed to your code repository. You'll also need to understand how to begin working with Azure with the various management tools provided. You'll learn about using PowerShell, Azure CLI, ARM Templates, and the Cloud Shell, all tools that let you get things done with Azure. This section will want you to understand and define the following: Don't forget this section contains one major part: how to sign up for Azure! You'll get $200 in credit and 12 months of free services for signing up! Describe Security, Privacy, Compliance, and Trust (25-30%) Here's where we talk about the most important thing for your customers: security. No one wants their data made public, no company wants their customers' privacy leaked, and no one wants their applications vulnerable to attack. This section of the AZ-900 exam helps you prove your knowledge around the services and methods used on Azure to secure applications and infrastructure. You'll also learn about the encryption methods, identity, and certificates that make up Azure. This section will go over virtual network concepts such as Network Security Groups (NSG), Application Security Groups (ASG), Firewalls and Azure DDoS protection. You'll learn how to choose an appropriate security solution and even learn about User-Defined Routes for Azure. You'll understand the shared responsibility model and how it impacts your infrastructure decisions when using Azure. We'll also understand concepts of governance, monitoring, and regulatory compliance within Azure. The skills measured on this portion of the exam are: We're one step closer to the final section of our exam skills list. So close to being AZ-900 champions! Describe Azure Pricing, Service Level Agreements, and Lifecycles (20-25%) This is where we start really concerning ourselves with how our money is spent, how we protect ourselves if there's an Azure failure, and the lifecycle of a service on Azure. Here's the section where we'll begin discussing how subscriptions are used on Azure to set up your billing for each part of your business. We'll also begin looking at what your new free account can do! Money Money Money... You're going to want to save money! You'll need to get a full understanding of the TCO Calculator, best practices on cost savings, and the Azure Cost Management service. This section will want you to understand and define the following: This all shows you understand not only how to deploy on Azure, but how you get the most out of your money. You've got a great list of information and links about the skills you should have prepared before taking the AZ-900 Exam. With the Azure Fundamentals Learning Path on Microsoft Learn, you'll be well on your way to getting that AZ-900 badge on your LinkedIn account. I hope you enjoyed reading this, and remember that you can learn even more Azure Fundamentals on AzureFunBytes every Thursday at 2PM NYC Eastern Time. AzureFunBytes! - Byte-sized content with a live Twitch show! Learn about Azure fundamentals with me! Live stream is available on Twitch at 2 pm EDT Thursday. You can also find the recordings there as well.
https://twitch.tv/azurefunbytes https://twitter.com/azurefunbytes Join me, ask questions, and learn about Azure! Microsoft Learn: Azure Fundamentals Microsoft Azure: $200 Free Credit
1
Simple Rsync Wrapper
Requirements

Butterfly Backup is a simple wrapper of rsync written in Python; this means the first requirement is python3.3 or higher (the cryptography module is required for the init action). The other requirements are OpenSSH and rsync (version 2.5 or higher). Ok, let's go!

Test

If you want to try or test Butterfly Backup before installing it, run the test:

arthur@heartofgold$ git clone https://github.com/MatteoGuadrini/Butterfly-Backup.git
arthur@heartofgold$ cd Butterfly-Backup
arthur@heartofgold$ bash test_bb.py

Installation

Installing Butterfly Backup is very simple; run this:

arthur@heartofgold$ git clone https://github.com/MatteoGuadrini/Butterfly-Backup.git
arthur@heartofgold$ cd Butterfly-Backup
arthur@heartofgold$ sudo python3 setup.py
arthur@heartofgold$ bb --help
arthur@heartofgold$ man bb

The upgrade is also simple; type the same commands.

Core concept

Butterfly Backup is a server-to-client tool. It must be installed on a server (for example a workstation PC), from where it will contact the clients on which it will perform the backup. The concept is the same for restore: the server pushes files to the client. Backups are organized according to precise cataloguing; this is an example:

arthur@heartofgold$ tree destination/of/backup
.
├── destination
│   ├── hostname or ip of the PC under backup
│   │   ├── timestamp folder
│   │   │   ├── backup folders
│   │   │   ├── backup.log
│   │   │   └── restore.log
│   │   ├── general.log
│   │   └── symlink of last backup
│   └── export.log
├── backup.list
└── .catalog.cfg

This is how the operating system sees the backups on the file system:

destination: the destination of all backups of all machines under the Butterfly Backup system; this is the root.
hostname or ip of the PC under backup: the root of every machine under backup; contains all the backups.
timestamp folder: a single backup folder; contains all the data that have been backed up.
backup.log: if enabled, the log generated by rsync for the backup operation.
restore.log: if enabled, the log generated by rsync for the restore operation.
general.log: if enabled, the general log of the machine; contains all the info, warnings and errors generated during the backup.
symlink of last backup: a symbolic link to the last backup made. It isn't supported if you use an MS-DOS file system.
backup.list: if enabled, the log generated by the list operation.
export.log: if enabled, the log generated by the export operation.
.catalog.cfg: the file that contains the catalog of all backups. It is the reference Butterfly Backup uses to perform future backups.

Important Be careful not to delete the .catalog.cfg file. If you want to initialize the backup catalog, just run this command: bb config --init /catalog/path .

Butterfly Backup has, at its core, six main operations:

arthur@heartofgold$ bb --help
usage: bb [-h] [--verbose] [--log] [--dry-run] [--version] {config,backup,restore,archive,list,export} ...

Butterfly Backup

optional arguments:
-h, --help show this help message and exit
--verbose, -v Enable verbosity
--log, -l Create a log
--dry-run, -N Dry run mode
--version, -V Print version

action: Valid action {config,backup,restore,archive,list,export}

Available actions
config Configuration options
backup Backup options
restore Restore options
archive Archive options
list List options
export Export options

Valid action

config: operations involving OpenSSH and its configurations (not mandatory).
backup: transactions that call rsync, in a single process or across multiple processes in case of a list. restore: transactions that call rsync for pushing files to a client. archive: transaction that zip old backups and move them to a new destination. list: query the catalog. export: export a single backup to other path. It also has three flags that can be very useful, especially in case of error. -h, --help show help message and exit --verbose, -v Enable verbosity --log, -l Create a log --dry-run, -N Dry run mode, test your command --version, -V Print version…and more Config Configuration mode is very simple; If you’re already familiar with the exchange keys and OpenSSH, you probably won’t need it. If you don’t want to discuss the merits of exchange keys and start, then go ahead. First, you must create a configuration (rsa keys). Let’s see how to go about looking at the help: arthur@heartofgold$ bb config --help usage: bb config [-h] [--verbose] [--log] [--dry-run] [--new | --remove | --init CATALOG | --delete-host CATALOG HOST | --clean CATALOG] [--deploy DEPLOY_HOST] [--user DEPLOY_USER] optional arguments: -h, --help show this help message and exit --verbose, -v Enable verbosity --log, -l Create a log --dry-run, -N Dry run mode Init configuration: --new, -n Generate new configuration --remove, -r Remove exist configuration --init CATALOG, -i CATALOG Reset catalog file. Specify path of backup folder. --delete-host CATALOG HOST, -D CATALOG HOST Delete all entry for a single HOST in catalog. --clean CATALOG, -c CATALOG Cleans the catalog if it is corrupt, setting default values. Deploy configuration: --deploy DEPLOY_HOST, -d DEPLOY_HOST Deploy configuration to client: hostname or ip address --user DEPLOY_USER, -u DEPLOY_USER User of the remote machine Two macro-options are available: Init configuration: Generate new or remove configuration. --new, -n Generate new configuration. --remove, -r Remove exist configuration. --init, -i Reset catalog file. Specify path of backup folder. --delete-host, -D Delete all entry for a single HOST in catalog. --clean, -c Cleans the catalog if it is corrupt, setting default values. Deploy configuration: Deploy configuration to client: hostname or ip address. --deploy, -d Deploy configuration to client: hostname or ip address. --user, -u User of the remote machine. At this point, we create the configuration: arthur@heartofgold$ bb config --new WARNING: Private key ~/.ssh/id_rsa exists If you want to use the existing key, run "bb config --deploy name_of_machine", otherwise to remove it, run "bb config --remove" In this case, the RSA key already exists. Now try delete and create a new keys: arthur@heartofgold$ bb config --remove Are you sure to remove existing rsa keys? To continue [Y/N]? y SUCCESS: Removed configuration successfully! arthur@heartofgold$ bb config --new SUCCESS: New configuration successfully created! Once you have created the configuration, keys should be installed (copied) on the hosts you want to backup. 
arthur@heartofgold$ bb config --deploy host1
Copying configuration to host1; write the password:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/arthur/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
arthur@host1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'arthur@host1'" and check to make sure that only the key(s) you wanted were added.

SUCCESS: Configuration copied successfully on host1!

This command will try to copy the configuration with the current user. If you want to use a different user (e.g. root), run this:

arthur@heartofgold$ bb config --deploy host1 --user root
Copying configuration to host1; write the password:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/arthur/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@host1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@host1'" and check to make sure that only the key(s) you wanted were added.

SUCCESS: Configuration copied successfully on host1!

Cygwin on Windows

In order to run Butterfly Backup on Windows hosts, you need to install cygwin. Download cygwin from https://cygwin.com/ and follow these instructions to install the necessary packages.

:: INSTALL SOFTWARE
:: Automates cygwin installation

SETLOCAL

:: Change to the directory of the executing batch file
CD %~dp0

:: Configure our paths
SET SITE=http://mirrors.kernel.org/sourceware/cygwin/
SET LOCALDIR=%LOCALAPPDATA%/cygwin
SET ROOTDIR=C:/cygwin

:: These are the packages we will install (in addition to the default packages)
SET PACKAGES=mintty,wget,ctags,diffutils,openssh,rsync,cygrunsrv,nano
SET PACKAGES=%PACKAGES%,gcc-core,make,automake,autoconf,readline,libncursesw-devel,libiconv,zlib-devel,gettext

:: Do it!
setup.exe -q -D -L -d -g -o -s %SITE% -l "%LOCALDIR%" -R "%ROOTDIR%" -C Base -P %PACKAGES% >nul
setx PATH "%PATH%;/cygdrive/c/cygwin/bin;C:\cygwin\bin" /m

ENDLOCAL

C:
chdir C:\cygwin\bin
mkpasswd -cl > C:\cygwin\etc\passwd
bash ssh-host-config -y
cygrunsrv -S sshd

Verify that port 22 is in LISTEN state.

If you want to initialize or reset the catalog file, run this:

arthur@heartofgold$ bb config --init /mnt/backup -v

Important This command preserves existing backups on the file system. It will eliminate only those that have been archived or deleted.

Backup

There are two backup modes: single and bulk. Let's see how to go about looking at the help:

arthur@heartofgold$ bb backup --help
usage: bb backup [-h] [--verbose] [--log] [--dry-run] (--computer HOSTNAME | --list LIST) --destination DESTINATION [--mode {Full,Incremental,Differential,Mirror}] (--data {User,Config,Application,System,Log} [{User,Config,Application,System,Log} ...]
| --custom-data CUSTOMDATA [CUSTOMDATA ...]) [--user USER] --type {Unix,Windows,MacOS} [--compress] [--retention [DAYS [NUMBER ...]]] [--parallel PARALLEL] [--timeout TIMEOUT] [--skip-error] [--rsync-path RSYNC] [--bwlimit BWLIMIT] [--ssh-port PORT] [--exclude EXCLUDE [EXCLUDE ...]] [--start-from ID] optional arguments: -h, --help show this help message and exit --verbose, -v Enable verbosity --log, -l Create a log --dry-run, -N Dry run mode Backup options: --computer HOSTNAME, -c HOSTNAME Hostname or ip address to backup --list LIST, -L LIST File list of computers or ip addresses to backup --destination DESTINATION, -d DESTINATION Destination path --mode {Full,Incremental,Differential,Mirror}, -m {Full,Incremental,Differential,Mirror} Backup mode --data {User,Config,Application,System,Log} [{User,Config,Application,System,Log} ...], -D {User,Config,Application,System,Log} [{User,Config,Application,System,Log} ...] Data of which you want to backup --custom-data CUSTOMDATA [CUSTOMDATA ...], -C CUSTOMDATA [CUSTOMDATA ...] Custom path of which you want to backup --user USER, -u USER Login name used to log into the remote host (being backed up) --type {Unix,Windows,MacOS}, -t {Unix,Windows,MacOS} Type of operating system to backup --compress, -z Compress data --retention [DAYS [NUMBER ...]], -r [DAYS [NUMBER ...]] First argument are days of backup retention. Second argument is minimum number of backup retention --parallel PARALLEL, -p PARALLEL Number of parallel jobs --timeout TIMEOUT, -T TIMEOUT I/O timeout in seconds --skip-error, -e Skip error --rsync-path RSYNC, -R RSYNC Custom rsync path --bwlimit BWLIMIT, -b BWLIMIT Bandwidth limit in KBPS. --ssh-port PORT, -P PORT Custom ssh port. --exclude EXCLUDE [EXCLUDE ...], -E EXCLUDE [EXCLUDE ...] Exclude pattern --start-from ID, -s ID Backup id where start a new backup Backup options --computer, -c Select the ip or hostname where to perform backup. --list, -l Select the file list of the ip or hostnames, where to perform backups. [File_list.txt] host1 192.168.0.1 host2 … --destination, -d Select the destination folder (root). --mode, -m Select how rsync perform backup: Full: Complete (full) backup. Incremental: Incremental backup is based on the latest backup (the same files are linked with hard link). A Full backup is executed if this mode fails to find one. Differential: Incremental backup is based on the latest Full backup (the same files are linked with hard link). A Full backup is executed if this mode fails to find one. Mirror: Complete mirror backup. If a file in the source no longer exists, BB deletes it from the destination. --data, -D Select the type of data to put under backup: The values change depending on the type of operating system: User -> folder containing the home. Config -> folder containing the configurations of the machine. Log -> folder containing the log. Application -> folder containing applications. System -> the entire system starting from the root. --custom-data Select the absolute paths to put under backup. --user, -u Login name used to log into the remote host (being backed up) --type, -t Type of operating system to put under backup: Unix -> All UNIX os (Linux, BSD, Solaris). Windows -> Windows Vista or higher with cygwin installed. MacOS -> MacOSX 10.8 or higher. --compress, -z Compresses the data transmitted. --retention, -r Number of days for which you want to keep your backups and minimum number of backup retention (optional). The second number is a minimum number of backup which you want keep. 
--parallel, -p Maximum number of concurrent rsync processes. By default is 5 jobs. --timeout, -T Specify number of seconds of I/O timeout. --skip-error, -e Skip error. Quiet mode. --rsync-path, -R Select a custom rsync path. --bwlimit, -b Bandwidth limit in KBPS. --ssh-port, -P Custom ssh port. --exclude, -E Exclude pattern. Follow rsync “Exclude Pattern Rules” --start-from, -s The new backup is based on another backup, specified by its ID. Flowchart of the differences between Differential and Incremental backup: +----------+ +----------+ +----------+ | | | | | | | Full | <-+ Inc . | <-+ Inc . | | | | | | | +----------+ +----------+ +----------+ +----------+ +----------+ +----------+ | | | | | | | Full | <-+ Dif . | | Dif . | | | | | | | +----+-----+ +----------+ +----+-----+ ^ | +-----------------------------+ Single backup The backup of a single machine consists in taking the files and folders indicated in the command line, and put them in the cataloging structure indicated above. This is a few examples: arthur@heartofgold$ bb backup --computer host1 --destination /mnt/backup --data User Config --type MacOS # host1 SSH-2.0-OpenSSH_7.5 # host1 SSH-2.0-OpenSSH_7.5 Start backup on host1 SUCCESS: Command rsync -ah --no-links arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_28 Important Without specifying the “mode” flag, Butterfly Backup by default selects Incremental mode arthur@heartofgold$ bb backup --computer host1 --destination /mnt/backup --mode Full --data User Config --type MacOS --user root root@host1's password: Start backup on host1 SUCCESS: Command rsync -ah --no-links root@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_30 arthur@heartofgold$ bb backup --computer host1 --destination /mnt/backup --data User Config --type MacOS --verbose --log INFO: Build a rsync command INFO: Last full is 2018-08-08 10:30:32 INFO: Command flags are: rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_28 -vP INFO: Create a folder structure for MacOS os INFO: Include this criteria: :/Users :/private/etc INFO: Destination is /mnt/backup/host1/2018_08_08__10_42 Start backup on host1 INFO: rsync command: rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_42 receiving file list ... 39323 files to consider Users/ ... ... sent 18.91K bytes received 1.59M bytes 25.74K bytes/sec total size is 6.67G speedup is 4143.59 SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_42 Note If the backup destination is an MS-DOS fs, it will not support symlink creation. The message that will be returned is as follows: WARNING: MS-DOS file system doesn't support symlink file. Bulk backup Bulk backup follows the same logic and the same options as a single backup, but accepts a file containing the names or ips of the machine to be backed up. This is a few examples: arthur@heartofgold$ bb backup --list /home/arthur/pclist.txt --destination /mnt/backup --data User Config --type MacOS # host1 SSH-2.0-OpenSSH_7.5 # host1 SSH-2.0-OpenSSH_7.5 ERROR: The port 22 on host2 is closed! ERROR: The port 22 on host3 is closed! Start backup on host1 SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_50 Important Port 22 (OpenSSH standard) is tested in order to verify the reachability of the machine. 
If the machine is not reachable, an error is generated: ERROR: The port 22 on host2 is closed! This example, is the same as the previous one, with the only difference being that the parallel flag is specified at 2. This means that maximum two backup jobs will run at the same time. When a first process ends, another one is started. arthur@heartofgold$ bb backup --list /home/arthur/pclist.txt --destination /mnt/backup --data User Config --type MacOS --parallel 2 --log # host1 SSH-2.0-OpenSSH_7.5 # host1 SSH-2.0-OpenSSH_7.5 ERROR: The port 22 on host2 is closed! ERROR: The port 22 on host3 is closed! Start backup on host1 SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_58 This is the same example but specifying a retention at 3 (days). This means that in doing so we want to keep only 3 days backups. arthur@heartofgold$ bb backup --list /home/arthur/pclist.txt --destination /mnt/backup --data User Config --type MacOS --parallel 2 --retention 3 --log --verbose INFO: Build a rsync command INFO: Last full is 2018-08-08 10:30:32 INFO: Command flags are: rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 -vP INFO: Create a folder structure for MacOS os INFO: Include this criteria: :/Users :/private/etc INFO: Destination is /mnt/backup/host1/2018_08_08__10_30 Start backup on host1 ERROR: The port 22 on host2 is closed! ERROR: The port 22 on host3 is closed! INFO: rsync command: rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_58 receiving file list ... 39323 files to consider Users/ ... ... SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_58 INFO: Check cleanup this backup aba860b0-9944-11e8-a93f-005056a664e0. Folder /mnt/backup/host1/2018_08_08__10_28 SUCCESS: Cleanup /mnt/backup/host1/2018_08_08__10_28 successfully. INFO: Check cleanup this backup cc6e2744-9944-11e8-b82a-005056a664e0. Folder /mnt/backup/host1/2018_08_08__10_30 INFO: No cleanup backup cc6e2744-9944-11e8-b82a-005056a664e0. Folder /mnt/backup/host1/2018_08_08__10_30 Here we find the same example above but specifying a retention at 2 (days) and 5 backup copies. This means that at least 5 backup copies will be kept, even if they are older than two days. arthur@heartofgold$ bb backup --list /home/arthur/pclist.txt --destination /mnt/backup --data User Config --type MacOS --parallel 2 --retention 3 5 --log --verbose INFO: Build a rsync command INFO: Last full is 2018-08-08 10:30:32 INFO: Command flags are: rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 -vP INFO: Create a folder structure for MacOS os INFO: Include this criteria: :/Users :/private/etc INFO: Destination is /mnt/backup/host1/2018_08_08__10_30 Start backup on host1 ERROR: The port 22 on host2 is closed! ERROR: The port 22 on host3 is closed! INFO: rsync command: rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_58 receiving file list ... 39323 files to consider Users/ ... ... SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2018_08_08__10_30 arthur@host1:/Users :/private/etc /mnt/backup/host1/2018_08_08__10_58 INFO: Check cleanup this backup aba860b0-9944-11e8-a93f-005056a664e0. 
Important: If there is only one Full backup in the catalog, the retention policy will never delete that Full backup, only the Incrementals. We recommend making Full backups more often, for better retention management.
List
When we run backup commands, a catalog is created. It serves both for future backups and for all the restores made through Butterfly Backup. The list command exists to query this catalog.
arthur@heartofgold$ bb list --help
usage: bb list [-h] [--verbose] [--log] [--dry-run] --catalog CATALOG [--backup-id ID | --archived | --cleaned | --computer HOSTNAME | --detail ID] [--oneline]
optional arguments:
-h, --help show this help message and exit
--verbose, -v Enable verbosity
--log, -l Create a log
--dry-run, -N Dry run mode
List options:
--catalog CATALOG, -C CATALOG Folder where is catalog file
--backup-id ID, -i ID Backup-id of backup
--archived, -a List only archived backup
--cleaned, -c List only cleaned backup
--computer HOSTNAME, -H HOSTNAME List only match hostname or ip
--detail ID, -d ID List detail of file and folder of specific backup-id
--oneline, -o One line output
List options
--catalog, -C Select the backups folder (root).
--backup-id, -i Select a backup id in the catalog.
--archived, -a List only archived backups.
--cleaned, -c List only cleaned backups.
--computer, -H List only backups matching a hostname or IP.
--detail, -d List the files and folders of a specific backup-id.
--oneline, -o One-line, concise output.
First, let’s query the catalog:
arthur@heartofgold$ bb list --catalog /mnt/backup
BUTTERFLY BACKUP CATALOG
Backup id: aba860b0-9944-11e8-a93f-005056a664e0
Hostname or ip: host1
Timestamp: 2018-08-08 10:28:12
Backup id: cc6e2744-9944-11e8-b82a-005056a664e0
Hostname or ip: host1
Timestamp: 2018-08-08 10:30:59
Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Timestamp: 2018-08-08 10:58:59
Press q to exit.
Then we select a backup-id:
arthur@heartofgold$ bb list --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0
Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Type: Incremental
Timestamp: 2018-08-08 10:58:59
Start: 2018-08-08 10:58:59
Finish: 2018-08-08 11:02:49
OS: MacOS
ExitCode: 0
Path: /mnt/backup/host1/2018_08_08__10_58
List: backup.log etc Users
If you want to export the catalog list instead, include the log flag:
arthur@heartofgold$ bb list --catalog /mnt/backup --log
arthur@heartofgold$ cat /mnt/backup/backup.list
If you want to see the details of a backup instead, add the detail flag:
arthur@heartofgold$ bb list --catalog /mnt/backup --detail dd6de2f2-9a1e-11e8-82b0-005056a664e0
Now that we have identified a backup, let’s proceed with the restore.
Restore
The restore process is the exact opposite of the backup process: it takes the files from a specific backup and pushes them to the destination computer.
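Both directions are plain rsync runs; only the source and destination are swapped. The following is a purely illustrative sketch with made-up helper names, mirroring the commands shown in the transcripts above:

# Backup pulls data from the remote host into the catalog folder;
# restore pushes a cataloged backup back to a remote host.
def backup_cmd(user, host, sources, destination):
    # extra sources on the same host are written as ":<path>" (rsync syntax)
    remote = f"{user}@{host}:{sources[0]} " + " ".join(f":{s}" for s in sources[1:])
    return f"rsync -ahu --no-links {remote} {destination}"

def restore_cmd(user, host, backup_path, remote_destination):
    return f"rsync -ahu {backup_path}/* {user}@{host}:{remote_destination}"

print(backup_cmd("arthur", "host1", ["/Users", "/private/etc"], "/mnt/backup/host1/2018_08_08__10_28"))
print(restore_cmd("arthur", "host1", "/mnt/backup/host1/2018_08_08__10_58/Users", "/Users"))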
arthur@heartofgold$ bb restore --help
usage: bb restore [-h] [--verbose] [--log] [--dry-run] --catalog CATALOG (--backup-id ID | --last) [--user USER] --computer HOSTNAME [--type {Unix,Windows,MacOS}] [--timeout TIMEOUT] [--mirror] [--skip-error] [--rsync-path RSYNC] [--bwlimit BWLIMIT] [--ssh-port PORT] [--exclude EXCLUDE [EXCLUDE ...]]
optional arguments:
-h, --help show this help message and exit
--verbose, -v Enable verbosity
--log, -l Create a log
--dry-run, -N Dry run mode
Restore options:
--catalog CATALOG, -C CATALOG Folder where is catalog file
--backup-id ID, -i ID Backup-id of backup
--last, -L Last available backup
--user USER, -u USER Login name used to log into the remote host (where you're restoring)
--computer HOSTNAME, -c HOSTNAME Hostname or ip address to perform restore
--type {Unix,Windows,MacOS}, -t {Unix,Windows,MacOS} Type of operating system to perform restore
--timeout TIMEOUT, -T TIMEOUT I/O timeout in seconds
--mirror, -m Mirror mode
--skip-error, -e Skip error
--rsync-path RSYNC, -R RSYNC Custom rsync path
--bwlimit BWLIMIT, -b BWLIMIT Bandwidth limit in KBPS.
--ssh-port PORT, -P PORT Custom ssh port.
--exclude EXCLUDE [EXCLUDE ...], -E EXCLUDE [EXCLUDE ...] Exclude pattern
Restore options
--catalog, -C Select the backups folder (root).
--backup-id, -i Select a backup id in the catalog.
--last, -L Select the last available backup in the catalog for the same hostname or IP address.
--user, -u User of the remote machine where you want to restore the files.
--computer, -c Select the IP or hostname where the restore is performed.
--type, -t Type of operating system to restore to: Unix -> all UNIX OSes (Linux, BSD, Solaris); Windows -> Windows Vista or higher with cygwin installed; MacOS -> MacOSX 10.8 or higher.
--timeout, -T Specify the number of seconds of I/O timeout.
--mirror, -m Mirror mode. If a file or folder does not exist in the destination, it will be deleted; existing files are overwritten.
--skip-error, -e Skip errors (quiet mode).
--rsync-path, -R Select a custom rsync path.
--bwlimit, -b Bandwidth limit in KBPS.
--ssh-port, -P Custom ssh port.
--exclude, -E Exclude pattern. Follows the rsync “Exclude Pattern Rules”.
Here are a few examples. This command performs a restore on the same machine the backup came from:
arthur@heartofgold$ bb restore --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0 --computer host1 --log
Want to do restore path /mnt/backup/host1/2018_08_08__10_58/etc? To continue [Y/N]? y
Want to do restore path /mnt/backup/host1/2018_08_08__10_58/Users? To continue [Y/N]? y
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log /mnt/backup/host1/2018_08_08__10_58/etc arthur@host1:/restore_2018_08_08__10_58
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log /mnt/backup/host1/2018_08_08__10_58/Users/* arthur@host1:/Users
Important: Without the “type” flag, which indicates the operating system the data is being restored to, Butterfly Backup selects it directly from the catalog via the backup-id.
Now, to select the last available backup in the catalog, run this:
arthur@heartofgold$ bb restore --catalog /mnt/backup --last --computer host1 --log
Want to do restore path /mnt/backup/host1/2018_08_08__10_58/etc? To continue [Y/N]? y
Want to do restore path /mnt/backup/host1/2018_08_08__10_58/Users? To continue [Y/N]? y
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log /mnt/backup/host1/2018_08_08__10_58/etc arthur@host1:/restore_2018_08_08__10_59
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log /mnt/backup/host1/2018_08_08__10_58/Users/* arthur@host1:/Users
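Conceptually, the --last flag just picks the most recent catalog entry for the requested hostname, along these lines (a sketch with assumed field names, not the real catalog code):

def last_backup(catalog_entries, hostname):
    # catalog_entries: iterable of dicts with at least 'hostname', 'timestamp' and 'path' keys
    candidates = [e for e in catalog_entries if e["hostname"] == hostname]
    return max(candidates, key=lambda e: e["timestamp"], default=None)

catalog = [
    {"hostname": "host1", "timestamp": "2018-08-08 10:28:12", "path": "/mnt/backup/host1/2018_08_08__10_28"},
    {"hostname": "host1", "timestamp": "2018-08-08 10:58:59", "path": "/mnt/backup/host1/2018_08_08__10_58"},
]
print(last_backup(catalog, "host1")["path"])  # -> /mnt/backup/host1/2018_08_08__10_58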
This example is the same as the previous one, but restores to a different machine and a different operating system:
arthur@heartofgold$ bb restore --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0 --computer host2 --type Unix --log --verbose
INFO: Build a rsync command
INFO: Command flags are: rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log
Want to do restore path /mnt/backup/host1/2018_08_08__10_58/etc? To continue [Y/N]? y
INFO: Build a rsync command
INFO: Command flags are: rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log
Want to do restore path /mnt/backup/host1/2018_08_08__10_58/Users? To continue [Y/N]? y
Start restore on host2
INFO: rsync command: rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log /mnt/backup/host1/2018_08_08__10_58/etc/* arthur@host2:/etc
Start restore on host2
INFO: rsync command: rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log /mnt/backup/host1/2018_08_08__10_58/Users/* arthur@host2:/home
building file list ... 26633 files to consider
...
...
sent 777.53K bytes received 20 bytes 62.20K bytes/sec
total size is 6.66G speedup is 8566.41
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log /mnt/backup/host1/2018_08_08__10_58/etc/* arthur@host2:/etc
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2018_08_08__10_58/restore.log /mnt/backup/host1/2018_08_08__10_58/Users/* arthur@host2:/home
Archive
Archive operations are used to store old backups while saving disk space. Backups older than n days are compressed into a zip file.
arthur@heartofgold$ bb archive --help
usage: bb archive [-h] [--verbose] [--log] [--dry-run] --catalog CATALOG [--days DAYS] --destination DESTINATION
optional arguments:
-h, --help show this help message and exit
--verbose, -v Enable verbosity
--log, -l Create a log
--dry-run, -N Dry run mode
Archive options:
--catalog CATALOG, -C CATALOG Folder where is catalog file
--days DAYS, -D DAYS Number of days of archive retention
--destination DESTINATION, -d DESTINATION Archive destination path
Archive options
--catalog, -C Select the backups folder (root).
--days, -D Number of days for which you want to keep your backups. Default is 30.
--destination, -d New destination for the compressed (zipped) backup.
Archive backups older than 3 days:
arthur@heartofgold$ bb archive --catalog /mnt/backup/ --days 3 --destination /mnt/archive/ --verbose --log
INFO: Check archive this backup f65e5afe-9734-11e8-b0bb-005056a664e0. Folder /mnt/backup/host2/2018_08_08__17_50
INFO: Check archive this backup 4f2b5f6e-9939-11e8-9ab6-005056a664e0. Folder /mnt/backup/host2/2018_08_04__07_26
SUCCESS: Delete /mnt/backup/host2/2018_08_04__07_26 successfully.
SUCCESS: Archive /mnt/backup/host2/2018_08_04__07_26 successfully.
arthur@heartofgold$ ls /mnt/archive
host1
arthur@heartofgold$ ls /mnt/archive/host1
2018_08_06__07_26.zip
The backup-id f65e5afe-9734-11e8-b0bb-005056a664e0 is not considered, because it falls within the retention period. The other, however, is zipped and deleted.
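In essence the archive step is "zip the backup folder, then delete the original", applied only to backups older than the given number of days. A minimal sketch of that idea (illustrative only, with hypothetical paths, not the tool's real code):

import shutil
from pathlib import Path

def archive_backup(backup_dir, archive_root, age_days, max_age_days=30):
    # Zip a backup folder into <archive_root>/<hostname>/ and delete the source,
    # but only if the backup is older than max_age_days.
    if age_days <= max_age_days:
        return False  # still within the retention window, leave it alone
    backup_dir = Path(backup_dir)
    dest = Path(archive_root) / backup_dir.parent.name   # e.g. /mnt/archive/host2
    dest.mkdir(parents=True, exist_ok=True)
    shutil.make_archive(str(dest / backup_dir.name), "zip", backup_dir)
    shutil.rmtree(backup_dir)
    return True

archive_backup("/mnt/backup/host2/2018_08_04__07_26", "/mnt/archive", age_days=4, max_age_days=3)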
Lastly, let’s look in the catalog and see that the backup was actually archived:
arthur@heartofgold$ bb list --catalog /mnt/backup/ -i 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Backup id: 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Hostname or ip: host2
Type: Incremental
Timestamp: 2018-08-08 07:26:46
Start: 2018-08-08 07:26:46
Finish: 2018-08-08 08:43:45
OS: MacOS
ExitCode: 0
Path: /mnt/backup/host2/2018_08_04__07_26
Archived: True
Export
The export function is used to copy a particular backup to another path.
arthur@heartofgold$ bb export -h
usage: bb export [-h] [--verbose] [--log] [--dry-run] --catalog CATALOG [--backup-id ID | --all] --destination DESTINATION [--mirror] [--cut] [--include INCLUDE [INCLUDE ...] | --exclude EXCLUDE [EXCLUDE ...]] [--timeout TIMEOUT] [--skip-error] [--rsync-path RSYNC] [--bwlimit BWLIMIT] [--ssh-port PORT]
optional arguments:
-h, --help show this help message and exit
--verbose, -v Enable verbosity
--log, -l Create a log
--dry-run, -N Dry run mode
Export options:
--catalog CATALOG, -C CATALOG Folder where is catalog file
--backup-id ID, -i ID Backup-id of backup
--all, -A All backup
--destination DESTINATION, -d DESTINATION Destination path
--mirror, -m Mirror mode
--cut, -c Cut mode. Delete source
--include INCLUDE [INCLUDE ...], -I INCLUDE [INCLUDE ...] Include pattern
--exclude EXCLUDE [EXCLUDE ...], -E EXCLUDE [EXCLUDE ...] Exclude pattern
--timeout TIMEOUT, -T TIMEOUT I/O timeout in seconds
--skip-error, -e Skip error
--rsync-path RSYNC, -R RSYNC Custom rsync path
--bwlimit BWLIMIT, -b BWLIMIT Bandwidth limit in KBPS.
--ssh-port PORT, -P PORT Custom ssh port.
Export options
--catalog, -C Select the backups folder (root).
--backup-id, -i Select a backup id in the catalog.
--all, -A Export all backups.
--destination, -d Destination path.
--mirror, -m Mirror mode. If a file or folder does not exist in the destination, it will be deleted; existing files are overwritten.
--cut, -c Cut mode: delete the source, like a move.
--include, -I Include pattern. Follows the rsync “Include Pattern Rules”.
--exclude, -E Exclude pattern. Follows the rsync “Exclude Pattern Rules”.
--timeout, -T Specify the number of seconds of I/O timeout.
--skip-error, -e Skip errors (quiet mode).
--rsync-path, -R Select a custom rsync path.
--bwlimit, -b Bandwidth limit in KBPS.
--ssh-port, -P Custom ssh port.
Export a backup to another directory:
arthur@heartofgold$ bb export --catalog /mnt/backup/ --backup-id f0f700e8-0435-11e9-9e78-005056a664e0 --destination /mnt/backup/export --verbose
INFO: Export backup with id f0f700e8-0435-11e9-9e78-005056a664e0
INFO: Build a rsync command
Start export host1 ...
INFO: rsync command: rsync -ah --no-links -vP /mnt/backup/host1/2018_12_20__10_02 /mnt/backup/export/host1
SUCCESS: Command rsync -ah --no-links -vP /mnt/backup/host1/2018_12_20__10_02 /mnt/backup/export/host1
Export all backups to another directory:
arthur@heartofgold$ bb export --catalog /mnt/backup/ --all --destination /mnt/backup/export --verbose
INFO: Export backup with id f0f700e8-0435-11e9-9e78-005056a664e0
INFO: Build a rsync command
Start export host1 ...
INFO: rsync command: rsync -ah --no-links -vP /mnt/backup/host1/2018_12_20__10_02 /mnt/backup/export/host1
SUCCESS: Command rsync -ah --no-links -vP /mnt/backup/host1/2018_12_20__10_02 /mnt/backup/export/host1
Export a backup excluding PDF files:
arthur@heartofgold$ bb export --catalog /mnt/backup/ --backup-id f0f700e8-0435-11e9-9e78-005056a664e0 --destination /backup/export --verbose --exclude *.pdf
INFO: Export backup with id f0f700e8-0435-11e9-9e78-005056a664e0
INFO: Build a rsync command
Start export host1 ...
INFO: rsync command: rsync -ah --no-links -vP --exclude=*.pdf /mnt/backup/host1/2018_12_20__10_02 /mnt/backup/export/host1
SUCCESS: Command rsync -ah --no-links -vP --exclude=*.pdf /mnt/backup/host1/2018_12_20__10_02 /mnt/backup/export/host1
Donations
Donating is important. If you do not want to donate to me, donate to organizations that do not speculate. My main activity, like that of people in non-profit associations, is to work for others, whoever they are: male or female, religious or not, of any colour, rich or poor. The only real purpose is to serve humanity with one's own experience. Below you will find some links to do so. Thanks a lot.
For me
For Telethon
The Telethon Foundation is a non-profit organization recognized by the Italian Ministry of University and Scientific and Technological Research. It was founded in 1990 to respond to the appeal of patients suffering from rare diseases, and today it is organized to listen to them and give them answers, every day of the year. Adopt the future.
136
One of these JPEGs is not like the other
“JPEG”, or the image encoding specification by the “Joint Photographic Experts Group” (JPEG), is a truly universal format at this stage. You really cannot go very far on the internet without seeing a JPEG file. The amount of content encoded in JPEGs must surely be biblical by now. If there is one thing that is going to carry into the future for historians, it will surely be a JPEG decoder. But all of this is running under the assumption that JPEG is just a single “format” (ignoring JPEG2000 here for a moment). But oh boy would you be wrong if you thought that. You see, multimedia is basically never-ending pain. For almost as long as there has been multimedia compression there have been hardware accelerators for compression formats. These hardware accelerators are the things that allow cheap DVD players, cheap digital TV boxes, and, if you're lucky, thermally and power-efficient HD YouTube playback. However they often come with drawbacks. Since hardware decoders are harder to design than their software counterparts, they generally come with more bugs. Hardware JPEG decoders may seem strange at first, since JPEG decoding is already quite fast on modern systems (it was not always), but for a lot of battery-powered applications fast and low-power JPEG decoding is vital for hitting battery life targets on web browsing workloads. Even most Intel GPUs contain a JPEG decoder. The hardware decoder I am actually fighting with today is the subject of a previous blog post called Ludicrously cheap HDMI capture for Linux, in which I found a cheap HDMI <-> ethernet transmitter and receiver pair on the market that was software decodable and so could be used for HDMI capture on a computer. My flatmate has looked into the audio format for the receiving units, and we use it to output “holding music” to our amplifier if there is nothing plugged into an HDMI port at a given time. However, I also wanted to push this a little further and ideally display the current time and the currently playing track. Since we already built a receiver for the video and audio, and then built a transmitter for the audio, surely transmitting JPEGs in the same way can't be hard, right? So the obvious thing to do here is to just encode a JPEG using standard software that can export JPEGs and spit it out in the same framing format. Well, we already tried that in the audio post and it didn't work; instead we resorted to replaying a captured JPEG from a real transmitter. But why didn't it work? JPEG files are JPEG files, surely? Just a quick look at a JPEG from the transmitter and a JPEG as made by image/jpeg in Go shows a visible difference. Okay. Fine. What is this JFIF thing, and why does it seem to stop our ASIC/hardware decoder from displaying the JPEG? To understand this we need to look into what makes up a JPEG file. JPEGs have a packet-style header structure that sits on top of the actual DCT data (the actual compressed image bits). There are many types of packets, but a few are critical. Wikipedia has a full list here. It is possible to not have a DHT in the case of MJPEG (since it saves space); however, this appears to be uncommon in my experience (I could not find a file that does it) and breaks non-MJPEG decoders. ffmpeg also offers a way to fix this exact trick with mjpeg2jpeg. So what is in our image? I wrote a small parser/dumper out of github.com/neilpa/go-jfif to see the difference between the files. OK, so mjpeg2jpeg is not needed, as we already have a DHT segment.
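The author's dumper is written in Go (built on github.com/neilpa/go-jfif); purely as an illustration of the same idea, here is a small Python sketch that walks the marker segments of a JPEG file and prints which ones are present (APP0/JFIF, DHT, SOF, and so on):

import struct, sys

NAMES = {0xC0: "SOF0", 0xC2: "SOF2", 0xC4: "DHT", 0xD8: "SOI", 0xD9: "EOI",
         0xDA: "SOS", 0xDB: "DQT", 0xE0: "APP0 (JFIF)", 0xE1: "APP1 (EXIF)",
         0xE2: "APP2", 0xFE: "COM"}

def dump_markers(path):
    data = open(path, "rb").read()
    i = 2                                  # skip the SOI marker (FF D8) at the start
    while i < len(data) - 1:
        assert data[i] == 0xFF, "expected a marker"
        marker = data[i + 1]
        name = NAMES.get(marker, f"0x{marker:02X}")
        if marker == 0xDA:                 # start of scan: entropy-coded data follows, stop here
            print(name)
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        print(f"{name:12s} {length} bytes")
        i += 2 + length                    # 2 marker bytes + segment length (which includes itself)

dump_markers(sys.argv[1])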
However, it does seem that we are missing an APP0 segment in our image… Perhaps we need this for the image to render on the hardware decoder? The Wikipedia page for the APP0 segment shows it should be quite easy to recreate, as it just has slots for a thumbnail and some basic pixel density data. With a reasonably simple patch, APP0 tags can be written by the golang image/jpeg library. I hope(?) to try and submit this to the golang core, since it should not harm any backwards compatibility, and it also helps slightly strange decoders (like my HDMI ethernet thing) decode the outputs of image/jpeg. While messing with the JPEG export options of GIMP I found that the hardware decoder also does not accept APP2 headers (EXIF and friends), and does not deal with 4:4:4 sampling. It's a slightly silly situation where you can have the following table of JPEGs all look the same and all be called a JPEG, but have wildly different decoding compatibility with some bits of kit. For example, all of these JPEGs are slightly different. If you see a white box, it's because your browser doesn't support the bizarre Arithmetic coding extension. Amusingly, even while editing this post, I found that whatever renders thumbnails in Nemo does not like one of those JPEGs. Right. The reason I'm here is to add the ability for our existing audio playback solution to also show the now-playing Artist/Title. With some small fiddling around with the newly patched JPEG encoder, we now have the date, the now-playing track, and a tribute to the famous generic DVD player screensaver. Regardless of any of this madness, I think I got off lucky: I don't have to look into hardware H.264 decoders any deeper! At a previous job I discovered that there is actually no consistent way to decode an H.264 video stream. Don't ever expect two decoders (especially hardware-assisted ones) to give you identical outputs. Madness. It turns out lossy compression is lossy! ;) If you want to stay up to date with the blog you can use the RSS feed or you can follow me on Twitter.
1
Jax: Composable transformations of Python+NumPy programs (Google)
google/jax
4
Netflix warning on subscriber growth sends shares plummeting
Netflix warning on subscriber growth sends stock plummeting
2
First Spider-Man Comic, Fantasy No. 15, Sells for Record $3.6M USD
First Spider-Man Comic, Amazing Fantasy No. 15, Sells for Record $3.6M
The spectacular purchase sent senses tingling across the comic-collecting world as it set the record for the biggest sale ever.
Move over Superman and Batman, there's a new record-setting hero in town. The sale of Amazing Fantasy no. 15, featuring the first appearance of Spider-Man, has set the record for the most expensive comic ever sold. The comic sold Thursday for $3.6 million as part of Heritage Auction's Signature Comics & Comic Art auction being held Sept. 8 to 12. The senses-shattering sale beat out the previous record, Action Comics no. 1, published in 1938 and featuring the first appearance of Superman, which sold privately for $3.25 million earlier this year. The Spider-Man comic is graded CGC 9.6 and is one of only four copies known to exist in that near-mint condition. There are no copies in CGC 9.8, which is the next grade above. Amazing Fantasy no. 15, from Stan Lee and Steve Ditko, introduced readers to wallflower Peter Parker, who gains incredible powers but whose selfishness leads to the death of his beloved Uncle Ben, leading him to learn the lesson of the great responsibility that comes with great power. The comic proved very popular and launched Spider-Man into his own monthly periodical. The character quickly proved to be Marvel's most popular hero and led to a cartoon and a newspaper strip. In modern times, Spider-Man is more popular than ever, headlining million-unit-selling video games and billion-dollar movie franchises. Comics have been setting and breaking record prices during the pandemic, with many key comics doubling in price just since 2019. Previously, the most expensive copy of Amazing Fantasy No. 15 was the CGC 9.4-graded copy Heritage sold in March 2020; that copy sold for $795,000. Heritage's previous comic-book record was set last January, when the only known Batman No. 1 graded CGC 9.4 sold for $2.22 million. That book shattered the previous $1.5 million world record set for a Batman title in November 2020, a sale involving a copy of 1939's Detective Comics no. 27. At the time, that was the highest price ever realized for any Batman comic book.
2
Italy’s famous dome is cracking, and cosmic rays could help save it (2018)
The soaring dome atop the Cathedral of St. Mary of the Flower justly dominates the Florence skyline and has stood for centuries, ever since Filippo Brunelleschi designed it in the early 15th century. But scholars aren't quite sure how this goldsmith with no formal architectural training managed to construct it. Brunelleschi built a wooden and brick model of his plan but deliberately left out crucial details and left no comprehensive blueprints so his rivals could not steal his secrets. Elena Guardincerri, a physicist at Los Alamos National Laboratory who grew up in a nearby town in Italy, thinks she can help resolve part of the mystery with the aid of a subatomic particle called a muon. Brunelleschi found inspiration for his design in the inverted catenary shape of the Pantheon, which is an ideal shape for domes because the innate physical forces can support the structure with no need for buttressing. Robert Hooke phrased it best in the 17th century: "As hangs the flexible chain, so but inverted stands the rigid arch." A chain suspended between two points will naturally come to rest in a state of pure tension; inverting that catenary shape into an arch reverses it into a shape of pure compression. Standard building materials like masonry and concrete would break fairly easily under tension, but they can withstand large compressive forces. The Pantheon's circular dome has a single concrete shell. Brunelleschi's design called for an octagonal dome spanning 150 feet and soaring nearly 300 feet in height with no flying buttresses for support. He used two shells: a very thick inner shell and a much thinner outer shell. Historians believe he used three pairs of large stone chains (which are still part of the structure) to act a bit like barrel hoops, applying sufficient pressure to hold the bricks in place while the mortar set. The final dome is a spectacular achievement. Almost immediately, however, cracks began to appear in the structure, albeit very slow-moving cracks. "Nobody is expecting it to fall down any time soon," said Guardincerri. But a botched restoration effort in the 1980s exacerbated the problem, adding a greater sense of urgency to the quest to preserve the dome, which is one of Florence's chief tourist attractions. However, the lack of detailed information about the internal structure remains a stumbling block. Preservationists have employed many different methods over the years to try to fill in their gaps in knowledge. In 1987, 300 different devices were hooked up to the dome, prompting The New York Times to declare it "the world's most carefully monitored structure." But the inner shell is so thick, most conventional methods can't penetrate it. Specifically, it would be nice to know whether the stone chains used to stabilize the dome were reinforced with iron bars, clamps, or more chains to fortify its structural integrity. That's where muon imaging should be able to help. There is a long history of using muons to image archaeological structures, a process made easier by the fact that cosmic rays provide a steady supply of these particles. An engineer named E.P. George used them to make measurements of an Australian tunnel in the 1950s. But Nobel-prize-winning physicist Luis Alvarez really put muon imaging on the map when he teamed up with Egyptian archaeologists to use the technique to search for hidden chambers in the Pyramid of Khafre at Giza. Although it worked in principle, they didn't find any hidden chambers. 
Just last year, however, scientists used muon imaging to detect a mysterious void in the Great Pyramid of Giza, which could be evidence of a hidden chamber. There are many variations of muon imaging, but they all typically involve gas-filled chambers. As muons zip through the gas, they collide with the gas particles and emit a telltale flash of light, which is recorded by the detector so scientists can calculate the particle's energy and trajectory. It's similar to X-ray imaging or ground-penetrating radar, except with naturally occurring high-energy muons rather than X-rays or radio waves. That higher energy makes it possible to image very thick, dense substances, like the stones used to build pyramids or Il Duomo's seven-foot-thick inner shell. The denser the object being imaged, the more muons are blocked, casting a telltale shadow. Hidden chambers in a pyramid would show up in the final image because they blocked fewer particles. And if Brunelleschi used iron bars to fortify his dome, they would show up as darker patches. The Los Alamos muon tracking technique was first developed in the early 2000s when Guardincerri was still a graduate student. She originally built her muon detectors to prevent the pesky particles from interfering with her attempts to study ghostly neutrinos. When a host of experts on the Florence Cathedral came to a workshop at the lab in 2013, she realized the same technique could be used to learn more about the materials used to build the dome—except she would need to build two smaller, portable detectors. A single muon detector works well for scanning large objects like pyramids or mountains, but there is so much scattering of the muons that you get lower resolution and blurry images. Sandwiching the object of interest between two muon detectors gives you higher resolution, but it limits the field of view to whatever part of the object is between them. That's an acceptable tradeoff for imaging Il Duomo. Over the summer in 2015, Guardincerri and her students built a mock-up of the dome's thick inner shell out of radiation-shielding concrete bricks, which have similar properties to the clay bricks used to build the original, and embedded iron bars of varying thickness within it. They placed the muon trackers on either side of the six-foot-thick mock-up wall and took data for 35 days. After just 17 days, all three iron bars were clearly visible in the resulting image. When she reported her findings to the cathedral's guild members, she quickly gained approval to develop a set of muon-tracking modules to install on site. The two detectors are completed, and Guardincerri is now waiting on collaborators at the University of Pennsylvania to finish testing the custom-made electronics for analyzing the data, based on technology used in the ATLAS experiment at CERN's Large Hadron Collider in Switzerland. Once that's done, both detectors will be shipped to other collaborators at the University of Florence, who will retest them to ensure nothing was damaged during transit. Then the detectors will be mounted in the dome itself: one will press against the inside wall, and the second will rest between the two shells, also against the inner wall. The Florence scientists will collect data for a month in that position and then move the detectors two meters higher for another month of data collection, and so on, until the entire dome has been imaged. And then we may know once and for all whether there are any iron reinforcements in the dome. 
That would be welcome news to preservationists as they ponder how to address the cracking problem. DOI: AIP Advances, 2016. 10.1063/1.4940897
3
Wordle with Grep
By on 22 Jan 2022 Let us solve a couple of Wordle games with the Unix grep command and the Unix words file. The Wordle games #217, #218, and #219 for 22 Jan 2022, 23 Jan 2022, and 24 Jan 22, respectively, are used as examples in this post. The output examples shown below are obtained using the words file /usr/share/dict/words, GNU grep 3.6, and GNU bash 5.1.4 on Debian GNU/Linux 11.2 (bullseye). Note that the original Wordle game uses a different word list. Further, there are several Wordle clones which may have their own word lists. For the purpose of this post, we will use the word list that comes with Debian. We will solve each Wordle in a quick and dirty manner in this post. The focus is going to be on making constant progress and reaching the solution quickly with simple shell commands. Before we start solving Wordle games, we will do some preliminary work. We will create a convenient shell alias that automatically selects all five-letter words from the words files. We will also find a good word to enter as the first guess into the Wordle game. The following steps elaborate this preliminary work: Make a shell alias named words that selects all 5 letter words from the words file. $ alias words='grep "^[a-z]\{5\}$" /usr/share/dict/words' $ words | head -n 3 abaci aback abaft $ words | tail -n 3 zoned zones zooms $ words | wc -l 4594 For each letter in the English alphabet, count the number of five-letter words that contain the letter. Rank each letter by this count. $ for c in {a..z}; do echo $(words | grep $c | wc -l) $c; done | sort -rn | head -n 15 2245 s 2149 e 1736 a 1404 r 1301 o 1231 i 1177 l 1171 t 975 n 924 d 810 u 757 c 708 p 633 h 623 y The output shows that the letter 's' occurs in 2245 five-letter words, followed by 'e' which occurs in 2149 five-letter words, and so on. Find a word that contains the top five letters found in the previous step. $ words | grep s | grep e | grep a | grep r | grep o arose We will enter this word as the first guess in every Wordle game. In case, the word "arose" does not lead to any positive result, we will need another word to enter as our second guess. Find a word that contains the next five top letters in the list found above. $ words | grep i | grep l | grep t | grep n | grep d $ words | grep i | grep l | grep t | grep n | grep u until We found that there is no such word that contains 'i', 'l', 't', 'n', and 'd'. So we got rid of 'd' in our search and included 'u' (the next highest ranking letter after 'd') instead to find the word "until". We will enter this word as the second guess if and only if the first guess (i.e., "arose") does not lead to any positive result. Let us now solve Wordle #217 for Sat, 22 Jan 2022 with the following steps: Use the word "arose" as the first guess. The following result appears: A R O S E The previous result shows that the letter 'e' occurs at the fifth place. Further, the letters 'a', 'r', 'o', and 's' do not occur anywhere in the word. Look for words satisfying these constraints. $ words | grep '....e' | grep -v '[aros]' | head -n 5 beige belie belle bible bilge Pick the word "beige" for the second guess and enter it into the Wordle game. Note that since we are following a quick and dirty approach here, we do not spend any time figuring out which of the various five-letter words ending with the letter 'e' is the most optimal choice for the next guess. We simply pick the first word from the output above and enter it as the second guess. 
The following result appears now: B E I G E The letter 'i' occurs somewhere in the word but not at the third place. Further, the letters 'b' and 'g' do not occur anywhere in the word. Also, the letter 'e' does not occur anywhere apart from the fifth place. The letter 'e' in the gray tile in the second place confirms that the letter 'e' does not repeat in the answer word. Refine the previous command to add these constraints. $ words | grep '[^e][^e][^ie][^e]e' | grep i | grep -v '[arosbg]' | head -n 5 fiche indue lithe mince niche Enter "fiche" as the third guess. The following result appears: F I C H E The previous result shows that the letter 'i' occurs at the second place. Further, the letter 'c' occurs somewhere in the word but not at the third place. Also, the letters 'f' and 'h' do not occur anywhere in the word. Refine the previous command further to add these constraints: $ words | grep '[^e]i[^iec][^e]e' | grep c | grep -v '[arosbgfh]' | head -n 5 mince wince Enter the word "mince" for the fourth guess. It leads to the following result: M I N C E We are almost there! We now have all the letters except the first one. The previous result shows that the letter 'm' does not occur in the word. Thus the answer word must be "wince". For the sake of completeness, here is a refined search that selects the answer word based on the constraints known so far: $ words | grep '[^e]ince' | grep -v '[arosbgfhm]' | head -n 5 wince It looks like we have found the answer word. Enter "wince" as the fifth guess to get the following result: W I N C E Now that the Wordle for Sat, 22 Jan 2022 is solved, let us try the same method on Wordle #218 for Sun, 23 Jan 2022 and see how well this method works. Here are the steps: Like before, the first guess is "arose". Entering this word leads to the following result: A R O S E Now search for words based on the previous result. $ words | grep '.r...' | grep -v '[aose]' | head -n 5 brick bring brink briny bruin Enter the word "brick" as the second guess. This leads to the following result: B R I C K Use the previous result to refine the search further. $ words | grep '.ri[^c].' | grep c | grep -v '[aosebk]' | head -n 5 crimp Enter "crimp" as the third guess. This leads to the following result: C R I M P Finally, let us solve Wordle #219 for Mon, 24 Jan 2022. Enter "arose" as the first guess to get this result: A R O S E The previous result shows that the third letter is 'o' and the letters 'a', 'r', 's', and 'e' do not occur anywhere in the word. Search for words that match these constraints. $ words | grep '..o..' | grep -v '[arse]' | head -n 5 block blond blood bloom blown Enter "block" as the second guess. This leads to the following result: B L O C K The previous result shows that the letter 'l' occurs somewhere in the word but not at the second place. Similarly, the letter 'k' occurs somewhere in the word but not at the fifth place. Further, the letters 'b' and 'c' do not occur anywhere in the word. Search for words that match these constraints. $ words | grep '.[^l]o.[^k]' | grep l | grep k | grep -v '[arsebc]' | head -n 5 knoll Enter "knoll" as the third guess. It leads to the following result: K N O L L
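For readers who prefer a script to a grep pipeline, the same kind of filtering can be expressed in a few lines of Python. This is just an illustrative sketch, not part of the original post:

import re

def candidates(words, pattern, must_have="", must_not_have=""):
    # pattern is a 5-letter regex such as '[^e]i[^iec][^e]e';
    # must_have / must_not_have are letters that must or must not appear anywhere.
    rx = re.compile(f"^{pattern}$")
    return [w for w in words
            if rx.match(w)
            and all(c in w for c in must_have)
            and not any(c in w for c in must_not_have)]

words = [w.strip() for w in open("/usr/share/dict/words")
         if re.fullmatch(r"[a-z]{5}", w.strip())]
# mirrors: words | grep '[^e]i[^iec][^e]e' | grep c | grep -v '[arosbgfh]'
print(candidates(words, "[^e]i[^iec][^e]e", must_have="c", must_not_have="arosbgfh"))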
1
Ted Lasso – the servant leader we need right now
Josh P. Armstrong, Ph.D. p Follow 5 min read p Nov 25, 2020 -- Listen Share If the political division, pandemic lockdown, and national racial unrest has you seeking new models of leadership, look no further than Ted Lasso, the recent Apple TV+ series. Starring Jason Sudeikis as Ted Lasso, an American football coach hired to manage AFC Richmond, a fictional English Premier League soccer team, this new dramedy provides poignant examples of emotional intelligence, optimism, and servant-leadership. In countless for-profit and not-for-profit organizations today we are seeing traditional, autocratic, and hierarchical modes of leadership yielding to a different way of working — one based on teamwork and community, one that seeks to involve others in decision making, one strongly based in ethical and caring behavior, and one that is attempting to enhance the personal growth of workers while improving the caring and quality of our many institutions. This emerging approach to leadership and service is called servant-leadership. It has been 50 years since Robert K. Greenleaf coined the term “servant-leader” and first wrote in his classic essay, “The Servant as Leader” about the need for a better approach to leadership, one that puts serving others — including employees, customers, and community — as the number one priority. After some years of carefully considering Greenleaf’s original writings, Larry Spears extracted a set of ten characteristics of the servant-leader that have taken on global significance in the understanding of servant-leadership. They are: listening, empathy, healing, awareness, persuasion, conceptualization, foresight, stewardship, commitment to the growth of people, and building community. Ted Lasso embodies the practice of these servant-leadership characteristics in this ten-episode show and provides lasting leadership lessons for all of us. Ted takes the long view. He is hired by Rebecca Welton (Hannah Waddingham), team owner, who intends to burn the club to the ground to get revenge on her ex-husband who loves this soccer club. One aspect of Ted’s appeal is his aw-shucks sensibility and humble approach. While focusing on relationships he instills a “BELIEVE” philosophy and quietly begins implementing change at the club. At its core, servant-leadership is a long-term, transformational approach to life and work — in essence, a way of being — that has the potential for creating positive change throughout our society. This is modeled by Ted through simple and intentional acts — baking biscuits for Rebecca, selecting and giving books to his players — all moving toward creating a new team culture. Early in the show, Ted Lasso is interviewed by skeptical soccer writer, Trent Crimm (James Lance). While many were questioning his ability and leadership, Ted states his mission for this team is about “helping these young fellas be the best versions of themselves, on and off the field.” As the show progresses, this vision allows players to make difficult decisions about their playing time, seeking what is best, ultimately, for the team. Ted’s love for each player and commitment to growth of each individual plays out in the arc of every character surrounding AFC Richmond. Ted Lasso’s quiet belief and unassuming nature allows space for others to lead. He trusts and allows others, from his coaching staff to players, to come to the best solutions by themselves. 
Team kit man turned assistant coach Nate (Nick Mohammed) slips game strategies to Ted, but rather than present them to the team, Lasso encourages Nate to develop his leadership voice and inspire the team. This same servant-leadership commitment to persuasion, rather than positional authority or coercion, leads to young star player Jamie to “make that extra pass.” Brené Brown, author of Dare to Lead, shame researcher, and important voice on vulnerability and leadership believes Ted Lasso is required viewing for leaders. On her podcast, Unlocking Us , she interviews series creators Jason Sudeikis and Brendan Hunt (Coach Beard), and focuses on a moment from the show where Ted Lasso offers forgiveness instead of unloading shame and blame. Sudeikis suggests while many of our leaders are ignorant and arrogant, “Ted is ignorant and curious. And I think curiosity comes from a power of being able to ask questions and truly empathize what someone is dealing with.” While difficult, Lasso chooses empathy and understanding — reminding us that kindness is power. Ted Lasso provides a number of uncommon moments in modern television of healing and forgiveness. From the pain of divorce, to coming alongside a friend experiencing a panic attack, Ted Lasso displays that many leaders have broken spirits and have suffered from a variety of emotional hurts. Although this is an aspect of being human, servant-leaders recognize that they have an opportunity to help make whole those with whom they come in contact. One of Ted Lasso’s first acts is to place a “complaint box” in the locker room, then proceed to follow up on the feedback. Star player Roy Kent (Brett Goldstein) is surprised to find the shower water pressure has been fixed. Ted makes incredible efforts to listen to his staff, often allowing followers to become leaders. Servant-leaders seek to identify and clarify the will of the team. They seek to listen receptively to what is being said (and not being said!). After one of his players got scored on and was feeling down, Ted reminds him that a goldfish “has a ten-second memory” — be a goldfish. The show’s primary leadership moments are rooted in emotional intelligence. Ted Lasso names that AFC Richmond are a broken team, and proceeds to utilize vulnerability and storytelling to provide healing. He reminds them that change requires being brave. As Greenleaf observed: “Awareness is not a giver of solace — it is just the opposite. It is a disturber and an awakener.” Ted Lasso has enough self-awareness to balance his authority with empowering others, as Trent Crimm observes, “In a business that celebrates ego, Ted reins his in.” While leadership requires confidence, it doesn’t require the leader to be at the center of change. As Sudeikis states on Brené Brown’s podcast, “Ted is egoless. He allows for people to be themselves, and reflect what they think he is, but really what they are.” Ted Lasso exudes optimistic leadership wisdom. He provides a relevant cultural example of servant leadership. Most importantly, Ted Lasso offers tangible leadership practice that calls us to emotionally authentic relationships and injects hope into our organizations. I can’t wait for next season!
1
Rare Gas-Powered Circular Saw Rescue[YouTube]
1
I Built a Blog Engine with Vue, Django and Tailwind
A few years ago, I had the idea to use Medium, a managed platform for blogging. At the time, I thought it had many advantages like the community, the simplicity of use, etc. I was right about most of them, but I noticed many drawbacks over time. Among them: limited tags, no smart way to write in several languages, code snippets requiring another external service like GitHub, and so on. There were very few ways to customise the displayed text. There was no easy way to export your own data. All of this was kind of annoying but acceptable for a "free" service. But one day, Medium decided to drastically change its business model and make users pay a monthly fee to read more than 3 articles per day (and yes, it's easy to find a workaround, but still…). This was the straw that broke the camel's back. 🤬 Without being a power blogger, I decided I wanted to be completely free, build my own features with my own data, and put this blog on my own website, and that's why I decided to try to build a whole blog by myself. 😨 I was asking myself: "Is it so hard? Why would I need an external platform?" I knew I'd never build the ultimate blogging platform; that's not the point at all. But would I be able to build the essential features that suit my own needs? In this blog article, I'll share all the questions I went through, and I will explain the decisions I took to build this blog. This is definitely not a tutorial with the entire code posted (only some snippets are included, to keep it short), but you will easily make yours with the same tools if this is your goal. If you're in a hurry, you can browse it reading only the titles. Of course, the article you're reading right now is from this blog. #meta :wink: To understand the following choices, let's state that before making new choices to build this blog, there was an existing website (my personal website), using the tools named in the title: Vue, Django and Tailwind CSS. The goal was to add a real blog to this website, hence using the same tools as much as possible. Anyway, I did not want to use a static site, for several reasons you'll understand. 💡Scenario 1: keep the articles in the front-end code (static site). This idea could be appealing: it gives a real-time display while I'm writing the article, using Vue hot module replacement. It would probably help with SEO too. However, this would force me to push code to write a new article or edit an existing one. And it would create coupling between front-end and back-end, leading to more errors. 💡Scenario 2: store the articles in the database, served by the back-end. This avoids coupling between front-end and back-end. Now, writing an article is as easy as adding a new entry in the database. Pretty neat. However, we know this will bring new challenges like SEO, and that I'll need some tools to preview the content I write, to ensure the output is the expected one. ✅ Choice: Scenario 2 You already understood why I didn't use an external platform like Medium. But what about intermediary solutions with a headless CMS? There are excellent ones like Prepr or Butter CMS. Well, even if it's often a good idea not to reinvent the wheel, this time it's exactly what I want to do, as explained in the introduction: I want to be free from any external dependency, any paying service, any "you have to do it this way". Plus, for those who don't know, Django was originally created by a newspaper team, and you'll see the framework provides some welcome features for running our own blog. Let's start with very simple models to build our base blog logic:
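The models snippet itself is not reproduced in this extract; as a rough sketch of what such base models could look like in a hypothetical blog app (field names here are my assumption, not necessarily the author's):

from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=50, unique=True)

class Article(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()                             # Markdown source of the article
    language = models.CharField(max_length=2, default="en")  # used later for the language flag
    image = models.CharField(max_length=200, blank=True)     # path of the miniature image
    tags = models.ManyToManyField(Tag, related_name="articles")
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title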
Handling the images is trickier than expected. 💡Scenario 1: base64 images Embedding base64 images has the real advantage of not requiring any data to be uploaded to an external service, or any code to be pushed. The images are just text living in the middle of your article content. Which means I am able to publish a whole article with a simple database entry. However, there are many drawbacks. 💡Scenario 2: flat images Why not upload images to my server, like any static asset? Remember that unlike SaaS services such as Medium, this blog lives in at least two different environments (the local one on my computer, and the production one, hosted on my hosting service). As I want my blog to work in both of them, I need dynamic URLs for images, like http://127.0.0.1/static/blog/my_image.jpg in the local env and https://www.david-dahan.com/static/blog/my_image.jpg in the production env. This would require writing dynamic code like <img :src="blogUrlPrefix + 'blog_1.png'"> in the article content, and finding a trick to interpret blogUrlPrefix at runtime. Spoiler: I found it! A major drawback is that I can't write an article without pushing code, since the images currently live in my codebase, which reduces the benefit of using a backend for this task. 💡Scenario 3: external hosting services This is how many blog services work (Wordpress, …). This time I post an article without pushing code, just a database entry and image uploads to this service. But doing this, I break the strict environment separation, because my local blog would require an external service. This may not be a deal breaker these days, but my blog won't work locally without internet access. I don't like it. 💡Scenario 4: my own asset hosting feature This external service could be built by myself, plugging the Vue SPA into Amazon S3. This would remove the need to push code, but there would still be an issue with the environments. ✅ Choice: Scenario 2 This is not perfect, since I currently still need to push some code. However, building a tool to push images to an external service like S3 could be a later improvement. Let's add a field to the models.py file to set the path of the image used as the miniature. ✅ In my opinion, Markdown is clearly superior to HTML for blogging, for several reasons. But at some point, I need to transform the Markdown syntax to HTML automatically, since our browsers display HTML, not Markdown. This is what VueShowdownPlugin does. Showdown.js is a JavaScript Markdown-to-HTML converter, and VueShowdownPlugin is a wrapper that makes it easy to use in a Vue.js project. Using it is as simple as rendering the Markdown through a VueShowdown component in the template. How do I want to display the article body? There are two ways of thinking here. Since I have total control over the code, I have the opportunity to fine-tune every article, making crazy layouts. But with great power comes great responsibility: this would rapidly become overwhelming, plus it would force me to use HTML and not Markdown for this purpose, while I just chose Markdown! It would be neat if there were a way to apply default styling to raw HTML. And… 🥁… this is the exact purpose of Tailwind Typography, which defines itself as: a plugin that provides a set of prose classes you can use to add beautiful typographic defaults to any vanilla HTML you don't control (like HTML rendered from Markdown, or pulled from a CMS). ✅ Choice: Scenario 2 Perfect, Tailwind Typography is exactly what I need: beautiful, automatic styling, without ever thinking about it when writing the article itself.
I can't emphasise this enough: NOT dealing with the style is a feature in itself. It keeps a global consistency across all posts, and I can change the style of all articles at once by tuning the theme, not the posts. In addition to the default styling, which is okay for 95% of cases, I can customise the plugin itself for very specific purposes. For example, if I want to change hyperlink colors to match my purple theme, I just need to add a few lines to the tailwind.config.js file. ✅ A no-brainer for this is to use highlight.js. Even if Vue plugins exist, I use it directly from a CDN, and add a pretty theme with an additional CSS file. Then hljs.highlightAll() needs to be run (I call it as late as the updated Vue lifecycle hook) to highlight the syntax. Since I have a true backend, I can add any feature without relying on external, potentially limiting services like Disqus. Of course, one could argue that famous external services come with the advantage of an existing community, and that lots of people already have an account. After all, who would sign up to my website just to like an article? No one! I strike a balance here by not using authentication at all! In my case I use IP addresses to know whether a user liked the article or not. Of course, it's not perfect at all, since it's easy to change your IP address, and it often changes automatically depending on your ISP, but come on, I'm not dealing with a crucial feature here! Let's update the model with a likers field. Now I just need to check whether the user's IP is in the likers array when loading the article. A serializer using Django REST Framework can expose this as a boolean (a sketch is shown at the end of this section). And of course the IP is added or removed when the user triggers the button (no need to write another example, you got it). If you want to test the feature, just like this article at the end of it 😊 This is just an example for this specific feature; I would probably need more security for other back-end features like comments. Given the choice I made to use the database to store data, creating a new blog article is as easy as adding a new entry to the database (except for the images). For that, the built-in Django admin is a dream, since I can enable it by adding only 2 lines to the admin.py file. Then, the GUI is available to add a new article. One issue here: it seems suboptimal to write Markdown in a raw textarea like this. I can't see the output, and I could make silent mistakes with wrong syntax. Let's review the possible scenarios. Scenario 1: I write the article in an external editor, checking the syntax, then paste it into Django admin before saving. Kind of okay, but the output won't be exactly the same as my blog article. Scenario 2: the advantage here would be to stay in the Django GUI to create the post, but again, the output won't be exactly the same as my blog article. Scenario 3: I can create a very simple page with a textarea on the left side (to write Markdown code in it), and the output preview on the right side. This time, since I'm using the same tools (Showdown.js, Tailwind Typography, …) with the same prose CSS class, the output will be strictly identical to the final result, including the styles. Once the article is finished, I can check the preview, then copy-paste the Markdown code into the Django GUI with confidence. Scenario 4: this is the most compelling option, but the one that requires the most work. I'll need to handle authentication on the back-end, make endpoints, handle all Article fields with forms, etc. At this point I would not need the Django admin GUI anymore.
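The model and serializer snippets for the like feature are not included in this extract either; here is a rough sketch of how the IP-based check could look with Django REST Framework (field and class names are assumptions, not the author's exact code):

from rest_framework import serializers
from blog.models import Article  # hypothetical app layout

# On the model side, likers is assumed to be something like:
#   likers = models.JSONField(default=list)   # list of IP addresses

class ArticleSerializer(serializers.ModelSerializer):
    liked = serializers.SerializerMethodField()

    class Meta:
        model = Article
        fields = ["id", "title", "content", "liked"]

    def get_liked(self, article):
        ip = self.context["request"].META.get("REMOTE_ADDR")
        return ip in article.likers

# And the "only 2 lines" in admin.py are probably nothing more than:
#   from blog.models import Article
#   admin.site.register(Article)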
✅ Choice: Scenario 3 for speed (or scenario 4 for beauty) Building the custom preview tool in Vue is very easy, since it's almost the same mechanism as displaying the article itself: I just need a textarea with a v-model attribute. ✅ I have two distinct blogs on my website: a Tech Blog section and a Life Blog section. Both Vue pages use exactly the same component, and the query to the back-end filters on the Tag object. A serializer lists the Article objects; then, on the SPA side, to stay DRY and keep a single component for both pages, I use props with routes; and the BlogHome component fetches the articles using these filters. ✅ Luckily, Django comes with a high-level syndication-feed-generating framework for creating RSS and Atom feeds. The usage itself is pretty straightforward, even if there are two notable things to be aware of. ✅ As a native French speaker, I feel more confident writing in French. However, for this article, I thought it would benefit more people if I made the effort to write it in English. That's why I decided to add a language attribute to the Article object. This is currently used to display the related flag (🇫🇷 or 🇬🇧), and could be used as a filter later. Of course this is not a real i18n feature, just a quick win. We can imagine more advanced i18n features, like having an article in multiple languages and serving the right one to the reader. NOTE: I'll edit this article every time I add another feature. We saw all the features currently implemented for this blog. For now, I consider the following features worth adding. Static websites allow content to be server-side rendered, and pages can be served as raw HTML from a CDN rather than built on the fly by the SPA. This allows the fastest possible response time for the user, and helps SEO, contrary to our current solution. They're particularly suited to blogs, where content is not supposed to change often. Idea: take a look at Nuxt.js and SSR for this purpose. When I share a link to this article, it would be neat if a miniature were displayed on websites like Facebook, LinkedIn, etc. While it's very easy to do with static websites, using an SPA brings challenges here. There are currently 10 articles on this website. The more articles I add, the slower this single query will become. That's why I need pagination. Idea: a clean and modern way to handle this would be infinite scroll, loading more articles as you scroll, until there are no more. For now, images cannot be zoomed. Well, you can still zoom on your smartphone by pinching the screen, or on your desktop by increasing the text size, but I mean a feature to do this the clean way. Idea: use a modal-like component with Tailwind CSS and Headless UI. Parsing the Markdown when the article is saved would allow adding some nice features. Idea: use a Python Markdown parser, and a background task manager like Celery. As I write an article incrementally, it could be interesting to save all iterations of the content. This would help me avoid accidentally deleting stuff, or roll back to some previous content if I changed my mind. Idea: use a dedicated field for this in Django. Exporting data is as easy as writing a script. Article bodies can be exported to .md files, with metadata written at the top of each file. Images can be downloaded into a folder, but that would imply rewriting the image URLs.
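As a rough sketch of such an export script (reusing the hypothetical Article model from earlier, with the metadata written as a front-matter-style header):

from pathlib import Path
from blog.models import Article  # hypothetical: run inside the Django project, e.g. from a management command

def export_articles(output_dir="export"):
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    for article in Article.objects.all():
        header = (
            "---\n"
            f"title: {article.title}\n"
            f"language: {article.language}\n"
            f"created_at: {article.created_at:%Y-%m-%d}\n"
            "---\n\n"
        )
        (out / f"{article.pk}.md").write_text(header + article.content, encoding="utf-8")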
It's opinionated, incomplete, and far from perfect, but it seems to be a good starting point. What do you think? I tried to show that external platforms such as Medium are not the only way to go, and that using a back-end rather than a static website has some advantages too. I hope you enjoyed reading this article. 🤘
1
Biological Patent
LEGAL FRAMEWORK The sources of patent law in Europe constitute a multidimensional and multilevel system: International, European and national provisions affect the legal status of patents in Europe. The EC Directive 98/44/EC on the legal protection of biotechnological inventions (later referred to as ‘Biotech Directive’) aimed at harmonising patentability criteria for biotechnological inventions. However, it did not create any new patent authority or mechanism for patent application. Therefore, the EPC and national laws constitute the core regulation on patenting. Furthermore, some international agreements oblige the Member States, such as the WTO Agreement on TRIPs of 1994. Relevant in the field of global patent policies and practices is also the trilateral co-operation of Japanese, European and US patent offices. They have issued several joint reports and comparisons of their respective systems and these can be studied at www.trilateral.net. This indicates an attempt at gradual rapprochement between these regimes which would be most welcomed in the global markets. The European Patent Convention of 1973 provides a general legal framework on all kinds of patents, so biotechnological patents are not separately addressed. Currently EPC has 31 members, of which 24 are the EC member states. Other members are Bulgaria, Iceland, Switzerland, Liechtenstein, Monaco, Romania and Turkey. Albania, Bosnia and Herzegovina, Croatia, Former Yugoslav Republic of Macedonia and Serbia and Montenegro recognise a European patent. The patent granted by the EPO, is called a European patent. Despite the term ‘European patent’, it is not an EU institution. The EPO has, however, incorporated the main provisions of the Biotech Directive into the Implementing Regulations of the EPC in 1999. A European patent is not just one single patent but an institution subject to further regulation of national patent laws. A European patent becomes in force in the nominated states according to respective national patent laws, and may also become subject to national legal actions, such as opposition and revocation procedure. Therefore a patent filed and granted by EPO may de facto be treated differently in different national states in terms of time, restrictions, research exemptions, infringements, and so on. There is, however, an attempt to set up a European Patent Court to remedy this problem. The Contracting States of EPO set up a Working Party on Litigation in 1999 to reflect several aspects of a potential common appeal court for patent litigation. The European Patent Litigation Agreement (EPLA) has been negotiated, but an agreement has not been reached (See EPO: Assessment of the impact of the European patent litigation agreement (EPLA) on litigation of European patents. www.european-patent-office.org). One of the problems is that the European Commission opposes the potential conflicting system with the planned Community Patent system, the appearance of which lies somewhere in the future. The proposal for a European Patent Court is also being opposed by some because the European Patent Office would have the power to appoint and remove judges, as well as, appointing EPO patent examiners as judges. This means that the proposed court will not be independent of the EPO and this is regarded as unacceptable. The value of the European patent is in the simplified procedure, when one office can do the examination, and by one application the applicant can reach patent protection in several countries. 
An applicant may nominate in which of the member states it wants the patent protection. A patent applicant may also or instead choose to apply for national patents in national patent offices that are equally enforceable. Some choose to file both the European patent and national patent. National patent offices are not obliged to and do not necessarily even have possibilities to follow EPO practice. Everybody can communicate with the EPO. Also scientific letters will be considered, even though examiners of EPO are expected to have expertise and follow new technologies. Anyone also has the possibility of following the pending or granted patents in the EPO and start an opposition procedure. The Enlarged Board of Appeal has the final saying about the interpretation of the patent law. World Intellectual Property Organization is a specialised agency of the United Nations. WIPO does not grant any patents but aims to simplify the process of patenting: It administers a number of IP-related treaties and systems, which enable users in the member countries to file international applications for patents, international registrations for trademarks, designs, and appellations of origin. Under The Patent Co-operation Treaty (PCT) 1978 an applicant wishing patent protection on several countries can file only one patent application in a national patent office (or in WIPO), instead of many applications in each country. An International Searching Authority will carry out a search for prior and preliminary examination. The patent process will be, however, concluded in each country, who will decide about granting or revoking the application. For further information about PCT, see www.wipo.int. The Paris Convention of 1883, the International Convention for the Protection of Industrial Property, is based on reciprocity: each Member State must apply to nationals of the other member states the same treatment as it gives to its own nationals. A patent application will also receive a right of priority: Within 12 months from the filing date of an earlier application (priority date) filed by a given applicant in one of the member states, the same applicant may apply for a patent in any other member state. These later applications will then enjoy a priority status with respect to all acts accomplished after the priority date which would normally destroy the patentability of his invention. Under the Article 27(1) World Trade Organization's (WTO) TRIPs agreement of 1994, patent protection shall be guaranteed to products and applications in all the fields of technology. The Article 27(3) TRIPS, however, states that members may exclude from patentability diagnostic, therapeutic and surgical methods for the treatment of humans and animals. Articles 1 and 4 of the United Nations Educational, Scientific, and Cultural Organization's (UNESCO) universal declaration on the human genome and the human rights (1997) state that the human genome is in the symbolic sense the heritage of humanity and it shall not, in its natural state, give rise to financial gains. The concept of natural state has not, however, been able to be defined. 
In 2001, the International Bioethics Committee (IBC) (2001) advised the Directorate-General as follows: The IBC, after considering this issue, is of the view that there are strong ethical grounds for excluding the human genome from patentability; It further recommends that the WTO, in its review of the TRIPS Agreement, clarifies that in accordance with the provision of Article 27(2)1, the human genome is not patentable on the basis of the public interest considerations set out therein, in particular, public order, morality and the protection of human life and health. (Advice of the IBC on the patentability of the human genome. The 8th session of the IBC, Paris 12–14 September 2001) World Health Organization has in 2003 addressed the issue of patenting and suggested that gene sequences without proven utility should not be granted patents. The WHO also demanded for some return of benefits to those who have contributed, for example, certain family or ethnic group with a particular gene variant on the basis of principle called equity. The Biotech Directive's objective has been to harmonise patent legislations in the member states and to clarify situations, such as what is patentable and what is not in the field of biotechnological activity. The EC member states were obliged to implement the Biotech Directive by the end of July 2000, but the process proved to be difficult in many countries; majority of the old member states implemented it first in 2004. The EPO has applied the Biotech Directive in its practise since 1999. The Biotech Directive does not intend to affect the basics of patent law, that is, patenting criteria, settlement of infringements, and so on. It does not create authority to grant patents and explicitly states that member states shall protect biotechnological inventions under national patent law. There are many patents that were applied for and granted prior to the Biotech Directive becoming law in EU member states. These pre-Directive patents are subject to the EPC and national patent laws of the member states as they applied at the time of application and grant. With regard to these patents it is not certain that merely isolating and cloning a gene, even though an artefact or artificial in the isolated state, is patentable subject matter under art. 52.1 EPC. In the UK, for instance, direct support for this view is found in the UK Court of Appeal decision In Genentechs Patent (1989) and indirect support in the House of Lords decision in Kirin Amgen v TKT (2004). In the Biotech Directive, ‘biological material’ means any material containing genetic information and capable of reproducing itself or being reproduced in a biological system. ‘Microbiological process’ means any process involving or performed upon or resulting in microbiological material. According to the Article 3(1) of the Biotech Directive, inventions which are new, which involve an inventive step and which are susceptible to industrial application shall be patentable even if they concern a product consisting of or containing biological material or a process by means of which biological material is produced, processed or used. Article 3(2) states that biological material which is isolated from its natural environment or produced by means of a technical process may be the subject of an invention even if they previously occurred in nature. 
Article 5(1) of the Biotech Directive excludes the human body, at the various stages of its formation and development, and the simple discovery of one of its elements, including the sequence or partial sequence of the gene, from patentable inventions. However, the second paragraph 5.2 highlights the difference accrued by isolation of a body element by technical means. Two requirements that make the situation different between articles 5(1) and 5(2) have been thus inserted: isolation and a technical process, that is, involvement of an inventor. Even so, many people consider these articles are contradictory. The judgement of the Case C-377/98 (Netherlands vs EU, application in the European Court of Justice for annulment of the Biotech Directive 98/44/EC) is very important with respect to the Biotech Directive and the EC IPR policy. The application included six pleas which were each rejected. Because of an immense confusion around the Biotech Directive, the Commission has so far initiated two reports: The Commission Report of 7 October 2002 on the development and implications of patent law in the field of biotechnology and genetic engineering (COM(2002) 545 final – not published in the official journal). This first report on biotechnology, genetic engineering and patent law (http://www.eu.int/scadplus/leg/en/lvb/l26026a.htm) concludes that the European legislator has endeavoured to create a functional system, which respects the ethical principles recognised within the European Community. The Commission also highlights its role in monitoring and assessing scientific and legal developments in the biotechnology sector. This report was able to identify two key topics: the scope to be conferred to patents on sequences or partial sequences of genes isolated from the human body; the patentability of human stem cells and of cell lines obtained from them The Commission Report of 14 July 2005 on the development and implications of patent law in the field of biotechnology and genetic engineering (COM(2005) 312 final – not published in the official journal). This second report sets out the key events which have occurred since publication of the first report. It focuses on issues in the area of patenting gene sequences which have been isolated from the human body and the patentability of inventions relating to stem cells. It also reports on the implementation of the Directive. The European Parliament adopted a resolution on patents for biotechnological inventions on 26 October 2006 (P6_TA(2005)0407). It named biotechnology as one of the key technologies for the future and regarded patents as necessary to promote innovation. It urged the importance of the definition of ethically motivated limits. The resolution was preceded by several motions for a resolution (B6-0551/2005 – 0557/2005) representing different views towards gene patents in general. It pointed out that even though the Biotech Directive allows the patenting of human DNA only in connection with a function, it has remained unclear whether a patent on DNA covers only the application of this function or whether other functions are also covered by the patent. In the resolution, the parliament calls on the EPO and the MS to grant patents on human DNA only in connection with a concrete application and for the scope of the patent to be limited to this concrete application so that other users can use and patent the same DNA sequence for other application (purpose-bound protection). 
The Parliament also regards that patent EP 1257168 violates the Biotech Directive. One of the targets of the European policy with respect to genetic testing has been expressed in the initiative of the European Parliament (Temporary Committee on Human Genetics and Other New Technologies in Modern Medicine: Report on the ethical, legal, economic and social implications of human genetics November 2001) which states that if the advantages of genetic testing are to be understood, three equally important conditions need to be satisfied: reliable tests available on the same basis to all; counselling that respects individual freedom; There have long been attempts to achieve a single European patent covering all member states. For this, the Community Patent Convention (CPC) was signed in 1975. Some initial proposals for process were found implausible by many and thus the process has been delayed. The Community Patent (CP) would still be granted by EPO, but instead of the national courts, the questions of validity and infringement would in the future be handled by a new institution, the Community Patent Court. Organisation for Economic Co-operation and Development has been actively involved in the field of biotechnologies and has issued many policies, such as the very recent Guidelines for the licensing of genetic inventions in 2006. In 2002, it issued a thorough assessment of the impact of patents on genetic inventions called ‘Genetic Inventions, Intellectual Property Rights and Licensing Practices. Evidence and Policies’, including suggestions presented in an adjoining workshop. The Council of Europe has for a long time given recommendations and taken initiatives for international conventions on important bioethical topics through the CDBI. One of the most significant is the Biomedical Convention of 1997 even though not so many European countries have ratified it yet. For instance, UK, Belgium, Germany and Spain have not even signed the document. Currently all the EC member states are also part of the EPC and the TRIPS and should have implemented these instruments and the Biotech Directive to their national laws. Nevertheless, these instruments have left space for national particularities, and therefore there is a need to know some basic differences in different countries. Some national legislations are briefly presented in the following: Austrian patent law (BGBl. I Nr. 42/2005) allows the use of a patented invention for research purposes. The Biotech Directive was transposed in 2005 in conformity with it. The Bioethics Commission gave in 2002 its opinion on the national implementation of the Biotech Directive. It regarded the implementation as a positive development from the ethical point of view, but highlighted that it is only a milestone and does not cover or clarify all the issues. The Belgian Patent Act is from 1984. The Biotech Directive was implemented in 2005 rather literally (The transposition law on 28 April 2005. – Loi modifiant la loi du 28 mars 1984 sur les brevets d’invention, en ce qui concerne la brevetabilité des inventions biotechnologiques http://www.ejustice.just.fgov.be/cgi/welcome.pl). The Belgian Patent Law contains, however, some deviating provisions: a stringent requirement with respect to industrial application of a sequence or a partial sequence of a gene: the industrial application shall be concretely disposed in the patent claim (Art. 4 of the patent law). 
Furthermore, the transposition law extended the research exemption to cover acts performed for scientific purposes on and/or with (sur et/ou avec) the patented invention (Art. 28 section 1er (b) of the patent law). In addition, the transposition law inserted a new article 31 (bis) into the patent law pertaining to compulsory licensing in the interest of public health. Denmark transposed the Biotech Directive into its Patent Law (479/1967) in 2000. The transposition law (412/2000) closely followed the Biotech Directive. However, there has been a discussion about biotechnological patents. The Danish Council arranged a conference on the ethics of patenting human genes and stem cells in 2004. The subsequent conference report and summaries listed key areas, and the Danish Council of Ethics gave recommendations (www.etiskraad.dk). The Estonian Patents Act of 1994 was amended in 1999 to transpose the Biotech Directive as of January 1, 2000. Private and non-commercial use of the patent is not considered to infringe the patent right unless it infringes the interests of the patentee (Art. 16). Compulsory licensing provisions (Art. 47) cover various situations in the interests of promoting development and the Estonian economy. The Estonian patent law requires the registration of voluntary licences to provide certainty towards third parties (Art. 46 (4)). The Finnish Patent Act of 1967 was amended to transpose the Biotech Directive in 2000 in conformity with the Directive. The Finnish Patent Act does not have a special provision on a research exemption. The exclusive right to use the invention does not cover non-professional activities or research on the invention (Art. 3 section 3). However, non-professional use has been interpreted very narrowly: a non-commercial aim of the activity alone does not allow the use of the patented invention. In the legal doctrine, only personal and other purely private use has been regarded as non-commercial. Hence, diverse research activities in universities, inter alia, may infringe the patent. It is noteworthy, nevertheless, that only intentional infringement may be subject to a penalty, whereas mere negligence may lead to liability to pay damages (Art. 58 section 1). In the case of pure ignorance, the infringer may be ordered to compensate for the use of the patented invention in an amount that the court finds reasonable (Art. 58 section 2). It is ultimately for the prosecutor or the plaintiff to prove the degree of negligence. Liability under the Criminal Code (1889) requires intent and considerable economic damage. The French regulation of patents is found in the 'Code de la proprieté intellectuelle, Livre VI'. French patent provisions include institutions such as licences of right (Art. L 613-10) and ex officio licensing (Art. L 613-16). France transposed the Biotech Directive in 2004 (Loi no 2004-1338 du 9/12/2004) (www.legifrance.gouv.fr). The French National Bioethics Committee CCNE (Comité Consultatif National d'Ethique pour les sciences de la vie et de la santé) issued opinion no. 64 in 2000 concerning biotechnological inventions. It concluded that the text of the Directive left the situation ambiguous and could not secure the ethical and other interests of the stakeholders; thus, a debate is needed.
CCNE did not suggest that genetics should be excluded from the scope of patent law, but 'the result must not constitute a threat over free access to the field of discovery, a drift in the direction of treating the human body like an instrument, or refusing to share the benefits expected from these scientific advances' (www.ccne-ethique.fr). The German National Ethics Council (Nationaler Ethikrat) published an opinion in 2005 on 'the patenting of biotechnological inventions involving the use of biological material of human origin' (www.ethikrat.org). The council favoured transposing the Biotech Directive. It called for careful monitoring of further developments and of the practice of the patent offices and the courts, in particular of prohibitions on the grant of patents on ordre public grounds and of the handling of the award of compulsory licences. The criteria applied should be disclosed and clarified in all relevant cases relating to ordre public and compulsory licensing. The Council stated that compulsory licences should be applied in all suitable cases. Among its specific recommendations, the Council suggests that the technical function (industrial applicability) of the invention should be included in the claim. Furthermore, the provision of informed consent should be obligatory. The German road to transposition was not easy, and was delayed. The main controversy lay in the composition-of-matter doctrine for DNA sequences, that is, the absolute protection. There did not seem to be a balance between the interests of society and the reward for the inventor. Product patents on human genes and cells were seen as violating human dignity or as contrary to the common heritage of mankind. Furthermore, new findings indicated that the number of human genes was significantly lower than first estimated. Also, most diseases are multifactorial. Public attitudes were against the Edinburgh patent granted in 2000. Moreover, Germany waited for the ECJ decision on the claim raised by the Netherlands. Later, there was continuing disagreement concerning the Biotech Directive's content and meaning. The hesitation resulted in a conviction by the ECJ on October 28, 2004. Finally, the law transposing the Directive came into force on February 28, 2005 (Bundesgesetzblatt January 28, 2005). However, as in France and Luxembourg, Article 1a of the German Patent Law (Patentgesetz) requires that the industrial application of a sequence or a partial sequence of a gene must be concretely disclosed in the patent application by indicating the function. In case the subject matter of the invention is a sequence or a partial sequence of a gene whose composition is similar to a naturally occurring gene, the use of the sequence for which the industrial application has been specified in detail shall be stated in the claim. The explanatory statement rules that the prescriptions of the Embryo Protection Act shall prevail. Thus, any patents on germ cells or stem cells are excluded in Germany. The Italian Government issued a decree in January 2006 to implement the Biotech Directive into national legislation (Decreto-Legge 10 gennaio 2006, n.3). The Parliament approved it on 14 February 2006. The main deviations from the Directive relate to the absolute ban on granting patent protection to certain inventions relating to assisted reproduction. Under Article 4, all uses of human embryos, stem cell lines included, and all techniques using human embryonic cells are excluded from patentability.
In the Netherlands, patents are regulated by the Patent Act of 1995 (Rijksoctrooiwet, http://wetten.overheid.nl). The Netherlands strongly opposed transposing the Biotech Directive and appealed to the European Court of Justice to invalidate the Directive on several grounds, all of which were rejected (C-377/98). Finally, it transposed the Directive in 2004. It has been said that after the adoption of the EPC in 1973, the Dutch patent office had to change its previously very strict examination policy to a policy that is rather loose in practice. Spain transposed the Biotech Directive in 2002 (Ley 10/2002 de 20 Abril, por la que se Modifica la Ley 11/1986, de 20 Marzo, de Patentes, para la Incorporación al Derecho Español de la Derectiva 98/44/CE, Boletín Oficial del Estado, 30/04/2002, No 103, p. 15691). The transposition law followed the Biotech Directive. The Spanish Patent Law provides a list of situations in which an obligatory licence may be granted (Artículo 89). Also, the research exemption (Artículo 52) covers many acts that are not considered to infringe the patent, such as, inter alia, acts performed in private and non-commercial contexts and experimental uses of the patented invention. The tenth chapter at the beginning of the transposition law (2002) explicitly addresses the requirement of informed consent: while recognising the notion of informed consent, it is not a precondition for patentability, nor a ground for revocation. Sweden implemented the Biotech Directive in May 2004. The Swedish patent law (patentlag 1967:837, http://rikslex.riksdagen.se) is consistent with the Directive. The research exemption basically covers only the study of the invention itself. The Swedish Government, however, set up a committee to consider the practice and effects of biotechnological patents. As a priority, the Committee evaluated the issue of absolute product protection for genetic patents. In a sub-report published in 2006 (SOU 2006:70), the committee did not find justification for changing the current system to purpose-bound patent protection. The final report on several gene patent-related aspects is due in 2008. In Switzerland, the 'Loi federal sur les brevets d'inventions' (1954) stipulates the patentability of diverse inventions. The Swiss biotechnology industry has an interest in the conformity of Swiss national regulations with international regulations. This is in particular the case since Switzerland has a long tradition in the pharmaceutical industry and is the home country of important biotechnology companies. Swiss regulations are planned to be adjusted to those of the Biotech Directive. The proposed patent revision is supposed to be discussed in the Parliament in 2007. Under the proposal, Switzerland would not allow the patentability of naturally occurring gene sequences. The protection awarded to claims on derived DNA sequences is limited to those parts of the sequence which fulfil the concretely described function. The list of biotechnological inventions which are excluded from patentability will be extended. Compulsory licences will be introduced, including for diagnostics (Art. 40 c), as well as post-grant opposition on several grounds, for example ethical principles. Within the revision, it is planned that the opinion of ethics committees can be taken into consideration in the case of an opposition. The National Council approved the law revision on 20 December 2006. It will come into force at a date to be announced.
Along with the patent law reform, Switzerland is also about to establish a specific federal patent tribunal. For more information on the revision see http://www.ige.ch/F/jurinfo/j100.shtm#a03. In the United Kingdom, the Patents Act 1977 harmonised British patent law with the EPC. The Biotech Directive was transposed in three stages due to certain constitutional rules governing law-making: articles 1–11 of the Directive were implemented in 2000, articles 13 and 14 in 2001, and article 12 in 2002. The patent authority of the United Kingdom is the UK Patent Office (http://www.patent.gov.uk). In 2006 it issued extensive examination guidelines for patent applications on biotechnological inventions (http://www.ipo.gov.uk/biotech.pdf). The courts have applied the patentability criteria stringently, and thus the UK has avoided many of the problems met in continental Europe. An independent UK organisation, the Nuffield Council on Bioethics, initiated a discussion paper on 'The Ethics of patenting DNA' in 2002. This publication also contains conclusions and recommendations derived from the workshop meetings. US patent policy has differed from that of Europe or Japan. The basic differences have been the United States' first-to-invent principle and one-year grace period, the lack of an opposition procedure, and the lack of ordre public or research-exemption provisions. The USPTO published new Utility Examination Guidelines, effective as of 5 January 2001, to be used by office personnel in their review of patent applications for compliance with the utility requirement of 35 USC 101 (Federal Register/Notices 2001:66(4);1092–1099). The Guidelines set forth the utility criteria: specific, substantial and credible. See also the main opposition arguments and the considered answers to them. An amendment to US patent and trademark law (the Bayh-Dole Act) in 1980 allowed the patenting of inventions born in the context of research funded by federal money (academia). Universities established Technology Transfer Centres to manage the patents and out-licensing. However, the increase in patents has had a negative impact on the traditional open-science culture in the United States. The National Research Council of the National Academy of Sciences has studied the subject of granting and licensing of intellectual property rights on discoveries relating to genetics and proteomics and the effects of these practices on research and innovation (Reaping the Benefits of Genomic and Proteomic Research: Intellectual Property Rights, Innovation, and Public Health. National Academy of Sciences 2006, forthcoming; see www.nap.edu/catalog/11487.html). It provides several recommendations to create an environment in which it is possible to foster scientific advances and enhance human health, and which avoids conflict between open dissemination of and access to scientific discoveries and the inventors' rights.
796
Signal WhatsApp Chats Import
44
Show HN: Test if your (US) phone number is in the leaked Facebook data
533 million Facebook users' phone numbers and personal data have been leaked online This includes 32,315,281 American phone numbers. Phone numbers are associated with Facebook account IDs, name, gender and sometimes other data Facebook has, such as location, workplace and relationship status. Does your US phone number appear in the data? Enter your phone number (just the digits, in the international format 1?????…). I'm not saving the phone number you enter. New: there is now a more secure version of this.
36
I’ve seen the metaverse – and I don’t want it
The tech world has been overtaken by the seductive idea of a virtual utopia, but what’s on offer looks more like a late-capitalist technocratic nightmare. Keza MacDonald | The Guardian I have spent large portions of my life in virtual worlds. I’ve played video games since I was six; as a millennial, I’ve lived online since adolescence; and I’ve been reporting on games and gaming culture for 16 years. I have been to Iceland for an annual gathering of the players of EVE Online, an online spaceship game whose virtual politics, friendships and rivalries are as real as anything that exists outside its digital universe. I’ve seen companies make millions, then billions from selling virtual clothes and items to players eager to decorate their virtual selves. I’ve encountered people who met in digital worlds and got married in the real one, who have formed some of their most significant relationships and had meaningful life experiences in, well … people used to call it cyberspace, but the current buzzword is “the metaverse”. Ask 50 people what the metaverse means, right now, and you’ll get 50 different answers. If a metaverse is where the real and virtual worlds collide, then Instagram is a metaverse: you create an avatar, curate your image, and use it to interact with other people. What everyone seems to agree on, however, is that it’s worth money. Epic Games and the recently rebranded Facebook are investing billions a year in this idea. When Microsoft bought video game publisher Activision for $70bn last week, it was described as “a bet on the metaverse”. The tech world seems to be leaning towards some kind of early 00s conception of wearing a VR headset and haptic suit and driving a flying car towards your perfect pretend mansion in a soothingly sanitised alternate reality, where you can have anything you want as long as you can pay for it. Look at Mark Zuckerberg’s now-infamous presentation of the future of his company, with its bland cartoonish avatars and emptily pleasant environments. It is the future as envisioned by someone with precious little imagination. I do not deny that some people want this vision. Ready Player One was a runaway hit. But the metaverse as envisioned by the people currently investing in it – by tech billionaires such as Zuckerberg and Activision CEO Bobby Kotick, by techbro hucksters selling astonishingly ugly generative-art NFTs and using words like “cryptoverse” – can only be described as spiritually bereft. It holds no interest for me. Virtual worlds can be incredibly liberating. The promise of cyberspace, right back to its inception, has been that it makes us all equal, allowing us to be judged not by our physical presentation or limitations, but by what’s inside our heads, by how we want to be seen. The dream is of a virtual place where the hierarchies and limitations of the real world fall away, where the nerdy dweeb can be the hero, where the impoverished and bored can get away from their reality and live somewhere more exciting, more rewarding. Anyone who is marginalised in the real world, though, knows that this is not how things go down. Virtual worlds are not inherently any better than the real one. Worker exploitation exists in them – look at World of Warcraft, in which Venezuelans farm currency to sell to first-world players, or Roblox, in which young game developers have put in long hours on unregulated projects for little reward. 
Misogyny and homophobia exist in them, too – ask anyone who’s ever had the misfortune to sound female on voice chat while playing a multiplayer shooter, or be non-gender-conforming on Twitch. As for racism, well – it is alive and well, and seemingly emboldened, in the digital world. The idea that a metaverse will magically solve any of these problems is a total fantasy. All that they really do is reflect the people that make them and spend time in them. Unfortunately, nothing I have experienced in any virtual world makes me feel good about the idea of the metaverse – because it is being constructed by people to whom the problems of the real world are mostly invisible. Unless companies put immense efforts into dismantling prejudices and unconscious biases, they are thoughtlessly replicated in whatever they create. Nobody has yet found a way to effectively moderate anywhere online to keep it free from abuse and toxicity and manipulation by bad actors. Given what’s happened with Facebook, do you trust Meta with this responsibility? Do you trust Microsoft with it? And what will the metaverse look like? Who gets to decide? Outside the sanitised aesthetic of the Zuckerverse (and old virtual-world standby Second Life), the main artistic references we currently have are either the gaudiness of Fortnite or Roblox or the no-holds-barred neon anime nightmare that is VRChat. Then there are the seemingly endless runs of vapid NFT art, many of which are tied to their own promised metaverses, drawing in their buyers with the promise of community. Every time I see a newly minted set of images (well, links to images) go up for sale I’m like, really? ANOTHER series of rad skulls? It is all just so powerfully adolescent, and yet apparently, they continually sell out. These are currently the people determining what the future might look like. It is depressing. I would feel better about the idea of the metaverse if it wasn’t currently dominated by companies and disaster capitalists trying to figure out a way to make more money as the real world’s resources are dwindling. The metaverse as envisioned by these people, by the tech giants, is not some promising new frontier for humanity. It is another place to spend money on things, except in this place the empty promise that buying stuff will make you happy is left even more exposed by the fact that the things in question do not physically exist. As far as I can work out, the idea is to take the principle of artificial scarcity to an absurdist extreme – to make you want things you absolutely don’t need. The problem is not that I think this won’t work. The problem is that I think it will. The current NFT gold rush proves that people will pay tens of thousands of dollars for links to jpegs of monkeys generated by a computer, and honestly it is eroding my faith in humanity. What gaping deficiency are we living with that makes us feel the need to spend serious money on tokens that prove ownership of a procedurally generated image, just to feel part of something? This is all happening, of course, while the Earth continues to heat up, and at enormous environmental cost. I can’t help but wonder if these giant companies are so intent on selling us and the markets on the idea of a virtual future in order to distract us all from what they are doing to the real one.
I have seen what virtual worlds can do for people. I have spent my entire adult life reporting on them, and what people do in them and the meaning that they find there. So the fact that I’m now the one standing here saying that we don’t want this feels significant. Meta has patented technology that could track what you look at and how your body moves in virtual reality in order to target ads at you. Is that the future of video games and all the other virtual places where we spend time – to have our attention continually tracked and monetised, even more so than it is in real life? The virtual worlds of games and the early internet used to be an escape from the inequalities and injustices of the real one. To see the tendrils of big tech and social media extending towards the places that have been a refuge for me and millions of others is disturbing. I don’t trust these people with the future. The more I hear about the metaverse, the less I want to do with it.
2
Irdest: Decentralised ad-hoc wireless mesh communication
Irdest is a networking research project that explores different technologies and ideas on how to build more sustainable, user-controlled communication networks. Whether you are connected to the internet via your home ISP (internet service provider) or via a mobile phone network, powerful and complex organisations sit between you and your ability to communicate with other people. As part of an Irdest network, your home computer, laptop, router, phone, etc. connect to each other directly, creating a dynamic mesh network. This means that the communication infrastructure that we collectively rely on to organise ourselves in turn needs to become collectively organised and managed. This approach is very different from the “internet service” you usually buy from a company. A lot of decentralised networking technology already exists! A primary motivation for the Irdest project is to take decades of research in this field and make it more accessible to end-users and curious software developers alike. With the Irdest SDK you can write applications that are native to a decentralised mesh network and don’t require a central server or access to the internet to operate! At the heart of an Irdest network sits Ratman, a router application that runs on phones, computers, laptops, and other devices. Different Ratman instances can be connected over a wide range of connection types. Communication between Ratman instances works seamlessly, the same way it does between devices on a WiFi network, with the added ability to link these networks over long distances or across the entire world. Connections between Ratman instances can be created via local networks, long-range LoRa modems, peer-to-peer wireless connections, or over the internet as a VPN-like network. Applications using an Irdest network can discover and connect with each other by first connecting to a local Ratman instance. The range of possible applications using this technology is limitless. For a more detailed explanation you should check out the “Concepts & Ideas” section in the user manual.
2
The Radical History of Self-Care
Last year was undoubtedly the year of extreme self-care, and with good reason. Between a pandemic, a presidential election, and a summer of protests against police brutality, self-care has evolved beyond a modern luxury and into a literal means of surviving during a pivotal time in our nation’s history. If you go by social media, it’s easy to think self-care is simply a series of Instagram posts, candles, bubble baths, and yoga pants mainly accessible to well-off young people in expensive urban areas and the suburbs. However, the origins of today’s self-care industry are deeply embedded in the Black Power movement of the 1960s and ’70s, in underserved communities across the country. The medical community latched onto the term self-care in the 1950s, before the Black Panther Party popularized — and politicized — it in the United States during the height of the civil rights movement. The fullness of the Black Panther Party’s legacy has only recently been uncovered, and yet it can be seen everywhere in the wellness space. “Holistic needs of Black communities and Black activists have always been a part of community organizers’ tactics. Black women, often queer, pushed other activists toward caring for themselves as a necessary, everyday revolutionary practice,” says Maryam K. Aziz, Ph.D, postdoctoral research fellow at Penn State University. Trailblazers and former Black Panther leaders Angela Davis and Ericka Huggins adopted mindfulness techniques and movement arts like yoga and meditation while incarcerated. Following their release, they both began championing the power of proper nutrition and physical movement to preserve one’s mental health while navigating an inequitable, sociopolitical system, creating wellness programs for adults and children in recreational centers across the country, in neighborhoods like Brooklyn, New York, and Oakland, California. “It is exactly this type of activism that [is] so prevalent today, and it is really rooted in the work of Black women,” adds Aziz, who is also a self-defense instructor. Self-defense was a key component of the Panther’s famed Ten-Point Program, outlined by cofounders Huey Newton and Bobby Seale in 1966. By the time the Ten-Point Program was modified in 1972, medical racism had become an increased focus. The Black Panther Party’s persona as militant warriors was birthed out of necessity to preserve the health of Black bodies. “Rather than focusing on the physicality of such moves, the [Black Panther] Party’s martial arts program emphasized appreciating one’s Black body as it was,” says Aziz. “Martial arts practice teaches Black women and girls to unlearn the idea that they are not strong and powerful. This is important for not only young Black girls, but for young, Black, nonbinary, and gender-nonconforming youth as well.” Instagram pages like Black Women Martial Arts continue the legacy, showcasing the power of martial arts reinforcing self-defense and self-healing. By the 1980s, activist and writer Audre Lorde amplified the intersectionality of self-care and civil rights as she dealt with cancer, in her book A Burst of Light: and Other Essays, which now stands as a manifesto for the Black female identity.
1
Facebook VP details the history of the company's data centers
To understand Meta’s data centers and how far they've come in the last 10 years, I have to tell you about a napkin. But before I tell you about that, I have to walk you back to the beginning… In 2008, Meta (then Facebook) was nowhere near the size it is today. Before we built our first data center in Prineville, Oregon, and founded the Open Compute Project, we did what many other companies that need data center capacity do — we leased or rented data center space from colocation providers. This sort of arrangement works fine unless the market experiences a major impact … something like the 2008 financial crisis. The financial crisis hit the data center business right at a time when we were in the middle of a negotiation with one of the big colocation providers. They didn’t want to commit to all this spending until they had a better idea of what 2009 would be like. This was totally understandable from a business perspective, but it put us, as a potential customer, in a rather uncomfortable position. We ended up making smaller deals, but they weren’t efficient from the standpoint of what we ultimately wanted — a way to handle how rapidly Facebook was growing. On the Infrastructure team, we always wanted an infrastructure that facilitates the growth of the business rather than holding it back. That’s not easy when your plan for the next two years effectively gets thrown in the trash. That was the moment where we really asked what we could do to ensure that the company had the infrastructure it would need going forward. The only answer was that we had to take control of our data centers, which meant designing and building our own. In 2009, we started looking at what it would really mean to build and operate our own data centers, and what our goals should be. We knew we wanted the most efficient data center and server ecosystem possible. To do that, we decided to create an infrastructure that was open and modular, with disaggregated hardware, and software that is resilient and portable. Having disaggregated hardware — breaking down traditional data center technologies into their core components — makes it easy and efficient to upgrade our hardware as new technologies become available. And having software that can move around and be resilient during outages allows us to minimize the number of redundant systems and build less physical infrastructure. It means the data centers will be less expensive to build and operate, and more efficient. The napkin I had previously designed and constructed data centers for Exodus Communications and Yahoo, so I knew what we needed to do and who I wanted to work with on this for Meta: Jay Park, a brilliant electrical engineer I had worked with at Exodus, who I ultimately brought on to lead the Data Center Design & Engineering team. Jay joined the team in early 2009, and we spent those first six months trying to decide exactly what the scope for this project would be. We had an idea that there is a symbiosis between the data center itself and the hardware inside it, so we were standing up the data center and hardware development teams at the same time. When we think about designing a data center, one thing to remember when data centers are in high availability — operating with limited to no downtime — is that less is often more. Less equipment can yield higher reliability because you’ve eliminated some potential equipment failure. 
Jay’s view of the electrical system was the same; we want to limit the number of times that we convert electricity from one voltage to another, because each one results in some loss of efficiency. Every time you do that — whether you’re going from utility voltage to medium voltage to voltage inside the data centers — some energy is lost in the form of heat from the transformer. It’s inefficient, and efficiency has always been a core objective of the Infrastructure team. The challenge was how to deal with these transitions in the electrical system, plus the fact that we have to convert from AC to DC. You need AC voltage driving your servers, but you also need a DC battery of some kind to power things in case of an outage. Some big data centers use very large battery banks that serve the whole facility. In our case, we opted to keep the batteries inside the same racks that the servers are in. The catch, however, was that there weren’t any server power supplies available that could switch from AC to the needed DC voltage from the batteries. Then Jay had an epiphany. He told me he was lying in bed, thinking about our need for this shift from AC to DC, when the idea hit. He jumped up and all he had was a napkin by his bedside. He scratched down what he thought this electrical circuit would look like and then went to the hardware team the next day and asked if they could make it work. That was the origin of our highly efficient electrical system, which uses fewer transitions, and the idea that the servers themselves could toggle between AC and DC reasonably simply and quickly. Once this piece of the puzzle was in place, it laid the groundwork for us to start designing and building our very first data center in Prineville. Once we lined up on the strategy to limit the electrical conversions in the system, we sought the most efficient way to remove the heat that’s generated when conversions are necessary. That meant thinking about things like making the servers a bit taller than usual, allowing for bigger heat sinks, and having efficient air flow through the data center itself. We knew we wanted to avoid large-scale mechanical cooling (e.g., air or water cooled chillers) because they were very energy intensive and would’ve led to a significant reduction in overall electrical efficiency of the data center. One idea was to run outside air through the data center and let that be part of the cooling medium. Instead of a traditional air conditioning system, then, we’d have one that uses outside air and direct evaporative cooling to cool the servers and remove the heat generated from the servers from the building entirely. What’s more, today we use an indirect cooling system in locations with less than ideal environmental conditions (e.g., extreme humidity or high dust levels) that could interfere with direct cooling. Not only do these indirect cooling systems protect our servers and equipment, but they’re also more energy- and water-efficient than traditional air conditioners or water chillers. Strategies like this have allowed us to build data centers that use at least 50 percent less water than typical data centers. Optimization and sustainability In the 10 years since we built our first data center in Prineville, the fundamental concepts of our original design have remained the same. But we’re continually making optimizations. Most significantly, we’ve added additional power and cooling to handle our increasing network requirements. 
In 2018, for example, we introduced our StatePoint Liquid Cooling (SPLC) system into our data centers. SPLC is a first-of-its-kind liquid cooling system that is energy- and water-efficient and allows us to build new data centers in areas where direct cooling isn’t a viable solution. It is probably the single most significant change to our original design and will continue to influence future data center designs. The original focus on minimizing electrical voltage transitions and determining how best to cool are still core attributes of our data centers. It’s why Facebook’s facilities are some of the most efficient in the world. On average, our data centers use 32 percent less energy and 80 percent less water than the industry standard. Software plays an important role in all of this as well. As I mentioned, we knew from the start that software resiliency would play a big part in our data centers’ efficiency. Take my word for it when I say that, back in 2009, the software couldn’t do any of the things it can do today. The strides we made in terms of the ability and the resiliency on the software side are unbelievable. For example, today we employ a series of software tools that help our engineers detect, diagnose, remediate, and repair peripheral component interconnect express (PCIe) hardware faults in our data centers. If I were to characterize the differences between how we thought about our data center program and how more traditional industries do, I think we were much more calculating about trying to assess risk versus the reward to efficiency. And risk can be mitigated by software being more resilient. Software optimizations allow us, for example, to move the server workload away from one data center to another in an emergency without interrupting any of our services. 10 years ahead Now that we have 10 years of history behind us, we’re thinking about the next 10 years and beyond. We share our designs, motherboards, schematics, and more through the Open Compute Project in the hope of spurring collective innovation. In 2021, we’ve furthered our disaggregation efforts by working with new chipmakers and OEMs to expand the open hardware in our data centers. Open hardware drives innovation, and working with more vendors means more opportunity to develop next-generation hardware to support current and emerging features across Meta’s family of technologies. As I’m writing this, we have 48 active buildings and another 47 buildings under construction, so we’re going to have more than 70 buildings in the near future that all look like our original concept. But they also need to stay relevant and in line with future trends — particularly when it comes to sustainability. In 2020, we reached net zero emissions in our direct operations. Our global operations are now supported by 100 percent renewable energy. As of today, we have contracted for over 7 gigawatts of new wind and solar energy, all on the same grids as the data centers they support. The data centers we build in the future will continue this trend. We think about sustainability at every step, from the energy sources that power them all the way down to the design and construction of the data centers themselves. For example, we have set ambitious goals to reach net zero emissions for our value chain and be water positive by 2030, meaning we will restore more water into local watersheds than our data centers consume. 
In building our newest data centers we’ve been able to divert, on average, 80 percent of our waste footprint away from landfills by reusing and recycling materials. There is a lot of activity in the data center and construction industries today, which puts pressure on us to find the right sites and partners. It also means we need to create more flexible site selection and construction processes. All this effort also involves looking at our vendors and contractors more as partners in all this. We can’t just make this about dollars. We have to make it about performance. We have to make it about driving best practices and continuous improvement. But that’s not the way the construction industry typically works. So, we’ve had to bring a lot of our own ideas about running operations and making improvements and impress them on the companies we work with. Moving into the data center arena was never going to be easy. But I think we’ve ended up with an amazing program at a scale that I never, ever would have imagined. And we’re always being asked to do more. That’s the business challenge, and it’s probably one of the main things that keep me and my team coming in to work every day. We have this enormous challenge ahead of us to do something that is unbelievably massive at scale.
121
Q – Run SQL Directly on CSV or TSV Files
q's purpose is to bring SQL expressive power to the Linux command line by providing easy access to text as actual data, and allowing direct access to multi-file sqlite3 databases. The following table shows the impact of using caching: Notice that for the current version, caching is not enabled by default, since the caches take disk space. Use -C readwrite or -C read to enable it for a query, or add caching_mode to .qrc to set a new default. q treats ordinary files as database tables, and supports all SQL constructs, such as WHERE, GROUP BY, JOINs, etc. It supports automatic column name and type detection, and provides full support for multiple character encodings. The new features - autocaching, direct querying of sqlite databases and the use of the ~/.qrc file - are described in detail here. Download the tool using the links in the installation section below and play with it. Non-English users: q fully supports all types of encoding. Use -e data-encoding to set the input data encoding, -Q query-encoding to set the query encoding, and use -E output-encoding to set the output encoding. Sensible defaults are in place for all three parameters. Please contact me if you encounter any issues and I'd be glad to help. Files which contain a BOM (Byte Order Mark) are not properly supported inside python's csv module. q contains a workaround that allows reading UTF8 files which contain a BOM - use -e utf-8-sig for this. I plan to separate the BOM handling from the encoding itself, which would allow supporting BOMs for all encodings. I will add packages for additional Linux distributions if there's demand for it. If you're interested in another Linux distribution, please ping me. It's relatively easy to add new ones with the new packaging flow. The previous version 2.0.19 can be downloaded directly from here. Please let me know if for some reason the new version is not suitable for your needs and you're planning on using the previous one. q is packaged as a compiled standalone executable that has no dependencies, not even python itself. This was done by using the awesome pyoxidizer project. This section shows example flows that highlight the main features. For more basic examples, see here. The query should be an SQL-like query which contains filenames instead of table names (or - for stdin). The query itself should be provided as one parameter to the tool (i.e. enclosed in quotes). All sqlite3 SQL constructs are supported, including joins across files (use an alias for each table). Take a look at the limitations section below for some rarely-used use cases which are not fully supported. q gets a full SQL query as a parameter. Remember to double-quote the query. Historically, q supports multiple queries on the same command line, loading each data file only once, even if it is used by multiple queries in the same q invocation. This is still supported. However, due to the new automatic-caching capabilities, this is not really required. Activate caching, and a cache file will be automatically created for each file. q will use the cache behind the scenes in order to speed up queries. The speed-up is extremely significant, so consider using caching for large files. The following filename types are supported: Use -H to signify that the input contains a header line. Column names will be detected automatically in that case, and can be used in the query. If this option is not provided, columns will be named cX, starting with 1 (e.g. q "SELECT c3,c8 from ..."). Use -d to specify the input delimiter.
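To make the flags above concrete, here is a short, hedged usage sketch. The file name sales.csv and its column names are invented for illustration and are not part of q's documentation; only flags described above (-d, -H, -C) are used.

```bash
# Columns are auto-named c1..cN when no header is declared; -d sets the delimiter.
q -d , "SELECT c1,c3 FROM ./sales.csv WHERE c3 > 100"          # hypothetical file

# With -H the header line supplies real column names.
q -H -d , "SELECT customer,SUM(amount) FROM ./sales.csv GROUP BY customer"

# Enable caching so a .qsql file is created and reused on later runs.
q -H -d , -C readwrite "SELECT COUNT(*) FROM ./sales.csv"
```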
Column types are auto-detected by the tool, so no casting is needed. Note that there's a flag --as-text which forces all columns to be treated as text columns. Please note that column names that include spaces need to be used in the query with backticks, as per the sqlite standard. Make sure to use single quotes around the query, so bash/zsh won't interpret the backticks.

Query/input/output encodings are fully supported (and q tries to provide out-of-the-box usability in that area). Please use -e, -E and -Q to control encoding if needed. JOINs are supported, and subqueries are supported in the WHERE clause, but unfortunately not in the FROM clause for now. Use table aliases when performing JOINs. The SQL syntax itself is sqlite's syntax. For details look at http://www.sqlite.org/lang.html or search the net for examples.

NOTE: When using the -O output header option, use column name aliases if you want to control the output column names. For example, q -O -H "select count(*) cnt,sum(*) as mysum from -" would output cnt and mysum as the output header column names.

It's possible to set default values for parameters which are used often by configuring them in the file ~/.qrc, which is a simple list of option names and their values. A default .qrc file can be generated by running q --dump-defaults and writing the output into the .qrc file. One valuable use case for this is setting the caching mode to read, which makes q automatically use generated .qsql cache files if they exist. Whenever you want a cache file to be generated, just use -C readwrite and a .qsql file will be created if it doesn't already exist. To enable cache reads by default, set caching_mode=read in ~/.qrc.

This section shows some more basic examples of simple SQL constructs; for more complex use cases, see the examples at the beginning of the documentation. They include: performing a COUNT DISTINCT on a specific field (the uuid field of clicks data); filtering numeric data while controlling the ORDERing and LIMITing of the output (note that q understands that the column is numeric and filters according to its numeric value - a real numeric comparison, not a string comparison); a more complex GROUP BY, grouping by a time expression; reading input from standard input, e.g. calculating the total size per user/group in the /tmp subtree; and using column names from the header row, e.g. calculating the top 3 user ids with the largest number of owned processes, sorted in descending order (note the usage of the auto-detected column name UID in the query).

The following command joins an ls output (exampledatafile) and a file containing rows of group-name,email (group-emails-example), and provides a row of filename,email for each of the emails of the group. For brevity of output, there is also a filter for a specific filename called ppp, which is achieved using a WHERE clause. You can see that the ppp filename appears twice, each time matched to one of the emails of the group dip to which it belongs. Take a look at the files exampledatafile and group-emails-example for the data. Column name detection is supported for JOIN scenarios as well - just specify -H on the command line and make sure that the source files contain header rows.

Behind the scenes, q creates a "virtual" sqlite3 database that does not contain data of its own, but attaches to multiple other databases. The user query is executed directly on the virtual database, using the attached databases. sqlite3 itself has a limit on the number of attached databases (usually 10). A sketch of the idea appears below.
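As a loose illustration of that virtual-database mechanism (not q's actual code), the following Python sketch attaches two pre-built sqlite files to an empty in-memory database and joins across them. The file names, table names and columns are all assumptions, and the files are assumed to already exist.

    import sqlite3

    # An empty "virtual" database that holds no data of its own.
    con = sqlite3.connect(":memory:")

    # Attach two existing sqlite files (for example, .qsql caches).
    # The file names and schemas here are hypothetical.
    con.execute("ATTACH DATABASE 'sales.qsql' AS db1")
    con.execute("ATTACH DATABASE 'users.qsql' AS db2")

    # The query runs against the virtual database but reads from the attached
    # ones; note the table aliases, as recommended above for JOINs.
    query = """
        SELECT u.name, COUNT(*)
        FROM db1.sales AS s JOIN db2.users AS u ON s.user_id = u.id
        GROUP BY u.name
    """
    for row in con.execute(query):
        print(row)

sqlite3's attach limit (around 10 databases per connection by default) is what forces the fallback described next.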
If that limit is exceeded, q attaches as many databases as it can and loads the remaining tables into the adhoc database's in-memory database. Please make sure to read the limitations section as well.

The code includes a test suite runnable through run-tests.sh. By default, it uses the python source code for running the tests, but it is possible to point the tests at an actual executable using the Q_EXECUTABLE env var. This is actually used during the build and packaging process, in order to test the resulting binary. There is a list of known limitations; please contact me if you have a use case that needs any of those missing capabilities.

Have you ever stared at a text file on the screen, hoping it would have been a database so you could ask anything you want about it? I had that feeling many times, and I've finally understood that it's not the database that I want. It's the language - SQL. SQL is a declarative language for data, and as such it allows me to define what I want without caring about how exactly it's done. This is the reason SQL is so powerful: it treats data as data and not as bits and bytes (and chars).

The goal of this tool is to provide a bridge between the world of text files and the world of SQL. The standard Linux tools are amazing and I use them all the time, but the whole idea of Linux is mixing and matching the best tools for each part of the job. This tool adds the declarative power of SQL to the Linux toolset, without losing any of the other tools' benefits. In fact, I often use q together with other Linux tools, the same way I pipe awk/sed and grep together all the time.

One additional thing to note is that many Linux tools treat text as text and not as data. In that sense, you can look at q as a meta-tool which provides access to all the data-related tools that SQL provides (e.g. expressions, ordering, grouping, aggregation, etc.). This tool has been designed with general Linux/Unix design principles in mind. If you're interested in these general design principles, read this amazing book, and specifically this part. If you believe that the way this tool works goes strongly against any of the principles, I would love to hear your view about it.
2
What fractals, Fibonacci, and the golden ratio have to do with cauliflower
It has long been observed that many plants produce leaves, shoots, or flowers in spiral patterns. Cauliflower provides a unique example of this phenomenon, because those spirals repeat at several different size scales—a hallmark of fractal geometry. This self-similarity is particularly notable in the Romanesco variety because of the distinctive conical shape of its florets. Now, a team of French scientists from the CNRS has identified the underlying mechanism that gives rise to this unusual pattern, according to a new paper published in Science.

Fractal geometry is the mathematical offspring of chaos theory; a fractal is the pattern left behind in the wave of chaotic activity. That single geometric pattern repeats thousands of times at different magnifications (self-similarity). For that reason, fractals are often likened to Russian nesting dolls. Many fractal patterns exist only in mathematical theory, but over the last few decades, scientists have found there are fractal aspects to many irregular yet patterned shapes in nature, such as the branchings of rivers and trees—or the strange self-similar repeating buds that make up the Romanesco cauliflower. Each bud is made up of a series of smaller buds, although the pattern doesn't continue down to infinitely smaller size scales, so it's only an approximate fractal. The branched tips, called meristems, make up a logarithmic spiral, and the number of spirals on the head of Romanesco cauliflower is a Fibonacci number, which in turn is related to what's known as the "golden ratio."

The person most closely associated with the Fibonacci sequence is the 13th-century mathematician Leonardo Pisano; his nickname was "filius Bonacci" (son of Bonacci), which got shortened to Fibonacci. In his 1202 treatise, Book of Calculation, Fibonacci described the numerical sequence that now bears his name: 1, 2, 3, 5, 8, 13, 21... and on into infinity. Divide each number in the sequence by the one that precedes it, and the answer will be something that comes closer and closer to 1.618, an irrational number known as phi, aka the golden ratio (e.g., 5 divided by 3 is 1.666; 13 divided by 8 is 1.625; 21 divided by 13 is 1.615; and so on) - a convergence that is easy to check numerically, as sketched just after this passage. And there is a special "golden" logarithmic spiral that grows outward by a factor of the golden ratio for every 90 degrees of rotation, of which a "Fibonacci spiral" is a close approximation.

Scientists have long puzzled over possible underlying mechanisms for this unusual patterning in the arrangement of leaves on a stem (phyllotaxis) of so many plants—including pine cones, daisies, dahlias, sunflowers, and cacti—dating all the way back to Leonardo da Vinci. Swiss naturalist Charles Bonnet (who coined the term "phyllotaxis") noted in 1754 that these spirals exhibited either clockwise or counterclockwise golden ratios, while French brothers Auguste and Louis Bravais discovered in 1837 that the ratios of phyllotaxis spirals were related to the Fibonacci sequence. (Credit: Eugenio Azpeitia et al., Science 2021)

In 1868, German botanist Wilhelm Hofmeister came up with a solid working model. He found that nascent leaves ("primordia") will form at the least crowded part of the meristem, and as the plant grows, each successive leaf will move outward radially, at a rate proportional to the stem's growth. The second leaf, for instance, will grow as far as possible from the first, and the third will grow at a distance farthest from both the first and the second leaves, and so on.
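As promised above, here is a quick numerical check of that convergence - a purely illustrative aside, not part of the study being described: ratios of consecutive Fibonacci numbers closing in on phi.

    import math

    # Ratios of consecutive Fibonacci numbers approach phi = (1 + sqrt(5)) / 2.
    a, b = 1, 1
    for _ in range(12):
        a, b = b, a + b
        print(f"{b}/{a} = {b / a:.6f}")

    print("phi  =", f"{(1 + math.sqrt(5)) / 2:.6f}")   # 1.618034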
It's not a hardcore law of nature or some kind of weird botanical magic: that Fibonacci spiral is simply the most efficient way of packing the leaves. According to the authors of this latest paper, the spiral phyllotaxis of cauliflower is unusual because those spirals are conspicuously visible at several different size scales, particularly in the Romanesco variety. They maintain that cauliflowers are basically failed flowers. The whole process depends on those branched tips, or meristems, which are made up of undifferentiated cells that divide and develop into other organs arranged in a spiral pattern. In the case of cauliflower, these cells produce buds that would normally bloom into flowers. Those buds develop into stems instead, but unlike normal stems, they are able to grow without leaves and thereby produce even more buds that turn into stems instead of flowers. This triggers a chain reaction, resulting in that trademark pattern of repeating stems upon stems that ultimately forms the edible white flesh known as the "curd." In the case of the Romanesco variety, its stems produce buds at an accelerating rate (instead of the constant rate typical of other forms of cauliflower), so, its florets take on that distinctive pyramid-like shape that showcases the fractal patterns so beautifully. The puzzle, per the authors, is how these gene regulatory networks that initially evolved to produce flowers were able to change so drastically. So, co-author Eugenio Azpeitia and several colleagues combined in vivo experiments with 3D computational modeling of plant development to study the molecular underpinnings of how buds form in cauliflower (both edible cauliflower and the Romanesco variant). Apparently, this is the result of self-selected mutations during the process of domestication, which over time drastically changed the shapes of these plants. The authors found that, while the meristems fail to form flowers, the meristems do experience a transient period where they're in a flower-like state, and that influences later steps in development. In the case of Romanesco cauliflower, the curd adopts a more conical shape instead of a round morphology. The end result is those fractal-like forms at several different size scales. "These results reveal how fractal patterns can be generated through growth and developmental networks that alter identities and meristem dynamics," the authors concluded. "Our models now clarify the molecular and morphological changes over time by which meristems gain different identities to form the highly diverse and fascinating array of plant architectures found throughout nature and crops." DOI: Science, 2021. 10.1126/science.abg5999  (About DOIs).
1
This startup has an intriguing concept for EV battery swaps
On Wednesday morning in San Francisco, a startup called Ample launched its new electric vehicle battery-swap technology. The company has designed an extremely small footprint for its swap stations, which only occupy as much ground as a couple of standard parking spaces and don't need much in the way of electrical infrastructure. So instead of building one big location able to handle hundreds of cars a day, Ample's plan is to build numerous small stations, which can be deployed quickly. The first five of these are now operational in the Bay Area, servicing a fleet of Uber EVs equipped with Ample's modular battery system.

Rightly or wrongly, charging times and charging infrastructure are probably the biggest stumbling blocks to widespread electric vehicle adoption. Since the creation of the first gas station in 1905, society has become accustomed to rapidly refueling with liquid hydrocarbons. As a result, no one minds if their V12-powered grand tourer can't make it 200 miles before stopping, since they know they'll only be stationary for a few minutes. Battery EVs, on the other hand, need to be sold with as much battery as can be crammed underneath the cabin, and even the fastest-charging BEVs currently on sale still take more than 20 minutes to charge back to 80 percent—and even then only with 350 kW fast chargers that are still relatively uncommon.

The idea of slow-charging EV batteries and then quickly swapping spent packs for fresh ones is not exactly new. In 2007, Better Place tried to make the idea work, but EVs were too much in their infancy, and Israel was too small a market for that to happen. Tesla tried it, too, with a single experimental station midway between Los Angeles and San Francisco that started testing in 2013. This was meant to swap a pack in 90 seconds but in practice took up to 15 minutes, and it was rendered irrelevant by the success of the Supercharger network. And in China, Nio has 131 battery swap stations that had completed more than a half-million battery swaps by August of last year.

"The moment you break the battery into smaller modular pieces, a lot of things become easier," said Khaled Hassounah, one of Ample's co-founders. "One of them is the station becomes a lot simpler. So our station actually needs no construction, no digging in the ground. All we need is a couple of parking spots that are flat enough, and then within a few days—typically a week—we can get a station up and running. Literally, everything is flat pack shipped, we assemble it on site, test it, turn it on, and we have the swapping station. And that's why we're able to serve the whole Bay Area, for example, with multiple stations in a matter of a few weeks," he told me this morning.

Each station charges a few batteries at a constant rate, so there's no need for the huge power demands (or demand fees) that DC fast-charging stations require. And since there's no permanent structure, new stations shouldn't be held up by red tape and permits—again, unlike DC fast-charging stations. By now, many of you are probably thinking the same thing I thought when I first heard about the idea: wouldn't this require significant re-engineering of an EV? "That's a logical conclusion, or prediction, of how it would work. But we sought to change that in two fundamental ways," Hassounah explained. "One of them is, because our batteries are modular, we really don't have a format that we need the automakers to adapt to or that has to fit into all of the available systems.
Instead, we build what we call an Ample adapter plate." Ample gets the specs of a battery pack from an OEM, then designs a structural frame for the pack that can accept its modules while still meeting all the same engineering requirements as the OEM pack. "That has the same shape, the same bolt fasteners, the same connector as the original battery," Hassounah said. Since the modules are already developed, Ample just has to develop a new adapter plate for each new model of EV that it supports. But since the modules are always identical, that means the stations can service different makes of EV, and it simplifies the swapping process. "Typically, you have the high-current connector, you have the big bolts that are holding most of the weight, and all that needs taking out. In our case, we're just removing modules in protective buckets, but the big structural piece that connects to the cooling and the high power and all that always stays in the car," he told me. "The second thing we had to build is the [cell] chemistry changes slightly between vehicles, and the first two years of Ample were just focused purely on that problem before we even built any robots. We build a layer of power electronics in our battery modules—the key is it's flexible enough but cheap enough that it doesn't change the economics of the battery in any meaningful way. That allows the battery to now adapt to the vehicle but also abstract the chemistry," he explained. Each new type of EV requires a little software work to get Ample's battery talking to the car, but Hassounah told me that there's plenty of similarity, despite what the OEMs might say. "When you talk about the communication between the battery and the car, it's very basic. There are five things all of them communicate: voltage, current, temperature, maximum power, and maximum regen. And of course, the sequence to turn on and turn off. But as long as the OEMs are working with you, it simply takes a couple of weeks to update our software," he said. One oft-heard concern about battery swaps (usually in the context of Tesla's failed experiment) was the problem of Someone Else's Battery. No one wants to turn up at a swap station to exchange their brand new, charges-to-100-percent battery for someone else's pack that only goes up to 80 percent. Ample's model is more akin to propane tanks you might use with a grill—it owns the modules, which it leases to customers. That way, you're never stuck with someone else's lemon. "That actually reduces the cost for everyone because it allows us to distribute the risk and say Ample makes sure it maintains the quality but also distributes the risk so no one ends up with a $20,000 bill," Hassounah said. There's also the tantalizing possibility of vehicles that gain more range over time, beyond the improvements that are possible with software updates. That's because Ample can update the chemistry or design of its modules and roll them out to vehicles already on the road. What's more, the modularity means that end users can have some flexibility with how many kilowatt-hours—and therefore how much extra mass—they need for a given journey. Range anxiety—whether it exists or not—means that most people want an EV with the biggest possible battery and greatest theoretical range, even if they only conduct a road trip once a year. "So a lot of your energy is spent moving the battery around, even though you don't need it. 
With modularization, you can choose how many modules you're going to put in the car," said John de Souza, Ample's other co-founder. Instead of buying a car with 300 miles of lithium-ion aboard, a customer might choose to carry around half as many modules for day-to-day driving, then add extra modules when necessary. For the time being, those customers will be fleets, not individual drivers. Here in the US, Ample is working with Uber, which is renting a fleet of Ample-equipped EVs to Uber drivers in the Bay Area. And Hassounah and de Souza told me that there are pilot deployments in the works for Europe and Japan. Listing image by Ample
1
Voreutils
WIP-Lang/voreutils
27
In the Beginning, There Were Taxes
Why Honor Them? The Rest Is History
2
Spying on the floating point behavior of existing, unmodified scientific applications
Spying on the floating point behavior of existing, unmodified scientific applications Dinda et al., HPDC'20

It's common knowledge that the IEEE standard floating point number representations used in modern computers have their quirks, for example not being able to accurately represent numbers such as 1/3, or 0.1. The wikipedia page on floating point numbers describes a number of related accuracy problems including the difficulty of testing for equality. In day-to-day usage, beyond judicious use of 'within' for equality testing, I suspect most of us ignore the potential difficulties of floating point arithmetic even if we shouldn't. You'd like to think that scientific computing applications which heavily depend on floating point operations would do better than that, but the results in today's paper choice give us reason to doubt.

…despite a superficial similarity, floating point arithmetic is not real number arithmetic, and the intuitive framework one might carry forward from real number arithmetic rarely applies. Furthermore, as hardware and compiler optimisations rapidly evolve, it is challenging even for a knowledgeable developer to keep up. In short, floating point and its implementations present sharp edges for its user, and the edges are getting sharper… recent research has determined that the increasing variability of floating point implementations at the compiler (including optimization choices) and hardware levels is leading to divergent scientific results.

Many years ago I had to write routines to convert between the IBM mainframe hexadecimal floating point representation and IEEE 754 while preserving as much information as possible, and it wasn't much fun!

The authors develop a tool called FPSpy which they use to monitor the floating point operations of existing scientific applications. The nature of these applications makes it difficult to say whether or not their outputs are ultimately incorrect as a result of floating point issues, but FPSpy is certainly a smoking gun suggesting that there are a lot of potential lurking problems. The study is conducted using a suite of 7 real-world popular scientific applications and two well-established benchmark suites; together they comprise about 7.5M lines of code.

FPSpy is designed to be able to run in production using unmodified application binaries, and is implemented as an LD_PRELOAD shared library on Linux. It interposes on process and thread management functions to follow thread and process forks, and on signal hooking and floating point environment control functions to know when it needs to 'get out of the way' in case it should interfere with application execution. FPSpy builds on hardware features that detect exceptional floating point events as a side-effect of normal processing… The IEEE floating point standard defines five condition codes, while x64 adds an additional one. These correspond to the events FPSpy observes. The condition codes are sticky, meaning that once a condition code has been set it remains so until it is explicitly cleared. This enables an extremely low overhead mode of FP-spying the authors call aggregate mode, which consists of running the application / benchmark and looking to see which conditions were set during the execution. It also has an individual mode which captures the context of every single instruction causing a floating-point event during execution.
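FPSpy reads the hardware's sticky status word directly, which isn't reachable from Python; but as a loose analogy of aggregate mode - recording only which event classes occurred, not where - here is a small numpy sketch of my own (numpy does not expose the Inexact or Denorm flags, so this is only an approximation of what FPSpy can observe):

    import numpy as np

    # Record only which classes of floating point events occurred, not where.
    seen = set()
    np.seterrcall(lambda kind, flag: seen.add(kind))

    with np.errstate(all="call"):
        x = np.float64(1.0) / np.float64(0.0)     # divide-by-zero -> inf
        y = np.float64(1e308) * np.float64(10.0)  # overflow       -> inf
        z = np.float64(0.0) / np.float64(0.0)     # invalid        -> NaN
        print("results:", x, y, z)

    print("event classes observed:", sorted(seen))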
Users can filter for the subset of event types they are interested in (e.g., to exclude the very common Inexact event due to rounding), and can also configure subsampling (e.g. only record 1 event in 10, and no more than 10,000 events total). A Poisson sampling mode enables sampling of all event types across the whole program execution, up to a configurable overhead. In the paper's overhead measurements for the Miniaero application, the overheads are very low in aggregate mode, and in individual mode with Inexact events filtered out; the three 'tracing with sampling' runs do include Inexact events.

The first discovery is that there is very little usage of floating point control mechanisms in these applications (e.g. checking for and clearing conditions). In fact, only the WRF weather forecasting tool used any floating point control mechanisms at runtime. Beyond arguing for FPSpy's generality, the static and dynamic results also suggest something important about the applications: the use of floating point control during execution is rare. Because very few applications use any floating point control, problematic events, especially those other than rounding, may remain undetected.

Aggregate-mode testing of the applications reveals that indeed they do have problematic events. For example, ENZO has NaNs, and LAGHOS divides by zero. Using individual mode tracing, it's possible to see not only what conditions occur, but also how often and where in the application. ENZO, for example, produces NaNs pretty consistently all throughout its execution, whereas LAGHOS shows clear bursts of divide-by-zero errors.

Rounding errors (Inexact) deserve a special treatment of their own because they are so common, and expected. Just because they are expected, though, doesn't always mean it's safe to ignore them. Inexact (rounding) events are a normal part of floating point arithmetic. Nonetheless, they reflect a loss of precision in the computation that the developer does need to reason about to assure reasonable results. In effect, losses of precision introduce errors into the computation. When modeling, for example, a system that involves chaotic dynamics, such errors, even if they are tiny, can result in diverging or incorrect solutions.

The MPFR library supports multiple precision floating point operations with correct rounding. The analysis from FPSpy suggests that a relatively small number of instruction forms are responsible for the vast majority of the rounding errors in the applications under study. For the most part, less than 100 instructions account for more than 99% of the rounding errors. This suggests the potential to trap these instructions and call into MPFR or an equivalent instead. This would allow existing, unmodified application binaries to seamlessly execute with higher precision as necessary, resulting in less or even no rounding… By focusing on less than 5000 instruction sites and handling less than 45 instruction forms at those sites, such a system could radically change the effects of rounding on an application… We are currently implementing various approaches, including trap-and-emulate.

Next time out we'll be looking at a PLDI paper proposing a new API for computations involving real numbers, one that has been designed to give results matching much more closely to our intuitions.
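A small postscript on the rounding discussion (my own aside, not from the paper): MPFR provides correctly-rounded arbitrary-precision arithmetic in C, but even Python's standard decimal module is enough to see how changing the arithmetic makes a familiar rounding artefact disappear.

    from decimal import Decimal

    # With binary doubles, 0.1 is not exactly representable, so every addition
    # rounds and the accumulated sum drifts away from 1.0.
    print(sum([0.1] * 10) == 1.0)           # False (the sum is 0.9999999999999999)

    # Decimal represents 0.1 exactly, so the same sum incurs no rounding at all.
    print(sum([Decimal("0.1")] * 10) == 1)  # True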
1
How can I become a fossil?
Less than one-10th of 1% of all species that have ever lived became fossils. But from skipping a coffin to avoiding Iran, there are ways to up your chances of lasting forever.

Every fossil is a small miracle. As author Bill Bryson notes in his book A Short History of Nearly Everything, only an estimated one bone in a billion gets fossilised. By that calculation the entire fossil legacy of the 320-odd million people alive in the US today will equate to approximately 60 bones – or a little over a quarter of a human skeleton. But that's just the chance of getting fossilised in the first place. Assuming this handful of bones could be buried anywhere in the US's 9.8 million sq km (3.8 million square miles), then the chances of anyone finding these bones in the future are almost non-existent.

Fossilisation is so unlikely that scientists estimate that less than one-tenth of 1% of all the animal species that have ever lived have become fossils. Far fewer of them have been found. As humans, we have a couple of things going for us: we have hard skeletons and we're relatively large. So we're much more likely to make it than a jellyfish or a worm. There are things, however, you can do to increase your chances of success. Taphonomy is the study of burial, decay and preservation – the entire process of what happens after an organism dies and eventually becomes a fossil. To answer the question of how to become a fossil, BBC Future spoke with some of the world's top taphonomists.

1. Get buried, and quickly "It's really a question of maintaining a good condition of the body after death – long enough to be buried under sediment and then altered physically and chemically deep underground to become a fossil," says Sue Beardmore, a taphonomist and collections assistant at the Oxford University Museum of Natural History. "To be preserved for millions of years, you must also survive the first hours, days, seasons, decades, centuries, and thousands of years," adds Susan Kidwell, a professor at the University of Chicago. "That is, you must survive the initial transition from the 'taphonomically active zone'… to a zone of permanent burial, where your remains are unlikely to be exhumed."

There are almost endless ways that fossilisation can fail. Many of these happen at, or down to 20-50cm below, the soil or seafloor surface. You don't want your remains to be eaten and scattered by scavengers, for example, or exposed to the elements for too long. And you don't want them to be bored into or shifted around by burrowing animals. The sand and mud deposits of Canada's Badlands quickly buried bones, making the area one of the world's richest hunting grounds for dinosaur fossils (Credit: Getty)

When it comes to rapid burial, sometimes natural disasters can help – such as floods that dump huge amounts of sediment or volcanic eruptions that smother things in mud and ash. "One theory for the occurrence of dinosaur bone beds is firstly drought conditions, that killed the dinosaurs, followed by floods that moved the sediments to bury them," Beardmore says. Of course, the fact that human bodies are typically buried six feet under (unless cremated) gives you another leg up here. But that isn't enough on its own.

2. Find some water Obviously the first step is dying, but you can't die just anywhere. Picking the perfect environment is key. Water is one important thing to consider.
If you die in a dry environment, once you’ve been picked over by scavengers, your bones will probably weather away at the surface. Instead, most experts agree you need to get swiftly smothered in sand, mud and sediments – and the best places for that are lakes, floodplains and rivers, or the bottom of the sea. “The palaeoenvironments that we often see the best fossils come out of are lake and river systems,” says Caitlin Syme, a taphonomist at the University of Queensland in Brisbane, Australia. The important thing is the rate at which fresh sediments are burying things. She recommends rivers flowing from mountains which cause erosion and therefore carry a lot of sediment. Another option is a coastal delta or floodplain, where river sediment is rapidly dumped as the water heads out to sea. Ideally, you also want an ‘anoxic’ environment: one very low in oxygen, where animals and microorganisms that would digest and disturb your remains can’t survive. Kidwell recommends avoiding about 50cm below the seafloor, “the maximum burrowing depth of shrimp, crabs and worms that might irrigate the sediments with oxygenated water”, which would promote decomposition and stir up the body. “You want to end up quickly after death in a spot that is relatively low elevation, so that it is a sink for sediment, and preferably with standing water – a pond, lake, estuary or ocean – so that anoxic conditions might develop,” she says. Choose the right conditions and you, too, could be preserved for as long as this 150 million year old archaeopteryx (Credit: Getty) In rare cases, fossils created in these kind of still, anoxic conditions preserve their soft tissues like skin, feathers and internal organs. Examples include the many exquisite feathered dinosaurs from China or the Bavarian quarries that produced the fossils of the earliest bird, archaeopteryx. Once your fossil gets below the biologically active surface layer, then it's stable and will continue to be buried more deeply as further sediments accumulate, Kidwell says. “The risk for destruction then shifts to a completely different geological timescale, namely that of tectonism.” The question, then, is how long before the sediments encasing the corpse are turned to more permanent stone… and are lifted by geological activity to a height where erosion can expose the remains. 3. Skip the coffin Now we come to the thorny technicality of what a fossil actually is – and what kind of fossil you want your body to be. Very generally, anything up to around 50,000 years old is what’s known as a ‘subfossil’. These are largely still made up of the original tissues of the organism. Extinct Pleistocene megafauna found in caves – such as giant ground sloths in South America, cave bears in Europe, and marsupial lions in Australia – are good examples. However, if you want your remains to become a fossil that lasts for millions of years, then you really want minerals to seep through your bones and replace them with harder substances. This process, known as ‘permineralisation’, is what typically creates a fully-fledged fossil. It can take millions of years. As a result, you might skip the coffin. Bones permineralise most rapidly when mineral-rich water can flow through them, imbuing them with things like iron and calcium. A coffin might keep the skeleton nicely together, but it would interfere with this process. There is a way a coffin might work, though. 
Mike Archer, a palaeontologist at the University of New South Wales, suggests burial in a concrete coffin filled with sand and with hundreds of 5mm holes drilled into the sides. This then needs to be buried deep enough that groundwater can pass through. “If you want to be a classic bony fossil, a bit like something from Dinosaur Provincial Park in Canada, then something like a [coarse] river sand would be pretty good,” says Syme. “All the soft tissues would be destroyed and you’d be left with this beautifully articulated skeleton.” In terms of the minerals, calcium ions which can precipitate into calcite, a form of calcium carbonate, are especially good. “These can start to cement or cover the body which will protect it in the long run, because given time it will most likely be buried at a greater depth,” Syme says. Deliberately seeding your corpse with the appropriate minerals, such as calcite or gypsum, might be a way to accelerate this. Encouraging the growth of tough iron-rich minerals would also be sensible as they withstand weathering well in the long run. If you want to personalise your fossil further, add colour with some copper (Credit: Alamy) Silicates, from the sand, are also a nice durable mineral to have incorporated. Archer even suggests getting buried with copper strips and nickel pellets if you fancy fossilised bones and teeth with a nice blue-green colour to them. 4. Avoid the edges of tectonic plates If you made it through the first few hundred thousand years and minerals begin to replace your bones, congratulations! You’ve successfully become a fossil. As sediments build up on top and you get pushed deeper into Earth’s crust, the heat and pressure will aid the process further. But it’s not a done deal yet. Your fossil might still shift to such depths that it could be melted by the Earth’s heat and pressure. Don’t want that to happen? Steer clear of the edges of tectonic plates, where the crust is going to eventually get sucked under the surface. One such subduction zone is Iran, where the Eurasian Plate is rising over the Iranian Plate. 5. Get discovered Now you need to think about the potential for rediscovery. If you want somebody to chance upon your carefully preserved fossil one day, you need to plan for burial in a spot that currently is low enough to accumulate the necessary sediments for deep burial – but that will eventually be pushed up again. In other words, you need a place with uplift where weathering and erosion will eventually scour off the surface layers, exposing you. Good for more than floating, the Dead Sea may be a good place to preserve your fossil (Credit: Getty) One good spot might be the Mediterranean Sea, Syme says; it’s getting shallower as Africa is pushed towards Europe. Other small, inland seas that will fill with sediment are good bets, too. “Perhaps the Dead Sea,” she says. “The high salt would preserve and pickle you.” 6. Or go rogue We’ve covered the standard method for hard, durable fossils with bone largely replaced by rock. But there are some oddball methods to consider, too. Top of the list is amber. There are astounding fossils perfectly preserved in this gemstone made of tree resin – such as recent finds of birds, lizards and even a feathered dinosaur tail in Myanmar. “If you can find a large enough amount of tree sap and get covered in amber, that’s going to be the best way to preserve your soft tissues as well as your bones,” Syme says. “But it’s obviously pretty difficult for such a large animal.” Can’t find enough amber? 
The next option is tar pits of the kind that have preserved sabre-toothed cats and mammoths at La Brea in Los Angeles, although here you would most likely end up disarticulated, your bones jumbled in with other animals. There's also freezing on a mountain or in a glacier, like Ötzi the iceman, found in the European Alps in 1991. Where Ötzi the iceman met his fate may not seem very comfortable, but it proved key for preserving his remains (Credit: Alamy)

Another route might be natural mummification, with your body left to dry in a cave system. "There are a lot of cave system remains that get covered with calcium from groundwater, which also forms stalactites and stalagmites," Syme says. "People like caving and so if the cave systems still exist in the future, they might happen upon you." One final method to preserve your corpse almost indefinitely, though not in the form of a fossil, would be launching you into space – or leaving you on the surface of a geologically inert celestial body with no atmosphere, such as the Moon. "The vacuum of space would be very good if you want your body to remain perpetually non-decaying," Syme says. She adds that you could attach a radio beacon if you want to get found again in the distant future.

7. Leave a little something extra Assuming you are found millions of years hence, what else might be preserved alongside you? Plastics (fidget spinners, anyone?), other oil-derived products that don't biodegrade and inert metals, like alloys, gold and rare metals of the kind found in mobile phones, all might last as long. Will mobile phones be one of the artefacts we leave for generations far in the future? (Credit: Getty)

Glass is durable too, and can withstand high temperatures and pressures. You can imagine finding the "outlines or shape of smartphones," Syme says. Archer notes that the durability of glass means you could chisel 'ENJOY!' on a small sheet of glass in a concrete coffin with your body and it would be there to find with your fossil. "To be 100% sure I would use diamond," Syme adds – it's immensely stable. Using a laser, you could etch a letter explaining the lengths you went to to get fossilised. If you also want to pre-plan your archaeological context, Syme believes bitumen highways and the foundations of skyscrapers are contenders. "We've dug down deep into the ground to build these things. You'll be able to see… the layouts of cities still there," she says.

Remember, the words you write will fade and your deeds will be forgotten. But a fossil? That, perhaps, could last forever.
3
The Traveling Salesman Problem in the Context of UK Pubs
Impossible. That's what you hear when you set out to solve a traveling salesman problem. In February 2018, the Washington Post reported that it would take at least 1,000 years for a computer to find an optimal route to only 22 points. So we were excited a couple of years ago when we computed the shortest possible walk to 24,727 pubs. A ton of math and 10 months of time on a fast computer brought home the tour. The computation even received fun coverage in The Guardian (October 21, 2016), The Sun (October 22, 2016), and other UK newspapers.

Then the comments and email started to roll in. Several thousands of them. Starting with the first messages in The Guardian, posted several minutes after Will Coldwell's article appeared. And on and on. The problem was the following. When we gathered data in the fall of 2015, we estimated our algorithms could handle a TSP instance with about 25,000 points. So, starting with the great data base, we filtered out places based on names; for example, any listing that contained the words "Hotel" or "Inn" was tossed out. That was a mistake. It would have been better to blindly choose 25,000 stops from the Pubs Galore list. The name-pruning method hit parts of the UK much harder than others. For example, we missed a number of well-loved pubs, particularly in Scotland. The Scots were not shy about letting me know.

But we are applied mathematicians. If the British want a tour through more pubs, then that's what we will deliver. It took another year-and-a-half of work and 250 years of computer time (up from the earlier tour's 10-month computation), but we now have a shortest-possible walking tour through nearly every UK pub. A total of 49,687 places to get a pint. The trip will take you 63,739,687 meters, or about a sixth of the distance to the moon. But, hey, that's what you asked for. And, along the way, we created new algorithms that can help to optimize plenty of mathematical models, well beyond the traveling salesman.

William Cook, Combinatorics and Optimization, University of Waterloo, Canada
Daniel Espinoza, Gurobi Optimization, USA
Marcos Goycoolea, School of Business, Universidad Adolfo Ibanez, Chile
Keld Helsgaun, Computer Science, Roskilde University, Denmark

We can't really list every UK pub. Drinking houses are opening and closing their doors each week. But we want to reach nearly all of them. So how many pubs are there? A recent BBC News article contained a chart, based on data from the British Beer and Pub Association, showing a slow but steady decrease in the number of pubs, with just over 50,000 reported in 2016. This number matches nicely the "almost 50,000 pubs" remark made in a Pubs Galore Twitter post, in a response to The Guardian's article on the 24,727-pub tour. The Twitter post agrees with the Pubs Galore page (April 16, 2018) reporting "Pubs Galore currently has 49546 open pubs listed". The Open Pubs data project reports a bit more, 51,566 in total, based on data from the Food Standards Agency's Food Hygiene Ratings. So around 50,000 pubs should be the target.

Good, since we were able to grab locations of 49,704 sites from the Pubs Galore data base on January 12, 2017. From this set we removed 17 pubs that could not be reached with Google walking directions. Ten of these removed listings are from the Isles of Scilly and several others are from airport terminals. You can find a list of the 17 pubs on the Data page. The following two images display Scottish pubs in the 24,727-pub tour, from 2016, and in the new tour. No wonder the Scots were upset.
I can remark that the Turf Tavern in Oxford is now stop #36,233 on the tour (comment 10:26), there are now 981 stops in Northern Ireland (comment 10:30), we have 26 pubs on the Isle of Skye (comment 10:33), and the Oak Tree in Balmaha is stop #14,874 (comment 10:38). So there!

For pub-to-pub distances, we rely on the fantastic service provided by Google Maps. Ask Google for the shortest way to walk from The Fiddler's Elbow over to The Bald Faced Stag and it will respond with excellent step-by-step directions. The level of detail covered by Google Maps is amazing. We use walking distances for two reasons. First, we obviously don't want to encourage anyone to be behind the wheel of a motorcar after visiting a pub. Secondly, with walking distances it doesn't matter in which direction you travel, from The Elbow to The Stag or back from The Stag to The Elbow. This is not always true for driving distances, particularly when navigating London's one-way streets. You can find more information on this on the Data page.

So this is our challenge. Using geographic coordinates of 49,687 pubs provided by Pubs Galore and measuring the distance between any two pubs as the length of the route produced by Google Maps, what is the shortest possible tour that visits all 49,687 and returns to the starting point? We need to make one final assumption. It is something only a mathematician would consider, but we have to assume that the route Google suggests for walking between The Fiddler's Elbow and The Bald Faced Stag is no shorter than the geometric distance between the points, that is, the route a smart crow would fly. This makes it conceivable to solve the problem without actually asking Google for the distance to travel between each pair of pubs, an important consideration since there are 1,234,374,141 pairs and Google puts caps on the number of distance requests per day.

This is the problem we have solved. The optimal tour has length 63,739,687 meters. Our result is that there simply does not exist any pub tour that is even one meter shorter (measuring the length using the distances we obtained from Google) than the one produced by our computation. It is the solution to a 49,687-stop traveling salesman problem (TSP).

A list of the 49,687 pubs, one after the other, in the correct order, resembles a good-sized phone book and does not convey the structure and complexity of the tour. A better way to get a quick view is to study the two images below, where the tour is depicted as a thin blue line. The drawing on the left includes the pub locations as markers and the one on the right is just the line drawing. You see that we obviously cannot walk several of the indicated routes: to reach the Isle of Man, Northern Ireland, and the islands of Scotland, the tour uses scheduled passenger-ferry routes provided by Google's direction services.

To give a detailed view, we make use of the Google Maps drawing tools to display an interactive version of the tour, where you can zoom in and pan from one region to another. The link is given below, but first a word of warning: the map contains a great deal of information and it can take a minute or so to load. We provide tips for using the map on the Tour page. If the map refuses to load for you, please have a look at the Tour page, where you can find smaller, regional maps, as well as further information about the route.

How do we know the tour is the shortest possible?
Clearly we did not check every tour, one by one. The first thing you learn about the TSP is that it is impossible to solve in this way. If you have i cities, then, starting from any point, you have i-1 possibilities for the second city. Then i-2 possibilities for the third city, and so on. The total number of tours is obtained by multiplying these values: (i-1) x (i-2) x (i-3) x . . . x 3 x 2 x 1. Now this is a big number. For this new (larger) pubs problem, it is roughly 3 followed by 211,761 zeroes, as computed by WolframAlpha. That is an unimaginably large number of possibilities. Even for 50 cities, the world's fastest supercomputer has no hope of going through the full count of tours one by one to pick out the shortest. (This is the basis for the 1,000-year estimate reported in the Washington Post.)

But this by itself does not mean we can't possibly solve an example of the TSP. If you have 50 words to put into alphabetical order, you don't worry about the 50 x 49 x 48 x ... x 3 x 2 x 1 possible lists you could create. You just sort the words from first to last and build the one correct list among the huge number of possibilities. For the TSP we don't know of any simple and fast solution method like we have for sorting words. And, for technical reasons, it is believed that there may be large, nasty TSP examples that no one can ever solve. (If you are interested in this and could use an extra $1,000,000, check out the P vs NP problem.) But if you need to plot a 50-point route for a holiday or to compute the order of 10,000 items on a DNA strand, then mathematics can help, even if you need the absolute shortest-possible solution.

The way to proceed is via a process known as the cutting-plane method. If you have 4 minutes to spare, and don't mind my squeaky voice, click here (109 MByte file) for a video that introduces the method and how it is used to attack the TSP. The full lecture is also available online. (Warning: At the time of the lecture, January 11, 2018, we were deep in the middle of the pubs computation and I didn't yet know if it would have a happy ending.) I expect you are in a hurry, however, so here is how I describe the process in a short piece in Scientific American: The idea is to follow Yogi Berra's advice "When you come to a fork in the road, take it." A tool called linear programming allows us to do just this, assigning fractions to roads joining pairs of cities, rather than deciding immediately whether to use a road or not. It is perfectly fine, in this model, to send half a salesman along both branches of the fork. The process begins with the requirement that, for every city, the fractions assigned to the arriving and departing roads each sum to one. Then, step-by-step, further restrictions are added, each involving sums of fractions assigned to roads. Linear programming eventually points us to the best decision for each road, and thus the shortest possible route.

Our pubs computation used an improved version of the Concorde implementation of the TSP cutting-plane method. Even if you are in a hurry, you might want to see for yourself how the process solves smaller examples on an iPhone or iPad by downloading the free Concorde App. Our computation also adopted Keld Helsgaun's LKH code. LKH combines a powerful local-search technique with a genetic algorithm to produce a high-quality tour. Remarkably, in the case of the UK pubs problem, LKH delivered, early in our computation, what proved to be the optimal solution.
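LKH's actual moves (the Lin-Kernighan-Helsgaun neighbourhood) are far more sophisticated, but as a toy illustration of the local-search idea of repeatedly improving a tour until no simple change helps, here is a tiny 2-opt sketch in Python on random points. It is my own example, not part of the pubs computation, and the point count is arbitrary.

    import math, random

    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(60)]
    dist = lambda a, b: math.dist(pts[a], pts[b])

    def tour_length(tour):
        return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

    tour = list(range(len(pts)))
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # Reversing tour[i:j] swaps two edges of the tour for two others;
                # keep the change whenever it makes the tour strictly shorter.
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(candidate) < tour_length(tour) - 1e-12:
                    tour, improved = candidate, True

    print("2-opt tour length:", tour_length(tour))

A local search like this only guarantees a locally optimal tour; proving that no tour at all can be shorter is the much harder part, described next.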
The bulk of our work, spanning 250 years of computation time, was to prove there could be no shorter tour than the one found by Keld's LKH code. In working with road data, we were faced with the challenge of finding the correct TSP solution even though we could not possibly ask Google for all 1,234,374,141 pairs of pub-to-pub distances. In our earlier work on the 24,727-pubs tour, we used an ad hoc, trial-and-error process to gather a sufficient number of Google pub-to-pub distances to permit the computation to go through. (See the UK24727 page.) For this new, much more difficult, problem, we developed algorithms to automate this portion of the computation, requesting pub-to-pub distances for 2,214,453 pairs, less than 1/500th of the total number. Many thanks to Google for providing this data for us!

The distance-gathering part of the computation was completed on February 15, 2017. Four days later, LKH had produced 6 different tours, each having length 63,739,687 meters. These tours (or, more precisely, their common length) served as a beacon for Concorde and the cutting-plane method, allowing us to have an excellent measure of the progress we were making towards the solution of the problem. What followed was a long process to build a strong linear-programming relaxation for the pubs TSP, utilizing a 288-core network of computers (whenever it was not otherwise occupied). This process ended on May 16, 2017, after a total (adding up the time spent on each core of the network) of approximately 50 years of computation. The result was that we now knew for certain that no tour could be shorter than 63,732,189 meters. So we knew that the LKH tours were at most 7.5 kilometers longer than an optimal route.

To finish off the problem, we turned to Concorde's branch-and-bound search procedure. In this process, the collection of tours is repeatedly subdivided and the cutting-plane method is applied to the resulting TSP subproblems. The simplest form of the division is to select a pair of pubs, say The Black Dog and The Duke of Cornwall in Weymouth, and consider first only tours where the two pubs are visited consecutively, then consider only tours where, between the stops at The Dog and The Duke, we drop in on at least one other pub along the way. This selection divides the set of all tours neatly into two subsets. In this final phase of the computation, we processed 557,271 subproblems. A big part of the challenge was in making estimates of the remaining computation time, to determine whether or not we would be able to solve the problem before we all reached retirement age.

The computation finally finished on March 5, 2018, nearly 14 months after we gathered the data from Pubs Galore. The total amount of computer time for this branch-and-bound portion of the computation was roughly 200 years, bringing the total to 250 years (if carried out on a single processor core of a Linux server). Click here to see a drawing of the search tree, where the position of a subproblem corresponds to the value of its fractional tour. For a closer look, here is a pdf file for the tree.

49,687¥ Reward

We have applied some 64 years of mathematics research (going back to the 1954 paper by Dantzig, Fulkerson, and Johnson) to obtain a proof that we have a shortest-possible tour. But it was a huge computation and it wouldn't hurt to have more eyes on this particular example of the TSP. So, we offer 49,687 Japanese Yen to the first person who can find a tour that is even 1 meter shorter than our 63,739,687-meter route.
Let's call it an even 50,000 ¥. I have the bank notes ready to ship out. You can find details on the input for the problem on the Data page. But please don't view this as a realistic way to earn enough cash to pick up a pint at the first 100 pubs of the tour. We are confident our solution is correct. I should mention that the branch-and-bound run was made with the input length of 63,739,688, that is, 1 greater than the length of the LKH tours. In this way, as another check, the branch-and-bound search had to itself produce a tour of length 63,739,687. Which it did, on the final day of the search. If you are interested in creating your own local pub tour, the best bet for data is to go back to the original sources, Pubs Galore for locations and Google Maps for up-to-date walking distances. But the information provided by these sources changes over time. Therefore, to document the 49,687-stop TSP instance we have solved, we provide the raw data needed to reproduce the travel distances on the Data page. Early computational studies focused on the most natural class of salesman problems: select an interesting group of cities, look up point-to-point distances in a road atlas, and have a go at finding the shortest tour. Record-setting solutions were found by legendary figures in applied mathematics, operations research, and computer science. The first reference, in particular, is widely viewed as the most important paper in the history of the broad fields of discrete optimization and integer programming. The links are to technical research papers. For lighter viewing, have a look at our page. In the late 1970s, the focus switched to geometric examples of the TSP, where cities are points drawn on a sheet of paper and travel is measured by straight-line distances. The reasons were twofold. First, with over 100 stops it became difficult to obtain driving distances along road networks: printed road atlases included distances only for major cities. Second, there were classes of industrial problems that neatly fit into the geometric TSP setting. Indeed, the next world record, set in 1980 by Harlan Crowder and Manfred Padberg, consisted of locations of 318 holes that had to be drilled into a printed-circuit board. Geometric TSP instances, arising in applications and from geographic locations, were gathered together in the TSPLIB by Gerhard Reinelt in the early 1990s. This collection became the standard test bed for researchers. The largest of the instances, having 85,900 points arising in a VLSI application, was solved by Applegate et al. in 2006. The geometric data sets are worthy adversaries, but the large industrial instances have points clustered into straight lines. These examples are punching below their weight, likely missing aspects of the complexity of the road TSP challenges. A main research interest, for us, in solving road-distance examples of the TSP was to establish whether or not optimization techniques, that since the 1970s have been directed towards geometric examples, would carry over to the non-geometric data provided by Google. The great difficulty we encountered with the pubs example made for fruitful research. But road examples are also two dimensional, with points specified by the latitude and longitude of each pub location. So what about 3-dimensional data, where points are given by xyz-coordinates and travel is measured by the straight-line Euclidean distance between pairs of points? Work here might also lead to interesting optimization research. 
Fun examples of 3D travel would be to mimic the voyages of the starship Enterprise, going from star to star. Fortunately, astronomers have interesting databases, with sufficient information to give the approximate 3D positions of stars. Off we go! As a warm-up, we first computed an optimal tour to visit the nearest 10,000 stars to our sun. You can see the points for the star locations and the edges of the tour in the following 9-second video. After that, we stepped up to a 109,339-star instance from the HYG Database constructed by David Nash. (Many thanks to Bob Vanderbei for directing us to this collection.) We were able to solve this instance in September 2017. You can find the data for the 109,339-star problem, in TSPLIB format, here. This is now the largest solved instance of the traveling salesman problem. (Actually, we also solved the TSP for the full 119,614 entries of the HYG database, but for 10,275 of these stars the distance information is missing: the xyz-coordinates in the database place these 10,275 points all at distance 100,000 parsecs from the Earth, whereas each of the remaining stars has distance less than 1,000 parsecs, so it is really like a separate TSP instance on the outer rim.) It is interesting that the 109,339-star example, despite its size, was much easier for our methods than the UK pubs instance. In fact, we solved it by stealing time from the UK computation during the summer of 2017. The total running time for the computation was 7.5 months. But a nice thing about computational research is that you can always go bigger. If 109,339 stars was not enough to spark the creation of new optimization techniques, then how about a 2,079,470-star instance obtained from combining data from the European Space Agency's Gaia Mission together with the bright stars from the HYG database? The full Gaia Data Release 1 reports on an amazing total of 1,142,679,769 stars. Most of the entries do not contain sufficient information to estimate distances to the corresponding stars (but future work in the ESA Gaia project should make such estimates possible). However, for 2,057,050 stars in the TGAS (Tycho-Gaia Astrometric Solution) collection, the Gaia data does permit distance estimations. The process used to obtain these distances is described in the following research paper: Estimating Distances from Parallaxes. III. Distances of Two Million Stars in the Gaia DR1 Catalogue, Tri L. Astraatmadja and Coryn A. L. Bailer-Jones, The Astrophysical Journal, Volume 833, Number 1 (2016). We use the Astraatmadja-Bailer-Jones distance estimates to obtain coordinate positions for the TGAS stars. The data set for our TSP instance is given in the gzipped file gaia2079471.tsp.gz. Studying this large-scale example of the TSP is on-going work, together with David Applegate (Google) and Keld Helsgaun (Roskilde University). We currently have a tour of length 28,884,456.3 parsecs, found with a parallel version of LKH. We also know, via a parallel application of the cutting-plane method, that there is no tour shorter than 28,883,773.4 parsecs. That means our tour is at most 1.000024 times longer than an optimal route through the two million stars. That fourth zero in the approximation factor is the money ball: leading commercial mixed-integer programming solvers declare, by default, that a problem is solved if they obtain a solution and a bound that differ by at most a factor of 1.0001. We are already 4 times closer.
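As a quick sanity check on the quoted gap for the Gaia instance, the ratio of the best known tour to the cutting-plane lower bound can be recomputed directly from the two numbers above:

```python
# Both values are in parsecs, as quoted in the text.
tour_length = 28_884_456.3
lower_bound = 28_883_773.4
ratio = tour_length / lower_bound
print(round(ratio, 6))   # 1.000024, i.e. within a factor 1.000024 of an optimal tour
print(ratio <= 1.0001)   # True: already well inside the default 1.0001 MIP gap
```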
But we are shooting for a shortest-possible route, not just a good approximation. It will certainly be difficult to decrease substantially the gap between the length of the tour and the lower bound. Improvements will come only through advancements in general techniques for the solution of optimization problems of enormous scale. That is what this area of research is all about. The work was carried out over the past three-and-a-half years. We use the UK pubs data, and other large examples of the traveling salesman problem, as a means for developing and testing general-purpose optimization methods. The world has limited resources and the aim of the applied mathematics fields of mathematical optimization and operations research is to create tools to help us to use these resources as efficiently as possible. For general information on mathematical modeling and its impact on industry, commerce, medicine, and the environment, we point you to a number of societies that support mathematics research and education: American Mathematical Society, Mathematical Association of America, Mathematical Optimization Society, INFORMS (operations research), London Mathematical Society, and SIAM (applied mathematics). Google Maps provided the interface between the real world and the abstract mathematical model of the TSP. The engineers at Google do all of the heavy lifting in dealing with paths, roads, traffic circles, construction sites, closures, detours, and on and on. Pubs Galore - The UK Pub Guide is the source for the locations of the stops on our TSP tour. No matter where you are in the UK, the Pubs Galore site will help you find a cozy place for a meal and a drink. The huge number of linear-programming models that arose in the computation were solved with the IBM CPLEX Optimizer. Many thanks to IBM for making their great software freely available for academic research. The work of William Cook was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council (NSERC) of Canada. US History Tour See 49,603 sites from the National Register of Historic Places. UK24727 Tour An optimal tour to 24,727 pubs in the UK. Solved in August 2016. Queen of College Tours Drive to all 647 campuses on Forbes' list of America's Top Colleges. Pokemon Go Tours How to catch 'em all. As quickly as possible. An introduction to the TSP, including its history, applications, and solution techniques. Detailed computational study of the cutting-plane method for the TSP. Gentle introduction to the P vs NP problem and its ramifications.
2
The Music Industry Doesn't Understand Music; It Never Did
Every year, whether it's the Oscars, the Video Game Awards, or the Nobel Prize for that matter, we're subjected to infuriated individuals venting at the injustice of the proceedings. And, sometimes, they make a good point. Views for videos ranting about who got robbed are higher than the viewership for most award ceremonies. As long as humans have been judging art, they've been p . No medium is quite so noteworthy for its glaring omissions or lack of self-awareness as the music world. Imagine a bakery run by a guy with no functioning taste buds. Now imagine Michelin puts that guy in charge of deciding which restaurant gets the stars, and you've got some idea how the Grammys operate. The award show is p on by the record labels and corporate insiders every year, handed out at the appropriately corporate-sounding Staples Center in LA. The business never really understood burgeoning trends or youth culture, but damn if it didn't coast for a good 30 years on goodwill before anyone noticed. The more cutting-edge and unclassifiable the music, the more inscrutable it is to non-fans. Twenty years after the New York Dolls established the punk genre, the music biz still hadn't figured out what the hell punk was, even after milking every last dime out of the fad and then driving it underground, as evidenced by them slapping the designation on an arbitrarily assembled new wave/pop/dance compilation. Music nerds, take a deep breath and try not to throw up in your mouth: Any self-respecting punk would have shoplifted it anyway. The utter ignorance of classifying the Thompson Twins as "punk" wasn't a one-time clerical error by a suburban mom in Warner Brothers' marketing department or a mistake by some dinosaur in a suit running CBS Records. No, this kind of perplexing, out-of-touch attitude is pervasive -- Exhibit B, the 1989 Grammys, which marked the worst award show misjudgment of all time. Keeping up with what the kids were hip to, the newly inaugurated Best Hard Rock/Heavy Metal Performance award was first handed out in 1989 to Jethro Tull. A few issues with this. As any music aficionado will immediately notice, Jethro Tull isn't a metal band. (As a reminder, when you Google Jethro Tull, make sure you click on the correct Wikipedia entry, p .) While you could argue that their sound had changed since the '60s, the British prog-rock band was closer to synth jazz or experimental folk. Our guess is that the organizers lumped them in to fill out the new category's quota, and no one in charge could rattle off more than one or two legit metal bands. That or the Grammy people still thought metal was an anti-social p and were terrified of giving them more press. Evidently, not a lot of Bathory or Ministry fans on the Beverly Hills-area voting roll when they were deciding the Grammy awards that afternoon. Megadeth was snubbed, too, presumably for encouraging poor spelling. How hard can it be to find metal bands? In case any record store patron got confused, Pantera literally stamped their genre on their album cover in bright red, legible, non-Gothic typeface. Metal Magic And chose a font that made everyone think they were suffering a seizure. The second oversight? Jethro Tull's competition that night was Metallica. Not later-period, short-haired Metallica, but the iconic thrash metal gods in their peak years.
Though presenter Alice Cooper tried to keep the proceedings professional, the embarrassment on Lita Ford's face was unmistakable as she learned Iggy Pop, AC/DC, and Jane's Addiction were also overlooked in favor of a washed-up hippie with a flute. The crowd (those in attendance who realized the monumental scale of the upset that had just occurred) fell silent for minutes before p , under the impression that Alice Cooper was pulling another practical joke in keeping with his wacky persona and shocking stage antics: Guillotines and killing chickens don't seem so horrifying in comparison, now do they? At that same award show, Will Smith won for best rap song while the more controversial NWA and Public Enemy weren't even nominated, despite releasing socially relevant, generation-shaping, genre-defining albums that still hold up. The Grammy people spared themselves the public humiliation of going with the safest, lamest choice by p . This kind of terrible judgment and lack of respect caused a p that led to them trying to fix the issue. And that sure wasn't the first time they bungled a major award. The Grammys were always terrible at differentiating fads from new genres. The first metal album award wasn't handed out until the metal genre was already petering out. Rap didn't get its own dedicated Best Album category until the mid-'90s, the same year Tupac died. Great timing, guys. Twelve years before the Jethro Tull debacle, the one-hit-wonder Starland Vocal Band won the Best New Artist prize, beating out a bunch of incredible acts launching debut albums in 1976 like Heart, Tom Petty, Blondie, Billy Ocean, The Runaways, and The Ramones. The truly mortifying part was that none of those bands or performers were even nominated. Boston was, but multiple charting singles couldn't keep them from losing to the blandest, least innovative choice. Should you think we're being too mean to the Grammy people: Macklemore has more Grammy awards to his name than Snoop Dogg. Milli Vanilli has returned more Grammys than Run DMC has won. Queen, The Who, Diana Ross? Yup, empty trophy cases. It sort of makes sense. What better place to celebrate music more boring than a manila folder than inside of an arena named after a store that sells them? Considering the Grammys' reputation, we're lucky there isn't a Muzak category. Bohao Zhao "And the Best Fluegelhorn Solo Performance goes to ..." The situation is getting dire. A music publisher is supposed to be focused on finding new artists, anticipating and capitalizing on new trends. As streaming (and pirating) became the norm, labels p , aka the talent scouts. Why bother trying to get ahead of trends and scour dingy underground clubs when you can just look at the p on social media sites? Instead of figuring out a way to get better, they are turning into the skid and doing the next best thing, p of old artists that they can get their hands on. Record labels are now slobbering at the chance to own a piece of the five-decades-old Fleetwood Mac, whose most culturally relevant moment in the last 30 years was the p . You can still get lucky and get a viral video on YouTube, TikTok, or something to land a contract, but the odds are against you. Not surprisingly, the very concept of p is starting to die out. Long story short, unless you're a p , your music career is screwed in 2021. Might we propose, for the sake of everyone's sanity, that we simply stop caring about award shows -- all of them? Just sit back and view them with cold detachment, realizing none of this matters.
None of the millionaires in charge know what they are doing or care. Much like the Oscars, all these shows are industry hype fests where everyone is too high on coke, out-of-touch, or jaded to take art or p . Neither should we. It was never really about memorializing art that would stand the test of time. Caring about who wins will only drive you crazy in the long run. Just ask p . Top Image: Elektra Records
95
Malware Could Use SSD Over-Provisioning to Bypass Security Measures
(Image credit: Shutterstock) Researchers in Korea have identified a vulnerability in SSDs that allows malware to plant itself directly in a drive's empty over-provisioning area. As reported by BleepingComputer, this lets the malware sit out of reach of most security countermeasures. Over-provisioning is a feature included in all modern SSDs that improves the lifespan and performance of the drive's built-in NAND storage. Over-provisioning is essentially just empty storage space, but it gives the SSD controller room to keep data evenly distributed across the NAND cells by shuffling data into the over-provisioning pool when needed. While this space is supposed to be inaccessible to the operating system -- and thus anti-virus tools -- this new malware can infiltrate it and use it as a base of operations. (Image credit: BleepingComputer) Researchers at Korea University in Seoul modeled two attacks that exploit the over-provisioned space. The first demonstrates a vulnerability that targets invalid data (data deleted in the OS but not physically wiped) within the SSD. To harvest more potentially sensitive data, the attacker can change the size of the over-provisioned pool, so that when a user deletes data, more of it remains physically intact on the NAND. SSDs rarely physically erase data unless it's absolutely necessary, in order to preserve resources. (Image credit: BleepingComputer) The second attack is the one described at the top of this article: injecting malware directly into the over-provisioning pool. In this example, two SSDs are connected as one device, and over-provisioning is set to 50%. The attacker reduces the first SSD's OP range to 25% of its total size and increases the second SSD's OP range to 75%. The extra hidden space on the second drive gives the attacker room to plant malware in its OP area, while the combined OP range across both drives is still 50%, so at a glance the OP area appears unaffected. To counter the first attack model, the researchers suggest a pseudo-erase algorithm that physically deletes data on an SSD without hurting real-world performance. To counter the second, they recommend a monitoring system that watches the over-provisioned size of each SSD in real time. In addition, SSD management tools that can change over-provisioned sizes should have more robust protection against unauthorized access. Thankfully, these attacks were modeled by researchers and have not been seen in the wild. However, an attack like this could very well happen, so hopefully SSD manufacturers will patch these vulnerabilities before someone gets a chance to exploit them.
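To make the second attack model and the recommended countermeasure a little more concrete, here is a minimal sketch in Python of why per-drive monitoring matters. The get_op_percent() helper and the baseline values are hypothetical stand-ins, not a real vendor API; actual over-provisioning sizes would have to come from the drive's management tools.

```python
EXPECTED_OP = {"ssd0": 50.0, "ssd1": 50.0}   # baseline OP % recorded at deployment time

def get_op_percent(drive: str) -> float:
    # Hypothetical reading after the attack described above:
    # the attacker shrank ssd0's OP area and grew ssd1's.
    return {"ssd0": 25.0, "ssd1": 75.0}[drive]

def combined_ok(drives) -> bool:
    # A naive monitor that only checks the total is fooled: 25 + 75 == 50 + 50.
    return sum(map(get_op_percent, drives)) == sum(EXPECTED_OP.values())

def per_drive_ok(drives) -> bool:
    # Checking each drive against its own baseline catches the resize.
    return all(get_op_percent(d) == EXPECTED_OP[d] for d in drives)

print(combined_ok(EXPECTED_OP))   # True: the combined OP range still looks normal
print(per_drive_ok(EXPECTED_OP))  # False: the per-drive shift is flagged
```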
73
Multiway Turing Machines
PRELIMINARY VERSION Over the years I’ve studied the simplest ordinary Turing machines quite a bit, but I’ve barely looked at multiway Turing machines (also known as nondeterministic Turing machines or NDTMs). Recently, though, I realized that multiway Turing machines can be thought of as “maximally minimal” models both of concurrent computing and of the way we think about quantum mechanics in our Physics Project. So now this piece is my attempt to “do the obvious explorations” of multiway Turing machines. And as I’ve found so often in the computational universe, even cases with some of the very simplest possible rules yield some significant surprises…. An ordinary Turing machine has a rule such as that specifies a unique successor for each configuration of the system (here shown going down the page starting from an initial condition consisting of a blank tape): In a multiway Turing machine more than one possible outcome can be specified for a particular case in the rule: The result is that there can be many possible paths of evolution for the system—as represented by a multiway graph that connects successive possible configurations: The evolution according to the ordinary Turing machine rule corresponds to a particular path in the multiway graph: Continuing for a few more steps, one begins to see a fairly complex pattern of branching and merging in the multiway graph: My purpose here is to explore what multiway Turing machines with simple rules can do. The basic setup for my multiway Turing machines is the same as for what are usually called “nondeterministic Turing machines” (NDTMs), but in NDTMs one is usually interested in whether single paths have particular properties, while we will be interested in the complete multiway structure of all possible paths. (And by using “multiway” rather than “nondeterministic” we avoid the confusion that we might be thinking about probabilistic or random paths—and emphasize that we’re studying the structure of all possible paths.) In our Physics Project, multiway systems are what lead to quantum mechanics, and our multiway Turing machines here correspond quite directly to quantum Turing machines, with the magnitudes of quantum amplitudes being determined by path weighting, and their phases being determined by locations in the branchial space that emerges from transverse slices of the multiway graph. There are 4096 possible ordinary Turing machines with s = 2 head states and k = 2 colors. All these machines ultimately behave in simple ways, with examples of the most complex behavior being (the last case is a binary counter with logarithmic growth): Things get only slightly more complex with s = 3, k = 2, but with s = 2, k = 3 one finds the simplest Turing machine with complex behavior (and this behavior occurs even with a blank initial tape): As suggested by the Principle of Computational Equivalence, this Turing machine turns out be computation universal (making it the very simplest possible universal Turing machine). So what this means is that the threshold for universality in ordinary Turing machines is s = 2, k = 3: i.e. rules in which there are 6 cases specified. But one question we can then ask is: what is the threshold for universality in multiway Turing machines? To say that our s = 2, k = 3 ordinary Turing machine is universal is to say that with appropriate initial conditions the evolution of this Turing machine can emulate the evolution of any other Turing machine (at least with a suitable encoding, implementable by a bounded computation). 
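Before going further into universality, it may help to make the basic setup concrete. Below is a minimal sketch in Python (my own encoding, not the Wolfram Language code behind the original pictures) of multiway Turing machine evolution: a rule maps each (head state, color read) case to a list of possible (color written, head move, new head state) outcomes, and repeatedly applying every applicable outcome, while merging identical configurations, yields the multiway graph. The particular s = 2, k = 2 rule is hypothetical, chosen only so that one case branches.

```python
rule = {
    (1, 0): [(1, +1, 2), (0, -1, 2)],   # this case has two possible outcomes: it makes the rule multiway
    (1, 1): [(0, +1, 1)],
    (2, 0): [(1, -1, 1)],
    (2, 1): [(1, +1, 2)],
}

def successors(config):
    # all configurations reachable in one step from (head state, head position, tape)
    state, pos, tape = config
    for write, move, new_state in rule.get((state, tape.get(pos, 0)), []):
        new_tape = dict(tape)
        new_tape[pos] = write
        yield (new_state, pos + move, new_tape)

def canonical(config):
    # hashable form of a configuration, keeping only the non-blank cells
    state, pos, tape = config
    return (state, pos, frozenset((p, c) for p, c in tape.items() if c != 0))

def multiway_graph(initial, steps):
    edges, seen, frontier = set(), {canonical(initial)}, [initial]
    for _ in range(steps):
        next_frontier = []
        for config in frontier:
            for succ in successors(config):
                edges.add((canonical(config), canonical(succ)))
                if canonical(succ) not in seen:      # merge identical configurations
                    seen.add(canonical(succ))
                    next_frontier.append(succ)
        frontier = next_frontier
    return seen, edges

states, edges = multiway_graph((1, 0, {}), 5)   # start from a blank tape in head state 1
print(len(states), len(edges))
```

A rule in which every case lists exactly one outcome reduces this to the ordinary single-path evolution, and a case with no outcomes listed corresponds to halting on that input.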
The closest analogous definition of computation universality for multiway Turing machines is to say that with appropriate initial conditions the multiway evolution of a Turing machine can emulate (up to suitable encoding) the multiway evolution of any other Turing machine. (As we will discuss later, however, the “nondeterministic interpretation” suggests a nominally different definition based on whether initial conditions can be set up to make corresponding individual paths with particular identifying features exist.) To specify the rule for an ordinary Turing machine, one normally just defines the outcome for each of the s k possible “inputs”. But what if one omits some of these inputs? In analogy to our usual treatment of string or hypergraph rewriting systems it is natural to just say that if one of the omitted inputs is reached in the evolution of the system, then no rewrite is performed, or—in standard Turing machine terms—the system “halts”. With s = 2, k = 2, the longest that machines that will eventually halt survive (starting from a blank tape) is just 6 steps—and there are 8 distinct machines (up to reflection) which do this, 3 with only 2 cases out of 4 in their rules, and 5 with 3 cases: In a sense each of these rules can be thought of as using some subset of all 2 s^2 k^2 possible rule cases, which for k = 2, s = 2 is: And in general any multiway Turing machine rule corresponds to some such subset. If all possible inputs appear exactly once, one gets an ordinary Turing machine that does not halt. If inputs appear at most once, but some do not appear at all, the Turing machine may halt (though if it never reaches that input it won’t halt). But if any input appears more than once, the Turing machine may exhibit nontrivial multiway evolution. If all inputs appear at least once, no branch of the multiway evolution can halt. But if some inputs appear more than once, while some do not appear at all, some branches of the multiway evolution may halt, while others may continue forever. In general, there are 2^(2 s^2 k^2) possible multiway Turing machine rules. In talking about rulial space and rulial multiway systems I previously discussed the particular example of the (“rulial”) multiway Turing machine rule in which every single case is included (i.e. all 32 cases for s = 2, k = 2), so that the Turing machine is in a sense “as multiway as possible”. But here we are concerned with multiway Turing machines whose rules are instead as simple as possible—and “far from the rulial limit”. The simplest nontrivial multiway Turing machines have just two cases in their rules. In general, there are Binomial[2 s^2 k^2, 2] such rules, or for s = 2, k = 2, 496—of which 112 have actual multiway behavior. A very simple example is the rule which generates tape configurations containing all binary numbers and gives a multiway system which forms an infinite binary tree: A slightly less trivial example is the rule which produces a multiway graph in which branches stop as soon as the head is on a non-blank cell: Continuing this, the overall structure of the multiway graph is (where halting states are indicated by red dots): As another example, consider the rule: Once again, the system halts if the head gets onto a non-blank cell, but this happens in a slightly more elaborate way, giving a slightly more complicated multiway graph: In the example we’ve just seen, there is explicit halting, in the sense that the Turing machine can reach a state in which none of its rules apply.
Another thing that can happen is that Turing machines can get into infinite loops—and since in our multiway graphs identical states are merged, such loops show up as actual loops in the multiway graph, as in this example: As soon as one allows 3 cases in the rules, things rapidly get more complicated. Consider for example the rule (which can never halt): The first few steps in its multiway graph are: This example exposes a somewhat subtle issue. In our earlier examples, all the states generated on a particular step could consistently be arranged in a single layer in the multiway graph. But here a particular state is generated both at step 2 and at step 4—so no such “global layering” is possible (at least, assuming, as we do, that we combine identical copies of this state). And a consequence of this lack of global layering is that if we compute for a given number of steps we can end up with “dangling states” that appear in our rendering well above the “final layer” shown. After another step, the previous “dangling states” now have successors, but there are new dangling states: Continuing for more steps, a somewhat elaborate but nevertheless fairly regular multiway graph begins to develop: Here’s another example of a multiway Turing machine with 3 cases in its rule: The first few steps in its multiway graph are: Continuing for another step, things get more complicated, and, notably, we see a loop: Continuing for a few more steps we get: (Note that since in this rule there is no halting, the dangling ends visible here must show additional growth if the evolution is continued.) It’s perfectly possible to get both rapid growth and halting; here’s an example. It’s fairly easy to get a sense of the behavior of an ordinary Turing machine just by displaying its successive configurations down the page. But what about for a multiway Turing machine? The multiway graph shows one the relationships defined by evolution between complete configurations (states) of the system: But such a picture does not make clear the relationships between the detailed forms of different configurations, and for example the “spatial motion” of the head. So how can we do this? One possibility is in effect to display all the different possible configurations at a given step “overlaid on top of each other”. For the machine above the new configurations obtained at successive steps are (though note that these do not appear on specific layers in the rendering of the multiway graph above): Putting these “on top of each other”, and “averaging colors”, we get: In this case, such a visualization reveals the “diffusive” character of the behavior: the motion of the head in different possible evolutions effectively corresponds to all possible paths in an ensemble of random walks, so that the probability to find the head at offset x is Binomial[t, x]/2^t. We can see the lattice of paths more explicitly if we join head positions obtained at successive steps of evolution: What does this approach reveal about other multiway Turing machines we’ve discussed? Here’s one of our first examples: And here’s another of our examples: But generally this kind of averaged picture is not particularly helpful. So is there an alternative? Basically we need some way to simultaneously visualize both multiway structure and the detailed forms of individual configurations. In analogy to our Physics Project, this is essentially the problem of simultaneously visualizing both branchial and spatial structure. So where should we start?
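The “diffusive” behavior just described is easy to check numerically: if every step offers both a left move and a right move, the head offsets after t steps are distributed exactly like an ensemble of simple random walks. A small sketch, writing the binomial in the usual random-walk form Binomial[t, (t + x)/2]/2^t for offsets x of the right parity:

```python
import itertools, math
from collections import Counter

t = 8
# tally the head offset over every possible sequence of left/right moves
offsets = Counter(sum(moves) for moves in itertools.product((-1, +1), repeat=t))
for x in range(-t, t + 1, 2):
    assert offsets[x] == math.comb(t, (t + x) // 2)          # random-walk path count for offset x
print({x: offsets[x] / 2**t for x in range(-t, t + 1, 2)})   # the binomial probability profile
```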
Since our everyday experience is with ordinary space, not branchial space, it seems potentially better to start with ordinary space, and then add in the branchial component. And thinking about this led to the concept of multispace: a generalization of space in which there is in effect at every point a stack of multiway possibilities, with these different possibilities being connected at different places in branchial space. In our Physics Project, even visualizing ordinary space itself is difficult, because it’s built up from lower-level structures corresponding to hypergraphs. But for Turing machines the basic structure of space is constrained to be very simple: there is just a one-dimensional tape with a head that can go backwards and forwards on it. But in understanding multispace, it is probably easier to start from the branchial side. Let’s for now ignore the values written on the tape of a Turing machine, and consider only the position of the head, say relative to where it started. Every state in the multiway graph can then be labeled with the position of the head in that configuration of the Turing machine. So for the first rule above we get: But now let’s imagine setting this up in 3D, with the timelike direction down the page, the branchlike direction across the page, and the spacelike direction into the page: But if we now rotate this, we can see the interplay of branchial and spatial structure: In this particular case, we see that the main branches visible in the multiway system (i.e. in the branchlike direction) correspond to progressively more separated positions of the head in the spacelike direction. Things are not always this simple. Here’s the head-position-annotated multiway graph for the Turing machine from the beginning of this section: And now here’s a 3D view of the evolution of this Turing machine in multispace. First we’re effectively projecting into the branchtime plane. Note however that because connections can be made in 3D, the layout chosen for the rendering is different from what gets chosen when the multiway graph is rendered in 2D: Rotating in multispace we can see the interplay between branchlike and spacelike structure: Even for this comparatively simple multiway Turing machine, it’s already quite difficult to tell what’s going on. But continuing for a few more steps, one can see definite patterns in multispace (the narrowing at the bottom reflects the fact that this is run only for a finite number of steps; it would fill in if one ran it for longer): In our Physics Project, there is no intrinsic coordinatization of either physical space or branchial space—so any coordinatization must be purely emergent. But the basic structure of Turing machines implies an immediate definition of space and time coordinates—which is what we use in the pictures above to place nodes in the spacelike and timelike directions. (The timelike direction is slightly subtle: we’re assigning a coordinate based on where a state appears in the layering of the multiway graph, which may or may not be the “time step” at which it first appears.) But in Turing machines there is no immediate definition of branchial coordinates—and in the pictures above we’re deriving branchial coordinates from the (somewhat arbitrary) layout used in the 2D rendering of the multiway graph. So is there a better alternative? In effect, what one has to do is to find some good way to coordinatize the complete state of the Turing machine, including the configuration of its tape. 
One possibility is just to treat the configuration of the tape as defining a base-k number—and then for example to compute a coordinate from the log of this number. (One can also imagine shifting the number so that its “decimal” point is at the position of the head, but the head position is in a sense already “handled” through the spatial coordinate.) Here’s the result from doing this for the Turing machine above: Here are a couple of other examples: Thinking about tape configurations suggests another visualization: just stack up all tape configurations that can be generated on successive steps. In the case of the rule that generates all binary numbers, one just gets at step t the binary digits of the successive integers 0 through 2^t − 1 on the tape: But even for another simple rule the result is more complicated, with the grayed rows corresponding to states on which the Turing machine has already halted. When we construct a multiway graph—and render it in multispace or elsewhere—we are directly representing the evolution of states of something like a multiway Turing machine. But particularly the study of quantum mechanics in our Physics Project emphasizes the importance of a complementary view—of looking not at the evolution of individual states, but rather at relationships (or “entanglements”) between states defined by their common ancestry in the multiway graph. As an example, take a rule that generates a multiway graph, and now make a foliation of this graph: If we look at the last slice here, there are several pairs of nodes that have common ancestors—each coming from the use of different cases in the original rule. The branchial graph that joins configurations (nodes) with immediate common ancestors is, however, rather trivial: But even for a simple rule the branchial graphs on successive steps are slightly less straightforward: One can also construct “thickened branchial graphs” in which one looks not just at immediate common ancestors, but say at common ancestors up to τ steps back. For τ > 1 the result is a hypergraph, and even in the first case here, it can be somewhat nontrivial: Unless an ordinary Turing machine has at least s = 2 head states and at least k = 2 colors it will always have essentially trivial behavior. But what about multiway Turing machines? With k = 1 color the system in effect cannot store anything on the tape, and it must always “do the same dance” at whatever position the head is in. Consider for example the s = 2 rule: The states of this system can be represented in the form {σ, i}, where σ is the head state and i is the position of the head. The multiway graph for the evolution of the system after 3 steps starting from a blank tape with the head in head state 1 can then be drawn as (with the coordinates of each state being determined from {σ, i}): and we can see that eventually the multiway graph is effectively a repeating braid—though with “slightly frayed” ends at any given step. The braid can be constructed from a sequence of “motifs”, here consisting of 3 “vectors”, where each vector is directly determined by a case in the underlying rule, or in this case: If we look at all s = 2 multiway Turing machines with 3 cases in their rules, these are the multiway graph structures we get: In all cases, the effective repetition length for any path is 1, 2 or 4. With s = 3 possible head states, slightly more elaborate multiway graphs become possible. To get beyond “pure-braid” multiway graphs, we must consider k ≥ 2 colors. But what happens if we use just one head state s = 1 (so we basically have a “mobile automaton” with rules of range 0)?
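Returning for a moment to the branchial graphs described a little earlier in this section: the construction is short enough to sketch directly. Here a toy multiway graph is given as a parent-to-children mapping (the graph itself is made up for illustration), and two states are joined whenever they share an immediate common ancestor.

```python
from itertools import combinations

multiway = {              # toy multiway graph: parent -> children
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["E", "F"],
}

def branchial_edges(graph):
    edges = set()
    for children in graph.values():
        for x, y in combinations(sorted(set(children)), 2):
            edges.add((x, y))     # x and y have an immediate common ancestor
    return edges

print(sorted(branchial_edges(multiway)))   # [('B', 'C'), ('D', 'E'), ('E', 'F')]
```

The “thickened” variant with parameter τ would instead collect, for each state, its ancestors up to τ steps back and relate states whose ancestor sets overlap.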
For s = 1, k = 2, there are 8 possible cases that can occur in the rule and thus 256 possible multiway Turing machines, of which 231 are not “purely deterministic”. If we allow only p = 2 possible cases in the rule, there are 28 possible machines, of which 12 are not purely deterministic, and the distinct possible nontrivial forms of multiway graphs that we can get are just (where in the first rule the second color is not used): With p = 3 possible cases in the rule, there are 56 possible machines, none purely deterministic. The distinct multiway graphs they generate are: Some of these we have already seen above. But it is remarkable how complex these graphs appear to be. Laying the graphs out in multispace, however, some show very clear regularities: If we look at the complicated cases for more steps, we still see considerable complexity: The number of states reached at successive steps for the first rule is while for the other two it is: Both of these appear to grow exponentially, but with slightly different bases, both near 1.46. Looking at the collections of states generated also does not give immediate insight into the behavior: So what happens if we allow p > 3 cases in the rule? With p = 4 cases, there are 70 possible machines, none purely deterministic. Here are some examples of the more complicated behavior that is seen: We can go on and look at machines with larger p. Here are examples with p = 5: We can also ask what happens if we allow s = 2 instead of s = 1, with k = 2. With p = 3 cases in the rule, the most complicated multiway graphs are just: With p = 4 cases in the rule, the behavior can be slightly more complicated, but does not appear to go beyond what we already saw with a single head state s = 1: We might have assumed that to get complicated behavior in a multiway Turing machine it would need complicated rules, and that as the rules got more complicated the behavior would somehow get correspondingly more complicated. But that’s not what we saw in the previous section. Instead, even multiway Turing machines with very simple rules were already capable of apparently complex behavior, and the behavior didn’t seem to get fundamentally more complicated as the rules got more complicated. This might seem surprising. But it’s actually very much the same as we’ve seen all over the computational universe—and indeed it’s just what my Principle of Computational Equivalence implies should happen. Because what the principle says is that as soon as one goes beyond systems with obviously simple behavior, one gets to systems that are equivalent in the sophistication of the computations they perform. Or, in other words, that above a very low threshold, one sees systems that all achieve a maximal level of computational sophistication that is always the same. The Principle of Computational Equivalence talks about “typical computations” that a system does, and implies, for example, that these computations will tend to be computationally irreducible, in the sense that one cannot determine their outcomes except by computations that are essentially just as long as the ones the system itself is performing. But another consequence of the Principle of Computational Equivalence is that as soon as the behavior of a system is not obviously simple, it is likely to be computation universal. 
Or, in other words, once a system can exhibit complicated behavior, it is essentially always possible to find a particular initial condition that will make it emulate any other system—at least up to a translation given by some kind of bounded computation. And we have good evidence that this is how things work in simple cellular automata, and in ordinary, deterministic Turing machines. With s = 2, k = 2 such Turing machines always show simple, highly regular behavior, and no such machines are capable of universal computation. But with s = 2, k = 3 there are a few machines that show more complex behavior. And we know that these machines are indeed computation universal, establishing that among ordinary Turing machines the threshold of universality is in a sense as low as it could be. So what about multiway Turing machines? Where might the threshold for universality for such machines be? Based on our experience with ordinary Turing machines (and many other kinds of systems) the results in the previous section might tend to suggest that s = 1, k = 2, p = 3 would already be sufficient. With ordinary Turing machines, universality is achieved when s = 2, k = 3, corresponding to 6 cases in the rule. Here universality seems possible even with just 3 cases in the rule—and with s = 1, k = 2. Perhaps the most surprising part of this is the implication that universality might be possible even with just a single head state s = 1. And this implies that we do not even need to have a Turing machine with any head states at all; we can just use a mobile automaton, and actually one with “radius 0” (i.e. with rules that are not directly affected by any neighbors)—that we can indicate for example as: Turing machines are often viewed as extensions of finite automata, in which a tape is added to provide potentially infinite memory. Ordinary deterministic Turing machines are then extensions of deterministic finite automata (DFAs), and multiway Turing machines of nondeterministic finite automata (NDFAs). And finite automata with just a single state s = 1 are always trivial—whether they are deterministic or nondeterministic. We also know that ordinary deterministic Turing machines with s = 1 are always trivial, regardless of how many tape colors k they have; s = 2 is the minimum to achieve nontrivial behavior (and indeed we know that s = 2, k = 3 is sufficient to achieve in a sense maximally complex behavior). But what we saw above is that multiway Turing machines can show nontrivial behavior even when s = 1. Quite likely there is a direct construction that can show how an s = 1 multiway Turing machine can emulate an s > 1 one. But here is some intuition about why this might be possible. In an ordinary Turing machine, the head moves “in space” along the tape, and the head state carries state information with it. But in a multiway Turing machine, there is in effect motion not only in space, but also in branchial space. And even though without multiple head states the head may not be able to carry information in ordinary space, the collection of nearby configurations in branchial space can potentially carry information in branchial space. In other words, in branchial space one can potentially think of there being a “cloud” of nearby configurations that represent some kind of “super head” that can in effect assume multiple possible head states. Crucial to this picture is the fact that multiway systems are set up to merge identical states. 
If it were not for this merging, the multiway graph for a multiway Turing machine would just be a tree. But the merging “knits together” branchial space, and allows one to make sense of concepts like distance and motion in it. In an ordinary Turing machine, one can think of the evolution as progressively transforming the state of the system. In a multiway Turing machine it may be better to think of the evolution as knitting together elementary pieces of the multiway graph—with the result that it matters less what the individual states of the Turing machine are like, and more how different states are related in branchial space, or in the multiway system. But what exactly do we mean by universality anyway? In an ordinary deterministic system the basic definition is that a universal system—if given appropriate initial conditions—must be able to emulate any other computational system (say any Turing machine). Inevitably, there will be encoding and decoding required. Given a specification of the system one is trying to emulate (say, the rules for a Turing machine), one must have a procedure for setting up appropriate initial conditions—or, in effect, for “compiling” the rules to the appropriate “code” for the universal machine. Then when one runs the universal machine, one must have a way of “decoding” its evolution to identify the steps in the evolution of the system one is emulating. For the idea of universality to be meaningful, it’s important of course that the processes of encoding and decoding don’t do too much computation. In particular, the amount of computation they need to do must be bounded: so even if the “main computation” is computationally irreducible and needs to run progressively longer, these do not. Or, put another way, any feature of the encoding and decoding should be decidable, whereas the main computation can show computational irreducibility, and undecidability. We have talked here about universality being a story of one machine emulating another. And ultimately this is what it is about. But in traditional computation theory (which was developed before actual digital computers were commonplace) there are typically two additional features of the setup. First, one often thinks about the Turing machine as just “computing a function”, so that when fed certain input it gives certain output—and we don’t ask “what it’s doing inside”. And second, one thinks of the Turing machine as “halting” when it has produced its answer. But particularly when one wants to get intuition about computational processes, it’s crucial to be able to “see the process”. And in keeping with the operation of modern digital computers, one is less concerning with “halting”, and more just with knowing how to decode the behavior (say by looking for “signals” that indicate that something should be “considered to be a result”). (Even beyond this, Turing machines are sometimes thought of purely as “decision machines”: given a particular input they eventually give the result “true” or “false”, rather than generating different forms of result.) In studying multiway Turing machines I think the best way to define universality is the one that is most directly analogous to what I’ve used for ordinary Turing machines: the universal machine must be able to emulate the multiway graph for any other machine. 
In other words, given a universal multiway Turing machine there must be a way to set up the initial conditions so that the multiway graph it generates can be “decoded” to be the multiway graph for any other system (or, in particular, for any other multiway Turing machine). It is worth realizing that the “initial conditions” here may not just be a single Turing machine configuration; they can be an ensemble of configurations. In an ordinary deterministic Turing machine one gives initial conditions which are “spread out in space”, in the sense that they specify colors of cells at different positions on the tape. In a multiway Turing machine one can also give initial conditions which are “spread out in branchial space”, i.e. involve different configurations that will initiate different branches (which might merge) in the multiway system. In an ordinary Turing machine—in physics terms—it is as if one specifies one’s initial data on a “spacelike hypersurface”. In a multiway Turing machine, one also specifies one’s initial data on a “branchlike hypersurface”. By the way, needless to say, a universal deterministic Turing machine can always emulate a multiway Turing machine (which is why we can run multiway Turing machines on practical computers). But at least in the most obvious approach it can potentially require exponentially many steps of the ordinary Turing machine to follow all the branches of the multiway graph for the multiway Turing machine. But what is the multiway analog of an ordinary Turing machine “computing a function”? The most direct possibility is that a multiway Turing machine defines a mapping from some set of configurations to some other set. (There is some subtlety to this, associated with the foliation of the multiway graph—and specifying just what set of configurations should be considered to be “the result”.) But then we just require that with appropriate encoding and decoding a universal multiway Turing machine should be able to compute any function that any multiway Turing machine can compute. There is another possibility, however, suggested by the term “nondeterministic Turing machine”: require just that there exists some branch of the multiway system that has the output one wants. For example, if one is trying to determine the answer to a decision problem, just see if there is any “yes” branch, and, if so, conclude that the answer is yes. As a simple analog, consider the string substitution with transformations {“()”→“”,”()”→“|”}. One can compute whether a given sequence of parentheses is balanced by seeing whether its evolution will eventually generate an empty string (“”) state—but even if this state is generated, all sorts of “irrelevant” states will also be generated: There is presumably always a way to translate the concept of universality from our “emulate the whole multiway evolution” to the “see if there’s any path that leads to some specific configuration”. But for the purposes of intuition and empirical investigation, it seems much better to look at the whole evolution, which can for example be visualized as a graph. So what is the simplest universal multiway Turing machine? I am not certain, but I think it quite likely that it has s = 1, k = 2, p = 3. 
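The parenthesis analogy lends itself to a very short sketch: apply the two transformations "()" → "" and "()" → "|" at every possible position, collect all reachable strings, and ask whether the empty string ever appears.

```python
def successors(s):
    # all strings obtainable by one application of either rule, at any position
    out = set()
    for i in range(len(s) - 1):
        if s[i:i + 2] == "()":
            out.add(s[:i] + s[i + 2:])         # "()" -> ""
            out.add(s[:i] + "|" + s[i + 2:])   # "()" -> "|"
    return out

def reaches_empty(s):
    seen, frontier = {s}, {s}
    while frontier:   # strings only get shorter, so this always terminates
        frontier = {t for u in frontier for t in successors(u)} - seen
        seen |= frontier
    return "" in seen

print(reaches_empty("(())()"))   # True: the string is balanced
print(reaches_empty("(()("))     # False: unbalanced, despite plenty of "irrelevant" states
```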
Two candidates are: To prove universality one would have to show that there exists a procedure for setting up initial conditions that will make a particular rule here yield a multiway graph that corresponds (after decoding) to the multiway graph for any other specified system, say any other multiway Turing machine. To get some sense of how this might be possible, here are the somewhat diverse multispace graphs generated by the first rule above, starting from a single initial configuration whose tape contains the digits of successive binary numbers: If a system is computation universal, it is inevitable that there will be computational irreducibility associated with determining its behavior, and that to answer questions about what it will do after an arbitrarily long time may require unbounded amounts of computation, and must therefore in general be considered formally undecidable. A classic example is the halting problem—of asking whether a Turing machine starting from a particular initial condition will ever reach a particular “halt” state. Most often, this question is formulated for ordinary deterministic Turing machines, and one asks whether these machines can reach a special “halting” head state. But in the context of multiway Turing machines, a more natural version of this question is just to ask, as we have done above, whether the multiway system will reach a configuration where none of its possible rules apply. And in the simplest case, we can consider “deterministic but incomplete rules”, in which there is never more than one possible successor for a given state, but there may be no successor at all. An example is a rule in which one of the cases is simply omitted. Starting this rule from a blank tape, it can evolve for 5 steps, but then reaches a state where none of its rules apply: As we saw above, for s = 2, k = 2 rules, 6 steps is the longest any ordinary Turing machine rule will “survive” starting from a blank tape. For s = 3, k = 2 it is 21 steps, for s = 2, k = 3 it is 38 and for s = 4, k = 2 it is 107: For larger s and k the survival times for the best-known “busy beaver” machines rapidly become very long; for s = 3, k = 3 it is known to be at least 10^17. So what about multiway Turing machines in general—where we allow more than one successor for a given state (leading to branching in the multiway graph)? For s = 1 or k = 1 the results are always trivial. But for s = 2, k = 2 things start to be more interesting. With only p = 2 possible cases in the rule, the longest halting time is achieved by the deterministic rule which halts in 4 steps, as represented by the (single-path) multiway graph: The p = 3 case includes standard deterministic s = 2, k = 2 Turing machines with a single halt state, and the longest halting time is achieved by the deterministic machine we saw above: But if we look at slightly shorter halting times, we start seeing branching in the multiway graph: When we think in terms of multiway graphs, it is not so natural to distinguish “no-rule-applies” halting from other situations that lead to finite multiway graphs. An example is a deterministic machine that after 4 steps ends up in a loop: With p = 4 total cases in the rule we can have a “full deterministic s = 2, k = 2 Turing machine”, with no possible cases omitted.
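The survival-time numbers quoted above can be probed by brute force. Here is a sketch for s = 2, k = 2 under my reading of the convention: a machine halts when it reads an input for which its partial rule has no case, and we count the completed steps, so depending on whether that final rule-less read is itself counted the number reported may come out one lower than the 6 quoted earlier. Machines still running at the step cap are simply treated as non-halting for the purposes of the sketch.

```python
from itertools import product

inputs = [(s, c) for s in (1, 2) for c in (0, 1)]
# each input either has no case (None) or one of the 8 possible outcomes
outcomes = [None] + [(w, m, ns) for w in (0, 1) for m in (-1, 1) for ns in (1, 2)]

def run(rule, limit=50):
    tape, pos, state = {}, 0, 1
    for step in range(limit):
        case = rule.get((state, tape.get(pos, 0)))
        if case is None:
            return step               # halted: no rule case applies to this input
        write, move, state = case
        tape[pos] = write
        pos += move
    return None                       # still running at the cap: treated as non-halting

best = 0
for choice in product(outcomes, repeat=len(inputs)):
    rule = {inp: out for inp, out in zip(inputs, choice) if out is not None}
    steps = run(rule)
    if steps is not None:
        best = max(best, steps)
print(best)   # longest survival among the halting s = 2, k = 2 machines, from a blank tape
```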
The longest “halting time” (of 8 steps) is achieved by a machine which enters a loop: But if we consider shorter maximal halting times, then we get both “genuine halting” and branching in the multiway system: Note that unlike in the deterministic case, where there is a single, definite halting time from a given initial state, a multiway Turing machine can have different halting times on different branches in its multiway evolution. So this means that there are multiple ways to define the multiway analog of the busy beaver problem for deterministic Turing machines. One approach, perhaps the closest to the spirit of the deterministic case, is to ask for machines that maximize the maximal finite halting time obtained by following any branch. But one can also ask for machines which maximize the minimal halting time. Or one can ask for machines that give multiway graphs that involve as many states as possible while still being finite (because all their branches either halt or cycle). (Yet another criterion could be to ask for the maximum finite number of distinct halting states.) For the p = 2 case, the maximum-halting-time and maximum-states criteria turn out to be satisfied by the same deterministic machine shown above (with 4 total states). For p = 3, though, the maximum-halting-time criterion is satisfied by the first machine we showed, which evolves through 6 states, while the maximum-states criterion is instead satisfied by the second machine, which has 7 states in its multiway graph. For the k = 2, s = 2, p = 4 case above the maximum-halting-time machine is again deterministic (and goes through 7 states), but maximum-states machines have 11 states: Even for ordinary deterministic Turing machines, finding busy beavers involves direct confrontation with the halting problem and computational irreducibility. Let’s say a machine has been running for a while. If it halts, well, then one knows it halts. But what if it doesn’t halt? If its behavior is sufficiently simple, then one may be able to recognize—or somehow prove—that it can never halt. But if the behavior is more complex one may simply not be able to tell what will happen. And the problem is even worse for a multiway Turing machine. Because now it is not just a question of evolving one configuration; instead there may be a whole, potentially growing, multiway graph of configurations one has to consider. Usually when the multiway graph gets big, it has a simple enough structure that one can readily determine that it will never “terminate”, and will just grow forever. But when the multiway graph gets complex it can be extremely difficult to be sure what will eventually happen. Still, at least in most cases, one can be fairly certain of the results. With p = 5, the rule whose behavior allows the longest halting (or, in this case, cycling) time (8 steps) is: With p = 5, many other types of “finite behavior” can also occur, such as: With p = 6 there is basically just more of the same—with no extension in the maximum halting time, though with larger total numbers of states: The maximum halting times and finite numbers of states reached by s = 2, k = 2 machines with p cases are as follows, and the distributions of halting times and finite numbers of steps (for p up to 7) are: What if one considers, say, s = 3, k = 2? For p = 2 and p = 3 there are not enough cases in the rule to sample the full range of head states, so the longest-surviving rules are basically the same as for s = 2, k = 2.
Meanwhile, for p = 5 one has the 21-step “deterministic” busy beaver shown above, while for p = 4 there is a 17-step-halting-time rule: In the p = 5 case, the maximum-state machine is the following, with 30 states: There are several interesting potential variants of the things we have discussed here. For example, instead of considering multiway Turing machines where all branches halt (or cycle), we can consider ones where some branches continue, perhaps even branching forever—but where either, say, one or all of the branches that do halt survive longest. We can also consider Turing machines that start from non-blank tapes. And—in analogy to what one might study in computational complexity theory—we can ask how the number (or size of region) of non-blank cells affects halting times. (We can also study the “functions” computed by Turing machines by looking at the transformations they imply between initial tapes and final outputs.) Our main concern here so far has been in mapping out the successions of states that can be obtained in the evolution of multiway Turing machines. But one of the discoveries of our Physics Project is that there is a complementary way to understand the behaviors of systems, by looking not at successions of states but at causal relationships between events that update these states. And it is the causal graph of these relationships that is of most relevance if one wants to understand what observers embedded within a system can perceive. In our multiway Turing machines, we can think of each application of a Turing machine rule as an “event”. Each such event takes certain “input” (the state of the head and the color of the cell under it), and generates certain “output” (the new state of the head, the color under it, and its position). The causal graph maps out what outputs from other events the input to a given event requires, or, in other words, what the causal dependence is of one event on others. For example, for the ordinary Turing machine evolution the network of causal relationships between updating events is just or with a different graph rendering just: One can also define causal graphs for multiway Turing machines. As an example, look at a rule we considered above: The multiway graph that describes possible successions of states in this case is: Now let’s explicitly show the update event that transforms each state to its successor: Just as for the deterministic case, we can identify the causal relationships between these update events, here indicated by orange edges: Keeping only the events and their causal relationships we get the final multiway causal graph: Continuing for more steps we get: Informed by our Physics Project, we can think of each edge in the causal graph as representing a causal relationship between two events which follow each other in time. In the causal graph for an ordinary “single-way” Turing machine these two events may also be separated in space, so that in effect the causal graph defines how space and time are knitted together. In the (multiway) causal graph for a multiway Turing machine, events can be separated not only in space, but also in branchial space (or, in other words, they can occur on different branches in the evolution)—so the multiway causal graph can be thought of as defining how space, branchial space and time are knitted together. We discussed above how states in a multiway graph can be thought of as being laid out in “multispace” which includes both ordinary space and branchial space coordinates. 
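The event-level bookkeeping just described can be sketched directly for an ordinary (deterministic) machine: each update event reads the cell under the head, so it is taken to depend on the event that last wrote that cell, and on the immediately preceding event, which produced the current head state and position. This is my own accounting of the dependencies, and the example rule is hypothetical; the renderings in the original post may slice things slightly differently.

```python
def causal_graph(rule, steps):
    tape, pos, state = {}, 0, 1
    last_writer = {}                  # cell position -> index of the event that last wrote it
    edges = set()
    for event in range(steps):
        color = tape.get(pos, 0)
        if pos in last_writer:
            edges.add((last_writer[pos], event))   # dependence through the tape
        if event > 0:
            edges.add((event - 1, event))          # dependence through the head
        write, move, state = rule[(state, color)]
        tape[pos] = write
        last_writer[pos] = event
        pos += move
    return edges

# A hypothetical total s = 2, k = 2 rule, just to have something concrete to run:
rule = {(1, 0): (1, 1, 2), (1, 1): (1, -1, 2),
        (2, 0): (1, -1, 1), (2, 1): (0, 1, 1)}
print(sorted(causal_graph(rule, 8)))
```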
One can do more or less the same for events in a multiway causal graph—suggesting a “multispace rendering of a multiway causal graph”: Different multiway Turing machines can have quite different multiway causal graphs. Here are some samples for various rules we considered above: These multiway causal graphs in a sense capture all causal relationships within and between different possible paths followed in the multiway graph. If we pick a particular path in the multiway graph (corresponding to a particular sequence of choices about which case in the multiway Turing machine rule to apply at each step) then this will yield a “deterministic” evolution. And this evolution will have a corresponding causal graph—that must always be a subgraph of the full multiway causal graph. Whenever there is more than one possible successor for a given state in the evolution of a multiway Turing machine, this will lead to multiple branches in the multiway graph. Different possible “paths of history” (with different choices about which case in the rule to apply at each step) then correspond to following different branches. And in a case like this, once one has picked a branch one is “committed”: one can never subsequently reach any other branches—and paths that diverge never converge again. But in a case like this, it’s a different story because here—at least if one goes far enough in generating the multiway graph—every pair of paths that diverge must eventually converge again. In other words, if one takes a “wrong turn”, one can always recover. Or, put another way, whatever sequence of rules one applies, it is always possible to reach eventual consistency in the results one gets. This property is closely related to a property that’s very important in our Physics Project: causal invariance. When causal invariance is present, it implies that the causal graphs generated by following any possible path must always be the same. In other words, even though the multiway system in a sense allows many possible histories, the network of causal relationships obtained in each case will always be the same—so that with respect to causal relationships there is essentially just one “objective reality” about how the system behaves. (By the way, the multiway causal graph contains more information than the individual causal graphs, because it also describes how all the causal graphs—from all possible histories—“knit together” across space, time and branchial space.) There’s a subtlety which we will not explore in detail here. Whether one considers branches in the multiway graph to have converged depends on how one defines equivalence of states. In the multiway graphs we have drawn, we have done this just by looking at whether the instantaneous states of Turing machines are the same. And in doing this, the merging of branches is related to a property often called confluence. But to ensure that we get full causal invariance we can instead consider states to be equivalent if, in addition to having the same tape configurations, the causal graphs that lead to them are isomorphic (so that in a sense we’re considering a “causal multiway graph”). So which multiway Turing machines are causal invariant (or at least confluent)? If a multiway Turing machine has rules that make it deterministic (without halting), and thus effectively “single way”, then it is trivially causal invariant.
If a multiway Turing machine has multiple halting states, it is inevitably not causal invariant—because if it “falls into” any one of the halting states, then it can never “get out” and reach another. On the other hand, if there is just a single halting state, and all possible histories lead to it, then a multiway Turing machine will at least be confluent (and in terms of its computation can be thought of as always “reaching a final unique answer”, or “normal form”). Finally, in the “rulial limit” where the multiway Turing machine can use any possible rule, causal invariance is inevitable. In general, causal invariance is not particularly rare among multiway Turing machines. For s = 1, k = 1 rules, it is inevitable. For s = 1, k = 2 rules, here are some examples of multiway graphs and multiway causal graphs for rules that appear to be at least confluent and ones that do not: (Note that in general it can be undecidable if a given multiway Turing machine is causal invariant—or confluent—or not. For even if two branches eventually merge, there is no a priori upper bound on how many steps this will take—though in practice for simple rules it typically seems to resolve quite quickly.) So far we’ve always assumed that the tapes in our Turing machines are unbounded. But just as we did in our earlier project of studying the rulial space of Turing machines, we can also consider the case of Turing machines with bounded—say cyclic—tapes with a finite number of “cells” n. Such a Turing machine has a total of n s k^n possible complete states—and for a given n we can construct a complete state transition graph. For an ordinary deterministic Turing machine, these graphs always have a single outgoing edge from each state node (though possibly several incoming edges). So, for example, with one particular rule the state transition graph for a length-3 tape is: Here are some other s = 2, k = 2 examples (all for n = 3): In all cases, what we see are certain cycles of states that are visited repeatedly, fed by transients containing other states. For multiway Turing machines, the structure of the state transition graph can be different, with nodes having either more or less than 1 outgoing edge: Starting from the node corresponding to a particular state, the subgraph reached from that state is the multiway graph corresponding to the evolution from that state. Halting occurs when there is outdegree 0 at a node—and the 3 nodes at the bottom of the image correspond to states where the Turing machine has immediately halted. Here are results for some other multiway Turing machines, here with tape size n = 5, indicating halting states in red: What happens when there are more cases p in the multiway Turing machine rule? If all possible cases are allowed, so that p = 2 s^2 k^2, then we get the full rulial multiway graph. For size n = 3 this is while for n = 5 it is: This “rulial-limit” multiway graph has the property that it is in a sense completely uniform: the graph is vertex transitive, so that the neighborhood of every node is the same. (The graph corresponds to the Cayley graph of a finite “Turing machine group”, here TM_{1,2}^n = ℤ_n ⋉ (ℤ_2)^n.)
But while this “rulial-limit” graph in a sense involves maximal nondeterminism and maximal branching, it also shows maximal merging, and in fact starting from any node, paths that branch must always be able to merge again—as in this example for the n = 3 case (here shown with two different graph renderings): So what this means is that—just as in the infinite-tape case discussed above—multiway Turing machines must always be causal invariant in the rulial limit. So what about “below the rulial limit”? When is there causal invariance? For s = 1, k = 1 it is inevitable: For s = 1, k = 2 it is somewhat rarer; here is an example where paths can branch and not merge: When we discussed causal invariance (and confluence) in the case of infinite tapes, we did so in the context of starting from a particular configuration (such as a blank tape), and then seeing whether all paths from this node in the multiway graph eventually merge. We can do something similar in the case of finite tapes, say asking about paths starting from the node corresponding to a blank tape. Doing this for tapes of sizes between 2 and 4, we get the following results for the number of s = 1, k = 2 rules (excluding purely deterministic ones) that exhibit confluence as a function of the number of cases p in the rule: As expected, it is more common to see confluence with shorter tapes, since there is then less opportunity for branching in the multiway graph. (By the way, even for arbitrarily large n, cyclic tapes make more machines confluent than infinite tapes do, because they allow merging of branches associated with the head “going all the way around”. If instead of cyclic boundary conditions, one uses boundary conditions that “reflect the head”, fewer machines are confluent.) With finite tapes, it is possible to consider starting not just from a particular initial condition, but from all possible initial conditions—leading to slightly fewer rules that show “full confluence”: Here are examples of multiway graphs for rules that show full confluence and ones that do not: Typically non-confluent cases look more “tree like” while confluent ones look more “cyclic”. But sometimes the overall forms can be very similar, with the only significant difference being the directionality of edges. (Note that if a multiway graph is Hamiltonian, then it inevitably corresponds to a system that exhibits confluence.) What I call multiway Turing machines are definitionally the same as what are usually called nondeterministic Turing machines (NDTMs) in the theory of computation literature. I use “multiway” rather than “nondeterministic”, however, to indicate that my interest is in the whole multiway graph of possible evolutions, rather than in whether a particular outcome can “nondeterministically” be reached. The idea of nondeterminism seems to have diffused quite gradually into the theory of computation, with the specific concept of nondeterministic Turing machines emerging around 1970, and promptly being used to formulate the class of NP computations. The original Turing machines from 1936 were purely deterministic. However, although they were not yet well understood, combinators introduced in 1920 already involved what we now think of as nondeterministic reductions, as did lambda calculus. Many mathematical problems are of the form “Does there exist a ___ that does ___?”, and the concept of looking “nondeterministically” for a particular solution has arisen many times. 
In a specifically computational context it seems to have been emphasized when formal grammars became established in 1956, and it was common to ask whether a particular string was in a given formal language, in the sense that some parse tree (or equivalent) could be found for it. Nondeterministic finite automata seem to have first been explicitly discussed in 1959, and various other forms of nondeterministic language descriptions appeared in the course of the 1960s. In the early 1980s, quantum generalizations of Turing machines began to be considered (and I in fact studied them a little at that time). The obvious basic setup was structurally the same as for nondeterministic Turing machines (and now for multiway Turing machines), except that in the quantum case different “nondeterministic states” were identified as quantum states, assigned quantum amplitudes, and viewed as being combined in superpositions. (There was also significant complexity in imagining a physical implementation, with an actual Hamiltonian, etc.) Turing machines are most often used as theoretical constructs, rather than being investigated at the level of specific simple rules. But while some work on specific ordinary Turing machines has been done (for example by me), almost nothing seems to have been done on specific nondeterministic (i.e. multiway) Turing machines. The fact that two head states are sufficient for universality in ordinary Turing machines was established by Claude Shannon in 1956 (and we now know that s = 2, k = 3 is sufficient), but I know of no similar results for nondeterministic Turing machines. Note that in my formulation of multiway Turing machines halting occurs just because there are no rules that apply. Following Alan Turing, most treatments of Turing machines introduce an explicit special “halt” state—but in my formulation this is not needed. I have considered only Turing machines with single heads. If multiple heads are introduced in ordinary Turing machines (or mobile automata) there typically have to be special rules added to define how they should interact. But with my formulation of multiway Turing machines, there is no need to specify this; heads that would “collide in space” just end up at different places in branchial space. A basic multiway generalization of the TuringMachine function in Wolfram Language is (note that the specification of configurations is slightly different from what is used in A New Kind of Science ): Running this for 2 steps from a blank tape gives: The number of distinct states increases as: This creates a multiway graph for the evolution: A more complete multiway Turing machine function is in the Wolfram Function Repository.
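The Wolfram Language code referred to here is not reproduced in this text, but the underlying idea is easy to sketch. As a rough, hedged illustration (a minimal sketch with my own names and data layout, not Wolfram's actual function), a multiway Turing machine rule can be treated as a list of cases, and a single multiway step maps one configuration to the set of all configurations obtained by applying every case that matches:

#include <set>
#include <tuple>
#include <vector>

struct Config {
    int head_state;        // current head state
    int head_position;     // index of the cell under the head
    std::vector<int> tape; // cell colors; 0 is taken to be "blank"

    bool operator<(const Config &other) const {
        return std::tie(head_state, head_position, tape) <
               std::tie(other.head_state, other.head_position, other.tape);
    }
};

struct Case {
    int state_in, color_in;          // left-hand side: {state, color}
    int state_out, color_out, move;  // right-hand side: {state, color, offset}; move is -1 or +1
};

// One multiway step: apply every case whose left-hand side matches the current
// head state and cell color. If no case applies, the result is empty and this
// branch of the multiway evolution has halted.
std::set<Config> multiway_step(const Config &c, const std::vector<Case> &rule) {
    std::set<Config> successors;
    for (const Case &r : rule) {
        if (r.state_in != c.head_state || r.color_in != c.tape[c.head_position])
            continue;
        Config n = c;
        n.tape[n.head_position] = r.color_out;
        n.head_state = r.state_out;
        n.head_position += r.move;
        // Extend the tape with a blank cell if the head moves off either end.
        if (n.head_position < 0) {
            n.tape.insert(n.tape.begin(), 0);
            n.head_position = 0;
        } else if (n.head_position >= (int)n.tape.size()) {
            n.tape.push_back(0);
        }
        successors.insert(n);
    }
    return successors;
}

Starting from a blank-tape configuration and repeatedly applying multiway_step to every configuration in the current frontier, merging duplicates, gives successive levels of a multiway graph; purely deterministic behavior is recovered when no two cases of the rule share the same left-hand side.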
8
Devs Should Smile
Vatsal Patel / January 09, 2022 4 min read • ––– views As software engineers we tend to write a lot, whether that be through code, comments or conversations in pull requests on github (or other places). Writing is an often overlooked skill in a software developer. I'm not talking about writing with beautiful words, captivating metaphors or becoming the next Shakespeare. I'm talking about the ability to communicate with the right emotions through a simple writing style. Using simple words and sentences but being clear about your meaning and intention. As a coder I've realised that expanding my UTF-8 vocabulary has been more useful than memorising the dictionary! Reviewing and creating pull requests are the bread and butter of a software developer. In good engineering teams, it’s where knowledge sharing happens, technical decisions are challenged and ideas are openly discussed. When I first started reviewing pull requests I was naturally shy and thought my comments would not be worthy. Over time I gained confidence by understanding the codebase and the technologies used by our team. I gradually started to ask why the author chose certain ways of doing things. "Why did you choose to extract that specific code into a function and not the rest? What is the benefit of using a new library instead of writing the logic ourselves?" I also started to highlight things I thought would cause us pain in the future. Eventually I was brimming with so much confidence that I even started rejecting pull requests (don't worry I'm not addicted, yet)! Anyone that knows me well knows that I love a good debate and form strong opinions but I have never let them get personal. I value my relationships more than winning debates. At work the points and questions I raised were (mostly) all useful and worth raising but I started to get a feeling that I was coming across a bit big-headed. I spoke directly with my team members to address this, which helped considerably, but I still couldn't help the feeling of lacking something. That’s when I received an invaluable piece of advice from my manager. He suggested I start using emojis in my writing, especially in pull requests; he told me to sprinkle in some 😀s and 🚀s in my comments. I had observed emojis being used by others but I never thought much of it. So I began to use emojis in my pull requests. (Auto)Magically, my comments and suggestions seemed to come across a lot more friendly and warm. The 😀 at the end of a suggestion comforted the receiver that although I was questioning his/her thinking, I was trying to help. On the other hand, adding 🚀 and 👌 next to praises about great work made them feel more genuine and uplifting. Supplementing (sometimes difficult) questions with 🤔 gave a tone of curiosity that I could never achieve before. Eventually I got feedback from my peers that they enjoyed my reviews and were grateful for the questions I raised since it saved them extra work in the long term 🎉. Since then, using emojis has become a crucial part of my software engineering toolkit 🛠 and has started to creep into other aspects of my life too! My current go-to emojis are Smile 😀 Ok 👌 (I like to call this 'perfect' in my head) Thinking 🤔 Eyes 👀 Thumbs up 👍 Rocket 🚀 I would love to know your thoughts on using emojis as a software engineer, so please reach out using any of the social networks below. Also feel free to share this post with your friends! Special thanks to Neha Patel and Darshan Hindocha for reviewing this post.
2
Features of a Modern Terminal Emulator (2020)
2
Destiny video game C++ guidelines
There's a lot of teamwork and ingenuity that goes into making a game like Destiny. We have talented people across all disciplines working together to make the best game that we can. However, achieving the level of coordination needed to make Destiny isn’t easy. It's like giving a bunch of people paintbrushes but only one canvas to share between them and expecting a high-quality portrait at the end. In order to make something that isn't pure chaos, some ground rules need to be agreed upon. Like deciding on the color palette, what size brushes to use in what situations, or what the heck you’re trying to paint in the first place. Getting that alignment amongst a team is incredibly important. One of the ways that we achieve that alignment over in engineering land is through coding guidelines: rules that our engineers follow to help keep the codebase maintainable. Today, I'm going to share how we decide what guidelines we should have, and how they help address the challenges we face in a large studio. The focus of this post will be on the game development side of things, using the C++ programming language, but even if you don't know C++ or aren't an engineer, I think you'll still find it interesting. A coding guideline is a rule that our engineers follow while they're writing code. They're commonly used to mandate a particular format style, to ensure proper usage of a system, and to prevent common issues from occurring. A well-written guideline is clearly actionable in its wording, along the lines of "Do X" or "Don't do Y" and explains the rationale for its inclusion as a guideline. To demonstrate, here’s a couple examples from our C++ guidelines: Don't use the static keyword directly The "static" keyword performs a bunch of different jobs in C++, including declaring incredibly dangerous static function-local variables. You should use the more specific wrapper keywords in cseries_declarations.h, such as static_global, static_local, etc. This allows us to audit dangerous static function-locals efficiently. Braces On Their Own Lines Braces are always placed on a line by themselves. There is an exception permitted for single-line inline function definitions. Notice how there’s an exception called out in that second guideline? Guidelines are expected to be followed most of the time, but there's always room to go against one if it results in better code. The reasoning for that exception must be compelling though, such as producing objectively clearer code or sidestepping a particular system edge case that can't otherwise be worked around. If it’s a common occurrence, and the situation for it is well-defined, then we’ll add it as an official exception within the guideline. To further ground the qualities of a guideline, let’s look at an example of one from everyday life. In the USA, the most common rule you follow when driving is to drive on the right side of the road. You're pretty much always doing that. But on a small country road where there's light traffic, you'll likely find a dashed road divider that indicates that you're allowed to move onto the left side of the road to pass a slow-moving car. An exception to the rule.
(Check with your state/county/city to see if passing is right for you. Please do not take driving advice from a tech blog post.) Now, even if you have a lot of well-written, thought-out guidelines, how do you make sure people follow them? At Bungie, our primary tool for enforcing our guidelines is through code reviews. A code review is where you show your code change to fellow engineers, and they’ll provide feedback on it before you share it with the rest of the team. Kind of like how this post was reviewed by other people to spot grammar mistakes or funky sentences I’d written before it was shared with all of you. Code reviews are great for maintaining guideline compliance, spreading knowledge of a system, and giving reviewers/reviewees the opportunity to spot bugs before they happen, making them indispensable for the health of the codebase and team. You can also have a tool check and potentially auto-fix your code for any easily identifiable guideline violations, usually for ones around formatting or proper usage of the programming language. We don't have this setup for our C++ codebase yet unfortunately, since we have some special markup that we use for type reflection and metadata annotation that the tool can't understand out-of-the-box, but we're working on it! Ok, that pretty much sums up the mechanics of writing and working with guidelines. But we haven't covered the most important part yet: making sure that guidelines provide value to the team and codebase. So how do we go about figuring out what's valuable? Well, let's first look at some of the challenges that can make development difficult and then go from there. Challenges, you say? The first challenge is the programming language that we’re using for game development: C++. This is a powerful high-performance language that straddles the line between modern concepts and old school principles. It’s one of the most common choices for AAA game development to pack the most computations in the smallest amount of time. That performance is mainly achieved by giving developers more control over low-level resources that they need to manually manage. All of this (great) power means that engineers need to take (great) responsibility, to make sure resources are managed correctly and arcane parts of the language are handled appropriately. Our codebase is also fairly large now, at about 5.1 million lines of C++ code for the game solution. Some of that is freshly written code, like the code to support Cross Play in Destiny. Some of it is 20 years old, such as the code to check gamepad button presses. Some of it is platform-specific to support all the environments we ship on. And some of it is cruft that needs to be deleted. Changes to long-standing guidelines can introduce inconsistency between old and new code (unless we can pay the cost of global fixup), so we need to balance any guideline changes we want to make against the weight of the code that already exists. Not only do we have all of that code, but we're working on multiple versions of that code in parallel! For example, the development branch for Season of the Splicer is called v520, and the one for our latest Season content is called v530. v600 is where major changes are taking place to support The Witch Queen, our next major expansion. Changes made in v520 automatically integrate into the downstream branches, to v530 and then onto v600, so that the developers in those branches are working against the most up-to-date version of those files. 
This integration process can cause issues, though, when the same code location is modified in multiple branches and a conflict needs to be manually resolved. Or worse, something merges cleanly but causes a logic change that introduces a bug. Our guidelines need to have practices that help reduce the odds of these issues occurring. Finally, Bungie is a large company; much larger than a couple college students hacking away at games in a dorm room back in 1991. We're 150+ engineers strong at this point, with about 75 regularly working on the C++ game client. Each one is a smart, hardworking individual, with their own experiences and perspectives to share. That diversity is a major strength of ours, and we need to take full advantage of it by making sure code written by each person is accessible and clear to everyone else. Now that we know the challenges that we face, we can derive a set of principles to focus our guidelines on tackling them. At Bungie, we call those principles our C++ Coding Guideline Razors. Razors? Like for shaving? Well, yes. But no. The idea behind the term razor here is that you use them to "shave off" complexity and provide a sharp focus for your goals (addressing the challenges we went through above). Any guidelines that we author are expected to align with one or more of these razors, and ones that don't are either harmful or just not worth the mental overhead for the team to follow. I'll walk you through each of the razors that Bungie has arrived at and explain the rationale behind each one, along with a few example guidelines that support the razor. #1 Favor understandability at the expense of time-to-write Every line of code will be read many times by many people of varying backgrounds for every time an expert edits it, so prefer explicit-but-verbose to concise-but-implicit. When we make changes to the codebase, most of the time we're taking time to understand the surrounding systems to make sure our change fits well within them before we write new code or make a modification. The author of the surrounding code could've been a teammate, a former coworker, or you from three years ago, but you've lost all the context you originally had. No matter who it was, it's a better productivity aid to all the future readers for the code to be clear and explanative when it was originally written, even if that means it takes a little longer to type things out or find the right words. Some Bungie guidelines that support this razor are: snake_case as our naming convention. Avoiding abbreviation (eg ‪screen_manager instead of ‪scrn_mngr) Encouraging the addition of helpful inline comments. Below is a snippet from some of our UI code to demonstrate these guidelines in action. Even without seeing the surrounding code, you can probably get a sense of what it's trying to do. 
int32 new_held_milliseconds= update_context->get_timestamp_milliseconds() - m_start_hold_timestamp_milliseconds;
set_output_property_value_and_accumulate(
    &m_current_held_milliseconds,
    new_held_milliseconds,
    &change_flags,
    FLAG(_input_event_listener_change_flag_current_held_milliseconds));

bool should_trigger_hold_event=
    m_total_hold_milliseconds > NONE &&
    m_current_held_milliseconds > m_total_hold_milliseconds &&
    !m_flags.test(_flag_hold_event_triggered);

if (should_trigger_hold_event)
{
    // Raise a flag to emit the hold event during event processing, and another
    // to prevent emitting more events until the hold is released
    m_flags.set(_flag_hold_event_desired, true);
    m_flags.set(_flag_hold_event_triggered, true);
}

#2 Avoid distinction without difference
When possible without loss of generality, reduce mental tax by proscribing redundant and arbitrary alternatives.
This razor and the following razor go hand in hand; they both deal with our ability to spot differences. You can write a particular behavior in code multiple ways, and sometimes the difference between them is unimportant. When that happens, we'd rather remove the potential for that difference from the codebase so that readers don't need to recognize it. It costs brain power to map multiple things to the same concept, so by eliminating these unnecessary differences we can streamline the reader's ability to pick up code patterns and mentally process the code at a glance. An infamous example of this is "tabs vs. spaces" for indentation. It doesn't really matter which you choose at the end of the day, but a choice needs to be made to avoid code with mixed formatting, which can quickly become unreadable. Some Bungie coding guidelines that support this razor are: Use American English spelling (ex "color" instead of "colour"). Use post increment in general usage (index++ over ++index). * and & go next to the variable name instead of the type name (int32 *my_pointer over int32* my_pointer). Miscellaneous whitespace rules and high-level code organization within a file.
#3 Leverage visual consistency
Use visually-distinct patterns to convey complexity and signpost hazards
The opposite hand of the previous razor, where now we want differences that indicate an important concept to really stand out. This aids code readers while they're debugging to see things worth their consideration when identifying issues. Here's an example of when we want something to be really noticeable. In C++ we can use the preprocessor to remove sections of code from being compiled based on whether we're building an internal-only version of the game or not. We'll typically have a lot of debug utilities embedded in the game that are unnecessary when we ship, so those will be removed when we compile for retail. We want to make sure that code meant to be shipped doesn’t accidentally get marked as internal-only though, otherwise we could get bugs that only manifest in a retail environment. Those aren't very fun to deal with. We mitigate this by making the C++ preprocessor directives really obvious. We use all-uppercase names for our defined switches, and left align all our preprocessor commands to make them standout against the flow of the rest of the code.
Here's some example code of how that looks:

void c_screen_manager::render()
{
    bool ui_rendering_enabled= true;

#ifdef UI_DEBUG_ENABLED
    const c_ui_debug_globals *debug_globals= ui::get_debug_globals();

    if (debug_globals != nullptr && debug_globals->render.disabled)
    {
        ui_rendering_enabled= false;
    }
#endif // UI_DEBUG_ENABLED

    if (ui_rendering_enabled)
    {
        // ...
    }
}

Some Bungie coding guidelines that support this razor are: Braces should always be on their own line, clearly denoting nested logic. Uppercase for preprocessor symbols (eg #ifdef PLATFORM_WIN64). No space left of the assignment operator, to distinguish from comparisons (eg my_number= 42 vs my_number == 42). Leverage pointer operators (*/&/->) to advertise memory indirection instead of references
#4 Avoid misleading abstractions.
When hiding complexity, signpost characteristics that are important for the customer to understand.
We use abstractions all the time to reduce complexity when communicating concepts. Instead of saying, "I want a dish with two slices of bread on top of each other with some slices of ham and cheese between them", you're much more likely to say, "I want a ham and cheese sandwich". A sandwich is an abstraction for a common kind of food. Naturally we use abstractions extensively in code. Functions wrap a set of instructions with a name, parameters, and an output, to be easily reused in multiple places in the codebase. Operators allow us to perform work in a concise readable way. Classes will bundle data and functionality together into a modular unit. Abstractions are why we have programming languages today instead of creating applications using only raw machine opcodes. An abstraction can be misleading at times though. If you ask someone for a sandwich, there's a chance you could get a hot dog back or a quesadilla depending on how the person interprets what a sandwich is. Abstractions in code can similarly be abused leading to confusion. For example, operators on classes can be overridden and associated with any functionality, but do you think it'd be clear that m_game_simulation++ corresponds to calling the per-frame update function on the simulation state? No! That's a confusing abstraction and should instead be something like m_game_simulation.update() to plainly say what the intent is. The goal with this razor is to avoid usages of unconventional abstractions while making the abstractions we do have clear in their intent. We do that through guidelines like the following: Use standardized prefixes on variables and types for quick recognition. eg: c_ for class types, e_ for enums. eg: m_ for member variables, k_ for constants. No operator overloading for non-standard functionality. Function names should have obvious implications. eg: get_blank() should have a trivial cost. eg: try_to_get_blank() may fail, but will do so gracefully. eg: compute_blank() or query_blank() are expected to have a non-trivial cost.
#5 Favor patterns that make code more robust.
It’s desirable to reduce the odds that a future change (or a conflicting change in another branch) introduces a non-obvious bug and to facilitate finding bugs, because we spend far more time extending and debugging than implementing.
Just write perfectly logical code and then no bugs will happen. Easy right? Well... no, not really. A lot of the challenges we talked about earlier make it really likely for a bug to occur, and sometimes something just gets overlooked during development. Mistakes happen and that's ok.
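To make this concrete, here is a small hypothetical sketch (illustrative code written for this post, not taken from the Destiny codebase) that folds together several of the robustness patterns discussed next: a variable initialized at its declaration, an assert that validates an assumption loudly, and a single return at the bottom of the function:

#include <cassert>
#include <cstdint>

typedef int32_t int32;  // stand-in for an engine-style fixed-width integer alias

struct c_player         // minimal stand-in type, just for illustration
{
    int32 m_score= 0;
    int32 get_score() const { return m_score; }
};

int32 get_player_score(const c_player *player)
{
    int32 score= 0;                // initialized at declaration, never read uninitialized

    assert(player != nullptr);     // validate the assumption loudly in internal builds

    if (player != nullptr)
    {
        score= player->get_score();
    }

    return score;                  // single exit point at the bottom of the function
}

In shipping builds the assert typically compiles away, while the null check and the single exit keep the function easy to extend without missing cleanup logic.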
Thankfully there's a few ways that we can encourage code to be authored to reduce the chance that a bug will be introduced. One way is to increase the amount of state validation that happens at runtime, making sure that an engineer's assumptions about how a system behaves hold true. At Bungie, we like to use asserts to do that. An assert is a function that simply checks that a particular condition is true, and if it isn't then the game crashes in a controlled manner. That crash can be debugged immediately at an engineer’s workstation, or uploaded to our TicketTrack system with the assert description, function callstack, and the dump file for investigation later. Most asserts are also stripped out in the retail version of the game, since internal game usage and QA testing will have validated that the asserts aren't hit, meaning that the retail game will not need to pay the performance cost of that validation. Another way is to put in place practices that can reduce the potential wake a code change will have. For example, one of our C++ guidelines is to only allow a single ‪return statement to exist in a function. A danger with having multiple ‪return statements is that adding new ‪return statements to an existing function can potentially miss a required piece of logic that was setup further down in the function. It also means that future engineers need to understand all exit points of a function, instead of relying on nesting conditionals with indentations to visualize the flow of the function. By allowing only a single ‪return statement at the bottom of a function, an engineer instead needs to make a conditional to show the branching of logic within the function and is then more likely to consider the code wrapped by the conditional and the impact it'll have. Some Bungie coding guidelines that support this razor are: Initialize variables at declaration time. Follow const correctness principles for class interfaces. Single ‪return statement at the bottom of a function. Leverage asserts to validate state. Avoid native arrays and use our own containers. #6 Centralize lifecycle management. Distributing lifecycle management across systems with different policies makes it difficult to reason about correctness when composing systems and behaviors. Instead, leverage the shared toolbox and idioms and avoid managing your own lifecycle whenever possible. When this razor is talking about lifecycle management, the main thing it's talking about is the allocation of memory within the game. One of the double-edged swords of C++ is that the management of that memory is largely left up to the engineer. This means we can develop allocation and usage strategies that are most effective for us, but it also means that we take on all of the bug risk. Improper memory usage can lead to bugs that reproduce intermittently and in non-obvious ways, and those are a real bear to track down and fix. Instead of each engineer needing to come up with their own way of managing memory for their system, we have a bunch of tools we've already written that can be used as a drop-in solution. Not only are they battle tested and stable, they include tracking capabilities so that we can see the entire memory usage of our application and identify problematic allocations. Some Bungie coding guidelines that support this razor are: Use engine-specified allocation patterns. Do not allocate memory directly from the operating system. Avoid using the Standard Template Library for game code. Recap Please Alright, let's review. 
Guideline razors help us evaluate our guidelines to ensure that they help us address the challenges we face when writing code at scale. Our razors are: Favor understandability at the expense of time-to-write Avoid distinction without difference Leverage visual consistency Avoid misleading abstractions Favor patterns that make code more robust Centralize lifecycle management Also, you may have noticed that the wording of the razors doesn't talk about any C++ specifics, and that’s intentional. What's great about these is that they're primarily focused on establishing a general philosophy around producing maintainable code. They're mostly applicable to other languages and frameworks, whereas the guidelines that are generated from them are specific to the target language, project, and team culture. If you're an engineer, you may find them useful when evaluating the guidelines for your next project. Who Guides the Guidelines? Speaking of evaluation, who's responsible at Bungie for evaluating our guidelines? That would be our own C++ Coding Guidelines Committee. It's the committee's job to add, modify, or delete guidelines as new code patterns and language features develop. We have four people on the committee to debate and discuss changes on a regular basis, with a majority vote needed to enact a change. The committee also acts as a lightning rod for debate. Writing code can be a very personal experience with subjective opinions based on stylistic expression or strategic practices, and this can lead to a fair amount of controversy over what's best for the codebase. Rather than have the entire engineering org debating amongst themselves, and losing time and energy because of it, requests are sent to the committee where the members there can review, debate, and champion them in a focused manner with an authoritative conclusion. Of course, it can be hard for even four people to agree on something, and that’s why the razors are so important: they give the members of the committee a common reference for what makes a guideline valuable while evaluating those requests. Alignment Achieved As we were talking about at the beginning of this article, alignment amongst a team is incredibly important for that team to be effective. We have coding guidelines to drive alignment amongst our engineers, and we have guideline razors to help us determine if our guidelines are addressing the challenges we face within the studio. The need for alignment scales as the studio and codebase grows, and it doesn't look like that growth is going to slow down here anytime soon, so we’ll keep iterating on our guidelines as new challenges and changes appear. Now that I've made you read the word alignment too many times, I think it's time to wrap this up. I hope you've enjoyed this insight into some of the engineering practices we have at Bungie. Thanks for reading! - Ricky Senft We’d love to talk to you. Here are some of the tech roles we’re hiring for, with many more on our careers page! Graphics Engineer Low-Level Security Engineer Senior Gameplay Engineer - AI/Animation
2
In Antarctic sea squirt is a bacterial species with anti-melanoma properties
December 1, 2021 by Kelsey Fitzgerald, Synoicum adareanum lives on the Antarctic sea floor and gets its nutrition from microorganisms and organic carbon in the seawater. This type grows in colonies with many individual lobes that are connected at their base. Its microbiome hosts a suite of different microorganisms, including a bacterium in the phylum, Verrucomicrobium, that produces a compound with anti-melanoma properties. Credit: Bill J. Baker, University of South Florida There are few places farther from your medicine cabinet than the tissues of an ascidian, or "sea squirt," on the icy Antarctic sea floor—but this is precisely where scientists are looking to find a new treatment for melanoma, one of the most dangerous types of skin cancer. In a new paper that was published today in mSphere, a research team from DRI, Los Alamos National Laboratory (LANL), and the University of South Florida (USF) made strides toward their goal, successfully tracing a naturally-produced melanoma-fighting compound called "palmerolide A" to its source: A microbe that resides within Synoicum adareanum, a species of ascidian common to the waters of Antarctica's Anvers Island archipelago. "We have long suspected that palmerolide A was produced by one of the many types of bacteria that live within this ascidian host species, S. adareanum," explained lead author Alison Murray, Ph.D., research professor of biology at DRI. "Now, we have actually been able to identify the specific microbe that produces this compound, which is a huge step forward toward developing a naturally-derived treatment for melanoma." Late spring at Arthur Harbor. The waters surrounding Anvers Island, Antarctica, are home to a species of sea squirt called Synoicum adareanum. New research has traced the production of palmerolide A, a key compound with anti-melanoma properties, to a member of this sea squirt's microbiome. Credit: Alison E. Murray, DRI The bacterium that the team identified is a member of a new and previously unstudied genus, Candidatus Synoicihabitans palmerolidicus. This advance in knowledge builds on what Murray and her colleagues have learned across more than a decade of research on palmerolide A and its association with the microbiome (collective suite of microbes and their genomes) of the host ascidian, S. adareanum. In 2008, Murray worked with Bill Baker, Ph.D., professor of chemistry at USF and Christian Riesenfeld, Ph.D., postdoctoral researcher at DRI to publish a study on the microbial diversity of a single S. adareanum organism. In 2020, the team expanded to include additional researchers from LANL, USF, and the Université de Nantes, and published new work identifying the "core microbiome" of S. adareanum—a common suite of 21 bacterial species that were present across 63 different samples of S. adareanum collected from around the Anvers Island archipelago. In the team's latest research, they looked more closely at the core microbiome members identified in their 2020 paper to determine which of the 21 types of bacteria were responsible for the production of palmerolide A. They conducted several rounds of environmental genome sequencing, followed by automated and manual assembly, gene mining, and phylogenomic analyses, which resulted in the identification of the biosynthetic gene cluster and palmerolide A-producing organism. Synoicum adareanum in 80 feet of water at Bonaparte Point, Antarctica. 
New research has traced the production of palmerolide A, a key compound with anti-melanoma properties, to a suite of genes coded in the genome by a member of this sea squirt's microbiome. Credit: Bill J. Baker, University of South Florida "This is the first time that we've matched an Antarctic natural product to the genetic machinery that is responsible for its biosynthesis," Murray said. "As an anti-cancer therapeutic, we can't just go to Antarctica and harvest these sea squirts en masse, but now that we understand the underlying genetic machinery, it opens the door for us to find a biotechnological solution to produce this compound." "Knowing the producer of palmerolide A enables cultivation, which will finally provide sufficient quantity of the compound for needed studies of its pharmacological properties," added Baker. Many additional questions remain, such as how S. adareanum and its palmerolide-producing symbiont are distributed across the landscape in Antarctic Oceans, or what role palmerolide A plays in the ecology of this species of ascidian. Likewise, a detailed investigation into how the genes code for the enzymes that make palmerolide A is the subject of a new report soon to be published. Andrew Schilling (University of South Florida) dives in 100 feet of water at Cormorant Wall, Antarctica. Samples for microbiome characterization were collected by SCUBA divers working in the chilly subzero seas off Anvers Island, in the Antarctic Peninsula. Credit: Bill J. Baker, University of South Florida To survive in the harsh and unusual environment of the Antarctic sea floor, ascidians and other invertebrates such as sponges and corals have developed symbiotic relationships with diverse microbes that play a role in the production of features such as photoprotective pigments, bioluminescence, and chemical defense agents. The compounds produced by these microbes may have medicinal and biotechnological applications useful to humans in science, health and industry. Palmerolide A is one of many examples yet to be discovered. "Throughout the course of disentangling the many genomic fragments of the various species in the microbiome, we discovered that this novel microbe's genome appears to harbor multiple copies of the genes responsible for palmerolide production," said Patrick Chain, Ph.D., senior scientist and Laboratory Fellow with LANL. "However the role of each copy, and regulation, for example, are unknown. This suggests palmerolide is likely quite important to the bacterium or the host, though we have yet to understand it's biological or ecological role within this Antarctic setting." "This is a beautiful example of how nature is the best chemist out there," Murray added. "The fact that microbes can make these bioactive and sometimes toxic compounds that can help the hosts to facilitate their survival is exemplary of the evolutionary intricacies found between hosts and their microbial partners and the chemical handshakes that are going on under our feet on all corners of the planet." More information: Alison E. Murray et al, Discovery of an Antarctic Ascidian-Associated Uncultivated Verrucomicrobia with Antimelanoma Palmerolide Biosynthetic Potential, mSphere (2021). DOI: 10.1128/mSphere.00759-21 Provided by Desert Research Institute
4
Self-driving race cars zip into history at CES
January 8, 2022 by Julie Jammot Self-driving cars race at the Las Vegas Motor Speedway. The race pitted teams of students from around the world against one another to rev up the capabilities of autonomous cars. A racecar with nobody at the wheel snaked around another to snatch the lead on an oval track at the Consumer Electronics Show in Las Vegas Friday in an unprecedented high-speed match between self-driving vehicles. Members of Italian-American team PoliMOVE cheered as their racecar, nicknamed "Minerva," repeatedly passed a rival entered by South Korean team Kaist. Minerva was doing nearly 115 miles per hour (185 kilometers per hour) when it blew past the Kaist car. Every racer was deemed a winner by organizers who saw the real victory as the fact that self-driving algorithms could handle the high-speed competition. "It's a success," Indy Autonomous Challenge (IAC) co-organizer Paul Mitchell told AFP before the checkered flag was waved. The race pitted teams of students from around the world against one another to rev up the capabilities of self-driving cars, improving the technology for use anywhere. In October, the IAC put the brakes on self-driving cars racing together to allow more time to ready technology for the challenge, opting instead to let them do laps individually to see which had the best time. "This almost holds the world record for speed of an autonomous car," PoliMOVE engineer Davide Rigamonti boasted as he gazed lovingly at the white-and-black beauty. The cars are packed with electronic sensors where the driver would normally be. The single seat usually reserved for a driver was during this race instead packed with electronics. PoliMOVE had a shot at victory at another race in October in Indianapolis, clocking some 155 miles per hour (250 kilometers per hour) before skidding out on a curve, according to Rigamonti. Friday, it was the South Korean entry that spun out after overtaking a car fielded by a team from Auburn University in the southern US state of Alabama. "The students who program these cars are not mechanics; most of them knew nothing about racing," said IndyCar specialist Lee Anne Patterson. "We taught them about racing." The students program the software that pilots the car by quickly analyzing data from sophisticated sensors. The software piloting the cars has to anticipate how other vehicles on the course will behave, then maneuver accordingly, according to Markus Lienkamp, a professor at the Technical University of Munich (TUM), which won the October competition. The PoliMOVE autonomous race car from Politecnico di Milano (Italy) and the University of Alabama. The software piloting the cars has to anticipate how other vehicles on the course will behave, then maneuver accordingly. Nearby, Lienkamp's students are glued to screens. "It plays out in milliseconds," said Mitchell. "The computer has to make the same decisions as a human driver, despite the speed." The IAC plans to organize other races on the model of Friday's—pitting two cars against each other, with the hope of reaching a level sufficient to one day launch all the vehicles together. © 2022 AFP
2
Yet another HN front-end
often/hn-front-end
1
Composer Alfred Schnittke basically has the greatest gravestone ever
Composer Alfred Schnittke has what could be the most provocative gravestone ever 22 April 2021, 17:12 | Updated: 3 February 2023, 17:44 Leonard Elschenbroich plays some blistering Schnittke A great composer of the 20th century left us with one final artistic testament: Rest in Fortississimo... Deafening silence? A silent roar? Alfred Schnittke was a Soviet-German composer, who is known for his monumental Symphony No. 1 (1969-72) and his first Concerto Grosso (1977). He also composed the score to John Neumeier’s ballet based on Ibsen’s play Peer Gynt. His early work had echoes of Shostakovich, but his music developed a unique polystylistic voice throughout his career. Read more: 10 of the best 20th-century composers > The composer died in 1998 at the age of 63, but not before he left the world with another deeply inspired work of art. One that embodied a creative mind that was always challenging, contradicting and elusive. Placed in Moscow's historic Novodevichy Cemetery, his gravestone shows a fermata (a pause), over a whole rest marked triple forte. Or in music: an extended silence, but very, very loud. Alfred Schnittke gravestone. Picture: Imgyr / UberWagen Another interesting fact about Schnittke: he was another on the remarkably long list of great composers who wrote nine symphonies. He joins Beethoven, Schubert, Dvořák, Bruckner, Mahler, and Vaughan Williams, who have all penned nine symphonic epics in their lifetime. Just in case that helps you out during your next classical music trivia night.
3
Recent cancer drugs saved over 1.2M people in the US over 16 years, study shows
November 9, 2020 by Credit: CC0 Public Domain More than 1.2 million people in the US prevented facing death following a cancer diagnosis, between the year 2000 and 2016, thanks to ever improving treatment options—a large new national study shows. Published in the peer-reviewed Journal of Medical Economics, the new findings highlight how new drugs commissioned during this period to target the 15 most common cancer types helped to reduce mortality by 24% per 100,000 people in the States. The study, carried out by experts at PRECISIONheor and Pfizer, also show that 106 new treatments were approved across these 15 most common tumours—including colorectal cancer, lung cancer, breast cancer, non-Hodgkin's lymphoma, leukemia, melanoma, gastric cancer, and renal cancer. These new cancer drug approvals were associated with significant decreases in deaths—as measured by treatment stocks. In 2016 alone, the team estimate that new treatments were associated with 156,749 fewer cancer deaths for the 15 most common tumor types. Across the 16 years this mortality figure was down by 1,291,769, whilst the following cancers were also reduced significantly: Delivering the good news, lead author Dr. Joanna MacEwan, from PRECISIONheor, says: "These findings can help contribute to a better understanding of whether increased spending on cancer drugs are worth the investment. While we do not answer this question directly, our results demonstrate that the result of successful investment—i.e., new cancer therapy approvals—generates significant benefits to patients. "The efficacy of each treatment is estimated from clinical trial results, but this study provides evidence that the gains in survival measured in clinical trials are translating into health benefits for patients in the real world and confirms previous research that has also shown that new pharmaceutical treatments are associated with improved survival outcomes for patients." Whilst mortality rates were down across many cancers, estimated deaths were up by 825 in people with thyroid cancer, and 7,768 for those with bladder cancer. The study explains these rises are likely due to the result of sparse drug approvals during this period: five for thyroid cancer and three for bladder cancer. Co-author Rebecca Kee states more can still be done. "There were no approvals in liver or uterine cancer from 2000 to 2016, and few approvals in pancreatic and oral cancer. Seven in 10 of the drug approvals came after 2008, in the latter half of the study time period. Thus, we haven't yet observed the full effect of their introductions in terms of reduced mortality," she added. The study—funded by Pfizer—used a series of national data sets from sources including the Centers for Disease Control and Prevention, the US Mortality Files by the National Center of Health Statistics, Survival, Epidemiology and End Results program (SEER), and United States Cancer Statistics data. The team calculated age-adjusted cancer mortality rates per year by the 15 most common tumor types. They also looked at incident cases of cancer by tumor type—represented as per 100,000 population for all ages, races, and genders. They then translated the change in cancer mortality in the U.S. from 2000 to 2016 associated with treatment stocks in each year into deaths averted per year from 2000 to 2016. The treatment stock for each year was calculated as the weighted sum of new indication approvals since 1976 (which is a standard measure in this field of research). 
The findings highlight how drugs prescribed are having a huge effect; with the mortality changes largest in prevalent tumor types with relatively more drug approvals: lung cancer, breast cancer, melanoma, lymphoma and leukemia. "When interpreting these results, however, one must carefully evaluate whether there are alternative explanations for observed mortality reductions that may be correlated over time with new treatments," Dr. MacEwan says. Therefore, in order to control external factors—such as smoking rates, age distribution of the population, and cancer screening practices—and differentiate them from the impact of drug approvals, the team controlled for tumor-specific cancer incidence, driven by these underlying population level trends. "Improved screening could partially explain the decline in mortality in some tumor types," Dr. MacEwan explained, however. "For instance, uptake of screening programs for breast, cervical, and colorectal cancer remained relatively high at >50% in 2015, all of which have been associated with mortality declines." The overall figures uncovered likely understate the impact of new drugs approved between 2000 and 2016, the study suggests. This is because many drugs were approved later during the study period, meaning the bulk of mortality reductions are likely to be realized after the end of the study period. Other limitations of the study include that new treatment interventions were limited to cancer pharmaceutical interventions and did not account for non-pharmaceutical innovations, such as robotic surgery, advances in radiotherapy and other surgical techniques, which may also affect cancer mortality. The authors now call for future research to evaluate the relationship between drug approvals and cancer mortality post 2016. More information: Journal of Medical Economics, DOI: 10.1080/13696998.2020.1834403 Provided by Taylor & Francis
3
Fred Wilson: Paternalism in the Office
Jason Fried, CEO of Basecamp, posted a message to his team yesterday in which he outlined a bunch of changes they are making to the way they run the business. Some will be familiar as others have done similar things (no more politics on the company’s communication channels, no more committees, rethinking the review process). I think it is a good thing to revisit the ways a company does things and make changes when issues arise. And posting these changes publicly so that others can see them and think about them is very helpful. I had chats with a number of portfolio CEOs yesterday about this post. It is making people think. That’s a good thing. One change that got my attention was this one: 2. No more paternalistic benefits. For years we’ve offered a fitness benefit, a wellness allowance, a farmer’s market share, and continuing education allowances. They felt good at the time, but we’ve had a change of heart. It’s none of our business what you do outside of work, and it’s not Basecamp’s place to encourage certain behaviors — regardless of good intention. By providing funds for certain things, we’re getting too deep into nudging people’s personal, individual choices. So we’ve ended these benefits, and, as compensation, paid every employee the full cash value of the benefits for this year. In addition, we recently introduced a 10% profit sharing plan to provide direct compensation that people can spend on whatever they’d like, privately, without company involvement or judgement. That does not feel right to me. If you care about the mental and physical well-being of your team, I believe it makes sense to support them by investing in that. Companies can do that tax efficiently and employees cannot. Paying employees more so that they can then make these investments personally sounds rational but I don’t believe it will be as effective as company-funded programs that employees can opt into or not. It is also the case that companies carry much of the cost of insuring their employees health in the US. While that may not be great health care policy, it is what it is right now. And so companies do have a vested interest in the health of their employees that goes beyond wanting them to be well and feel well. It may be paternalistic, but I believe that companies can and should invest in the health and wellbeing of their team. I think it makes good business sense to do so. #management
80
70 First Co-Founder Dates
👋 Hey I’m Abe ( @abe_clark ). I recently quit my job as Director of Engineering at LoanSnap . I subsequently killed my first startup experiment . I’m still planning to co-found a startup, but now I’m on the hunt for the right idea and co-founder(s). So, I decided to give co-founder dating a shot. It’s exactly what it sounds like. Meeting lots of new people in hopes of finding one you can stand (enjoy🤞) spending most of your time with for the next 5-15 years (till death do us part?). I’ve been married for almost 8 years, so dating is a distant memory for me (But I made a great choice!). I set my expectations for co-founder dating pretty low across the board. But, I was blown away. Amazing response rate, extremely qualified people, and promising ideas. I learned a lot. Here are a few notes about my process. I reached out to 178 people who had expressed interest in finding a co-founder. Thanks to Rex and Cambrian for curating this group. My experience with cold outreach in technical recruiting taught me that I’d have a low response rate. I assumed less than 10% would convert to a conversation. 15 meetings seemed like a reasonable sample size to figure out if co-founder dating was a good use of my time. So I drafted a quick intro and sent it out on LinkedIn. Within 24 hours, 35 people responded and scheduled a meeting. Many were anxious to meet later that day or the next. I set aside one week for these meetings. I blocked the days on my Calendly link before and after my target week. I did, in fact, have jury duty the following week, which was a great way to keep things compressed. Within 3 days, I was booked solid for an entire week. 30-minute blocks from 9am to 5pm. Wow. What a response. Way more than I expected. What impacted my response rate? It came down to a few key factors: 95% of people I talked to were non-technical. Many were actively searching for a technical co-founder or early engineer. I’m convinced that me being technical is the single biggest contributor to the high response rate. My most recent role came with a Director title. I was an early employee at a venture-backed startup. This gave people confidence that I could be hands on as a co-founder but also scale a team. I made it clear that I had just left my job. This made the interchange more urgent. Rex has done a great job curating the Cambrian community. Members have confidence that others in the group are high caliber and worth their time. I’ve had limited interactions with the YC co-founder matching tool and did not find the same vetting rigor. I didn’t commit to anything concrete in my initial message. I emphasized that I wanted to connect and see if I can be helpful. Lush, Thor-like hair in my avatar With a full schedule locked in, the real work was just beginning. I started by organizing my note taking setup. I knew there was no way I’d be able to track all of these conversations without detailed notes. I created a google doc that mirrored my calendar schedule. Each section had the person’s name, LinkedIn, email, and a few lines of notes about their background. I also added an initial ranking next to each person (scale of 1-5) to show how bullish I was on the person / idea based on the information I had to date. This gave me at-a-glance visibility into the priority of the next call. Next, I planned out my goals and script for the calls. Create a meaningful connection Find some way to add value Get enough information to know whether a second meeting would be worthwhile 30 minutes sounds like a lot of time. 
In reality, it's just barely enough time for intros and a brief discussion of one idea. I planned my core questions to make the most of the time. Here's the cadence that proved the most fruitful: Ask them to go first. This gives you the best context to know how to frame your intro. It can also be an indicator of how much time you want to spend on the call. "Should we start with intros? Do you want to lead off?" Make it clear what you are hoping to gain from the relationship, as well as what you can add. Be deliberate to not mislead / give the wrong impression. "Right now I'm talking to a lot of smart people. Listening & learning. Looking for ways to help. Sometimes that is as a technical advisor or contractor. Sometimes it's connecting someone who could be helpful. In the end, I'm looking for someone to co-found a business with." Dig in on their idea. Flesh out whether it's viable. Take notes on the way they talk. The way a person expresses an idea and why they think it's great tells a lot about their experience in the space and as an entrepreneur. "What is the elevator pitch?" "What have you heard from potential customers?" "Is this similar to XX competitor? What makes it better?" "How do you think about distribution?" "What assumptions do you need to validate to make sure this can work?" Get a sense of where they are at (idea stage, product, raising money). "What is the state of your team?" "Are you planning on raising money?" "What is your timeline for next steps with this business?" Understand what they are currently looking for. "What is needed to unlock the next step for your business?" "How can I be helpful?" Follow-ups. Reiterate any follow-ups from the call. Ideally commit to do something, even as simple as following up via email. "I'm going to think this over and email you with any questions that come up." "I thought of 1-2 people that might resonate with this space, are you ok if I reach out and see if they'd be open for a chat?" "I'm probably not the right person to help with this project, but I love being connected to fellow entrepreneurs. Let's cheer each other on." Monday morning, 9am arrived. Meeting. Meeting. Meeting. I was ready for a break. But, alas, no rest. The entire week I grabbed quick bites of snacks and rushed bathroom breaks in the 2-3 minutes I could sometimes shave off of a less-promising call. It was completely exhausting. But, this is the way I prefer it. Get in the zone, push myself super hard, emerge quickly with the insight I need to move forward. I was very surprised by the caliber of people I got to talk with. Serial entrepreneurs. Harvard/Stanford business school graduates. PhDs. Genuine people, excited to discuss their passions and plans for the future. I feel super lucky to have gained new friends in the process. After 70+ conversations (35+ hours of Zoom time!) I amassed 67 pages of notes. I spent several sessions poring over them. Organizing, categorizing, and ranking in a spreadsheet. Thinking about trends, what I'm drawn to, and next steps. I'm convinced that there are 8-10 companies in this group that will grow to > $10 million in value. 2-3 that will be unicorns or bigger. I'll write more about the process of narrowing the field and eventually choosing a co-founder in separate posts. For now, let me share my core takeaways from co-founder speed dating, as well as some stats about the experience. Technical co-founders have high leverage. In retrospect, it's no surprise. The labor market for engineering talent is tighter than my hamstring.
As a technical person, for years I've over-indexed on "I can't found a company until I have a great idea". Coming up with the startup idea is over-romanticized and borderline irrelevant. If I could do it again, I would have allocated more time throughout my career to these sorts of conversations. Consumer problems are obvious. Over 50% of people I talked with were focused on B2C business models. Most of these followed the path of "I've had this problem personally". It's even a commonly advised approach to finding startup ideas. But this approach leads more people down the wrong path than it helps. Are some wildly successful companies built this way? Of course. But there are also several times as many people failing at solving the same problems. It's crazy hard to differentiate your "app that's kinda like Mint.com but better because…". A better approach is to put in the work researching, interviewing, and embedding yourself in a large, tech-deficient industry. Listen. Learn. Solve those problems. Interviewing and sales are crucial skills. During my career, I've spent time as a missionary, a door-to-door salesman, and a technical manager in charge of recruiting. Outbound sales positions like this are not always fun, and are far from my ideal job description. But they are so crucial. Sales teaches you the power of a simple, well-crafted question. The right balance of talk vs. listen. The confidence to ask clearly for what you want. For technical people, opportunities to build these skills can be hard to find. Lean into these chances. Go to the career fair. Be the recruiting manager. A technical person who can sell unlocks a faster and higher career ladder. Here are a few stats from my interactions: 35+ hours of Zoom calls; 89% male (clearly a problem!). (Charts breaking down business model and general problem space omitted; the business-model split is simplified, since many were targeting B2B2C or another variation.) I plan to keep building in the open. Feel free to subscribe for future updates! Or, if you want to be in my first 100 Twitter followers, the round is undersubscribed. Questions? Comments? Want introductions to promising investments? Let me know!
1
I VTuber
If you've watched tech talks I've done and any of my Twitch streams recently, you probably have noticed that I don't use a webcam for any of them. Well, technically I do, but that webcam view shows an anime looking character. This is because I am a VTuber. I use software that combines 3d animation and motion capture technology instead of a webcam. This allows me to have a unique presentation experience and helps me stand out from all the other people that create technical content. < Cadey > I stream on Twitch when I get the inspiration to. I usually announce streams about a half hour in advance on Twitter. I plan to get a proper schedule soon. This also makes it so much easier to edit videos because of the fact that the face on the avatar I use isn't too expressive. This allows me to do multiple takes of a single paragraph in the same recording because I can reset the face to neutral and you will not be able to see the edit happen unless you look really closely at my head position. Some of the best things in life start as the worst mistakes imaginable and the people responsible could never really see them coming. This all traces back to my boss buying everyone an Oculus Quest 2 last year. Working at @Tailscale is great. They sent us all an Oculus Quest 2! pic.twitter.com/dDhbwO9cFd — Xe Iaso (@theprincessxena) February 19, 2021 This got me to play around with things and see what I could do. I found out that I could use it with my PC using Virtual Desktop. This opened a whole new world of software to me. The Quest 2 is no slouch, but it's not entirely a supercomputer either. However, my gaming PC is better than the Quest 2 at GPU muscle. One of the main things I started playing with was VRChat. VRChat is the IMVU for VR. You pick an avatar, you go into a world with some friends and you hang out. This was a godsend as the world was locked down all throughout 2020. I hadn't really gotten to talk with my friends very much, and VRChat allowed us to have an experience doing it more than a giant Zoom call or group chat in Discord. One of the big features of VRChat is the in-game camera. The in-game camera functions like an actual physical camera and lets you also enable a mode where that camera controls the view that the VRChat desktop window renders. This mode became the focus of my research and experimentation for the next few weeks. With this and OBS' Webcam Emulation Support, I could make the world in VRChat render out to a webcam which could then be picked up by Google Meet. The only major problem with this was the avatar I was using. I didn't really have a good avatar then. I was drifting between freely available models. Then I found the one that I used as a base to get my way to the one I am using now. Version 1.x was only ever really used experimentally and never used anywhere publicly. I mentioned above that I did VR wirelessly but didn't go into much detail about how much of an excruciating, mind-numbing pain it was. It was an excruciating, mind-numbingly painful thing to set up. At the time my only real options for this were ALVR and Virtual Desktop. A friend was working on ALVR so that's what I decided to use first. < Cadey > At the time of experimentation, Oculus Air Link didn't exist. ALVR isn't on the Oculus store, so I had to use SideQuest to sideload the ALVR application on my headset. 
I did this by creating a developer account on the Oculus store to unlock developer mode on my headset (if you do this in the future, you will need to have bought something from the store in order to activate developer mode) and then flashed the apk onto the headset. < Mara > Fun fact: the Oculus Quest 2 is an Android tablet that you strap to your face! I set up the PC software and fired up VRChat. The most shocking thing to me at the time was that it all worked. I was able to play VRChat without having to be wired up to the PC. Then I realized how bad the latency was. A lot of this can be traced down to how Wi-Fi as a protocol works. Wi-Fi (and by extension all other wireless protocols) are built on shouting. Wi-Fi devices shout out everywhere and hope that the access point can hear it. The access point shouts back and hopes that the Wi-Fi devices can hear it. The advantage of this is that you can have your phone anywhere within shouting range and you'll be able to get a victory royale in Fortnite, or whatever it is people do with phones these days. The downside of Wi-Fi being based on shouting is that only one device can shout at a time, and latency is critical for VR to avoid motion sickness. Even though these packets are pretty small, the overhead for them is not zero, so lots of significant Wi-Fi traffic on the same network (or even interference from your neighbors that have like a billion Wi-Fi hotspots named almost identical things even though it's an apartment and doing that makes no sense but here we are) can totally tank your latency. It was good enough to get me started. I was able to use it in work calls and one of my first experiences with it was my first 1:1 with a poor intern that had a difficult to describe kind of flabbergasted expression on his face once the call connected. By now I had found an avatar model and was getting it customized to look a bit more business casual. I chose a model based on a jRPG character and have been customizing it to meet my needs (and as I learn how to desperately glue together things in Unity). During this process I was able to get a Valve Index second-hand off a friend in IRC. The headset was like new (I just now remembered that I bought it used as I was writing this article) and it allowed me to experience low-latency PC VR in its true form. I had used my husband's Vive a bit, but this was the first time that it really stuck for me. It also ruined me horribly and now going back to wireless VR via Wi-Fi is difficult because I can't help but notice the latency. I am ruined. Doing all this with a VR headset works, but it really does get uncomfortable and warm after a while. Strapping a display to your head makes your head get surprisingly warm after a while. It can also be slightly claustrophobic at times. Not to mention the fact that VR eats up all the system resources trying to render things into your face at 120 frames per second as consistently as possible. Other VTubers on Twitch and YouTube don't always use VR headsets for their streams though. They use software that attempts to pick out their face from a webcam and then attempts to map changes in that face to a 2d/3d model. After looking over several options, I arbitrarily chose VSeeFace. When I have it all set up with my VRM model that I converted from VRChat, the VSeeFace UI looks something like this: The green point cloud you see on the left of this is the data that VSeeFace is inferring from the webcam data. It uses that to pick out a small set of animations for my avatar to do. 
This only really tracks a few sound animations (the sounds of vowels "A", "I", "U", "E", "O") and some emotions ("fun", "angry", "joy", "sorrow", "surprised"). This is enough to create a reasonable facsimile of speech. It's not perfect. It really could be a lot better, but it is very cheap to calculate and leaves a lot of CPU headroom for games and other things. < Mara > VRChat uses microphone audio to calculate what speech sounds you are actually making, and this allows for capturing consonant sounds as well. The end result with that is a bit higher quality and is a lot better for tech talks and other things where you expect people to be looking at your face for shorter periods of time. Otherwise webcam based vowel sounds are good enough. It works though. It's enough for Twitch, my coworkers and more to appreciate it. I'm gonna make it better in the future, but I'm very, very happy with the progress I've made so far with this. Especially seeing as I have no idea what I am doing with Unity, Blender and other such programs. < Cadey > Advice for people trying to use Unity for messing with things like spring bone damping force constants: take notes. Do it. You will run into cases where you mess with something for a half an hour, unclick the play button in Unity and then watch all your customization go down the drain. I had to learn this the hard way. Don't do what I did. Right now my VTubing setup doesn't have a way for me to track my hands. I tend to emote with my hands when I am explaining things. When I am doing that on stream with the VTubing setup, I feel like an idiot. VMagicMirror would let me do hand tracking with my webcam, but I may end up getting a Leap Motion to do hand tracking with VSeeFace. Most of the other VTubing scene seems to have Leap Motions for hand tracking, so I may follow along there. I want to use this for a conference talk directly related to my employer. I have gotten executive signoff for doing this, so it shouldn't be that hard assuming I can find a decent subject to talk about. I officially double dare you — apenwarr (@apenwarr) December 30, 2021 I also want to make the model a bit more expressive than it currently is. I am limited by the software I use, so I may have to make my own, but for something that is largely a hackjob I'm really happy with this experience. Right now my avatar is very, very unoptimized. I want to figure out how to make it a lot more optimized so that I can further reduce GPU load on my machine rendering it. Less GPU for the avatar means more GPU for games. I also want to create a conference talk stage thing that I can use to give talks on and record the results more easily in higher resolution and detail. I'm very much in early research stages for it, but I'm calling it "Bigstage". If you see me talking about that online, that's what I'm referring to. starting to draw out the design for Bigstage (a VR based conference stage for me to prerecord talk videos on pic.twitter.com/n8osEv9BQI — Xe Iaso (@theprincessxena) December 14, 2021 I hope this was an amusing trip through all of the things I use to make my VTubing work. Or at least pretend to work. I'm doing my best to make sure that I document things I learn in forms that are not badly organized YouTube tutorials. I have a few things in the pipeline and will stream writing them on Twitch when they are ready to be fully written out. This post was written live on Twitch. You can catch the VOD on Twitch here. If the Twitch link 404's, you can catch the VOD on YouTube here. 
The YouTube link will not be live immediately when this post is, but when it is up on Saturday January 15th, you should be able to watch it there to your heart's content. My favorite chat message from the stream was this: kouhaidev: I guess all of the cool people are using nix
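As a purely conceptual illustration of the vowel-and-emotion mapping described above, here is a rough Python sketch. This is not VSeeFace's actual code (VSeeFace is a Unity application), and the names, scores, and smoothing window are made up; it only shows the general idea of reducing each webcam frame to one viseme and one expression.

```python
# Conceptual sketch only -- not VSeeFace's actual code (VSeeFace is a Unity
# app). Names, scores, and the smoothing window are made up; this just shows
# the idea of reducing each frame to one viseme and one expression.
from collections import deque

VISEMES = ["A", "I", "U", "E", "O"]
EXPRESSIONS = ["fun", "angry", "joy", "sorrow", "surprised", "neutral"]


class BlendshapePicker:
    def __init__(self, window=5):
        # Keep the last few frames so one noisy frame doesn't flip the avatar.
        self.history = deque(maxlen=window)

    def update(self, viseme_scores, expression_scores):
        """Both arguments are dicts of name -> confidence in [0, 1]."""
        self.history.append((viseme_scores, expression_scores))
        n = len(self.history)
        avg_v = {v: sum(f[0].get(v, 0.0) for f in self.history) / n for v in VISEMES}
        avg_e = {e: sum(f[1].get(e, 0.0) for f in self.history) / n for e in EXPRESSIONS}
        # The avatar plays whichever viseme and expression are strongest.
        return max(avg_v, key=avg_v.get), max(avg_e, key=avg_e.get)


picker = BlendshapePicker()
print(picker.update({"A": 0.7, "O": 0.2}, {"joy": 0.6}))  # -> ('A', 'joy')
```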
4
Ask HN: Why aren't we talking more about matrix?
An open network for secure, decentralized communication Imagine a world... ...where it is as simple to message or call anyone as it is to send them an email. ...where you can communicate without being forced to install the same app. ...where you can choose who hosts your communication. ...where your conversations are secured by E2E encryption. ...where there’s a simple standard HTTP API for sharing real-time data on the web. This is Matrix. Matrix is an open source project that publishes the Matrix open standard for secure, decentralised, real-time communication, and its Apache licensed reference implementations. Maintained by the non-profit Matrix.org Foundation, we aim to create an open platform which is as independent, vibrant and evolving as the Web itself... but for communication. As of June 2019, Matrix is out of beta, and the protocol is fully suitable for production usage. Messaging Matrix gives you simple HTTP APIs and SDKs (iOS, Android, Web) to create chatrooms, direct chats and chat bots, complete with end-to-end encryption, file transfer, synchronised conversation history, formatted messages, read receipts and more. Conversations are replicated over all the servers participating in them, meaning there are no single point of control or failure. You can reach any other user in the global Matrix ecosystem of over 40M users, even including those on other networks via bridges. End-to-End Encryption Matrix provides state-of-the-art end-to-end-encryption via the Olm and Megolm cryptographic ratchets. This ensures that only the intended recipients can ever decrypt your messages, while warning if any unexpected devices are added to the conversation. Matrix’s encryption is based on the Double Ratchet Algorithm popularised by Signal, but extended to support encryption to rooms containing thousands of devices. Olm and Megolm are specified as an open standard and implementations are released under the Apache license, independently audited by NCC Group. VoIP With the advent of WebRTC, developers gained the ability to exchange high quality voice and video calls – but no standard way to actually route the calls. Matrix is the missing signalling layer for WebRTC. If you are building VoIP into your app, or want to expose your existing VoIP app to a wider audience, building on Matrix’s SDKs and bridges should be a no-brainer. Bridging Matrix owes its name to its ability to bridge existing platforms into a global open matrix of communication. Bridges are core to Matrix and designed to be as easy to write as possible, with Matrix providing the highest common denominator language to link the networks together. The core Matrix team maintains bridges to Slack, IRC, XMPP and Gitter, and meanwhile the wider Matrix community provides bridges for Telegram, Discord, WhatsApp, Facebook, Signal and many more. IOT, VR and more... Matrix can handle any type of real-time data, not only messaging and VoIP. By building bridges to as many IoT silos as possible, data can be securely published on the Matrix network. IoT solutions built on Matrix are unified, rather than locked to specific vendors, and can even publish or consume Matrix data directly from devices via ultra-low bandwidth transports (100bps or less) Meanwhile AR and VR vendors are recreating the silos we’ve seen in instant messaging rather than working together towards an open ecosystem. Matrix can be the unifying layer for both communication and world data in AR and VR. How does it work? 
Matrix is really a decentralised conversation store rather than a messaging protocol. When you send a message in Matrix, it is replicated over all the servers whose users are participating in a given conversation - similarly to how commits are replicated between Git repositories. There is no single point of control or failure in a Matrix conversation which spans multiple servers: the act of communication with someone elsewhere in Matrix shares ownership of the conversation equally with them. Even if your server goes offline, the conversation can continue uninterrupted elsewhere until it returns. This means that every server has total self-sovereignty over its users data - and anyone can choose or run their own server and participate in the wider Matrix network. This is how Matrix democratises control over communication. By default, Matrix uses simple HTTPS+JSON APIs as its baseline transport, but also embraces more sophisticated transports such as WebSockets or ultra-low-bandwidth Matrix via CoAP+Noise. An Open Standard Simple pragmatic RESTful HTTP/JSON APIs by default Open specification of the Matrix standard Fully decentralised conversations with no single points of control or failure End-to-end encryption via Olm and Megolm WebRTC VoIP/Video calling using Matrix signalling Real-time synchronised history and state across all clients Integrates with existing 3rd party IDs to authenticate and discover users Maintained by the non-profit Matrix.org Foundation Group conversations, read receipts, typing notifications, presence... The Matrix Foundation Matrix is managed through an open governance process, looked after by The Matrix.org Foundation - a non-profit UK Community Interest Company. It acts as a neutral guardian of the Matrix spec, nurturing and growing Matrix for the benefit of the whole ecosystem. The Guardians are the legal directors of the Foundation, responsible for ensuring that it keeps on mission and neutrally protects the development of Matrix.
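The page above describes Matrix's baseline transport as simple HTTPS+JSON APIs. As a rough illustration of what that looks like in practice, here is a minimal Python sketch that logs in and sends a text message with the client-server API. The homeserver URL, username, password, and room ID are placeholders, and the snippet assumes a reasonably recent spec version of the API paths; it is a sketch, not an official SDK example.

```python
# Minimal sketch of the Matrix client-server API over plain HTTPS+JSON.
# The homeserver URL, credentials, and room ID below are placeholders.
import time
from urllib.parse import quote

import requests

HOMESERVER = "https://matrix.example.org"

# 1. Log in with a password to obtain an access token.
login = requests.post(
    f"{HOMESERVER}/_matrix/client/v3/login",
    json={
        "type": "m.login.password",
        "identifier": {"type": "m.id.user", "user": "alice"},
        "password": "correct horse battery staple",
    },
).json()
token = login["access_token"]

# 2. Send a plain-text message into a room the user has already joined.
room_id = "!someroom:matrix.example.org"
txn_id = str(int(time.time() * 1000))  # any client-unique transaction ID
resp = requests.put(
    f"{HOMESERVER}/_matrix/client/v3/rooms/{quote(room_id, safe='')}/send/m.room.message/{txn_id}",
    headers={"Authorization": f"Bearer {token}"},
    json={"msgtype": "m.text", "body": "Hello from a tiny HTTP client!"},
)
print(resp.json())  # on success, something like {"event_id": "$..."}
```

Behind that one PUT, the homeserver replicates the event to every other server in the room, which is the conversation-store model the page describes.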
1
Stopping A/B Tests: How Many Conversions Do I Need?
A/B testing is great and very easy to do these days. Tools are getting better and better. As a result, people rely more and more on the tools, and critical thinking is much less common. It's not fair to just blame the tools of course. It's very human to try to (over)simplify everything. Now the internet is flooded with A/B testing posts and case studies full of bullshit data, imaginary wins. Be wary when you read any testing case study, or whenever you hear someone say "we tested that". We're all learning about A/B testing. It's like anything else – the more you do it, the better you get at it. So it's only natural that every optimizer (including myself) has made a ton of testing mistakes in the past. Many mistakes are more common than others, but there's one that is the most prevalent: ending the test too soon. Table of contents Don't stop the test just when you reach 95% confidence (or higher) Magic numbers don't exist How representative is the traffic in the test? Be wary of statistical significance numbers (even if it's 99%) when the sample size is small With low traffic, you need bigger wins to run a test per month, but… Without seeing absolute numbers, be very suspicious Conclusion This is the first rule, and very important. It's human to scream "yeah!" and want to stop the test, and roll the treatment out live. Many who do discover later (if they bother to check) that even though their test got like +20% uplift, it didn't have any impact on the business. Because there was no actual lift – it was imaginary. Consider this: One thousand A/A tests (two identical pages tested against each other) were run. This means if you've run 1,000 experiments and didn't control for repeat testing error in any way, a rate of successful positive experiments up to 25% might be explained by a false positive rate. But you'll see a temporary significant effect in around half of your experiments! So if you stop your test as soon as you see significance, there's a 50% chance it's a complete fluke. A coin toss. Totally kills the idea of testing in the first place. Once the person running those tests altered the experiments so that the needed sample size was pre-determined in advance, only 51 experiments out of 1,000 were significant at 95%. So by checking the sample size, we went from 531 winning tests to 51 winning tests. How to pre-determine the needed sample size? There are many great tools out there for that, like this one. Or here's how you would do it with Evan Miller's tool: In this case, we told the tool that we have a 3% conversion rate, and want to detect at least 10% uplift. The tool tells us that we need 51,486 visitors per variation before we can look at the statistical significance levels and statistical power. What about rules like X amount of conversions per variation? Even though you might come across statements like "you need 100 conversions per variation to end the test" – there is no magical traffic or conversion number. It's slightly more complex than that. Andrew Anderson, Head of Optimization at Malwarebytes: It is never about how many conversions, it is about having enough data to validate based on representative samples and representative behavior. 100 conversions is possible in only the most remote cases and with an incredibly high delta in behavior, but only if other requirements like behavior over time, consistency, and normal distribution take place. Even then it has a really high chance of a type I error, false positive.
Anytime you see X number of conversions it is a pretty glaring sign that the person talking doesn't understand the statistics at all. And – if 100 conversions was the magic number, then big sites could end their tests in just minutes! That's silly. If you have a site that does 100,000 transactions per day, then 100 conversions can't possibly be representative of overall traffic. So this leads to the next thing you need to take into account – the representativeness of your sample. By running tests you include a sample of visitors in an experiment. You need to make sure that the sample is representative of your overall, regular traffic, so that the sample behaves just as your real buyers behave. Some want to suddenly increase the sample size by sending a bunch of atypical traffic to the experiment. If your traffic is low, should you blast your email list, or temporarily buy traffic to get a large enough sample size for the test? In most cases you'd be falling victim to a selection effect – you wrongly assume some portion of the traffic represents the totality of the traffic. You might increase conversion for that segment, but don't confuse that with an increase across segments. Your test should run for 1 or better yet 2 business cycles, so it includes everything that goes on. Lukas Vermeer, Data Scientist at Booking.com: What matters much, much more than the exact number of visitors in your experiment is the representativeness of the sample, the size of the effect and your initial test intent. If your sample is not a good representation of your overall traffic, then your results are not either. If your effect size is very large, then you need only a few visitors to detect it. If you intended to run your test for a month, and you ran it for a month, and the difference is significant, then it's frikkin' significant. Don't waste your time looking for magic numbers: this is Science, not magic. So you ran a test where B beat A, and it was an impressive lift – perhaps +30%, +50% or even +100%. And then you look at the absolute numbers – and see that the sample size was something like 425 visitors. If B was 100% better, it could be 21 vs 42 conversions. So when we punch the numbers into a calculator, we can definitely see how this could be significant. BUT – hold your horses. Calculating statistical significance is an exercise in algebra; it's not telling you what the reality is. The thing is that since the sample size is so tiny (only 425 visitors), it's prone to change dramatically if you keep the experiment going and increase the sample (the lift either vanishes or becomes much smaller, regression toward the mean). I typically ignore test results that have fewer than 250-350 conversions per variation since I've seen time and again that those numbers will change if you keep the test running and the sample size gets bigger. Anyone who has experience running hundreds of tests can tell you that. A lot of the "early wins" disappear as you test longer and increase the sample size. I run most of my tests for at least 4 full weeks (even if the needed sample size is reached much earlier) – unless I get proof first that the numbers stabilize sooner (2 or 3 weeks) for a given site. Many sites have low traffic and a low total monthly transaction count. So in order to call a test within 30 days, you need a big lift. Kyle Rush from Optimizely explains it eloquently here. If you have bigger wins (e.g. +50%), you definitely can get by with smaller sample sizes.
But it would be naive to think that smaller sites somehow can get bigger wins more easily than large sites. Everyone wants big wins. So saying "I'm going to swing big" is quite meaningless. The only true tidbit here is that in order to get a more radical lift, you also need to test a more radical change. You can't expect a large win when you just change the call to action. Also, keep in mind: testing is not a must-have, mandatory component of optimization. You can also improve without testing. Most A/B testing case studies only publish relative increases. We got a 20% lift! 30% more signups! That's very good; we want to know the relative difference. But can we trust these claims? Without knowing the absolute numbers, we can't. There are many reasons why someone doesn't want to publish absolute numbers (fear of humiliation, fear of competition, an overzealous legal department, etc.). I get it. There are a lot of case studies I'd like to publish, but my clients won't allow it. But the point remains – unless you can see the test duration, total sample size, and conversion count per variation, you should remain skeptical. There's a high chance they didn't do it right, and the lift is imaginary. Before you declare a test "cooked", make sure there's an adequate sample size and test duration (to ensure good representativeness), and only then look at confidence levels.
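For anyone who wants to sanity-check the sample-size example above without a web calculator, here is a small Python sketch using the standard two-proportion z-test formula. The exact result differs slightly from the 51,486 quoted from Evan Miller's tool because different calculators use slightly different approximations, but it lands in the same ballpark.

```python
# Sample size per variation for a two-proportion z-test (fixed-horizon A/B test).
# Standard textbook formula; different online calculators use slightly
# different approximations, so expect the same ballpark, not the same digits.
from math import ceil, sqrt
from statistics import NormalDist


def sample_size_per_variation(baseline_cr, relative_mde, alpha=0.05, power=0.80):
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_mde)  # smallest lift worth detecting
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (
        z_alpha * sqrt(2 * p_bar * (1 - p_bar))
        + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    ) ** 2
    return ceil(numerator / (p2 - p1) ** 2)


# The article's example: 3% baseline conversion rate, 10% relative uplift.
print(sample_size_per_variation(0.03, 0.10))  # roughly 53,000 per variation
```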
2
Scientists rename human genes to stop MS Excel from misreading them as dates
44
Taliban entering Kabul 'from all sides'
1
How Engineers Can Thrive in a Customer Obsessed Culture
Over my relatively short career, I’ve noticed a gradual shift in how I think about my roles and responsibilities at work. When I first started out, I felt that engineers should be valued for their technical skill-set alone. I believed that the role of an engineer was to take a set of requirements, usually from someone with product expertise, and convert it into functional code. And this is mainly true. Engineers are supposed to be technical experts. They need to know what challenges to expect when scaling products from thousands to millions of users. They need to know how to build the product in a way that is secure and reliable. That is our bread and butter. Depending on what we’re working with, we need expertise in various fields, like mobile, data, web, hardware, etc. On top of that, engineers, by nature, love to tinker with stuff. We love to get our hands dirty and just build something. I enjoy my work largely because I like to solve complicated technical problems — a trait I’m sure I share with other engineers. I’ve always tended to focus more on how we build something than why. For example, when given a choice between building a simple solution to a problem and building a complex one, I would almost always opt for the latter; for the simple reason that I enjoy challenging myself. I feel more satisfied with my work when I know I didn’t take any “shortcuts.” However, I’m starting to appreciate that engineering does not exist in a vacuum. We don’t write software or create products to make ourselves feel better. We do it to meet some goal for our customers or the business. Ultimately, every feature we build ties back to either a customer need or a business requirement. In other words, engineering exists to deliver value to the business and the customer. Even though this sounds a bit trivial, this perspective has changed how I view my role as an engineer. I’m learning to base my decisions on business and customer value instead of focusing entirely on the technology involved. Technology is an excellent tool that we can leverage to improve customer experience. When used properly, good technology has the potential to make businesses tremendously more efficient. But technology itself is not the end goal. Your customers won’t care about a feature’s technical complexity or about the obstacles you had to overcome to make it a reality. All they care about is having something reliable, secure and cheap. The additional cost you incur from the complexity is irrelevant if nobody is willing to pay for it. This makes it all the more important to prioritize your decisions. The answer might very well be yes, but these are important considerations to keep in mind. Remember that engineering effort is not an infinite resource. Any time you spend solving an unnecessary problem could have been spent making your business more efficient, or improving the lives of your customers. And being able to think about your tasks in the context of the broader business is just as important as having a strong technical skillset. Amazon is a famously customer-obsessed company. Every decision we make starts with the customer in mind. We identify problems that could have the biggest impact on our customer's experience and work backward toward a solution, inventing new technologies along the way as needed. This mindset is core to the company culture and has been a major influence on me. It is everybody's job to think about the customer - and engineers are no exception.
1
Future Defense Technologies
A vast majority of nerd-dom is into gaming and into military technology. I know for as long as I can remember I have been a personal fan of both. My love for technology traces its roots to Runescape and WoW, as well as an early fascination with cutting-edge military technology (if you also watched G4TV and Future Weapons simultaneously then God bless you ha). As I got older and began my career, I quickly learned — much to my chagrin — that the rate of technical progress that has occurred in gaming has not occurred within the defense sector. In fact, our military is vastly behind in critical technologies, so much so that were we to end up in a conflict with a hostile foreign power, we might lose. For all of its faults, America is still the defender of the free world. The human propensity for evil has been held at bay for almost a century by America's defense of liberty, democracy, human rights, free markets, open economies, and morality. The defense of these ideals is directly related to our country's ability to project force to deter conflict. A world where that strength can be challenged is a world where another nation that does not hold these values writes the rules — and the fragility of humanity will most certainly be tested. For a deeper dive into this, I highly recommend Christian Brose's book The Kill Chain: Defending America in the Future of High Tech Warfare. (Sources: Congressional Research Service, U.S. Research and Development Funding and Performance: Fact Sheet (Jan. 2020); Congressional Research Service, Global Research and Development Expenditures: Fact Sheet (Sept. 2019).) Our US government used to lead the way in R&D spend. In 1960, the US government accounted for 69% of the world's spend on R&D. In 2017, it was only 28%, yet in 2019 our defense expenditures were more than double Russia and China's. We are still stuck supporting legacy systems (which are valuable but vulnerable) while our competitors are free to invest in future capabilities without the burden of a century's worth of investments and assets. Government does still play a critical role in advancing future technologies, but the private sector has replaced government as the biggest investor in R&D, all the while not translating those innovations at scale to our armed services. A recent GAO report found that in past years, defense contractors put only 40 percent of their independent research funding towards DOD priority areas such as AI, autonomous vehicles, hypersonic weapons, and directed energy weapons. The solutions are well known — change our military strategy, increase our support of new defense contractors and startups, optimize our budgeting process, give transparency to Congress to make allies, refine our government acquisition process, and partner with the private sector to get the best talent working on the most important problems. Below are some of the future defense technologies modernizing 21st century warfare that will continue to need support from government and company funded R&D, as well as continued venture capital investment. Drones have played, and will play, a critical role on the battlefield. C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance) has become a bedrock of 21st century warfare and heavily relies on our ability to constantly — and autonomously — survey an area and provide real-time visual data, object detection, and analysis.
Fully autonomous systems, and drones specifically, are critical for the defense of US assets, as well as for an effective and informed military on the battlefield. The defense sector will continue to invest heavily in, and deploy at scale, sUAS and cUAS technologies. [Image: Anduril's Ghost 4] The US Reaper drone has been a game-changing vehicle in our military. Yet UAVs are among the only pilotless vehicles deployed at scale, and they are not fully autonomous. In fact, UAVs require multiple humans to pilot the vehicle and do real-time visual data analysis. This is not an efficient system. We need fully autonomous systems for a myriad of use cases within our military to react with greater precision and not put American lives at risk. Currently, our military is developing autonomous vehicles, such as aerial tankers, but we have seen slower adoption in this area due to the legacy systems we have, from tanks to ships — as well as the incentives that keep us funding those legacy systems. [Images: Boeing's MQ-25 Stingray aerial tanker; the Navy's unmanned ship "Sea Hunter"] Although still in the early stages, directed energy weapons could provide a near-unlimited, inexpensive, and instantaneous supply of precise firepower without having to reload, resupply, or even manufacture munitions. They may be the most effective means to defend against massed attacks from weapons such as swarming drones or a barrage of guided missiles. Directed energy weapons use concentrated electromagnetic energy to strike targets. Two types of weapons currently in development are high-energy lasers (HELs) and high-powered microwaves (HPM). Laser weapons heat a target until it melts, and microwave weapons disrupt the electronics of a target. [Image: the Navy's Laser Weapon System aboard the USS Ponce] Hypersonic weapons will increase the speed and distance of modern conflict. Hypersonic glide vehicles and hypersonic cruise missiles are maneuverable, long-range weapons that fly at a speed of Mach 5 or greater and do not follow a ballistic trajectory, but rather can maneuver while traveling to their target. Prompt Global Strike (PGS) capabilities will change the way we think about conflict. Hypersonics will play a critical role in our A2/AD capabilities, as we have invested very little in domestic defense infrastructure because no other country could ever have been able to mount any attack here, until the 21st century. I believe hypersonics will play a key role as a deterrent, similar to the role of nuclear weapons during the past century. At this stage, the technology is still nascent and there is currently no program of record for hypersonics. [Image: hypersonic missile and vehicle rendering from Raytheon] The global cybersecurity market is worth $173B in 2020, growing to $270B by 2026. Here are some interesting statistics: Enterprises are predicted to spend $12.6B on cloud security tools by 2023, up from $5.6B in 2018, according to Forrester. Enterprise spending on cloud security solutions is predicted to increase from $636M in 2020 to $1.63B in 2023, attaining a 26.5% CAGR. Spending on Infrastructure Protection is predicted to increase from $18.3B in 2020 to $24.6B in 2023, attaining a 7.68% CAGR. Cybersecurity is not only a critical industry within the commercial sector, but it is now known to be critical to national security. Our daily lives are increasingly enhanced by technology, but concurrently are made more vulnerable.
The protection of our data, information, critical infrastructure (satellites, energy grids, transportation, etc.), financial systems, and more is of the utmost importance. There are interesting innovations in less-talked-about cyber defense strategies, such as crowd-sourcing (see Synack), as well as in Operational Technology cyber defense (see Shift5). JADC2 (Joint All Domain Command & Control) is the emerging term senior DoD officials are using to describe linking military sensors to all warfighters — across all services and domains — providing decision makers with the most accurate situational awareness possible. It is a multifaceted concept that requires near- and long-term integration and modernization efforts to have all systems and departments fully integrated. Our government is great at building massive platforms. No other country can build carriers and jets like we can. Yet our weak point, and one that is amplified in today's kill chains, is software. Our government is just not good at building software, and the innovations in the private sector have not made their way into the defense sector. It is critical that our military invest in, and adopt, software that will enable a more integrated and efficient stack to empower our best and brightest to protect and serve our country. Our government operates on a fragmented, slow, and vulnerable tech stack, offering a fragmented experience with a myriad of applications for a multitude of use cases. In short, it's not good. Today, in 2020, generals in the field can't even rely on networks to check their e-mail, or have different vehicles and systems talk with each other. Data, systems, vehicles, and individuals are siloed. Recent initiatives such as the Air Force's ABMS can provide battlefield management (insights, sensory information, and control of products) in a single platform. Recent demonstrations have shown the potential effectiveness of these future systems. We need more of the leading Silicon Valley companies to support our government so we remain at the cutting edge for AI, data management and analysis, cybersecurity, computer vision, AutoML, and more. [Images: Palantir Foundry; early Anduril ABMS tech showcased in recent DOD demonstrations] Whoever maintains control of space, and holds the ability to use it to communicate, navigate, and see anywhere in the world almost instantaneously — in both peace and wartime — will have a critical advantage. ISR, GPS, and PNT technologies are critical for our military, and they remain extremely vulnerable in space. We have relied on this technology since we first demonstrated its immense value in the Desert War, and some of our most critical infrastructure will continue to be in space. The Space Force is a step in the right direction. However, we need to further invest in this frontier, which will rely on executing an integrated, focused, and long-term defense strategy. The continued acquisition of real estate in space, the ability to expand our infrastructure in space, and the ability to protect our assets in space are primary focuses for the DoD right now. [Image: Starlink concept art] In conclusion, these new and exciting developments are only made possible through public-private partnerships. Since Silicon Valley's beginnings, rapid technological advancement has been accomplished through our most talented people and companies working with the support of federal dollars and resources.
The recent Future of Defense Task Force 2020 report published by the House Armed Services Committee correctly stated that "to secure vital national security interests both at home and abroad, the United States should embrace the doctrine of collective security by strengthening existing alliances and working to build new ones. A whole of government approach that engages global partners through diplomacy, economics, humanitarian aid, security cooperation, and military to military relations is among the most notable actions the United States can take to ensure continued peace, financial stability, and strategic overmatch when gaming out the future of defense." A whole-of-government approach is indeed the key. It is important to note that better and smarter investments in our military will not just yield a more robust and capable force, but also free up wasted dollars to invest in domestic and societal problems such as poverty, inequality, healthcare, and infrastructure. Congress has long done what we in the venture and finance world like to call putting good dollars after bad. It is also important to understand that developing new technologies for the purpose of deterrence is vastly different from developing them for the purposes of aggression. This difference in philosophy, and goal, is what separates our country from others. The investments we make are to maintain our military superiority in order to protect and maintain the free world. It is a great economic, social, and military responsibility we took on post-World War II, one that no other country is capable of shouldering. I recommend checking out Trae Stephens' work on the ethics of investing in defense tech for further insight. All in all, it is a true pleasure and blessing of mine to be able to partner with the entrepreneurs and companies building these technologies to help advance peace and freedom in our world. These companies are filled with people who unite across party lines, beliefs, and race in support of what unites us all — our humanity, and our country. — Opinions expressed are solely my own and do not express the views or opinions of my employer, 137 Ventures
1
Radical acceptance – my MVP sucks
1
What a CEO Does (2010)
Why is it that when a company has an interim CEO and is engaged in a search, so many things seem to grind to a halt? Is it just human nature that we expect the boss at the top to make decisions, when in reality the boss at the top will simply delegate it? Is it that the kind of person who steps up to be an Interim CEO doesn't want to delegate, and draws those decisions to themself? Edit: I suspect the CEO enables far more activities than they personally take part in. Delegating encompasses a whole range of activities to keep the wheels turning. i think the company is marking time during those periods waiting for new leadership to emerge Fred, when a new CEO comes into a struggling company, how long does she get to accomplish those three things? What other things have you found (consistently) that great CEOs do? it takes six months to get comfortable and another six months to get things moving in the right direction. i think a year is good In the turnaround business, everything takes twice as long and costs twice as much as your original estimate. I promise you. Totally agree with both JLM and Fred. Rather like buying a Fixer-Upper and thinking you can upgrade for just $xxK within say 12 weeks and then flip: Rarely if ever happens the way you think it will or it should because there are always things that are not "discovered" until you (metaphorically) take down the sheet rock and look behind the covers ~ always takes longer and always costs more than you ever might have expected from a first review of the details. That paragraph quote is so true and on the mark. The last sentence transcends a lot of activities, from fund raising to sound financial management. But I would say that at under 8-10 employees, the CEO/founder does a bit more diversification because there aren't enough bodies to delegate to. Yup, as CEO of my own startup DailySnap with just 5 employees, I'm basically wearing many hats every day… do some coding one day, hiring the next, strategizing every day… it's a tough job. Love this advice — and think you're right on the mark. Though I've never been a CEO, I will be one day. A corollary to "What a CEO Does" was told to me a few months ago by a friend: "What a Board Can Do": 1) Invest more money (or not). 2) Fire the CEO. That kind of wise simplicity makes the rest of a startup's life a whole lot easier to navigate, in my opinion. Thanks for all of your wisdom, Fred! I would add 1.5 between 1 and 2, which is: A Board should act as a sounding board and counselor for the CEO in his or her constant refinement of the strategic direction of the company yes, but the CEO has to want that, otherwise it doesn't work This comment is similar to an earlier blog post about the role of mentors and coaches. A good CEO is in constant consultation w/ his own well-developed "kitchen cabinet". Experts who have been cultivated who answer the classic question honestly. Does this dress make my ass look fat? As big as a freakin' Duomo, sweetheart! The constant refinement of thinking followed by the minor adjustment of the azimuth is the surest way of staying on course. One of my greatest pleasures in life is testing my view of things w/ folks I consider to be wise, sages and just plain smart. Today I had the great pleasure of having lunch w/ a mentor of mine, a sagacious and wise guy w/ huge business interests in China. I also brought my 24-year-old banker son along w/ me and made him listen and participate. It was just a joyful experience. And great fun.
I learned a bit. You are supremely lucky and blessed if you have but a single Board member with whom you can maintain such a relationship. You must invest the time to cultivate such a relationship. Some things you just can’t tell by looking in the mirror. Someone needs to give it to you straight, and has no other agenda than for you to be better than you are today. Also folks verify that little voice which has been whispering in the back of your head. Guys in particular are trained out of following their instincts. Women maintain that talent. Fred, I like to feel that there is an additional role that good CEOs take on – that of the final authority on product utility, usability and quality. These functions need to be the responsibility of experts but ultimately the buck must stop with the CEO and he cannot run a company successfully if he is not on top of these areas. He should hold a veto on the launch of any product which he feels doesn’t match up to the standards he would expect in each of these areas. I would agree if it’s the early days of a company, and if the CEO/founder is also the product/service visionary, but not after. Better to hire a professional Product Manager to do that. Agree with you William that the role is even more important in the early days of a company. However, let’s say I was the CEO of Ford. If I was unable to deliver a great car, could I be doing my job as CEO? A CEO holds the veto pen on everything that happens in the company. That goes without saying. The good and great CEOs know when to use it and when not to use it. Agree John. It’s just that I feel CEOs need to pay more attention to the ultimate customer experience. I always thought that the CEO role is more about vision than operations, so I agree with you and would even put your “three things” in decreasing (same) order of priorities. That’s also why I thought founders or young inexperienced CEOs were ideal CEOs because/when/if they can communicate their vision and passion, and therefore retain talent. It is great to see a company evolve along with a CEO. What used to be done by the CEO or 1 or 2 others – all of a sudden gets done by a larger staff of others. In some capacity – if the CEO does his or her job well – he or she will ultimately outsource almost all of the initial responsibilities except for the strategy and vision thing. That’s a beautiful way of putting it, Harry: growing into the role as the company ages. How many CEOs desire nothing more than for their businesses to outlast them? The more I learn the more I respect companies that span a century, it’s truly awesome. I agree with this 100%… although when I’ve done it (evolved), it occasionally results in weird vibes or backlash from the people who were there at the beginning (eg when there were just 3-4 of us and I did a significant chunk of the day-to-day work). As the org has grown, and I spend more time at exec level/board, on strategy, etc., the team does occasionally ask “OK, WTF are you doing with all your time”. I tend to see that as a failure in communication on my part, but wondering if others have seen similar growing pains and have any ideas… Great post and great comments thread. What say you about Steve Jobs in the context described? A++++ on #1. A+ on #2. and i don’t think a company he has run has ever come close to running out of cash. i don’t like the way Apple operates as a company, but i think Jobs is one of the best CEOs of his generation. maybe the best Totally. 
So much so that his eventual departure already raises concerns about the company itself. And I wonder if that should be a #4 on the list of CEO criteria: becoming, within some reasonable timeframe, dispensable. This is a really difficult question though. He’s amazing… And he knows what he knows, which is the magic of making tech products that enrich human behavior. And he knows what he doesn’t, hiring the best for ops, purchasing and retail. I think he came near to running out of cash in 1995-1996, before selling NeXT to Apple. Both NeXT and Pixar were losing money, and both companies’ cash positions, as well as his fortune, were getting smaller. I don’t think it was a near-death situation, but it wasn’t far either. Fred… close to the definition I’ve carried around with me: “Exec with market vision, inspiration and who can manage to a P & L.” Yep, inspiring (in all aspects of the business) is a special characteristic, I believe. Not all smart folks with a vision of the market and management chops can inspire though. And in the early days of a company, that inspiration not only drives the internal teams and recruiting and fund raising but is often the voice for building early community and attracting enthusiasts. the most important of all you’ve shortened it to a tweet. brilliant My favorite tweet-length quote about being a great CEO comes from Jack Welch (paraphrasing): “Anyone can manage for the short term. Anyone can have vision for the long term. A great CEO is someone who does both at the same time.” Great one Pascal… thanks. Thanks. Serendipity: just read your post on the what a CEO does meme & quoted the “thick skin” part. Hit home for me. PEG http://card.biz/peg This is perfect. There is an important distinction here… you put in “manage to a P&L”. Ensuring there is cash in the bank does not mean you are managing to a P&L. You could be a great visionary and sell the “idea” that never materializes but still be able to bring in investment income. In my opinion, managing to a P&L, to predefined goals, to revenue and profit targets, not just for this month, this quarter, or this year, but to ensure growth over the long run, is the key responsibility of the CEO. Agreed. CEOs build value… and to do that they need to have an eye on the business not just the pitch. re-blogged for my own future reference, not re-tweeted but worthy of it if I did it 😉 Glad it worked for you. Thanks! Forgot the part about recruiting. I’m not a maker of long lists, Ryan… Recruiting is key but if you believe (which we all do) that teams win, not people, then this falls easily in line under building a business that is poised for growth. In other words, you’re not really a CEO until those three things are the three things you’re completely focused on. Until then, you’re a “founder”. p.s. I take back last week’s comment… I’ve never been a CEO. Yeah, and to be able to just focus on those things you really need to have already formed a team to take care of the other things… that’s why I think that giving names to positions in a small company is useless. true, but even when you are a founder/CEO, you need to be able to do those three things. and i’d argue that you need to do them really well So concise, so true. “Retaining talent” entails building a great culture and ensuring the company is a place where people want to work hard but it may be worth calling out on its own. Wow! What a great redesign, is that a template I can steal? 
Anyhow, on to the topic! I never got a straight answer when conducting due diligence: the difference between CFO / VP Fin, Controller / Comptroller, and GM Fin (gets the cars with Fed $$$??) I think a comptroller is a public sector, not business, position. Fred, How do you evaluate a person for those three criteria? Is this only for when you are searching for an experienced CEO to replace a founder, or does the same hold when you are evaluating a founder? How do you evaluate a younger person (say 25 – 30 years)? Thanks. i believe a board should always be evaluating the CEO. and i think you should evaluate someone younger on the same criteria You probably can’t be a great CEO without these things, but I think there’s more. Leaders like Bill Gates, Steve Jobs, Walt Disney, Jack Welch, Andrew Carnegie and J.P. Morgan all have a rich legacy of getting involved in managing operations beyond just leadership. Dear Fred, how could a CEO (considered as you suggest) be accepted by his/her employees? I think that in a Startup this must be considered unsustainable. I’ll tell a short story. I’m trying to be a Startupper myself (here in Italy) and I’ve a friend (and roommate) who worked in a promising web Startup for two years. She was fir… ehm, “unconfirmed” in July after a period in which she was really hating her job. She was continuously asked to work overtime and the pay was always the same, fastidiously low. “We work in a Startup,” the chief said, “and startuppers must work overtime!” But this is not the main reason she began to hate her daily occupation – something able to make your life horrible. The main reason was the CEO (who was the founder, too). He did (and does, because he’s still there) exactly what you suggested: manage and rest, without seriously contributing, caring only to keep his hands clean. To my friend (and to the rest of the employees), this attitude had become intolerable, especially every time there was overtime to be offered for free, overtime imposed by one who simply looked – to them – otiose. In the meantime something started to go bad, while the money was running out. I don’t think that the promising startup will last, but it taught me much. This is what I learned. Manage and rest: your team will hate you. A top-down approach generates conflict, and it always will. The approach you suggest could be good – perhaps – for a big and structured company; it’s definitely not good in a startup, where I like a CEO who’s always on the front line. Thanks for your time, Andrea Giannangelo There is a certain merit to leading by example. If the company is tiny and everyone else is working overtime, so should you. If only for solidarity. However, if you don’t have any skills to contribute, the best you can do is buy takeout for everyone else, who are actually working. What’s worse, you’ll tend to get in the way of real work with questions, comments, engaging people in conversation who would otherwise be hard at work. There is a line between moral support and distraction, especially in a tiny company with a CEO who doesn’t do much work that requires in-house overtime. If people need to see you there in order to stay themselves, it is because they are not enthusiastic about what they do. You either: a) screwed up recruiting, or b) screwed up keeping them engaged. Anyway, if the company is so tiny that the CEO can do nothing, I’d say he is not a CEO. As Andy said before, he is just a founder. Well – it’s not to say that he can do nothing. 
It could be that he’s raising money or he has contacts with businesses that will become good customers; in either case his job doesn’t require him to stay much past 5, in fact his job might require a lot of schmoozing after 5 outside of the office. I agree about the second part, he’s probably a founder and not the CEO in that case, but I think the point remains. Wow, I didn’t see anything about resting in Fred’s words. Sounds to me like this CEO should be working on #3 and maybe #2 … no resting, ever. See rule above, “Lead by example”. And I will argue that overtime is completely unnecessary. Overtime is the result of setting unrealistic goals given available resources. The deadline as set becomes unattainable and requires more resources than are available. Humans are capable of adding extra time beyond 8 hours. The problem with that is that expecting additional output for the same compensation tends to generate dissatisfaction. Abusing that emergency capability is simply not sustainable as people get tired and start making mistakes. Additionally, personal relationships suffer, which can introduce distractions even while the person is not physically tired. Dissatisfaction over a period of time translates into depressed morale company-wide. We have project management tools that should make overtime completely unnecessary. I use them. Set realistic deadlines (“how long would it realistically take you to build feature X?” a team leader should ask his team) and let your people enjoy their work. It may take some extra time to finish the product, but the results will be better. The gains from slavedriving are not very significant and come at a tremendous cost. I love the insight, your blog is now one of my 3 must-reads before I mark all read in Google reader :-). Thank you. …and you said last week you weren’t a CEO. i don’t think i am. and it starts with what you believe about yourself. Do you set vision? Do you recruit great talent for your firm? Do you ensure there is enough cash in the bank? That “communicate with stakeholders” seems to have become a much fuzzier picture in a number of places. There is a demarcation between the community engagement/social interaction requirements met by the employee running a company’s social media campaign, and the major move, overall direction communication provided by a CEO to the board, on conference calls, to investors, etc. That’s an important difference, and it strikes me that an organization had best delineate between the two forms. They have different formats, styles, and intents – and require a different set of skills. Your CEO may very well be able to do both, but he had better carefully consider the message and audience and choose his tools appropriately. Really great advice, thanks! I would have a slightly nuanced view for the third point. “Makes sure there is always enough cash in the bank to execute the strategy”. Great post. My politics may be different from yours, but if the role of President is like that of CEO… Obama has three strikes. Has he set the overall vision and strategy of the country? FAIL (Far too many “top priorities”.) Has he recruited the best talent? FAIL (Too many academics… and too many life-long politicians.) And has he ensured that there is always enough cash in the bank? FAIL (Just look at the debt and deficit.) i’d say the same is true of Bush and many other recent presidents. the only president that did a good job of this in my lifetime was Clinton If the entire town is burning you have too many top priorities. 
Obama was not dealt this hand; he willingly walked into the fire and said I will work on putting them all out. You’ll get a lot of disagreement on the top talent comment. But, I would start with the fact that he kept Gates on from the Bush era, which was a big win on the retention line, and brought in Paul Volcker, who has been given credit for engineering the US economy out of the stagflation economy of the 1970s and who served for 7 years during the Reagan Administration. Our debt cycle has been growing and growing. At no point has the deficit grown like it did from 2001 to 2008. That is not Obama’s fault and it wasn’t entirely Bush’s fault either. Bush didn’t cause the stock market bubble and 9/11 certainly wasn’t a help for him either. There is a lot of misinformation and predictions based upon incomplete data about what will happen in the next 4 to 6 years, but there is one thing that is certain: if consumer confidence goes up because consumers are less concerned about their house values declining or what will happen if they get sick, then the economy will grow, more people will start companies, and more people will find work. The jury is out on that. In the meantime, GM is going to go public and pay back the Gov’t, TARP is being paid back, and there is less concern about money in the bank today than there was 18 months ago when Obama started. Seems like a victory to me. “At no point has the deficit grown like it did from 2001 to 2008.” Obama will have borrowed more in the first two years of his administration than Bush did in all eight years of his. Before the financial crisis hit, the deficit during the Bush years had peaked (at under $500 billion) in ’04, then steadily declined in ’05, ’06, and ’07 before spiking up again to around the $500 billion range in ’08. Obama’s deficit in his first year in office was $1.4 trillion, and the Obama administration’s own forecasts expect it to be even higher this year. Compared to Obama, Bush was a piker when it comes to deficit spending. I will say this, though, in regard to Bush: the decline in the deficits from ’04 to ’07 was likely due largely to the credit bubble fueling tax receipts, similar to how the surpluses at the end of the Clinton administration were largely due to the tech bubble (even the Clinton administration’s own initial projections hadn’t anticipated a surplus). First, I voted for Bush, twice. I also voted for Obama. But facts are facts. National Debt on Jan 20th, 2001 = $5,728,195,796,181.57 (Bush’s Inauguration). National Debt on Jan 20th, 2009 = $10,626,877,048,913.00 (Obama’s Inauguration). National Debt on Aug 26, 2010 = $13,376,189,739,693.60. Source = http://www.treasurydirect.g… The increase during Bush’s Presidency almost doubled the debt. The point here is that during the last 12 months of Bush’s term, the increase in the debt was 15.6%. The increase in the debt over the last 12 months (Aug 26 ’09 to Aug 26 ’10) was 14.2%. This is still horrible. The burden of the national debt is something that too many people “write off” as being unimportant. While it is ok to carry some debt in the interest of growth, it is not a good lifestyle choice and it is transferring our wealth to other countries. So, I am not saying that the continued spend spend spend is ok, I was simply putting the original comment into context to say that Obama has not Failed Failed Failed. He is dealing with a huge situation. Maybe he is not dealing with it fast enough, but he is dealing with it. Alex, who you voted for is irrelevant to our discussion. 
And you are shifting the terms of the debate. Your initial comment — the one I responded to — referred to the growth of the deficit, not the debt. I refuted it, showing that Obama’s deficits (his first actual deficit and his administration’s projections for the next two) are far larger than the biggest deficits during the Bush years.As far as the federal debt goes, it remains to be seen by how much it will increase during the time Obama is in office, but his own administration’s projections and those of the CBO are not encouraging. Also, it’s worth noting that what matters more is debt as a percentage of GDP, rather than absolute debt (i.e., an increase in government debt is manageable if GDP increases at a faster rate). On this score, things don’t look promising either. I only included who I voted for so as to show that I am not leaning in one direction or the other, you know, transparency.Budget Deficits and Surpluses drive the National Debt, thus they are interlinked.Yes, debt as a percentage of GDP is important, but debt as an absolute number is critical. Countries with a surplus, in other words “wealth” are in a position to positively affect the standard of living of their citizens in ways that countries that are in debt are not. This basic principle is true for countries, companies, and individuals. Debt is a burden.As Mike pointed out, how much of the current deficit is Obama’s fault per se. The jury is out on what Obama’s policies will do from a macro point of view and thus what the impact will be in years 3 4 and beyond. From 2001 to 2009, the decline in the annual deficit was over $600 MM, almost 5x GHW Bush, almost 8x Reagan. It is too early to say what Obama’s legacy will be, perhaps it will exceed Bush’s, the most recent signs show that the situation is getting better in the last 12 months as the debt is starting to grow at a slower pace.Current CBO projection: http://cbo.gov/ “CBO estimates that the federal budget deficit for 2010 will exceed $1.3 trillion—$71 billion below last year’s total and $27 billion lower than the amount that CBO projected in March 2010.” “Budget Deficits and Surpluses drive the National Debt, thus they are interlinked.Fine, but when you switched from deficits to debt, you compared the growth of the debt over 8 years of the Bush administration to a year and a half of the Obama administration — not exactly an apples-to-apples comparison. “the most recent signs show that the situation is getting better in the last 12 months as the debt is starting to grow at a slower pace.Current CBO projection: http://cbo.gov/“CBO estimates that the federal budget deficit for 2010 will exceed $1.3 trillion—$71 billion below last year’s total and $27 billion lower than the amount that CBO projected in March 2010.” Even the Obama administration isn’t claiming that. The White House just upped its 10 year deficit projection by $2 trillion. The reason why the CBO’s projection is lower is that the CBO projection assumes that all of the Bush tax cuts will expire at the end of the year, while the Obama administration has said it wants to extend all the Bush tax cuts except those on the highest earners (even though 75% of the deficit impact of the Bush tax cuts comes from ones the Obama administration wants to extend). I brought debt into the discussion because it is real time data. We can measure the actual debt as of last Thursday and for the prior 12 months. 
I wasn’t trying to debate the future or claims of what will or won’t happen, just the actual data points of what has happened and where we are.My original comments were focused in on the Fail Fail Fail statement that the original commenter put in. I am not trying to debate with you, I am nervous about the future too. I am just tired of reading posts by people that say Obama is Failing. Obama is dealing; could he be dealing with the situation better, perhaps. Could things be worse? Perhaps. We are where we are and the negative jabs just don’t help. We need more inspiration and “can do” attitudes. Alex,You attempted to compare Bush’s deficits invidiously to Obama’s, and I called you out on that. I didn’t engage in any of your broader comments about Obama’s performance.Regarding those deficits, a picture is worth a thousand words (and before you or someone else attempts to dispute this chart because it was prepared by the Heritage Foundation, note that the data for the chart comes from the CBO and the Obama White House’s Office of Management and Budget.) http://blog.heritage.org/wp-content/uploads/oba… Dave, you are mistaken. I was responding to the original comment:”And has he ensured that there is always enough cash in the bank? FAIL (Just look at the debt and deficit.)”My point is that the debt and deficit growth started under Bush and that Obama stepped into the preexisting flood and to date has been able to make a really bad problem a little less bad. I purposely am not using forward looking data because it is often misrepresented by organizations with a political slant, such as the Heritage Foundation.As I stated in my original comment, I wasn’t laying the blame for this situation solely at Bush’s feet either, I was simply stating the fact that Obama hasn’t failed. The future hasn’t happened yet. Whether its Obama or Bush, the Tech Bubble or the Housing Bubble, Greenspan or Bernanke, we have too much debt. Whether you look at it in terms of % of GDP or as an absolute number. That is the fact and the reality is that nobody in a position of power seems to worry enough about it in my opinion. Last comment, we need more energy focused on what to change to make the future better rather than on what the future “will be” based upon unknowable assumptions. Alex,This will be my last comment in this exchange because it’s clear by now that you will not acknowledge that this statement of yours,””At no point has the deficit grown like it did from 2001 to 2008.”Is belied by the facts (that the deficit actually declined from 2004 to 2007; that the deficit in 2009 was more than double the size of the largest Bush-era deficit; and that — by the Obama administration’s own projections — the deficits this year and next will be similarly huge*).And by the way, this,”I purposely am not using forward looking data because it is often misrepresented by organizations with a political slant, such as the Heritage Foundation.”Is a red herring. Taking data from the White House’s OMB and the CBO and putting it in bar chart form isn’t misrepresenting it. Last year’s chart from the Washington Post showed similar deficit projections, using the same OMB and CBO data: http://media-files.gather.c…I would have preferred to use a 2010 version of that chart from the WaPo, but I couldn’t find one.*The Obama administration itself just estimated that its deficit for this fiscal year (which ends September 30th) will be $1.58 trillion. 
And regarding your other comment, asking if I was aware that the Bush administration had some impact on the 2009 deficit, yes I was aware of that. Since the federal government’s fiscal year ends in September every year, but new presidents don’t get inaugurated until Jan 20th, spending that occurs between September 30th and when they get inaugurated counts toward their first year’s fiscal deficit (or surplus). Shortly after Obama came into office though, he passed the largest fiscal stimulus in history, which significantly added to the 2009 fiscal deficit. Dave, the data is clear. Analyze it yourself. The National debt grew during the last 12 months of the Bush administration by more than it has in the last 12 months ending August 26, 2010. That means the current budget deficit is lower than it was when Obama came into office and was dealing with the last Bush budget. I would rather get the raw data and do the analysis instead of taking prepared graphics by organizations with a political agenda. Have a great day. Not necessarily a fan of either, but I’m curious as to how you can arrive at this conclusion when it’s been widely reported that this is not the case. For example: http://www.nytimes.com/2009… I hope this doesn’t sound condescending. I’m generally curious what the evidence is that the bulk of the current deficit is indeed Obama’s fault. Does that include extending Bush era policies? Mike, you are responding to things I didn’t write. Please feel free to read my comment again and respond to what I wrote there. If you have a comment about what I actually wrote, I’ll be happy to engage you in a discussion about it. Perhaps “fault” was the wrong word choice, as I may have assumed you were saying something other than what you intended. This is what I was responding to: “Compared to Obama, Bush was a piker when it comes to deficit spending.” The sentence you quoted just means that Obama has been a much bigger deficit spender so far than Bush, which has been the case. I completely understood what you meant. My original comment was intended to point out that this is a mistaken assumption, as the article I linked to pointed out. That Obama’s deficits (his first actual one, and his own projections for the next two) have been far higher than Bush’s deficits isn’t a “mistaken assumption”. It’s a fact. Dave, do you realize that the FY 2009 budget was prepared by George Bush’s team and that the first year of Obama’s term was determined in part by the existing budgetary plan that Bush put in place? http://www.america.gov/st/u… As close to a perfect definition as it gets. Fred, can you throw more light on how you attract and retain the best talent in one of your MBA Mondays? This is always a very real issue for startups. i might need a guest post since that is the CEO’s job, not mine. i have a person in mind for it You have to be unafraid to hire folks who are better than you. Coldly, unabashedly unafraid. Like an assassin. You have to step outside yourself. You will ultimately love yourself for doing it because you will have no work to do, if you do it right. That is a great pleasure and luxury. Recently I have hired three West Point MBAs (one was a Marine officer) who had just gotten out of the military because they are just worn out from the constant deployments, stress and, oh yeah, combat. These guys are so good, it is scary. 
They like any job which does not include incoming mortar fire.I am in the process of hiring three more to fill out my management team.You have to look in funny places. You have to develop a world class interview technique and to work the deal. Hard. Totally agree: We always need to ADD to and EXPAND our (both our own and our team’s) base of knowledge, experience, skills and capabilities; not to dilute it which would be the outcome if we were to create a team of carbon-copies of our self or of “ghosts” (less-capable) of our self.There is always something new to be learned and we cannot ever afford to be so egotistical as to believe “we are the smartest and brightest” without, in that very process, actually proving to others that in fact we are not so smart. On the contrary, you prove your own “smarts” by hiring smarter and better-capable people than yourself in relative terms for specific responsibilities.It is also perhaps ironic that ex-military officer’s such as JLM describes more often than not can be the most effective team-players and business people you could hire because few else in today’s world have had to actually perform disciplined real-time people, resources and logistics management under such demanding, real-world situations: A great resource upon which to draw. As a company grows, only corollary I would add to the description is that the CEO should start focusing on hiring talent that can attract talent themselves for their respective teams. At that point the CEO becomes a closer for a lot of hires rather than the primary magnet.Abbas. I’d also add preparing the company for his departure. Hi Fernando – Wouldn’t that be covered by #2 and #3 Some of your posts should be framed Fred, seriously.tho I’ll have to agree with some commenters that you ought to do a bit more than that when your startup is really only you and 2 other guys… I think the term “CEO” should not apply at that stage. Just be a founder or co-founder. Anoint yourself as CEO once there is an actual company to run I think CEO = Leader. Leader means come follow me. That applies with 2 or 200 people. You just have different challenges at different sizes of companies which often means that the CEO for a company of 2 will be a different person then the CEO for a company of 200 … even if it is the same company. I had two clients who were founders in the same investment portfolio and would not take the title CEO — but rather president. Thought that was interesting at the time, but knowing what I now know, I understand.The biggest challenge was that they also wouldn’t hire “Vice Presidents” which gave me the difficult job of convincing people (1) to move to a startup and (2) to go from VP to Director title.What did work well though is that when it was time to hire a CEO, the founder could still remain as President. Pretty damn………………………………………………………………………..shrewd! That is really, clearly and thoughtfully thinking ahead ~ as well as rather unusual because choosing to accept such roles from the outset flies in the face of the more common ego-driven pattern. This longer range planning approach to staffing beautifully enables founders as co-presidents to drive the start-up in the direction of their vision without compromising their own relative positions in the future structure of the business as it develops. 
It effectively keeps future staffing options open even for them by keeping the upper and relative position staffing doors open for bringing in the brightest best-fit candidates at the appropriate time; one of the founders in that time frame quite possibly then being found to be the fittest candidate for CEO but neither losing out if neither is such a fit in the long run since they would otherwise remain in a strong position of influence in their own company. something went wrong with disqus and I’ve only seen your answer one week later… go figure. thanks for your input, I thought the same when I wrote my answer. Cheers. yes, i agree with them too Maybe we could add a temporary / interim duty … clean up the mess made by the previous CEO. Put the cart right. Great definition. It was when my focus shifted to these three areas that I changed my title from Founder to CEO. There was actually a specific situation that caused me to think about all three of these items and forced me to make a tough, but necessary, decision. It made me realize that I’m doing more than building a product… I’m running a business. such an important realization Doesn’t a CEO — particularly in a start-up — also need to be able to sell (to occasionally help close deals with large corporate clients, and to pitch investors if the company needs to raise additional funds)? Yes and yes, I would say Dave. But not everyone can do everything well and it’s super difficult to build the product and raise funds at the same time. Friends who are start-up CEOs with DNA on the product side all invested early in a smart financial person who can take the lead in raising the funds, modeling and, often, operations. There’s a new breed of financial folks who can bridge financial, HR and ops. Indeed. I thought the title was Cash Extraction Officer. 🙂 That is part of ensuring there is enough cash in the bank. True… but I hope most see it that way and don’t just think about raising capital. A CEO has to be deeply involved in sales in most startups, imho. Thanks for your post. That’s the first time I’ve come to your website and I found it great! Useful 25-year-old advice, which had probably been passed down to that board member by someone else who had gotten it from someone else, and so on. Nice reminder that good ideas are timeless and that there are a core set of tools which anyone in business needs to be aware of and conversant in. That has been the basic theme of MBA Mondays and that while the tools we are using today are new, shiny and fun, that underlying everything are some core concepts and principles. Forget them or struggle to re-invent them at your own peril. great way to think about MBA Mondays John. thanks for putting it that way A succinct mnemonic – definitely something I’ll keep in my own hip pocket! Would love to see someone take a shot at a similar definition for CMOs — perhaps the second most beleaguered position after CEO. Agreed, but only after a company gets to a point of say 10+ employees. Until then your hands are in a lot of baskets. My translation: a CEO is a master salesman: 1) selling a product that cannot be made, selling a service that cannot be provided (a vision is something that has to be described and sold to business owners before it is ever realized); 2) selling the opportunity of taking part in a legendary business to the brightest and most fanatic team; 3) selling the product/service and everything else, including shares in the business, to keep capital flowing into the business. 
I think this definition of CEO applies to a growing company – that I can’t define the stage accurately but may be one that has already passed break-even point or has got a “cash engine” in place.Early stage CEO is quite different and it could be that CEO as explained in this post might not be required. Early stage CEO takes care of validating business model and getting customers on board. While early stage CTO takes care of having best talent (this applies in technology company). I am very interested in the talent question too. It seems to me to be by far the hardest one. In order of difficulty, I’d say talent>money>vision. Perhaps because founders learn the skills in the order vision first, followed by money hunting, followed by talent hunting.I am particularly struck by the difficulty of hiring into the very fluid and hard-to-define “pre PMF” roles. Tech is somewhat manageable. But things like customer discovery require a very curious mix of marketing/sales savvy, experimental personality, and a swiss-army-knife ability to do all sorts of new-fangled things like SEM, A/B testing etc.I’ve found the following to be true except in VERY rare cases:1. “Classical” marketing and sales people just aren’t experimental enough or creative enough in looking for “fit” hypotheses. This is exacerbated by the fact that they don’t really understand tech in depth, or get any of the tech-fueled methods like SEM, online A/B tests with landing pages and AdWords etc. To be constantly creating and tweaking fit hypotheses, you need a depth of business/product knowledge that these people typically lack. You also need an element of product visioning capability, since you are dealing with a minimum viable product and must be able to talk “complete vision” with users.2. OTOH, the SEM/landing page/AdWords hackers usually write terrible copy from a human perspective, don’t understand classic positioning theory at all, and often get face-to-face human psychology wrong.3. People who write good copy don’t get Web analytics, and vice versa.This makes it VERY hard to find single multi-talented people who can really drive CD. You end up having to either a) doing it yourself (i.e. it is not easily delegate-able) b) hiring multi-person teams, which pushes up burn rate (even if you get 2-3 parttimers, the team comm overheads add to costs).The problem is relatively easier in enterprise, where there are plenty of experienced enterprise sales people, for whom every big contract is an exercise in CD (many consulting company veterans are good at this). But for consumer/SMB the talent is hard to find.This problem is severe enough that I’d say customer development is also one of the functions of at least a pre-PMF CEO. The talent is just too thin on the ground.For the record, I am not a CEO. I am an EiR at a big company, but I find myself dealing with exactly these problems. I still think these are trainable qualities- It’s like yoga- you can train people to be flexible, it is just painful at first.I’m also in JLM’s camp, not every business will need every one of these qualities to succeed- choose wisely what methods (and I think that’s the utmost form of creativity) Got to be the best salesman thew company has. period. At loeast in a start-up but I guess CEO does not mean much that early. So Howie, how’s dem speed typin’ classes going ? Perhaps it is implicit in the definition of stakeholder, but the care and feeding of the board of directors is a huge component of the CEO’s job that shouldn’t be minimized. 
I agree with these three, but highly suggest a fourth: CEO’s also need to sell. Hmmm. I wrote a somewhat similar post a while back. More often than not these things seem pretty obvious but they do end up become hard to learn lessons. Spot on advice, the vision/strategy made me think of Joel Barker’s quote: “Vision without action is merely a dream. Action without vision just passes the time. Vision with action can change the world.”I’m always intrigued on historical advice which is still very relevant to our times no matter the worldly changes that has happened in-between.Thanks Fred – Great quote. Insightful. Thought provoking. Very good sum up. But I think the smaller company is, the more time CEO should do things and less things you described in this post. But never forget about them! Thanks. This is kind of off topic — but have you thought about turning all these MBA Monday blog posts into an actual book? I’m guessing, among the AVC community, everyone could pitch in and merge the blog post & comments to come up with one fantastic article for a physical (or electronic) book on each topic. And, knowing you, I’m guessing you’d donate all the proceeds of the book in any case — which would be a fantastic thing. I’d be willing to spend some time helping out if all the proceeds of the eventual book went to charity.Give it some thought 🙂 with regards to the recruiting piece — hire people who are smarter than you. don’t be afraid. Not surprised at the 3 cores (vision & strategy, talents and finance) of a CEO, unfortunately as a start-up entrepreneur, I don’t have the luxury of not doing the execution part myself… but you never know things could change quickly… Nice post, Fred. One thing I would add from the perspective of an employee … when evaluating a CEO to work for, his/her ability to determine if a strategic direction is likely to succeed or fail before the outcome itself. And the ability to pivot direction without being emotionally connected to the original direction. This may be a technique to your 3rd point of preserving cash. Great post and I agree with the overall job description. I would add one tweak.” A CEO has passion for the vision of the company”Managing the CF,creating and executing near term goals and recruiting talent are functions of a job most executives with a general knowledge of business are capable of.Passion is something you have our you do not. I think passion added to being able to execute the basic functions of a CEO is what helps drive companies over the finish line. Is the CEO responsible for the company’s execution?With your three roles, the CEO has set the stage for success, but who is on the hook when it doesn’t pan out as planned? The CEO is on the hook, but is still only as good as those he delegates work to — which is why attracting top talent is so crucial to their job duties. Agreed. If the team is right, and everyone is executing to their fullest without success, than it comes back around to the CEO to steer the company in the right direction.At the same time, I don’t think the failure of a company necessarily means the failure of the CEO, what do you think? I don’t think the failure of a company always means the failure of the CEO — but the CEO unfortunately bears the brunt of the blame in most cases anyway. Good CEO’s take the blame even when it’s not their fault — shielding your employees is part of attracting top talent. If the CEO just blamed everyone else for all problems that arise, no one would want to work for them. 
Never, ever take counsel of your fears. They will find you and, if strong enough, will kill you.Never, ever be afraid of failure. It will find you and, if strong enough, will kill you.If you give everything you have to an enterprise and you are brave enough to be tested and stand naked to be judged by the results, then you will ultimately emerge stronger even when you are lying in your own blood. What does not kill you, strengthens you.Life is a campaign, not a single battle and you should not be afraid to retreat and then attack or to attack and then retreat. When the opening presents itself, go in for the kill.There is no greater fun than saying — I voluntarily allowed myself to be tested to my limits and found out they were far greater than I ever thought. We are often our own greatest limitations. Good CEOs attract talent (athletes) and then train them for the position that needs to be played and then makes a team of the whole lot of them and then coaches up the team and then executes the game plan.Attracting talent is just the first step. Carly Fiorina had some interesting comments on the deltas between running a company and running the country, not that she knew much about either, but her comments did spark some interesting conversations about CEO skill sets. Hi, Fred.I have long done a presentation called “Francis’s Favorite PR Fictions,” in which I debunk some of the myths surrounding public relations. The subtitle is “Everything I know that’s wrong about PR I learned from technology company executives.”Where am I going with this?All my favorite fictions are direct quotes from C-level technology company executives. One, from a CEO, is, “I do all my own PR.”In keeping with your what-a-CEO-does philosophy, if “doing all my own PR” is part of a CEO’s personal job description, then that guy is not doing a CEO’s job. In fact, when I present, I tell the audience that if they own stock in a public company whose CEO says something like that (And the CEO who said that to me headed a publicly traded company!), then run, don’t walk, to your broker to place the sell order.My point is, if a CEO is so distracted from the strategic vision-setting and the two other critical functions you list as to be mucking about in the tactical implementation of PR and messaging, then that company is probably doomed.-Francis. Obviously having been a CEO for over 30 years, I have had ample time to think about what it takes to be a “good” CEO.I agree with what Fred says — completely — from the perspective from whence he speaks as a Boardmember and an investor. It is a however a bit formulaic and simplistic — not in a “bad” way but in an Executive Summary kind of way.I have been a CEO of fairly large private and small public companies. In multiple industries. Companies which were stable and ones which were growing and ones which were terminal. I have started companies, bought companies and sold companies. I have tickled companies out of the bassinet and I have drowned a few in the bathtub.I write to you not as someone who is “observing” but rather as one who is a practitioner. Who has made just about every mistake possible but has made it to the pay window enough times to make it all worthwhile and who has found a few ideas that work and enhance the probability of making it to the pay window again. I have hit a few good licks.The CEO has to be a leader in the context of leaders being folks who get organizations to places they would never get by themselves. He has to want to be a leader. 
He has to be comfortable being a leader — or able to deal with the attendant discomfort gracefully. He has to be able to say — “I am responsible for EVERYTHING that happens or fails to happen at this company.” And then make it so and believable and live it. Let’s not bullshit each other — the most important characteristic of any CEO is to be a moneymaker. It is not particularly dignified to say it that bluntly but you can sugar coat whatever you want and it all comes down to “can you ring the bell.” The CEO has to set the tone of everything, big and small, by the values he projects, by the way he conducts himself and by the way he deals with everyone and everything. The CEO has to go to the trouble to formulate these values in a concrete way. I have a little booklet which I have developed over 30 years which quantifies the values I am looking for and I personally give it to every new employee on their first day of employment. The CEO has to set a vision for the company. Commit it to writing. Argue for its attainability — even when it is seemingly nuts. This is the FIRST sale that any company has to make. It has to sell the idea for its own existence. The CEO has to be a good thinker, a better writer and a powerful communicator. When a CEO gets done communicating — you have to believe you really could bite the ass off a bear. Guess what, you could! The CEO has to transform the idealistic vision into bite-sized specific objectives which are SMART — specific, measurable, attainable, realistic and temporal. How do you eat an elephant? One bite at a time — the CEO has to hack off the first bites for the rest of the executive staff. The CEO has to organize the functions of the company to provide a logical way of accomplishing the vision both through the objectives noted above but also by talking folks through their respective duties. By coaching his subordinates to execute at a high level of competence and excellence even — especially — when the fledgling efforts are terrible. This is particularly true in new companies. The CEO has to be a trainer able to train folks — different from coaching, which is primarily mental — to discharge their duties. When you hire good experienced folks then all the CEO has to do is document what is going to be done rather than training them as to what to do. The CEO has to be a disciplinarian and has to invoke accountability within the organization — not a raised voice stinging kind of discipline but rather a “you have to eat your vegetables” kind of discipline that says not only is this a good way to do things, you have to do it this way because we are in a competition to win. Discipline means — in it, to win it. The CEO has to recruit, inspire and drive talent to levels of personal and team performance that exceed the expectations of the practitioners themselves. He has to remember that there are cycles of energy and that you have to find the right time to make things come together, particularly when dealing with a team and team building. You do not produce Rangers in a long weekend and a company offsite is not going to transform your company from good to great. You have to be patient and painstaking. You have to be wily and canny and sly. The CEO has to be able to reduce what the company does to logical processes (SOx 404 type process thinking) that can be documented, understood, streamlined, lubricated and improved. 
But you have to start w/ identifying and documenting the most important processes. The CEO has to force the company to be “customer centric” and focused completely on from whence the cash flows. You cannot tolerate a single disparaging comment about the customers. Most importantly — the CEO has to face down risk with a steady gaze which inspires confidence amongst the executive team. “Well, if he isn’t scared, then I guess I shouldn’t be scared.” All of these things are subsumed in Fred’s comments; there is no original or opposing thought intended to be expressed but like everything else — the Devil is in the details. Another great blog/comment, so good that I re-blogged it for my own future reference. Thx JLM Above all, a CEO *MUST* lead by example. David, you are my new hero and exemplar. To be able to live in Italy and speak English and Italian is what I want to do. I just spent a couple of weeks in Rome, Amalfi, Positano, Nocello, Novella, Siena (saw the Palio, wow, and had dinner w/ the winning contrada (Tortugas) the night before the race), Assisi, San Gimignano, Firenze! You are the king! Yes, JLM, Italy can certainly have an enchanting effect on people who appreciate beauty, fine things and passion. Unfortunately, the country is in a constant, and seemingly irredeemable, state of chaos. Many Italians express genuine amazement that anyone would actually choose to live here. And in an unlikely, and extremely tenuous, segue back to today’s topic, it would appear that Italy is like an eternal startup: where pandemonium masks, or even facilitates, innovation. Orson Welles captured this notion in a speech from the film The Third Man: In Italy, for thirty years under the Borgias, they had warfare, terror, murder and bloodshed, but they produced Michelangelo, Leonardo da Vinci and the Renaissance. In Switzerland, they had brotherly love, they had five hundred years of democracy and peace – and what did that produce? The cuckoo clock. Even more paradoxically, the state of Italy has never had a strong CEO, à la Churchill, Roosevelt or De Gaulle. And yet it continues to lead the world in many industries. If a country can succeed without a good CEO, perhaps a company can as well… As regards your trip, knowing what I do of you from your comments, I’m sure you were captivated by the spectacle of the Palio – a frenetic and pulsating bareback horse race in a beveled cobblestone piazza the size of half a football field. The setting and the race itself have not changed at all in the last 350 years… As for being the king – thanks very much, but we all know that title applies to one man only: Elvis! So Berlusconi… doesn’t lead by example? :-) Good point! The answer goes to the very heart of the problem – and it is too long to explain here in detail. In extreme synthesis: Silvio Berlusconi came from a modest background and became the richest man in Italy by virtue of hard work, a good business brain, and (crucially) a great political mind. There is a general consensus on his having these three attributes – but, depending on who you speak to, the relevance of each will vary enormously. He is undoubtedly a very talented man. But just as a poacher feels uncomfortable in his new role as gamekeeper, Berlusconi finds it hard to dismantle the very apparatus which brought him his own success. In short: yes, he leads by example – but it’s not a very good one. I can’t really believe that you ended up talking about Italy! 
;DI really appreciate to listen to your external point of view.Italy has many problems, we know.But Italy is a place you can’t resist to love – and its language too -, especially when it’s the place you were born.That’s our vanity: another Italian problem.Thank you David, and thank you JLM. It is a well earned vanity. I look at the Coliseum and try to figure out, as an engineer, how such a magnificent structure could have been built. Hell, they had naval warfare re-enactments and we can’t keep the Boston Garden floor from sweating.Of course being a Christian, it was a bit of a downer standing above where the Christians used to consort w/ the lions.Are there any bad restaurants in Rome? The first time I saw the Coliseum, I really freaked out.If any one structure can invoke the concept of ‘ history’ it is the Coliseum. You stare at the very place where 2000 years ago Christians were fed to the lions; where the emperors controlled a man’s fate based purely on the position of their thumb.Before I saw the Coliseum, history was a nebulous concept, afterwards it was real.The Coliseum taught me that ours is but a brief (and potentially unremarkable) chapter in a very, very long novel. The other thing that is cool about the Coliseum is that the Popes robbed (recycled) all the marble for St Peter’s from the Coliseum. It is cool to think that the Coliseum was actually completely covered w/ marble. They were some thrifty fellas or they wanted to make it clear that the Church was more powerful than the Romans.I really dug the Tomb of the Unknown Soldier. What a monument. JLM, I have never seen this site but I am so very impressed by the level of the comments. I would love to visit Italy myself and I am fascinated with ancient rome. I am reading Plutarch now.I reget that the reason I am detracting from this discussion is to make a bad pun. I love that you just said you “dug” the tomb of the unknown soldier. I had to point that out! Many parts of the Coliseum were ‘recycled’ by many parties over the centuries – such is life..My favourite Coliseum trivia regards their capacity to rapidly flood the arena — this was before the underground tunnels were added — to allow the recreation of famous sea battles.Those Roman engineers had a pretty ‘can do’ attitude which would still go down well today. Hearing your words makes me feel lucky, because I can see some of this beauty every time I wake up. I really thank you for your fascinated words.This makes me forget how we have just 100M invested per year in VC+Private Equity, as whole Country. This can’t be solved by beauty, which can only be a palliative. At the same time, beauty itself is the reason which forces you to stay. Don’t you feel the trap?If you look for a seed funding, the best you can hope here is 0.1M, and you’ve really been lucky. If you all love Italy so much, how that I’ve never seen an American doing VC here?All is difficult here on this side. We completely lack an ecosystem; you can always try to startup here, but you’ll have to do it with one tenth of the resources and a mother language that’s spoken by just a few million people.Well, let’s look at the best side: this teaches us how to manage scarcity (gh).Trying to startup in Italy is difficult, especially at 21. Difficult but full of aesthetic, I’d say.Andrea Giannangelo In order to judge that can-do-attitude you have to consider that the ceo/emperor had their heads on the line, literally! Lots to love about Italy. Most of all, Italian people.I’ve never not had an excellent vacation in Italy. 
Italy v Switzerland —- priceless comment and brilliant! Great summary, David.Welcome back, JLM – sounds like you’ve had a great break. Well done!I have spent years working with sales and technical teams across most of Europe – and Israel/USA. The variety of cultural and operational approaches to business and styles of leadership never fails to intrigue.Understanding how best to interact with a variety of people in a business – and social – context is a great challenge when working in Europe. One day meeting with a team in Germany and the next in Italy certainly ensured one never forgot the wonderful variety of people and how one had to learn to understand their psyche to get the best results from them. Empathy is developed in many different ways and is especially challenging to nurture when meetings are collaborative across many different regions.Being a ‘neutral’ Brit helps, and having to liaise with so many different cultures was/is a perpetual delight.Oh, and the post-business meetings al fresco evening meals in Rome were always some of the best! Vienna is always a close second, though 😉 I’ll take tips on restaurants and things to do in Vienna Carl..never been, going in Oct. Cool. Will drop you a personal message later in the week, Arnold.You’ll love it. I was just in Vienna.I come from a hard-core Austro-Hungarian fam and would say don’t miss the coffeehouse Demel, purported inventor of Sachertorte.In fact, you must plan your meals around coffee and cake, which here is an Olympic-calibre event.Reco’s include Kaiserschmarren, Apfelstrudel, Doboschtorte, Rigo Jancsi (a dessert with a priceless backstory). The list goes on. Although I found the Demel’s sachertorte in fact a bit dry.The cafe on the second floor of Julius Meinl on Graben around the corner from Stephansdom has nice cushy sofas, a great view and they make a superb light wienerschnitzel. Tereza: You got me at “Apfelstrudel” and “wienerschnitzel”; I’m sending for your tour guide. Agree on that. I’m Italian and left 7 years ago, mainly because I found impossible to work there. I spent time in Brazil, Singapore and now live in China. When I go back to Italy, I’m amazed about how little changes, and how everything seems frozen in time. I compare it with my experience in China and other Asian countries, where my teams seemingly work 24/7 and are full of passion and a desire to improve their situation.The biggest problem of Italy is most likely a lack of drive, which stems most likely that the country is so old and has such a storied past. I often think that, in the same way, the strength of the USA is that, not having much of a past, the country can look forward.Naturally, it’s also about the Italians. I will add another quote. It’s from a thoroughly despicable character, but he got this one right: “To govern Italians is not difficult: it’s useless” – Benito Mussolini Nocello!!!The walk from Positano up the terraced mountainside covered in Anglianica vines to the cliffs of Nocello, is one of my favorites.Rivaled only by the walk from Revello down to Amalfi.Lucky you! Hi Arnold,Since I know you are (or were) planning a trip to Italy, I just wanted to give some advice on when to come. It’s too late for Fred (July – worst month) or JLM (August -second worst).Mid June is the time to come. The beaches are open, and there is wonderful aura of anticipation in the air..Most of all it’s not too hot or muggy. 
In July and August the Italians abandon their cities and leave the tourists to queue up outside the museums in the stifling heat and humidity. The seaside locations are packed with tourists and the aforementioned Italians. Chaos! September is okay – beautiful clear skies – but everywhere you go people seem tired after the long summer season. June!!! Hi David… Sage advice… thnx. Unfortunately, with a new biz, I was unable to go this summer so missed being put into the Fred or JLM wrong-month camp ;) In Austria in October to talk with wine bloggers about social media, and I’m guessing a sweater might be advisable! But next June I am planning on being in Italy, thinking about renting a house in Lucca possibly… and if so, my long-threatened visit to Milan to knock on your door may just happen. The pleasure will be all mine, Arnold. Knock away! rome was steaming hot in July, but we did get to watch two world cup semis and the final in a cool bar in the center of rome, drinking peroni and listening to italians shouting. that was memorable. You should have seen Spain then! By the way, who were they supporting? Our neighbor’s daughter has a wedding there in September 2011. Hoping like anything we can make it. Wonder if we can get her to change it to June. 🙂 September is fine, especially for a wedding. Not very many Americans in Italy. I’ll remember that, David. FYI, great overview of Italian poli-sci in history. “Il Sentiero degli Dei” — trail of the gods. Hiked from Nocelle to the Convent of San Domenico (built in the 1500s) above Praiano and then down 1,000 steps to the road below. I did not make it to Ravello on the trail. What a hike! Literally almost killed me. It was about 100F. I loved Ravello and the Villa Cimbrone. I love Italy. You and I both, JLM. My promise to myself is to get back at least once a year! i want to go to Siena, see the Palio, and eat with the winner the night BEFORE the race. The Palio was unbelievable and the pageantry was out of this world. I had the great fortune of viewing the race from a Palazzo balcony just above the start/finish line. It makes the Super Bowl and the Final 4 and the World Series look like a tea party. We had eaten in a square w/ the Tortuga contrada at a dinner for 2,000 in which the young girls served, the young men sang and chanted, and the adults got rip-roaring drunk. The horse spent the night in a church guarded by the young men of the contrada (neighborhood). I have been to some pretty cool things in my life, but this was just over the top. Each contrada has a “friendly” contrada and some “enemy” contrade who play tricks on each other. Supposedly the winning jockey (who had won 2 in the last 5 years for the Tortugas) won 300,000 euros. Not a bad day’s work. Eat with the winner the night before the race — isn’t that the best summary of what you do for a living? 😉 I know yesterday everyone was discussing features they would like to see Disqus have. I would like one that alerts me when JLM has spoken. We had a “follow” feature that was unimaginative. It was used but not to the degree we wanted. We slowly kept hiding it until it’s basically gone. Now we’re going to bring it back and make it useful. “Liked” this comment… but wish there was a way to “Love” or “LOVE” in all caps. Great to hear from a practitioner – fabulous thoughts JLM, as always. ditto. but how would you make it hard to click on a love button?
one option is to limit the amount of love you can give out in a given period. you can give out as much like as you want, but your love you’d have to guard more closely. I imagine JLM would end up as one of the most loved commenters on AVC. maybe you have to click it ten times or something annoying like that. A treatise on business leadership to hold on to, JLM. Thanks! this comment is possibly better than any blog post written today. i reblogged the money quote on fredwilson.vc Can you train people to leadership, JLM – to these qualities? Yes. Anybody can be trained to lead, though some folks have a more natural talent for it than others. Almost every characteristic of a leader is a programmed response. The challenge is whether the response comes naturally — under pressure — or simply has to be learned, even if learned by rote. I wish I could tell you some of my favorite leadership stories, but they would take up too much space. Well how on earth do we MAKE the space???!!! Somebody needs to brand you! (Not like cattle, like marketing.) We should just stick all his comments into a Tumblog. Pity, that would be interesting. Thanks, JLM! Just saved your comment & the post to my computer – truly words to remember & aspire to (I would argue much is applicable even to those serving in mgmt below the CEO level). Would you be willing to share the booklet of values you’ve developed (or even just a subset / high-level summary)? Regardless, thanks again for taking the time. Send me a snail mail address and I will send you one. Thanks, JLM! I’d be happy to. I’m pretty new here & don’t see an email on your profile. If you don’t mind emailing me at I’d be happy to shoot you a snail mail address. Thanks again, Greg Hello JLM, I am sure many readers will be interested in reviewing the “booklet”. I know I am. What is the best way to get a copy? Thanks, Great post. Truly. Going to the tattoo shop now, j/k ;). I would add one more task for the CEO: the overall tone and feel of the office(s). You can usually tell within a few minutes the general tone of a company. I’ve visited companies where most everyone is in deep fright and has nothing to say. The atmosphere comes from the top, whether it’s good or bad. This is fatuous advice. The third item in the list is “make sure there is always enough cash in the bank”. That is just about everything. Making money and staying solvent is what a corporation is about. The advice is as simplistic as saying all a coach has to do is win. Besides, you left out the most important thing a CEO actually does: collude with the board to be paid 100s of times what the average employee makes and guarantee yourself a gigantic golden parachute should you fail. I don’t necessarily disagree with you, but let’s be careful here. Fred boiled down three key responsibilities to a short post. Almost by definition, a short post has to be, at some level, simplistic. You can’t caveat, or provide too many boundary conditions. Let me try an analogy here. Wellington in Spain ordered a retreat when things were going well against the French. His men were demoralized, but they retreated, only to find a garrison stocked with provisions. They realized that Wellington had planned for their retreat several months in advance, and placed it about where it would prove crucial. A historian could write a book about just Wellington’s time in Spain.
But in one ‘graf, the lesson is about long-term planning and securing resources in time for their use. (IMHO) too many entrepreneurs react too slowly to cash concerns, and turn a molehill into a mountain. A good CEO anticipates the crisis and has a solution ready to go. founder CEOs don’t do that. but they own so much of the company that they’d effectively be paying themselves. This sounds like John Madden’s famous commentary, “If this team wants to win, it better score some more points.” If being a CEO is like being a general in an army, how do these three things correspond? Machiavelli said: For as those who make maps of countries place themselves low down in the plains to study the character of mountains and elevated lands, and place themselves high up on the mountain to get a better view of the plains, so in like manner to understand the people a man should be a Prince, and to have a clear notion of Princes he should belong to the People. So, to get a clear picture of what a CEO should be, ask a worker bee – and as a worker bee, I can say the topmost quality a CEO must have is an ego that’s bigger than that of all his investors combined. Very specific, to the point and precise, but it takes many years to get it right, especially for the CEO of a startup when your hands are in every pot. You want to delegate, but you find yourself doing so much to ensure success. Little do you know that you could be more successful and have more time to concentrate on the business if you just trusted and handed things off to your teammates. Only someone who has been a CEO should ever write on this topic. It is often the case that observers think they know by watching, but only those that do really know. Those that do know that while those three things are core to a CEO’s job, in fact the CEO needs to do everything, but at different times during the evolution of the company, and never everything all the time. I know I am late; I think the job of the CEO is to be the parent – that quote is all about nurturing and protecting the company through any possible skillset. JLM, As a constant reader of Fred’s blog, I would also be very interested in seeing one of your booklets. It strikes me as a fascinating read on your first day of Thank you very much. I’ll need this. There are so many successful-CEO comments here which are inspiring… on how to be a successful CEO. Here is mine on how not to be a failed CEO. I always tell my friend, “I know what i should not do to be a CEO… still learning what to do to be a good CEO”. As a failed CEO I consider the CASH part more important than anything else before revenues. Keep your burn rate minimal until you reach break-even. If you have cash u get talent, u get confidence, u live the life of a leader… for that matter you can speak for yourself, or else money makes you think/speak/act differently. Every bloody morning you wake with the dreaded dream of money… the growing debt/loan and growing burn rate… that kills the innovator in you, the leader in you, the speaker in you and finally the CEO in you. After 3 years, when my company could not take off on revenues… it finally boiled down to selling the code to settle the debts/loan and close. Make sure you have enough cash until the revenue stream starts flowing. Cheers and good luck. My father ran a billion-dollar-plus company till he retired some 20 years ago. He would have people wanting to negotiate with him.
He would constantly refer them back to his team of VPs, each of whom followed the same approach down through their organizations. Having enough money was never an issue, and this was in the days when said company had a couple of Gulfstreams and several JetRangers, and bought operating equipment in chunks of $200M. Dad was more focused on spinning off local businesses to keep the community happy. As a CEO, this is the best single piece of advice I have ever received from a Board Member, but as I have told Fred over the years, I have become convinced he has missed one: you must manage your Board effectively. Would love to know who the experienced VC board member was. Do you remember? hi dorothy. i do not remember I used to think that leadership was about (a) setting direction and (b) eliminating obstacles. But that was when I worked in a big company and had a world of in-house talent and got my money from an annual budget request process. When I started my company, entrepreneurs gave me three pieces of advice, all true. 1. Money is a full-time job. Until you earn it, you have to raise it 100% of the time. And when you earn it, focus on cash flow 100% of the time. 2. The most important thing is attracting the best possible team. Recruiting should be way more than 50% of your time. 3. Keeping clients happy is the only way to succeed. Be 100% focused on creating and delivering value to your clients. In my big-company life there were virtually no consequences to doing any of these poorly. In my startup, failure at any one is a potentially deadly problem. And there is a rank order. With cash you can hire well (inspiration alone doesn’t butter the bread of a strong developer); with good people you can understand customer needs and build products to suit. High level and direct. Thoughts on the job of the CEO growing a successful, small niche business in this economy? Woman-owned, serving corporate clients and hiring. Proofread, please. When offering advice it’s hard to take note when distracted by typos. Thanks. i’d try to drop this attitude if i were you. look for gold anywhere you can find it. the more senior and successful people you begin to interact with, the more you will learn this isn’t as important as they tell you it is in school. obviously external communication must be nailed, but replies to emails from junior people, random blog comments, etc. don’t need to be scrutinized to this extent. when you come across them, please let me know and i’ll fix them As I learned the hard way before selling one of my businesses in 2007… delegate or die! Thank God I learned it a few years before selling, so my sale went better and I was more relaxed. Good luck! [email protected] FOLLOW-ON QUESTION FOR MBA MONDAYS: Is there a rule of thumb for how much cash is “enough cash in the bank”? at least six months of burn A CEO should always lead from the front. He may not be able to hire the best talent because he does not have enough funds, but he should be able to drive above-average performance from average employees. He should be able to motivate employees. He should be able to differentiate between vision and illusion. For tech startups specifically, I think the CEO, or at least a founder/co-founder, needs to own the product as well. Vision and strategy is closest to what I’m describing, but it’s not the same. I think startups need a super-high-level person sweating the details, eating the dog food, getting customer feedback, testing, iterating, and so on. I don’t think you can hire someone else to do this or outsource it either.
(If you could, why wouldn’t that person start their own company?) for founding CEOs, that is the #1 thing we look for. it becomes a bit less important over time, but remains critical forever. I think it takes a lot of work and a commitment to self-improvement. It is a very hard job. It is lonely. And it requires discipline and decisiveness. Most of these traits can be learned. But who do you learn them from? So I encourage most of the CEOs I work with to get mentors or coaches (or both). I have seen this work so well for so many people. You might ask, “what can a coach or a mentor really help me with?” Very good advice – however, setting and communicating the strategy is not enough – he or she must LEAD the execution of the strategy or it will fail. That advice was so very simple and so very profound that it is unfortunately lost on so very many. Every position has core responsibilities, and within each position on the org chart, I’d bet that the list is never greater than three core responsibilities for any position. Great post… full of “pith”! But ~one thing missing. For the CEO “the buck stops here”… so the CEO also has to make sure that what’s supposed to be getting done IS getting done and any problems are getting solved. I’d add that as something like a fourth responsibility. The CEO is also a taskmaster/problem solver of last resort. You have to hire the best but also make sure the best live up to their potential. In fact, beyond an early phase for a start-up, for the CEO to do anything more than those 3 or 4 things (on a regular basis) is NOT GOOD, as just doing all that is much more than a full-time job. I like the “3” things and we can all apply them to our own lives . . . we are all CEOs of “Me”, Inc. Having a life strategy and vision, along with growth and development both professionally and personally, with “cash flow”, is living a created life! Kim U Fred – I shared this with my Vistage chair – simple is always best. Thanks. Mark Kolier http://blog.cgsm.com I’d like to add this quote to my email signature line. To whom can I give credit? being CEO is one of the most pressuring things in the world of business, but you earn respect from other people because of your position. Play your role and it will work out just fine. Not just for “for-profit” companies – these 3 valuable skills are needed in the non-profit sector too. I think a 4th one is missing. #4: Measure the output of results from the heads of each department. Make sure their output levels will achieve the level of growth expected. Additionally, give feedback if gaps or issues prevent the outputs of each department from resulting in the entire company growing at the rates needed. – Growth = value to customers, number of customers, top-line revenue, and net margin progressing to target levels. I agree about the three things a CEO must do, and I like the way JLM unpacks this from his vast experience. The best leaders are three-dimensional. They keep a big-picture view of the mission, resources and contexts. They see how to accomplish the mission with the resources available to accomplish it, knowing that people are the most important resource, so how your people feel about what is going on is a good barometer of leadership acumen. The most competent leaders also understand the context of relevant variables that impact how their resources must be deployed. Thus they can make quick decisions that help them get or keep their organizations on a good financial footing. Two-dimensional leaders only comprehend one or two of those MRC factors.
One-dimensional leadership is not about what the organization needs, but is about “me.” This self-focus is why such leaders can’t see the contextual big picture and make appropriate organizational decisions. Excellent points to note here from the veterans. For a small startup company, say a web development company, the CEO, in addition to performing all the mentioned duties, has to get involved in programming too if the CEO is a programmer. After working 14 hours a day without suitable pay for months together (sometimes years), once the company becomes big, one fine day someone asks, “why does that CEO guy get paid so much?” well how do you bring out those qualities in a person (and I feel like I’ve been missing you around, so welcome back) Unequivocally true and well said, Charlie, especially the part where you say “Companies succeed because of leadership, and fail from lack thereof. No amount of capital or cajoling can replace the role of a strong leader who is willing and capable of digging in deep to make each aspect work.” Not sure, the ones that would be most needed to be a leader? Exactly. People think, “Why take a stand on anything or start a big new initiative when i don’t know who my new boss will be or what type of biases and priorities they will bring in with them?” There is a lot going on beneath the surface with your better employees; a big part of the CEO’s job is ensuring that the “best talent” they recruit is allowed to flourish and bring it all to the table.
1
The Intercept (2012)
Bletchley Park, 1942. A component from the Bombe machine, used to decode intercepted German messages, has gone missing. One of the cryptographers is waiting to be interviewed, under direst suspicion. Is he stupid enough to have attempted treason? Or is he clever enough to get away? The Intercept is a small demo game written in ink and Unity that we built to demonstrate how to author a simple project. See how we like to structure our own ink files, and how easy it is to use the Unity plugin within a real game. We built The Intercept in a couple days for a game jam! Read more about ink
242
SpaceX sends laser-linked Starlinks to polar orbit
Kennedy Space Center, Cape Canaveral Space Port Station, Florida — SpaceX made history again by sending a record 143 satellites to low Earth orbit on a single mission yesterday. The record was previously held by the Indian Space Research Organisation (ISRO), with 103 satellites on a single launch. Among these were 10 SpaceX Starlink satellites that were deployed to a polar orbit. This will make high-speed internet available in the most remote polar locations of our planet. The SmallSat rideshare program by SpaceX enables customers to launch their satellites to low Earth orbit (LEO) with a budget as low as $1M. In this dedicated launch, most of the small satellites belonged to SpaceX customers; in the past, SpaceX had only made room for a small number of customer satellites alongside its Starlink constellation launches. Elon Musk's space exploration & commercial spaceflight company has used optical laser links between the satellites for the first time. Musk has confirmed on Twitter that the black pipe-type objects at the end of each Starlink satellite are actually laser links. I have marked these objects in the photo of the SmallSat satellite stack shared by SpaceX before the launch. According to Musk, only the polar Starlink satellites sent to orbit this year will have laser links; all the other Starlink satellites will get the laser-link feature next year. SpaceX has given the current laser-linked satellites a version number of v0.9, Musk explained. "All sats launched next year will have laser links. Only our polar sats have lasers this year & are v0.9." — Elon Musk (@elonmusk) January 25, 2021 Update: u/snoshy provided much-needed insight on the use of 'laser links' on Starlink satellites as the article made it to the HN front page. These laser links are purely intended for satellite-to-satellite communications for Starlink. They are not (at least at this time, and for the foreseeable future) intended for ground-to-satellite communications. The value that sat-to-sat laser links provide is that they create a low-latency, high-bandwidth path that stays within the Starlink satellite network. Before these 10 satellites, each Starlink satellite has only been capable of communicating directly with ground terminals (either consumer, transit or SpaceX control). For traffic that is intended to move large geographic distances (think transcontinental), this can require several hops back and forth between ground and space, or the traffic from the user terminal is exited at a node that is geared for transiting traffic and most of the data transits over existing ground Internet links. By performing this type of transit directly in space, and exiting at a transit node nearest the destination for the data, you greatly reduce latency. Bandwidth still might not be great, but what this does is unlock a very financially lucrative consumer use case: low-latency finance traffic and critical communications. There are many use cases around the world where shaving even 10-20 milliseconds of latency on a data path can unlock finance and emergency capabilities, and this has been a long-fought battle throughout the history of these industries. As an example, if you got a piece of news about a company in Australia and wanted to trade on it as quickly as possible in the USA, beating your competitors by 10-20 milliseconds can mean a lot of money.
Laser comms for Starlink sats have long been planned, but have historically proven to be quite hard to get working. They also depend on a sufficient critical mass of satellites so that a given satellite actually does have another satellite within range to lock onto and send the traffic towards. Later on, the first stage of the Falcon 9 rocket successfully landed on the Of Course I Still Love You drone ship. SpaceX posted the landing video on the company's official Twitter account — it still mesmerizes. Other spaceflight companies and government bodies have yet to achieve this level of precision in rocket engineering. "Falcon 9's first stage has landed on the Of Course I Still Love You droneship" pic.twitter.com/6gWWlLiXdG — SpaceX (@SpaceX) January 24, 2021
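To make the latency point in the quoted comment concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it (the 16,000 km great-circle distance, the detour factors, the 550 km shell altitude) is an illustrative assumption rather than a SpaceX figure; the only physics it relies on is that light in optical fiber travels at roughly two-thirds of its vacuum speed, while the inter-satellite laser links run through vacuum.

    # Rough one-way propagation-delay comparison: terrestrial fiber route
    # vs. a LEO laser-relay path. All route figures below are assumptions.
    C_VACUUM_KM_S = 299_792.458          # speed of light in vacuum, km/s
    C_FIBER_KM_S = C_VACUUM_KM_S * 0.67  # light is roughly a third slower in fiber

    great_circle_km = 16_000   # assumed Sydney -> New York great-circle distance
    fiber_detour = 1.4         # assumed: real cable routes run ~40% longer
    shell_altitude_km = 550    # typical Starlink shell altitude
    space_detour = 1.1         # assumed: hopping between satellites adds ~10%

    fiber_path_km = great_circle_km * fiber_detour
    space_path_km = 2 * shell_altitude_km + great_circle_km * space_detour

    fiber_ms = fiber_path_km / C_FIBER_KM_S * 1000
    space_ms = space_path_km / C_VACUUM_KM_S * 1000

    print(f"fiber: {fiber_ms:.0f} ms, laser relay: {space_ms:.0f} ms, "
          f"one-way saving: {fiber_ms - space_ms:.0f} ms")

With these assumed numbers the space path comes out a few tens of milliseconds faster one way, which is the kind of margin the comment above describes as valuable for finance and emergency traffic; real savings depend on the actual routes, the number of inter-satellite hops, and per-satellite processing delay.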
1
DataSecOps: Solving Global Data Security Challenges
FIRESIDE CHAT DataSecOps: Solving Global Data Security Challenges About the Event Organizations increasingly recognize the importance of data and data-driven decisions. But when it comes to ensuring security and meeting all data privacy compliance requirements, the process becomes tedious and can take months to get up and running. During this fireside chat, we'll learn more about DataSecOps, which offers an automated process to manage both data and security. It comes as a natural progression of the DevOps and DevSecOps principles, and it's a collaboration between engineers and admins on how to securely store, analyze, archive and deliver data.
7
Facebook Needs Trump Even More Than Trump Needs Facebook