Console – A free weekly roundup of the latest in open-source software
This space is reserved for sponsors that support us to keep the newsletter going! Want to support Console? Send us a note at osh@codesee.io or visit OpenSourceHub.io

Tinygrad is a deep learning framework that sits somewhere between PyTorch and micrograd. It's simple, with a sub-1000-line core, and it aims to be the easiest framework to add new accelerators to, with support for both inference and training.
language: Python, stars: 13089, last commit: today
repo: github.com/geohot/tinygrad, site: tinygrad.org

A visual no-code web crawler/spider for the design and execution of crawlers. It also has a command-line interface.
language: JS, stars: 7416, last commit: yesterday
repo: github.com/NaiboWang/EasySpider

A ready-to-use RTSP / RTMP / LL-HLS / WebRTC server and proxy that allows reading, publishing and proxying video and audio streams.
language: Go, stars: 6269, last commit: today
repo: github.com/bluenviron/mediamtx

Join the Open Source Hub Discord server.

Hey Alessandro! Thanks for joining us! Let's start with your background.

I'm from Monza, Italy (home of the Formula One circuit, which I attend whenever possible), I hold an M.Sc. in Mechatronic Engineering, and I'm a software architect at a major IT corporation. I started programming when I was 12 by trying to edit the source code of my favorite website with right click, "view source", and I haven't stopped since. I've built something in almost every major language (PHP, HTML/CSS/SASS, JS, TypeScript, Python, C/C++, C#, x86 Assembly, Objective-C, Scala, Java/Spring, Rust, Bash and of course Golang) at the backend, frontend, architecture and DevOps level. Regarding preferences, Golang is my choice when building bandwidth-intensive backends (although Rust is slightly more optimized), React for the frontend, InfluxDB and Mongo respectively for time-series and general-purpose databases, DeepStream and PyTorch for machine learning, and GStreamer for video processing.

Who or what are your biggest influences as a developer?

Without a doubt Linus Torvalds (author of Linux): he managed to create software that is used on everything from microcontrollers to supercomputers, and he was a pioneer with respect to open source, although Linux's modularity could be improved and has been a major point of discussion since the beginning.

What's your most controversial programming opinion?

I'll answer by considering the reaction of my colleagues that followed the opinion 😂 Personally, I don't like class inheritance, a concept that is present in most programming languages. I once had a lengthy discussion with a colleague when I said to him that "Animal must be a member of Dog and not its parent", and he started yelling "no no, it's not possible"... programmers aren't known for their open-mindedness (sorry!) Furthermore, in recent months I've been realizing that a microservice architecture can be really inefficient when there are too many microservices, and I'm trying to reduce their number even if it means producing simpler diagrams. This can be interpreted as a step backwards, even if in my opinion it's not.

What is your favorite software tool?

Wireshark. I've used it daily since I was fifteen, and it's the only way to understand networking. I know a lot of engineers who have never opened a network dump; I sincerely don't know how they manage to get their stuff working. Also Docker, Kubernetes, Helm, Skaffold and OpenShift: they made deployment a piece of cake.

Why was MediaMTX started?
In 2019, I was working on two separate projects: an autonomous rover that had to be controlled underground in an 8 km tunnel of a dam in north-west Italy, and an automatic surveillance system for multiple industrial plants. Both projects had a common issue: live video routing. Video had to be watched or processed by multiple entities at the same time, and existing solutions were eating most of the CPU. For the rover, we were using Live555 (a video proxy), while for the surveillance system we were using ROS (Robot Operating System), which was routing raw, uncompressed frames over TCP, a nightmare justified only by the fact that ROS offers ready-to-use components that process frames with machine learning. Other solutions were either not portable (Wowza) or not feasible. I was looking through Wireshark at the content of the video stream ingested by Live555; the protocol (RTSP) seemed simple and really similar to HTTP, and I didn't understand why Live555 was consuming so much CPU to handle it. In a short timespan, I wrote two separate pieces of software: rtsp-simple-server, which allowed clients to publish video streams, and rtsp-simple-proxy, which pulled video streams from existing servers. Both were less than 500 lines of code and required very little CPU. Then I merged them together, creating a server / proxy hybrid, which is the base of the current project. I immediately started receiving a lot of traffic and feedback from all around the world, and that allowed me to continue the development. The project currently allows routing video streams in the order of thousands (I manage enterprise servers for cities with 2000 streams each), converting video streams from one format to another and even automatically healing video streams, all with the constraint of using as little CPU as possible, which until recently was the main bottleneck of video streaming.

How does MediaMTX work?

I like to think of MediaMTX as a message broker for video streams, hence a "media broker". MediaMTX is able to receive live video streams from multiple sources and multiple protocols, and to broadcast them to anyone that needs them, in a protocol of their choice. Internally, it is based on 4 libraries, one for each of the supported protocols (RTSP, HLS, RTMP, UDP/MPEG-TS and WebRTC), which are gortsplib, gohlslib (I'm the author of those too), go-astits and pion/webrtc. Using specialized libraries grants modularity and helps developers look in the right place when they need to analyze code and send pull requests. MediaMTX decodes only the minimum necessary in order to minimize resource consumption.

Why did you pick Go?

Parallelism has always been an issue for me (and not only me). I used to spend 70% of the development time fixing race conditions, allocating additional threads for every single detail, or messing with inter-thread communication systems. Event-based languages like JavaScript didn't improve the situation, since they are single-threaded and can't be scaled. Golang offers a native, event-based, multithreaded routine system that merges the efficiency of event-based programming with the efficiency of multi-threading. Basically, threads are created on the basis of available resources, and events are distributed across them, all transparently. Golang also offers native inter-routine communication through channels, as well as native cross-compilation, which allows building dependency-free binaries for every operating system (Linux, Windows, macOS) and architecture without any hassle. That's all I've ever wanted. I have a lot of respect for Rust too, but its syntax is a little more complex.
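As a rough illustration of the goroutine-and-channel pattern described above (this is not MediaMTX code; the frame type and the fan-out logic are invented for the example), one publisher goroutine can distribute frames to several readers over per-reader channels:

```go
package main

import (
	"fmt"
	"sync"
)

// frame is a stand-in for an encoded video frame.
type frame []byte

func main() {
	var wg sync.WaitGroup

	// One buffered channel per reader, so each reader consumes independently.
	readers := make([]chan frame, 3)
	for i := range readers {
		readers[i] = make(chan frame, 8)
		wg.Add(1)
		go func(id int, in <-chan frame) {
			defer wg.Done()
			for f := range in {
				fmt.Printf("reader %d received %d bytes\n", id, len(f))
			}
		}(i, readers[i])
	}

	// Publisher: push a few frames to every reader, then close the channels.
	go func() {
		for i := 0; i < 5; i++ {
			f := frame(fmt.Sprintf("frame-%d", i))
			for _, ch := range readers {
				ch <- f
			}
		}
		for _, ch := range readers {
			close(ch)
		}
	}()

	wg.Wait()
}
```

Cross-compiling such a program is just a matter of setting the standard GOOS and GOARCH environment variables, e.g. GOOS=linux GOARCH=arm64 go build.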
That’s all I’ve ever wanted. I’ve a lot of respect for Rust too, but syntax is a little more complex. Where did the name for MediaMTX come from? MTX stands for “Media Transmission”. I spent a month thinking about a name which has to be protocol free and needs to be “developer-compatible”, since developers always find a way to compress names that are more than 8 characters long (Even “Kubernetes” is too long, and it has become k8s). Who, or what, was the biggest inspiration for MediaMTX? The idea of building a server / proxy hybrid started from the need of merging together two software in order to perform maintenance on a single one. There were no particular inspirations for that 😂 The proxy was certainly inspired by live555 proxy . The subscriber - publisher model was inspired by the Robot Operating System (ROS), which is similar to the one of MQTT or Kafka, even though at the time I didn’t know them since I was coming from the mechatronic world. Nowadays, I’m really inspired by the Pion project, which is a set of media libraries that all together implement the WebRTC protocol. I try to contribute to it as much as possible and to structure my libraries (gortsplib and gohlslib) in a similar way. Are there any overarching goals of MediaMTX that drive design or implementation? If so, what trade-offs have been made in MediaMTX as a consequence of these goals? I’ve been a student for many years, and affordable things like the Raspberry Pis allowed me to develop my skills. Therefore, one of the goals of this project (and all my projects) is to be compatible with any hardware, from enterprise servers to microcontrollers. The drawback is that I left out from the project some advanced features that would have required computational power and would have been of use to many people. What is the most challenging problem that’s been solved in MediaMTX, so far? One of the challenges was detaching publishers (clients that are sending a stream) from readers (clients that are reading a stream). This is an important operation since a single laggy reader could either slow down all others or fill the RAM up to the exhaustion point. That was performed by developing a custom ring buffer that allows publishers to push data, and readers to pull data asynchronously. The buffer makes use of synchronization primitives and unsafe pointers in order to maximize throughput, that can reach 10000 data units per second: https://github.com/bluenviron/gortsplib/blob/main/pkg/ringbuffer/ringbuffer.go Another challenge was finding a way to route video frames independently from the protocol, since the server supports multiple protocols and each of them has its own way to encode video frames. Popular libraries like GStreamer and FFmpeg have solved the issue by decoding video frames up to their elementary units and use these units as the basic data unit, but I find this mechanism not quite efficient. Therefore, I chose to route video frames in their original format and to decode them if and only if they are requested in another format. Another gigantic challenge, although less technical, is granting compatibility with most devices, a thing that can be done only by reverse-engineering the minimal details of every protocol. For example, supporting Apple devices is always a challenge, since a single bit is enough for them to discard a WebRTC stream or a HLS stream with no apparent reason. 
Are there any projects similar to MediaMTX? If so, what were they lacking that made you consider building something new?

This project was started before the pandemic, and I already discussed the state of the art of that period and the reasons behind the project. The pandemic caused the video streaming sector to flourish, and nowadays there are a lot of open source media servers available, each with its own peculiarities. Nonetheless, MediaMTX is still appreciated for its speed, versatility and compatibility. Personally, I use it as a building block of more complex architectures, something that other solutions can't offer since they're either over-engineered or under-engineered. There's a solid community that uses MediaMTX for a wide range of needs, and it's seen as an established tool.

What was the most surprising thing you learned while working on MediaMTX?

Certainly the fact that nowadays open source has a critical role in every company, small to big. I published the server from my bedroom, and in less than two years I was contacted by companies like NASA (and a lot more that I can't write about) and government agencies in Europe and Australia. The Shodan search engine lists thousands of installations of my servers on all continents. This is both thrilling and worrying; I think that open source must be used with more care, since trusting it too much could result in a huge security threat.

What is your typical approach to debugging issues filed in the MediaMTX repo?

When filing issues, users are guided to provide data that allows maintainers to replicate their problem. Invalid issues are automatically discarded. Every valid issue is reviewed and never closed until solved. I don't like repositories that automatically close issues after a certain period, regardless of whether they have been solved or not. Issues are split into two categories: bugs and feature requests. Bugs are reviewed immediately if multiple users are confirming the bug; otherwise, they are reviewed with a priority that depends on their content. If the user provides enough data to replicate the issue (and most do), everything can be solved in a matter of minutes or hours; otherwise the user is asked for more data until the maintainer is able to replicate the issue. Feature requests are another matter. First, there must be support and consensus from the community regarding new features. Second, if the feature is trivial, I try to encourage users to contribute the feature themselves. Major features are implemented by following an internal strategic plan.

What is the release process like for MediaMTX?

There are two kinds of releases: minor and major. Minor releases are mostly for bug fixes and don't contain major improvements. An in-depth testing procedure is generally not needed, and these are published as soon as the automated tests pass. Major releases are for introducing major improvements. After all automated tests have passed, binaries are usually deployed on servers with thousands of video streams for some days, and then released.
This procedure is often not enough to avoid regressions, but the community usually reports regressions within 24 hours, and this results in a minor release being published in the following days. Binaries are compiled by GitHub Actions without human intervention, minimizing security risks, and published in parallel on GitHub and Docker Hub.

Is MediaMTX intended to eventually be monetized if it isn't monetized already? If so, how? If it's already monetized, what is your main source of revenue?

MediaMTX is currently not monetized. I'm a private employee, and I don't plan to make open source my main source of revenue in the near future.

What are you most proud of?

I'm certainly proud of, but also scared by, the attention from big companies, for the reasons I explained above. When I joined my current company, there was a manager who already knew my name because he had used the server to route his private cameras; he came to me and asked, "Was it you?" It was really funny. A lot of users have thanked me over the years, and this has cheered me up in difficult times.

How do you balance your work on open-source with your day job and other responsibilities?

I learned to give things the right priority. My job comes first, my private life comes second (I can't say "first" or I may get fired 😂), and then there's open source. When I'm lucky, my job and open source become the same thing: there are projects that are based on open source components, and fixing or improving those components becomes critical. Sometimes we're even obliged to release everything on GitHub because it's part of the agreement with the client.

Have you ever experienced burnout? How did you deal with it?

Yes. It happened when I was renovating my apartment, dealing with a high-priority task at work, and finalizing a major release of MediaMTX, all at the same time 🎉 I have solutions for everything: running (10 to 15 kilometers), mindfulness meditation and drumming (14 years). They are my strength. They are not universal, though; I strongly encourage everyone to find their personal activities for both body and mind.

Do you think any of your projects do more harm than good?

Since the start of the Russia-Ukraine war the world has changed, and I'm not sure that my open source libraries, which include connectors for communicating with drones (gomavlib), autonomous vehicles (goroslib) and cameras (gortsplib and MediaMTX), are always used for peaceful purposes. I'm not at ease with this idea.

What is the best way for a new developer to contribute to MediaMTX?

Issues in the server and its dependencies are kept organized and categorized. It's easy to spot small tasks and start developing them.

Where do you see the project heading next?

The project has to support next-generation streaming protocols like SRT and RIST. Since MediaMTX already offers automatic conversion from one protocol to another, this will allow users to transition from legacy protocols to newer ones without changing their infrastructure or hardware. This is the main feature that I'm working on.

What motivates you to continue contributing to MediaMTX?

I'd like to provide a stable building block for architectures of any scale, in the same way that Kafka or PostgreSQL do. I also think that new features should be limited in number and the focus should be on existing ones. Like Neo said to Smith in Matrix 3: "everything that has a beginning has an end". I think the same, and when all the objectives of this project have been fulfilled, I'd like it to enter a maintenance-only mode.
Are there any other projects besides MediaMTX that you're working on?

Over the years I've released Golang-based connectors for a series of protocols, including Mavlink (drones) and ROS (autonomous vehicles), and standalone libraries for interfacing with cameras (gortsplib, gohlslib). They are all in the bluenviron organization: https://github.com/bluenviron

Where do you see software development heading next?

Generative AI, no doubt.

Where do you see open-source heading next?

I'd like to see a general-purpose, generative-AI-based framework for creating applications or improving parts of them.

Do you have any suggestions for someone trying to make their first contribution to an open-source project?

After you create your fix or new feature, take a look at the existing code in order to adapt the style of your work to that of the project. This second step is often missing and is the reason why most pull requests get rejected.

What is one question you would like to ask another open-source developer that I didn't ask you?

"What percentage of your time do you spend on writing automated tests? Do you use fuzz testing?" (Personally, 50% of my time is spent on writing automated tests, and I've set up fuzz testing on all primitives.)

Want to join the conversation about one of the projects featured this week? Drop a comment, or see what others are saying! Reach us at osh@codesee.io or mention us @ConsoleWeekly!
Geographic Spice Index
In this index, you will find spices ordered according to the region they probably stem from. Since the spice trade is nearly as old as humanity itself, we cannot reconstruct the natural occurrence of spice plants in all cases. For every region, I have included the most important spices used in present-day local cuisine. Of course, this information cannot be exhaustive, in part because spice usage may differ even within relatively small regions and in part because, since I have not travelled to all these places, I rely on second-hand information, which is rather sparse on some topics. You may find that this index is rather Asia-centered; although certainly true, I claim that this is not due to my personal interest in Indian and South-East Asian cooking, but rather due to the fact that nearly all spices important in our days are of Asian origin (exclude allspice, vanilla and chile from this statement). Therefore, it seemed convenient to split the Asian section of this index into several parts, while only one section each deals with African and American spices.

This index contains short hints about more than 60 herbs and spices that are not treated on my pages. Some of these spices are very obscure, have highly specialized (often non-culinary) applications, are only used in a small region or are merely of historic interest. Some others are quite interesting and deserve a fuller treatment, but I do not know enough about them to write a full article. Whenever that changes (maybe because of your help?), I will gladly write more about these spices.

Surprisingly few spices actually stem from Europe, although many have been imported. The Romans brought many of their Mediterranean spices to the countries north of the Alps, and some of them found the climate acceptable and were easy to cultivate; some even spread over the new habitat and became part of the local flora. The following plants are commonly believed to be of European origin, although you might find different opinions expressed in some literature. Today, Europe's local cuisines use a lot of herbs from the Mediterranean; of general importance are bay leaf, marjoram, oregano, rosemary, savoury and thyme, most of which can be grown in a cool temperate climate (in our days, though, they are mostly imported because of cost and quality considerations). Since ancient times, onion and garlic have been cultivated in Europe. However, because of its strong odour, garlic is less appreciated, especially in Northern Europe, where excessive garlic consumption seems to be regarded as a kind of social crime. Onion is used more as a vegetable. Hungary is well known for its paprika (bell pepper) and its variety of diverse chiles (a gift from the New World). In other European countries, hot chiles are less enjoyed, although they do play some rôle in South East Europe (Balkan peninsula) and in some of the Mediterranean states. Tropical spices are usually not essential ingredients in traditional European cuisine, with the exception of black pepper, which is held in high esteem all over the world. Cinnamon and cloves find their main applications in sweet dishes; ginger and nutmeg are used even less. Although cardamom is nearly unknown in most of Europe, Scandinavians are very fond of it and use it to flavour bread and pastries. There are more European plants that get used culinarily, though in most cases use is rare or restricted to a small area; others are mainly of historical interest.
In the first place, there are truffles (black or Périgord truffle, Tuber melanosporum, and white or Alba truffle, Tuber magnatum), whose absence from this page can only be regarded as a serious demerit. They played an eminent rôle in the French cuisine of the 18th century, and still have much importance despite their high price.

Angelica (Angelica archangelica, Apiaceae) is distributed over Northern Eurasia. All plant parts have a strong and penetrating odour and are occasionally used for cooking, particularly in Northern Europe (e. g., for fish soups). The plant is, however, more important for flavouring liqueurs.

Asarabacca (European ginger, Asarum europaeum, Aristolochiaceae/Aristolichiales/Magnoliidae) is a perennial herb of forests in Europe except the Mediterranean. The fleshy rhizome contains an essential oil of variable composition and has a pleasant aromatic flavour. In Chinese (A. sieboldii, A. heterotropoides) and North American (A. canadensis, wild ginger) relatives, a nephrotoxic compound called aristolochic acid has been found. Nevertheless, both the European and the American species enjoy some popularity as a wild vegetable and flavouring.

Calamus (sweet flag, Acorus calamus, Araceae/Arales/Arecidae), though native to India, is now naturalized all over the Northern hemisphere. The rhizome is very aromatic and can be candied like ginger (whence the name German ginger), but is rarely used to flavour food. It is quite bitter (which is why it often appears in liqueurs), and the high content of β-asarone makes it rather unsafe for regular use. Calamus traded in pharmacies nearly always stems from American plants that are low in β-asarone.

Elder (Sambucus nigra, Caprifoliaceae/Dipsacales/Cornidae) bears highly scented flowers, which are used as a flavouring for desserts and beverages. The dark purple fruits have, in times fortunately long past, been used as a wine colourant.

Garlic mustard (Alliaria petiolata, Brassicaceae) has leaves with a distinct garlic flavour and seeds that are pungent like mustard. It is used occasionally by peasants, especially in Eastern Europe.

Ground ivy (alehoof, Glechoma hederacea, Lamiaceae) is an extremely common weed in Central and Western Europe. The leaves, which have an aroma slightly reminiscent of mint and thyme, are an interesting if seldom-used culinary spice; I have heard of Czech recipes using it. In the past, they were also employed for beer brewing, whence the name alehoof.

Hop (Humulus lupulus, Cannabaceae/Urticales/Dilleniidae) is, of course, very important for beer brewing, but is hardly ever used for cooking. Also, beer (unlike wine) has not much use in the kitchen except, maybe, to quench the cook's thirst.

Poplar (Populus alba, Salicaceae/Salicales/Dilleniidae) yields leaf buds and young leaves with a characteristic, aromatic odour; some sources state that they have been used as a flavouring in the past. They are still employed as a flavourant for local types of liquor.

Reflexed stonecrop (Sedum reflexum, Crassulaceae/Rosales) has fleshy leaves with a fresh flavour which are used mainly in Western Europe as a garnish. Chopped stonecrop leaves were formerly quite popular to add extra sensation to salads, but in our days this is no longer fashionable.

Salad burnet (Sanguisorba minor, Rosaceae) is a wild plant of Western Europe that occasionally gets cultivated. It is rich in tannins, to which the leaves owe an astringent but nutty taste. The leaves are used to spice up lettuce, salads and particularly the Frankfurt Green Sauce (see borage).
Flowering marsh tea. Wild rosemary flowers.

Wild rosemary (marsh tea, Ericaceae/Ericales/Cornidae) is a wild plant of bogs and swamps of the Northern hemisphere. There are several subspecies, one of which (Labrador tea) is a popular tea plant in Canada. The European form was, like the ecologically similar gale, used for gruit beer, although it contains narcotic sesquiterpene alcohols and is not fully harmless.

Sorrel (Rumex acetosa, Polygonaceae) is known for its acidic and pungent leaves, which contain oxalic acid. It is used occasionally, e. g., in Green Sauce.

Tansy (Tanacetum vulgare, Asteraceae) grows all over Europe, but as far as I know, its culinary usage is restricted to Britain. The leaves have a dominant, not very agreeable odour which is mostly due to the toxic thujone (see also southernwood).

Woodruff (Galium odoratum, Rubiaceae/Gentianales/Cornidae) grows wild in the forests of Western Europe. On wilting of the aerial parts, coumarine is liberated (see also tonka bean), which gives its incomparable flavour to some traditional flavoured wines.

The area around the Mediterranean Sea, belonging in part to Europe, Asia and Africa, has always been a cultural unity. Early spice trading routes led from China and India via the Arabian peninsula to the Mediterranean Sea, which made the region an important place of cultural and culinary exchange. In the warm Mediterranean climate, many fragrant plants grew abundantly; and in the course of millennia, even more have been introduced by traders, refugees or immigrants from further East. The following are generally considered native Mediterranean plants; however, some are open to dispute, e. g., cumin or even the apparently typical Mediterranean olive.

Asian spices first became popular in Europe in the Age of Hellenism. Later, the spice trade blossomed in the late days of the Romans, about two thousand years ago; from the beginning, it was dominated by the Arabs. Apicius' De re coquinaria is one of the oldest European cookbooks; it lists some tropical spices, of which long pepper was most valued. Black pepper, cloves and Chinese cinnamon (cassia) also figure prominently. The enigmatic spice silphion (probably of Northern African origin) became extinct around 100 AD and was substituted by asafetida (from Central Asia). The usage of olive oil has been a cultural constant in the Mediterranean for five millennia.

Today, Mediterranean Europe mostly relies on its native or imported herbs. Basil (stemming originally from South or even South East Asia) today grows wild all over Southern Europe and is used extensively, especially in Italian cuisine; the same holds for the indigenous oregano. Garlic figures more prominently than in Northern European countries. Regionally, saffron is used for fish or seafood specialties, but the high price of this spice limits its usage. Throughout the region, some dishes require small amounts of chiles; fiery food, however, is not typical. Typical spice mixtures from Southern Europe are discussed under parsley and lavender.

In Asia Minor and West Asia, herbs cease to be dominant. Coriander and cumin (from Persia, but grown locally) are popular, and the use of pungent spices (mainly black pepper and chiles) becomes more common. The berries of the sumac tree are essential to reproduce the astringent and sour taste found in many dishes from Turkey to Israel. In Northern Africa, chiles take an important part in fiery stews and sauces.
Coriander and cumin are both used extensively, but African spices (grains of paradise) are also common. Of the spices from tropical Asia, cinnamon and cloves find the most use. All these, and more, may appear in Moroccan spice mixtures (ras el hanout, see cubeb pepper).

Although a large number of Mediterranean herbs is discussed here, the treatment is not exhaustive: there are many more that find their way into the kitchen on occasion. Sometimes, these are wild relatives of herbs treated here, which are collected by knowledgeable family members because their flavour is regarded as superior to that of commercially grown ones. This usage is often very local and is hardly mentioned in cookbooks. It applies particularly to herbs of the mint family, e. g., thyme, marjoram and especially oregano. Further interesting plants from the Mediterranean are:

Alexanders (Smyrnium olusatrum, Apiaceae) is similar to lovage and celery, having aromatic roots, leaves and fruits. Today, the culinary importance of this herb is low.

Mastic (Greek mastikha [μάστιχα]) is a resin obtained from Pistacia lentiscus var. chia (Anacardiaceae), a tree growing only on the island of Chios in Eastern Greece, though lesser grades are harvested from related species. It was an important commodity in the Middle Ages, but is now only used in Greek cooking (see mahaleb cherry for more).

Samphire (Crithmum maritimum, Apiaceae) grows along all coasts of Europe, from the Atlantic Ocean to the Black Sea. The leaves are succulent with a salty–aromatic flavour and have been a popular flavouring for salads in the past; samphire pickle, formerly much eaten in Britain, is still popular in the Mediterranean.

Pennyroyal (Mentha pulegium, Lamiaceae) differs markedly from culinary mints. It has been used since antiquity in Roman cooking (see silphion). Despite its mild toxicity, it is a traditional herb in Britain.

Another Lamiaceae herb is used in regional Italian cooking; its flavour is reminiscent of related herbs, e. g., thyme, mint savory or oregano.

Pine nuts (pignoli) are the seeds collected from the Mediterranean stone pine (Pinus pinea, Pinaceae/Pinales); in temperate Asia, related pine species are also used. They have a wonderful ethereal–aromatic flavour and are particularly important in Spanish and Italian cooking, e. g., for pesto (see basil).

Purslane (Portulaca oleracea, Portulacaceae/Caryophyllales) is an annual herb probably native to the Himalayas, but today naturalized in Western Asia and Southern Europe. Although often eaten cooked as a vegetable, the raw leaves and stems have a crispy texture and a salty, fresh taste that makes them a good garnish for Mediterranean cold foods, e. g., West Asian appetizers. The flower buds have a more pronounced flavour and have been tried as a caper substitute.

Many important spices actually stem from West or Central Asia, even if some of them are nowadays cultivated from Morocco to Vietnam. Possibly, cumin and some others of the spices listed in the previous section actually have their origin in western Central Asia, having been spread westwards by migrating peoples in prehistoric times. Today's Persian or Arabian cooking uses a multitude of spices, having easy access to Indian and Southeast Asian ingredients. Cardamom is much valued as an essential component of Arab-style coffee. Cooking styles of the Arabian peninsula have a preference for aromatic but fiery food. Yemeni zhoug (see coriander), a spicy chili-laden paste, and Saudi Arabian baharat (see paprika) may serve as examples.
The Caucasus republics, situated between the Black Sea and the Caspian Sea, have developed a unique style of food, although Russian and Turkish influences can be seen. Georgia has a mild yet flavourful cuisine based largely on the flavours of dried herbs (see blue fenugreek for the Georgian spice mix khmeli-suneli [ხმელი-სუნელი]) and of sour–fruity sauces prepared from fresh or preserved fruits. Fresh herbs are often sprinkled over warm and cold dishes; uniquely, Georgian cooking makes parallel use of both parsley and coriander leaves; the latter are not used anywhere else in the region.

Barberry flowers. Imeretian saffron. Barberry fruits.

A similar inclination towards fruity flavours is found in neighbouring Azərbaycan (Azerbaijan) and in Iran. A typical Iranian spice that is, unfortunately, missing from these spice pages is barberry (Berberidaceae/Ranunculales), called zereshk [زرشک] in Farsi and kotsakhuri [კოწახური] in Georgian; it is often used to flavour ground meats or Persian rice dishes (polo [پلو]). Another source of sour flavour in Iranian foods are dried limes (see also fenugreek for khoreshte ghorme sabzi).

An interesting herb typical of Georgian cooking is marigold (Tagetes, Asteraceae), which appears in several recipes including the spice mix khmeli-suneli (see blue fenugreek). In Georgian, it is simply called yellow flower (q'vit'eli q'vavili [ყვითელი ყვავილი]) or Imeretian saffron (imeruli zaprana [იმერული ზაფრანა]), and sometimes just zaprana, which can lead to confusion with the much different saffron. The marigold flowers are dried and ground to yield a yellow powder that has a mild, sweet scent. It can best be substituted by safflower. Sometimes the fresh sprigs are also used; they have a different, much stronger flavour reminiscent of the South American huacatay.

The Central Asian region proper, between the Caspian Sea and the Tianshan mountains [天山], is rather devoid of local spices, although imported spices have been available since antiquity, because the ancient Spice Route running from China to the Mediterranean cuts through the region. Cookbooks of Kazakhstan sometimes mention local herbs with a cress-like flavour. Combinations of dried fruits with meats are very popular, and cooks often use local species of the genus Prunus (apricot, plum).

South Asia, which encompasses the Deccan peninsula and the southern slopes of the Himalayas, has a variety of indigenous spice plants. Furthermore, Southeast Asian spices have been traded in India for thousands of years. Therefore, Indian cuisine is one of the most fragrant and aromatic in the world. A large number of spices native to South Asia were exported long ago either to the West or to the East. For example, in today's South East Asia, we find spices of Indian origin that have no place in today's Indian cooking, e. g., lemon grass or lesser galangale. The following table shows only those South Asian spices that flavour the contemporary South Asian kitchen.

In today's Indian cuisine, many more spices play an important part. Chiles, brought to Asia from the New World by the Portuguese, are used generously, especially in South India and Sri Lanka. Tamarind (from East Africa) is used to give some Southern Indian curry dishes a sour and tart flavour. Of the European and Central Asian spices, coriander, cumin and garlic are now indispensable for the taste of Indian food. Cinnamon, originally growing on the island of Sri Lanka, is now valued all over India and frequently combined with cloves, which stem from Southeast Asia.
Arab influence in South Asia is strongest in Afghanistan, Pakistan and North India. Cooks in these regions tend to use fewer chiles but more fragrant spices (cloves, saffron and cinnamon). There are numerous spice mixtures in India, but most of them have nothing in common with the curry powder of Western supermarkets (see curry leaves). Most mixtures are actually not powders but pastes, made from ground spices, garlic, ginger and oil, and are neither stored nor traded. Mixtures containing only dried spices are the Bengali panch phoron [পাঁচ ফোড়ন] (see fenugreek), the North Indian garam masala [गरम मसाला, گرم مسالحہ, also گرم مصالحہ] and the more Southern sambar podi [சாம்பார் பொடி] (for the latter two, see cumin and coriander, respectively). A Southern Indian mixture (bese bele powder) is mentioned under coconut. See black cumin about Northern Indian (Moghul-style) cooking, and ajwain about spiced butter (tadka or tarka). See also onion. For a few typical recipes, see Indonesian bay-leaf for the aromatic Northern biriyani and tamarind for the fiery Southern vindaloo. Indian spiced tea (chai masala [चाय मसाला]) is discussed under cardamom.

Nepali cooking resembles Indian cooking in several ways, and some preparations, e. g., pickles, are quite comparable. Nepali food is typically milder than Indian food, both with respect to actual heat and to the usage of aromatic spices. This doesn't make the food of Nepal bland or uninteresting, because due to Chinese influence, there are several additional flavourings made by fermentation: cheese, soy products and the typical Nepali gundruk [गुन्द्रुक], dried fermented vegetable leaves. Noodles in various styles are another culinary mark left by neighbouring China.

Finally, Burma, or Myanmar, as it is now called, is the meeting place and melting pot of the great cooking traditions of India and Southeast Asia. Noodles, shrimp paste, soy sauce and sesame oil on one side and cardamom, cinnamon, turmeric and cumin on the other side witness the mixed heritage and give Burmese curries their distinct and very tasty character.

Fish flavourings are rare in the Indian subcontinent; this is in line with the observation that fermented products generally have little tradition there. The main exception is dried fish in Sri Lanka (umbalakada [උම්බලකඩ]), which is usually referred to as Maldive fish in English. Yet fermented preparations from water-living organisms play an important rôle in North Eastern India and the Chittagong Hill Tracts in Eastern Bangladesh, which indicates South East Asian (mostly Burmese) influence. The Khasi people use a paste of fermented fish with various spices (tungtap) as a condiment, the Chakma employ shrimp paste (sidol [𑄥𑄨𑄘𑄮𑄣𑄴]) for cooking, and in Manipur, the Meitei have a dry-fermented fish called ngari [ঙারি, ꯉꯥꯔꯤ]. More rarely, fermented soy bean products are found in that region: Khasi tungrymbai, Manipuri hawaijar [হাৰায়জার, ꯍꯥꯋꯢꯖꯥꯔ].

I am fascinated by Indian cooking; consequently, my treatment of Indian spices is intended to be quite exhaustive. Nevertheless, there are some Indian spices of which I still know too little to write a detailed description.

Dried kokam. Goraka fruits grow on high trees.

Due to its tropical climate, Southeast Asia has a large number of native aromatic plants, most of which are preferred fresh in local cuisines.
The Moluccas, a group of small islands on the border between Asia and Australia and home of nutmeg and cloves, were the center of European spice policy in the late Middle Ages and the first centuries of the modern era. Today, all these spices (with the exception of the cinnamon varieties, cloves and nutmeg, which are not so much in use) feature prominently in at least some of the major South East Asian cuisines. Furthermore, chiles, ginger and garlic are found all over the region, as are coconut products: coconut milk and coconut oil.

In Southeast Asia, numerous independent culinary styles have evolved; yet most of them prefer spices fresh (if available), and fresh herbs (basil, coriander leaves and mint) are also popular as a fragrant decoration in Vietnam, Cambodia and Thailand. Throughout the region, pungent fish preparations are essential: fish sauces (nam pla [น้ำปลา] in Thailand, nuoc mam [nước mắm] in Vietnam), shrimp pastes (gapi [ငပိ] in Burma, trassi in Malaysia and Indonesia) and the unique Cambodian paste prepared from freshwater fish, prahok [ប្រហុក]. Fish sauce is also known in Southern China, where it is called yu lu [魚露]; but in Chinese cuisine, it is only a minor flavouring.

Thai cooks use even more spices (e. g., kaffir lime leaves, lemon grass and fingerroot) and other strong-smelling ingredients like dried fish to achieve the characteristic aroma of Thai dishes. Since they use chiles generously, Thai food is sometimes extremely hot and fiery. For Thai curries, see coconut. See also basil and mint for more Thai recipes. In Cambodia and Vietnam, spice usage is not that dominant, and Filipinos also cook rather mildly. Besides garlic and ginger, Philippine cuisine makes use of the South American annatto seeds. This spice was introduced to the Philippines by the Spaniards and is hardly known in other Asian countries. Vietnamese cuisine is unique for its massive use of fresh herbs, some of which are used only rarely outside of Vietnam (Vietnamese coriander, long coriander), while others (rice paddy herb, chameleon herb) do not appear in any other cooking style at all.

On the numerous islands of Indonesia, a lot of very different regional cuisines have developed, which is to be explained by different living conditions (jungle nomads, farmers or seafarers; village-bound or cosmopolitan urban cultures), food taboos due to different religions (Islâm, Christianity, Hinduism, Buddhism, Animism), different climates (tropical jungle, mountain woods, highlands or even dry areas) and several other factors. Most Indonesian cuisines do not use sweet spices, which is all the more remarkable because cloves, nutmeg and the Sumatra cinnamon variety are indigenous to Indonesia. Instead, the most popular spices are ginger, onion, garlic and moderate amounts of chiles, furthermore galanga and turmeric. Indonesian dishes frequently need shrimp paste (trassi) and soy sauce (kecap), which is also used in a thick and very sweet variety (kecap manis). Jawanese dishes especially sometimes contain large amounts of sugar and taste sweet–spicy, while I enjoyed rather hot food in Sumatra, and Bali certainly displays the largest variety of different spices.
Some highlights of Indonesian cookery are briefly discussed under greater galangale (rendang, a buffalo stew from Western Sumatra), Sichuan pepper (sangsang, a spicy stew of pork variety meats from Northern Sumatra), coconut (ayam pa'piong, a chicken dish from Sulawesi), mango (the pan-Indonesian fruit salad rujak) and lesser galangale (bebek batulu, Balinese roast duck). About Indonesian spice pastes (bumbu) in general, see lemon grass; for information about Balinese cuisine see Indonesian bay leaf, and for Jawa cookery see tamarind.

Many more herbs and spices are used in the many and varied culinary styles of that large region. Particularly in Vietnam, there is a large wealth of local herbs that are not commonly available in the West. The following are particularly worth noting:

Torch ginger (Etlingera elatior, Zingiberaceae) is a unique spice: the inflorescence is used to flavour curries in Singapore and Malaysia (bunga kantan).

In Thailand (cha pluu [ช้าพลู]) and Vietnam (la lot [lá lốt]), fragrant wild betel leaves are commonly used to wrap rice or other foods. These leaves stem from a member of the pepper genus (Piper sarmentosum, Piperaceae) which is closely related to the so-called betel pepper, Piper betle, an indispensable part of the betel bits consumed in South East Asia and India (pan [पान]).

Musk mallow (ambrette, Abelmoschus moschatus, Malvaceae/Malvales/Dilleniidae) is a mallow plant with aromatic seeds. There is constant rumour of it being used as a coffee flavourant, but I don't even know where this usage is supposed to happen.

Vietnamese balm. Butterfly pea.

Vietnamese balm (Lamiaceae) plays some rôle in Southern Vietnam (rau kinh giới) as part of the canonical herb garnish (see Vietnamese coriander).

Butterfly pea (Clitoria ternatea, Fabaceae) has large, deeply blue flowers that are used to give a bluish hue to desserts in Thailand (anchan, anjan [อัญชัน]) and Malaysia (bunga telang). In our days, it is mostly substituted by synthetic food colourants.

Broadleaf thyme (Cuban oregano, Indian borage, Mexican mint, Plectranthus amboinicus, Coleus amboinicus, Lamiaceae) is a herb native to South East Asia, though it has been introduced to the Caribbean. The leaves possess a strong odour due to an essential oil rich in carvacrol. The fresh herb is used in Indonesia (daun jinten), but especially in Vietnam (rau day tan las [rau tần dầy lás]), as a garnish.

Quite rarely, I have read reports claiming that the pungent seeds of some members of the Araceae family (e. g., Giant Elephant's Ear, Colocasia gigantea) are used as a pepper surrogate in South East Asia.

The fruits of the tree Garcinia atroviridis (Clusiaceae/Theales/Dilleniidae) are used as a source of acidity, especially in Malaysia (asam gelugur), similar to the use of other Garcinia species in South India and Sri Lanka.

The candlenut tree (Euphorbiaceae/Euphorbiales/Dilleniidae) yields seeds (candle nut, kemiri) which are a very common although bland ingredient of Indonesian spice pastes. See also lemon grass about spice mixtures containing candlenuts.

Pangi seeds (shelled and unshelled).

A quite interesting spice is derived from the Indonesian pangi or kepayang tree, Pangium edule (Flacourtiaceae/Violales). The seeds, known as kluak or kluwak in Indonesian and as pamarassan in bahasa toraja, are an ingredient typical for a few Indonesian local cuisines, e. g. in East Jawa and Central Sulawesi. They provide a dark colour, an intensive nutty taste and a smooth, somewhat oily texture.
For flavour development and removal of hydrocyanic acid, the seeds need a fermentation procedure, by which they turn from cream-coloured to almost black.

Sandalwood (Santalum album, Santalaceae/Santalales/Rosidae) is the heartwood of a parasitic tree native to the Lesser Sunda Islands, probably Timor. Today much of it is grown in Southern India and used for incense. Though powerfully fragrant, it has never been used much for cooking.

The whole East Asian region is dominated by Chinese culture. Chinese cookery is very varied and highly sophisticated; it has influenced all East Asian cuisines, and is also an important contribution to all South East Asian culinary styles. Chinese cuisine derives its attraction not so much from different spices, but from a multitude of meat and vegetable ingredients with different flavours, shapes, colours and textures, and from a wealth of standardized cooking and frying methods; the only common spice mixture is the famous five spice powder (wu xiang fen [五香粉], see star anise), which is frequently used to flavour fried meat all over China. Soy sauce (jiang you [酱油]) is the most important condiment in China, but to prepare authentic Chinese foods, other soy products are also needed, for example sweet bean paste (haixian jiang [海鲜酱], better known by its Cantonese name hoisin jeung [海鮮醬]), hot bean paste (douban jiang [豆瓣酱]) and fermented black beans (dou chi [豆豉]).

The least spicy cooking style in China is Cantonese cuisine, which is native to the Guangdong province [广东, 廣東]. The name Cantonese derives from the provincial capital Guangzhou [广州, 廣州], which was formerly known as Canton in the West. Cantonese cuisine has a reputation for its exotic meat dishes made from dogs, cats, monkeys and snakes. It is also known for a variety of barbecued meats (siu mei [燒味], Mandarin shao wei [烧味]), for example spare ribs (cha siu [叉燒], often spelled char siu in the West, Mandarin cha shao [叉烧]). A famous Cantonese food term is dim sam [點心] (in English also spelt dim sum), which is not a dish but a light meal composed of a selection of small dishes; a most popular choice are meat-stuffed dumplings made from ground pork, chicken or shrimps with light yet subtle flavourings. Outside of Guangdong, the term has mainly come to mean a variety of such steamed pasta. Though Cantonese in origin, dim sam is now enjoyed all over China (Mandarin dian xin [点心]).

By tradition, fiery food is rather uncommon in China, except in two Central Chinese provinces: Hunan [湖南, 湘] and Sichuan (Szechwan) [四川, 川], which is also known as Tian-fu [天府] (heavenly province or land of plenty). In both of these provinces, but especially in Sichuan, chiles, garlic and aromatic sesame oil are popular. An important flavouring of Central Chinese cookery is red hot bean paste, doubanjiang [豆瓣酱], made from fermented broad beans. Due to domestic migration, spicy Sichuan and Hunan foods have recently become available and popular in wider parts of China. In contrast, the cuisine of the mountainous Yunnan province [云南, 雲南] has not yet attracted much interest, though it is spicy and related to Sichuan cuisine.

North-Eastern Chinese cooking is usually termed the Shanghai [上海] style. It is particularly rich and often uses sweet flavours. A typical motif of Shanghai cooking is the use of rice wine (liao jiu [料酒]). Red-braising (hongshao [红烧]) is a cooking technique that originated in Shanghai, although it is today commonly found all over China.
The fourth and last Great Cuisine is the Northern Beijing [北京] style, which has a large repertoire of baked foods (a Central Asian influence) and uses more wheat than rice for climatic reasons. Two signature dishes are Beijing duck (beijing kao ya [北京烤鸭]) and Mongolian hotpot (meng-gu huo-guo [蒙古火锅]). Furthermore, sweet and sour dishes are popular: fish or meat is battered, deep-fried and served with a sweet–sour sauce (tangcu [糖醋], sugar and vinegar).

A handful of Chinese dishes are briefly discussed on this site: see ginger on gong bao [宫保] (stir-fried chicken with peanuts in Sichuan style), orange on au larm (Sichuan braised beef), Sichuan pepper on shui zhu niu rou [水煮牛肉] (Sichuan water-boiled beef) and chile on mapo doufu [麻婆豆腐] (bean cheese with ground pork in spicy sauce). See also star anise about five-spice powder (wu xiang fen [五香粉]) and cassia on red braising (hongshao [红烧]) and cooking in master sauce (lu shui [鹵水]).

Cuisine in Japan restricts itself to utmost simplicity with respect to spices: only Sichuan pepper (more precisely, a closely related Japanese species) is used as a condiment, either alone or mixed with tangerine or orange peel and chiles in the form of the spice mixture shichimi tōgarashi [七味 唐辛子]. Japanese dishes thus owe most of their flavour to their ingredients, whose freshness and skilful preparation are crucial, and furthermore to dried sea grass and kelp, several different soy products (e. g., soy sauce shōyu [醤油, しょうゆ]) and other fermented crops (miso [味噌, みそ]). A pungent root, wasabi, is served as a green paste with raw fish (sashimi [刺身, さしみ]) and rice bits (sushi [寿司, すし]); several herbs (water pepper, perilla and the young leaves of Sichuan pepper) are used both for flavour and as decoration.

In sharp contrast, the cuisine of Korea, the easternmost country of East Asia, is fiery and pungent, dominated by chiles, toasted sesame seeds and garlic; pickled vegetables (kim chi [김치]), both spicy and sour, are also very popular. Soy bean paste (den jang [된장], also spelled doen jang or doin jang), similar to Japanese miso, and bean-chile paste (gochu jang [고추장], also spelled kochu jang) are essential flavourings. In both Korea and Japan, fresh spring onions are a common garnish.

There are some further local herbs and spices that are occasionally used. For example, Chinese cuisine utilizes several local onion species (see chives); for Sichuan, particularly, cookbooks mention local Himalaya herbs but don't give any clear identification. We should also note the following:

Ginseng (Panax ginseng, Araliaceae/Araliales) is mainly known as an expensive herb in traditional Chinese medicine, and as a flavouring for alcoholic drinks. Nevertheless, it is also used as a culinary spice, especially in Korea.

Camphor has long been an important aromatic, although it has never been much used for cooking. Yet in China, camphor has been used in the past for flavouring frozen desserts, and even now it is sometimes part of smoking mixtures, giving rise to specialties like camphor-tea duck (zhang cha ya zi [樟茶鸭子]). There are two different products commonly named camphor: the better-known Chinese or Japanese camphor (from Cinnamomum camphora, Lauraceae) is composed of 2-bornanone and generally considered much inferior to the much more pricey Sumatra camphor or camphor of Baros (from Dryobalanops aromatica, Dipterocarpaceae/Malvales/Dilleniidae), which is mostly composed of borneol.

Japanese cuisine uses the fresh leaves of mitsuba [ミツバ, みつば] (Cryptotaenia japonica, Apiaceae) as a culinary herb.
Fresh leaves are chopped and sprinkled over soups or salads. In Chinese, the herb is known as ya er qin [鸭儿芹].

Few African spices have ever become known in the West. Personally, I know only four, of which sesame's origin is uncertain. During the Age of Exploration, the former two (from West Africa) were traded as a cheap substitute for black pepper, until the sea route to India was established. Later, people lost interest in them and they are now nearly forgotten (and difficult to obtain). Silphion is the name of a legendary spice in ancient Rome, which was so popular that it became extinct in the early Imperial era. Its botanical classification is subject to debate. Tamarind probably stems from East Africa, but is nowadays grown in tropical climates all over the world and is an important ingredient in Asian and Latin American cuisine. Sesame is one of the most important oil seeds of mankind, yet little of the crop is used as a spice. Specialties containing sesame are found all over the Old World, from Europe to Korea.

Today's African cooking is dominated by Arabic influences, mostly so in the North and East, where Islâm prevails. In the South, there is much colonial influence, both from European colonists and from immigrants from India and Malaysia. East Africa has absorbed Arabic and Indian cooking techniques and developed a unique cuisine by blending foreign influences with local traditions. Cooking in West and Central Africa has conserved its distinct character and is hardly comparable to any other culinary style. In West Africa, e. g., in Nigeria, Cameroon, Ghana and Benin, food is often very pungent due to the use of extra-hot chiles that have been imported from the Caribbean. Other important flavourings are dried fish products, smoked meats and toasted peanuts; the typical cooking medium is unrefined palm oil (from the oil palm), whose flavour also contributes significantly to the character of West African cooking. Furthermore, a number of local spices are used that are, however, hardly available outside the region (except grains of paradise and, if one is very lucky, negro pepper). In North Africa, however, subtle spice mixtures based on cumin and coriander dominate, and aromatic Asian spices are popular. See cubeb pepper about the exceedingly complex mixture ras el hanout. Arabic or Indian influence is manifest in spice mixtures like Tunisian gâlat dagga (see grains of paradise) and Ethiopian berbere (see long pepper).

Quite a few spices from other continents are grown in today's tropical Africa, where they are mostly planted as cash crops and exported. Nigeria, for instance, is a large producer of ginger. The tiny but fertile islands east of Africa are sources of several of the finest spices for European consumers: Réunion (formerly known as Bourbon) exports vanilla and allspice, and Zanzibar has long outgrown Indonesia as the major clove-producing country.

I don't know much about other native African spices, which of course does not mean that they do not exist. For example, various Pelargonium species are native to South Africa; they are often referred to as scented geraniums but belong not to the genus Geranium but to Pelargonium, which is closely related but distinct (Geraniaceae/Geraniales). These herbs have an amazing spectrum of different flavours, most often lemony or rose-like floral, but there are also types with a fragrance resembling mint, cinnamon and even nutmeg. Nevertheless, these astonishing plants have not yet found much application in cooking, although a few varieties are grown for the perfume industry.
Also in West Africa, the potentials of indigenous spices have not yet been exploited. Most of the native West African spices are unavailable in the rest of the world. In some cases, like the akob bark and felom fruits (seeds?), I don’t even know the botanical identity. Some more West African spices are mentioned in the below list. Several species of genus b (Zingiberaceae) yield edible fruits and pungent seeds, i, i and i ( mbongo spice) See also grains of paradise The related genus b also has representatives growing in the tropic belt from Senegal to Ethiopia which are used locally. Some of these have been traded as cardamom adulterants or surrogates in the past. See also black cardamom. Furthermore, there are African pepper species like Piper clusii (see cubeb pepper). Another source of pungent flavour might be found in the numerous indigenous b species (Rutaceae) found in tropical Africa, but the literature is scarce (see Sichuan pepper about Asian relatives). Calabash nutmeg leaf Calabash nutmegs Calabash nutmeg is the seed of Monodora myristica (Annonaceae) which was a common surrogate for nutmeg in 16.th century Europe; today, the species is also grown on Jamaica. However, I do not know about usage of calabash nutmegs in contemporary African or Caribbean cuisines. The oily seeds of the tree Ricinodendron heudelotii (Euphorbiaceae/Euphorbiales/Dilleniidae ) have a characteristic, strong flavour and are used as a spice and thickener for sauces (local names njangsa , njasang ). b or bush mango is the fruit of the jungle tree i and the related species i (Irvingiaceae/Sapindales/Rosidae ); there is only a loose botanical relationship to mango. The seeds, dried and ground, are known as i and lend a sticky texture and presumably some flavour to West African chicken stews (sauces). b [ኮሰርት] is the Amharic name of the herb i (Verbenaceae) which is used as a culinary spice in Ethiopia. It figures prominently in i [ክትፎ], raw ground beef flavoured with spiced butter. Most Ethiopian cookbooks silently replace it by basil. See also long pepper about the spice mixture berbere. Roselle (red sorrel, Hibiscus sabdariffa , Malvaceae/Malvales/Dilleniidae , Arabic karkadi [كركديه]) is the purple, dried calyx of a plant related to the popular ornamental hibiscus species. A refreshing acidic beverage prepared from the calyces is quite popular in parts of Northern and Western Africa; more rarely, one reads about roselle calyces being used in salty food, e. g., Indian and Malaysian curries. The contribution of the two Americas to the list of spices is, unfortunately, rather short. This is not for lack of aromatic plants, but mostly for lack of information regarding native American spices in Europe. In the USA, due to immigration, Latin American spices are easier to get by, but few of them have found a permanent place in the spice shelf. Of course, there is this one American nightshade plant that revolutionized almost any cuisine in the world … Because in Northern America (the US and Canada) the cooking style is largely derived from and not very different from European cuisine, spice usage is generally rather low (exclude the Mexican-influenced cuisine of the Southern states of the US from this statement). Currently, there is only one plant native to North America treated on these pages: Sassafras (filè) has great though only regional importance in New Orleans cooking. Allspice was introduced to Europe from the Caribbean islands; its alternative name i indicates its origin from the New World. 
Vanilla is native to México and has been used for flavouring a chocolate-like drink since Aztec times. A culinary herb native to México is epazote. Toasted pumpkin seeds are an ancient flavouring of Central American peoples that goes back to pre-Columbian times; yet extraction of oil from toasted pumpkin seeds, as practiced in Central Europe, is a much more recent invention. From South America stem annatto seeds, much used locally, and pink pepper, a spice that became popular during the past decades in the nouvelle cuisine. Further South American spices are tonka beans and paracress, which have, however, found only limited use outside of South America. Lemon verbena is another spice generally underrated. The most important spice of both Americas are, however, chiles and bell peppers, which are both thought to be native to the Amazon region, but have been traded extensively as far north as the southern states of today’s USA before the arrival of the Europeans. Today, they are high valued in all tropical countries of America, Asia and Africa. Some more interesting plants from North, Central and South America are, unfortunately, not yet treated on this page. Some of these are: Few plants of Australia have ever gained economical importance, macadamia nuts (Macadamia integrifolia and M. tetraphylla , Proteaceae/Proteales/Rosidae ) being the chief example. There are, however, plenty of aromatic plants, some of which might gain some importance in the cuisines to come. Both spices are currently hardly known (less used) outside Australia, but in our global world, these things may change quickly. Note that in Australia, there are more indigenous flavourings that can be considered spices: The dried tiny berries of b (i , Solanaceae) have a complex taste not altogether dissimilar to Italian sun-dried tomatoes, although less fruity and more spicy. Another candidate is the so-called b, dried and roasted seeds of various i species, i, i , i and i (Mimosaceae/Fabales ). Both plants have a long record of indigenous usage by Aborigines. I know of no spices originating from Oceania, but on Tahiti, a relative of vanilla is grown. The origin of coconut was long a matter of scientific dispute, but it has now been shown that the plant actually stems from Asia.
53
How to Navigate Being the Only Woman on Your Team
Jobs from companies in this blog Colorado startup guides LOCAL GUIDE Best Companies to Work for in Denver & Boulder View LOCAL GUIDE Coolest Tech Offices in Denver & Colorado Tech View LOCAL GUIDE Best Perks at Colorado Tech Companies View LOCAL GUIDE Women in Colorado Tech View See All Guides Built In offers a paid for Premium Partnership wherein we work with employers to highlight their cultures, values and job openings. Built In maintains full editorial control.Learn More
1
Why an Electric Car Battery Is So Expensive, for Now
To continue, please click the box below to let us know you're not a robot.
1
Ecommerce Revenue
Buy the premium report to read about detailed experience in e-commerce for $4.58 Every year ecommerce sales have increased. New platforms have led to more online stores from everyday people. Services that aid online sales have added to ecommerce revenue too. Well-known brands that allow consumers to order items online. These businesses have built trust with consumers, giving them an advantage. Some Big Brands allow people to create their own storefront. Those running hosted stores do not have to worry about generating traffic. In a self-hosted store, the seller controls and owns everything. A seller might make less money but will take home more because there are fewer fees. There is a difference between e-commerce and digital art. People can buy anything from an e-commerce store, such as PDFs, videos, or physical products. The products in an e-commerce store have a market price. If I sell shirts, people will buy them for $20-$30. But if I raise the price to $200 people will not buy them. Since other shirts’ get priced at $20-$30. Digital art uses digital technology as part of the creation process. At the same time, there is no market price. Art will sell for anything from $50 to $69 million. Some store owners optimize their pages to show up in search results. They use SEO to bring traffic in for free. A pop-up might ask for a consumer’s email address when they enter the store. The store owner wants to send consumers emails to come back to the store to buy products. Sometimes the emails that get sent out have coupons. The coupons provide more incentive for consumers to make purchases. If a store sells products for $50 and it costs them $5 to make, they might send consumers a $2 coupon. The store is better off losing the $2 than the entire sale. Sellers post their products on social media. A few people click on the links then, the seller can get their email address to send coupons. A seller can automate promoting the store by hiring an influencer. The influencer will post the product and bring their audience to the store. Another type of ad is Pay Per Click. For the PPC model, sellers pay a platform to host their ads. Every time a consumer clicks on the ad, the seller has to pay. Some people have an ecommerce store that are unsure about best practices. A company can offer to optimize the store to improve average revenue. The company can improve SEO, product pages, and images. Which will bring in new customers. Many new sellers will pay someone to increase online sales. The service can offer a revenue share agreement to reluctant sellers. Some people hate dealing with shipping and handling. I am some people.😂 These people might want to pay a service to ship physical products for them. Only if the service picks up the products and handles all shipping elements then bill them. Big shipping companies might have this as a service now. I hate shipping and handling to the point that I refuse to sell any physical products!! Giving websites access to a consumer’s credit card signals trust. The consumers trust the store will take care of their information. Consumers have trouble knowing if they can trust a store with their information. Stores can slap on any image or review, and people will think they can trust the store. But consumers need a third party to help verify if a store is trustworthy. A company can buy products from a store and verify with consumers that a store is trusted. Which will save consumers time and money. Does anyone ever know the location of their package? 
My tracking information will say, “Your package is in Texas and will get to you in 4 days.” Then, 6 days later, the package shows up. Tracking packages has room for improvement. Shipping adds to traffic on the road. How much time gets added to the average commute because someone needed their Oreos in 24 hours? Why is there not a shipping method that creates less traffic? Many boxes get shipped by plane. The boxes do not need to go from plane to car. An idea I have is to create a network from the airport. Each neighborhood would have a hub where the packages get delivered. The packages from the airport go to their hubs from a sky tram. A person would need to stay in the tram to put the packages in their hub. Airports and mail services can work together with public transit to deliver packages. The packages go from the airport then get attached to buses or trains. At every stop, someone puts the packages in their hub. Mail services lose thousands of packages every month. Some lost packages might have items worth thousands of dollars. The consumers that never got their packages might pay someone to find their package. A package-private investigator, if you will😂 When selling certain products, the physical element needs consideration. Not everyone will buy a couch online right away. They might want to see and feel it before buying. An opportunity in ecommerce is a physical showroom. A location for consumers to go to see and feel products before they buy them online. Every order comes with the packaging. The order has the box, protective material, and the packaging of the product. Most of the packaging gets thrown away right after arrival. The packaging gets made with cheap plastic that will get broken down into tiny pieces and go to a landfill. This type of packaging hurts the environment. There are materials to make packaging that would harm the environment less. Any platform used to sell products will charge for the right. They might charge a percentage for each transaction or monthly fees. I always discuss transaction fees. Here’s my take on them from Digital Art, “If it is a transaction on the internet, there is no avoiding the 3% credit card fees. An artist will have to pay the fees for the art that gets purchased.” Sellers need to spend money to find consumers to buy their products. A seller will need to pay an email provider if the list has over 1,000 emails. If a seller wants ad space, they need to pay the platform. Or pay an influencer for a post. Physical products need to get packaged up and shipped. The materials cost money, and the seller needs to take the time to ship them. All of this cuts into revenue. Digital products get uploaded to ecommerce platforms. A few platforms will charge to allow sellers to store files on their servers. This expense might depend on the size of the files. If the file is small, ~ 5 MB, they will not charge. But for a 50 MB file, they will charge. In some cases, the platforms do not charge. But they “ask” that a seller upgrades to the next tier. The internet has many products available for purchase. If someone searched “podcasting book,” they would find 14 pages of links. To get consumers to buy my podcasting book, I need to make the book stand out from those other links. If not, I will lose potential revenue. If a seller uses a hosted store, then they compete with every store on the platform. These stores compete on price. A self-hosted store needs to generate traffic on its own. Generating traffic can get challenging. 
Some sellers might not know where to find their consumers. In this case, they should find people that have created my revenue ideas. Once sellers get consumers to buy products, they want consumers to buy more. But some consumers do not need to come back. How many products does one consumer need for their phone? A few chargers, a case, and a screen protector. For the sellers that offer phone products, what can they do? They can increase prices. Which will bring up the customer lifetime value and average order value. But that does not bring them back. Many consumers will put products in their cart and do not checkout. The concept gets called cart abandonment. One method to decrease abandoned carts is sending coupons. Although, the coupons only work if the consumer has an account. Every step of the ecommerce process adds to a consumer’s carbon footprint. When a consumer orders a product online, they use a laptop or phone. The data created gets stored in a data center. A data center uses water and adds pollution. Then, the product itself adds to a consumer's carbon footprint. The product needs to get shipped to them. Some products get flown to consumers too. Once the consumer has the product, how long will they use it? Until next year when the product becomes obsolete? Or next season when the fashion companies come out with their new line? The product comes in cheap plastic packaging with a box. The packaging gets broken down into tiny plastic pieces because it is difficult to open. Those pieces will go to a landfill, and a bird will eat them, harming the ecosystem. I am not saying that every step in the process needs to get more eco-friendly. But there is room for improvement in this chain of events. The packaging is one step. Or stop buying garbage that will not get used 6 months later. Selling physical products takes extra time. The seller needs to package, ship and put a stamp on it. The price of the product should reflect the extra work. When I create a product, my goal is to make money while I sleep. Physical products do not allow me to do that. If the page does not load then, potential customers will leave. Sellers should keep the tech stack of the platform in mind. They might lose customers because the page does not load. Some platforms optimize page speed. Big Brands mass produce their products. This allows them to have a low unit price to maximize their profit margin. Small ecommerce stores cannot compete on price with Big Brands. Small brands can win customers over with personalized products. Big Brands cannot personalize products because they mass produce. If a store gets less than 100 orders a month, they can handle personalized orders. They can give consumers the ability to design their own decorations, jewelry, or clothing. Some customers might like to design their own products. A Christmas tree farm can allow customers to choose the fabric color or lights on the wreaths. A store can charge more for personalized products and increase profit margins. There are many reasons ecommerce businesses provide value. The main selling point is that ecommerce allows customers to buy items they cannot buy in stores. Consumers can buy vintage clothes or an out-of-print book. The value for sellers is that ecommerce lowers barriers. Anyone can create a product and sell it. For example, to get a book published, someone might need a book deal in the past. Now, someone can write, edit, and promote the book themselves. Perishable goods create a barrier for online shopping. 
Some people like to see their perishable goods before buying them. Since they do not want to buy perishable goods online, they avoid online shopping. I refuse to buy ice cream online because I am not sure how shipping works. I might buy some to see how it goes. Ecommerce unlocked many opportunities for sellers. The barriers broken down give people the chance to get creative and make money. The more online purchases grow the more opportunities will arise.
2
Alexander Graham Bell Goes and Flies a Kite – For Science
When he was 29 years old, Alexander Graham Bell patented the telephone—a claim that is reportedly one of the most lucrative ever filed in the U.S. Patent Office. Not long after, the young inventor lost interest in the device and put his growing wealth toward other pursuits—such as giant kites capable of lifting people off the ground. “It is fortunate for those interested in aeronautics and the exploration of the air that Professor Alexander Graham Bell has joined the band of experimenters and is lending his inventive genius to the cause,” wrote meteorologist Henry Helm Clayton, one of Bell’s admirers, in 1903. The goal of flying people on kites was hundreds of years old. But the late 19th- and early 20th-century work evolved directly into the planes we have today. A crucial step in the Wright brothers’ first successful powered flight in 1903 depended on their realization that a kite’s wings could be warped as the craft flew. Bell and his team, called the Aerial Experiment Association, ultimately focused their kite designs on tetrahedrons, or pyramids made of four triangles, and biplane structures, several of which used red silk. When he died, Bell’s coffin was lined with the red silk. Circular kite composed of smaller tetrahedral shapes that was built by Bell and his team. The triangular design helped the researchers disprove skeptics who thought kites composed of many identical structures could ever lift someone off the ground. Bell kite composed of triangular sections. The original rectangular box kites needed internal bracing to keep their shape while flying, which added dead weight. Bell’s idea to use the stronger, self-bracing triangle shape made for durable but light kites. Alexander Graham Bell and his wife Mabel Bell kissing through a kite structure. Mabel Bell was integral to her husband’s work. She advocated for him to assemble the Aerial Experiment Association. She even sold a home she owned to front the costs of putting the group together. Alexander Graham Bell’s designs for tetrahedral kites, which grew large enough to hold a human aloft. One of his first passengers, Lieutenant Thomas Selfridge, later became the first person to die in an airplane accident when working with the Wright brothers. Tetrahedral kite. Unlike traditional rectangular box kites, Bell’s tetrahedral shape could make increasingly larger structures, such as this 64-celled model. Aggregated rectangles increased kite weight faster than they expanded wing surface area. Tetrahedrons kept the ratio nearly constant. One of Bell’s tetrahedral kites, towed on water. With kites, taking off and landing, particularly with people onboard, were difficult parts of the flying process. Bell’s team thought that an aquatic runway would be less dangerous and launched a series of kites—Cygnet I, II and III—this way. Bell team kite made of triangular cells. The researchers moved on from triangular to tetrahedral kites because of the design hurdle seen here. Triangle-based kites had to be arranged in two sections connected by wood—deadweight that tetrahedral designs avoided. Relatively small tetrahedral kite in flight. The boom in kite innovation and subsequent engine-powered flight led to the founding of the National Advisory Committee for Aeronautics in 1915. The agency turned over its operations to NASA in 1958. Onlookers watch a flight. Early flight researchers had rocky relationships with the public and press—a theme that extended to Robert H. Goddard, the inventor of the liquid-fueled rocket. 
Goddard first proposed launching such a rocket to the moon in 1920, which earned him heavy criticism from newspapers. Science in Images
2
Unreliable sources increased % of social media interactions in 2020
Dec 22, 2020 - "Unreliable" news sources got more traction in 2020 , author of Data: NewsGuard; Chart: Sara Wise/Axios Unreliable news websites significantly increased their share of engagement among the top performing news sources on social media this year, according to a new analysis from NewsGuard provided to Axios. Why it matters: Quality filters from Big Tech platforms didn’t stop inflammatory headlines from gaining lots of traction, especially from fringe-right sources. By the numbers: In 2020, nearly one-fifth (17%) of engagement among the top 100 news sources on social media came from sources that NewsGuard deems generally unreliable, compared to about 8% in 2019. NewsGuard found that its top rated "unreliable" site, The Daily Wire, saw 2.5 times as many interactions in 2020 as 2019. Bongino.com increased engagement by more than 1700% this year. How it works: NewsGuard uses trained journalists to rate thousands of news and information websites. It uses a long list of criteria, like whether the news site discloses its funding or repeatedly publishes content deemed false by fact-checkers, to determine whether sites are credible or unreliable. The big picture: Engagement from the top 100 U.S. news sources on social media nearly doubled from the first eleven months of 2019 compared to the same period in 2020, the study found. That's not surprising given the major events swallowing the news cycle this year, including the election, COVID-19 and the Black Lives Matter protests. But the report, which was created using data from social intelligence company NewsWhip, shows that low-quality news sources tend to flourish amid lots of breaking news cycles, where a lack of certainty can be exploited. Flashback: Earlier this year, an investigation from NewsGuard found that the vast majority of Facebook groups that were "super-spreaders" of election-related misinformation were affiliated with right-wing movements, including pages like Gateway Pundit, Viral Patriot and MAGA Revolution.
109
A Pedometer in the Real World (2015)
Dessy is an engineer by trade, an entrepreneur by passion, and a developer at heart. She's currently the CTO and co-founder of Nudge Rewards When she’s not busy building product with her team, she can be found teaching others to code, attending or hosting a Toronto tech event, and online at dessydaskalov.com and @dess_e. Many software engineers reflecting on their training will remember having the pleasure of living in a very perfect world. We were taught to solve well-defined problems in idealized domains. Then we were thrown into the real world, with all of its complexities and challenges. It's messy, which makes it all the more exciting. When you can solve a real-life problem, with all of its quirks, you can build software that really helps people. In this chapter, we'll examine a problem that looks straightforward on the surface, and gets tangled very quickly when the real world, and real people, are thrown into the mix. We'll work together to build a basic pedometer. We'll start by discussing the theory behind a pedometer and creating a step counting solution outside of code. Then, we'll implement our solution in code. Finally, we'll add a web layer to our code so that we have a friendly interface for a user to work with. Let's roll up our sleeves, and prepare to untangle a real-world problem. The rise of the mobile device brought with it a trend to collect more and more data on our daily lives. One type of data many people collect is the number of steps they've taken over a period of time. This data can be used for health tracking, training for sporting events, or, for those of us obsessed with collecting and analyzing data, just for kicks. Steps can be counted using a pedometer, which often uses data from a hardware accelerometer as input. An accelerometer is a piece of hardware that measures acceleration in the \(x\), \(y\), and \(z\) directions. Many people carry an accelerometer with them wherever they go, as it's built into almost all smartphones currently on the market. The \(x\), \(y\), and \(z\) directions are relative to the phone. An accelerometer returns a signal in 3-dimensional space. A signal is a set of data points recorded over time. Each component of the signal is a time series representing acceleration in one of the \(x\), \(y\), or \(z\) directions. Each point in a time series is the acceleration in that direction at a specific point in time. Acceleration is measured in units of g-force, or g. One g is equal to 9.8 \(m/s^2\), the average acceleration due to gravity on Earth. Figure 16.1 shows an example signal from an accelerometer with the three time series. The sampling rate of the accelerometer, which can often be calibrated, determines the number of measurements per second. For instance, an accelerometer with a sampling rate of 100 returns 100 data points for each \(x\), \(y\), and \(z\) time series every second. When a person walks, they bounce slightly with each step. Just watch the top of a person's head as they walk away from you. Their head, torso, and hips are synchronized in a smooth bouncing motion. While people don't bounce very far, only one or two centimeters, it is one of the clearest, most constant, and most recognizable parts of a person's walking acceleration signal. A person bounces up and down, in the vertical direction, with each step. If you are walking on Earth (or another big ball of mass floating in space) the bounce is conveniently in the same direction as gravity. 
We are going to count steps by using the accelerometer to count bounces up and down. Because the phone can rotate in any direction, we will use gravity to know which direction down is. A pedometer can count steps by counting the number of bounces in the direction of gravity. Let's look at a person walking with an accelerometer-equipped smartphone in his or her shirt pocket (Figure 16.2). For the sake of simplicity, we'll assume that the person: In our perfect world, acceleration from step bounces will form a perfect sine wave in the \(y\) direction. Each peak in the sine wave is exactly one step. Step counting becomes a matter of counting these perfect peaks. Ah, the joys of a perfect world, which we only ever experience in texts like this. Don't fret, things are about to get a little messier, and a lot more exciting. Let's add a little more reality to our world. The force of gravity causes an acceleration in the direction of gravity, which we refer to as gravitational acceleration. This acceleration is unique because it is always present and, for the purposes of this chapter, is constant at 9.8 \(m/s^2\). Suppose a smartphone is lying on a table screen-side up. In this orientation, our coordinate system is such that the negative \(z\) direction is the one that gravity is acting on. Gravity will pull our phone in the negative \(z\) direction, so our accelerometer, even when perfectly still, will record an acceleration of 9.8 \(m/s^2\) in the negative \(z\) direction. Accelerometer data from our phone in this orientation is shown in Figure 16.3. Note that \(x(t)\) and \(y(t)\) remain constant at 0, while \(z(t)\) is constant at -1 g. Our accelerometer records all acceleration, including gravitational acceleration. Each time series measures the total acceleration in that direction. Total acceleration is the sum of user acceleration and gravitational acceleration. User acceleration is the acceleration of the device due to the movement of the user, and is constant at 0 when the phone is perfectly still. However, when the user is moving with the device, user acceleration is rarely constant, since it's difficult for a person to move with a constant acceleration. To count steps, we're interested in the bounces created by the user in the direction of gravity. That means we're interested in isolating the 1-dimensional time series which describes user acceleration in the direction of gravity from our 3-dimensional acceleration signal (Figure 16.4). In our simple example, gravitational acceleration is 0 in \(x(t)\) and \(z(t)\) and constant at 9.8 \(m/s^2\) in \(y(t)\). Therefore, in our total acceleration plot, \(x(t)\) and \(z(t)\) fluctuate around 0 while \(y(t)\) fluctuates around -1 g. In our user acceleration plot, we notice that—because we have removed gravitational acceleration—all three time series fluctuate around 0. Note the obvious peaks in \(y_{u}(t)\). Those are due to step bounces! In our last plot, gravitational acceleration, \(y_{g}(t)\) is constant at -1 g, and \(x_{g}(t)\) and \(z_{g}(t)\) are constant at 0. So, in our example, the 1-dimensional user acceleration in the direction of gravity time series we're interested in is \(y_{u}(t)\). Although \(y_{u}(t)\) isn't as smooth as our perfect sine wave, we can identify the peaks, and use those peaks to count steps. So far, so good. Now, let's add even more reality to our world. What if a person carries the phone in a bag on their shoulder, with the phone in a more wonky position? 
To make matters worse, what if the phone rotates in the bag part way through the walk, as in Figure 16.5? Yikes. Now all three of our components have a non-zero gravitational acceleration, so the user acceleration in the direction of gravity is now split amongst all three time series. To determine user acceleration in the direction of gravity, we first have to determine which direction gravity is acting in. To do this, we have to split total acceleration in each of the three time series into a user acceleration time series and a gravitational acceleration time series (Figure 16.6). Then we can isolate the portion of user acceleration in each component that is in the direction of gravity, resulting in just the user acceleration in the direction of gravity time series. Let's define this as two steps below: We'll look at each step separately, and put on our mathematician hats. We can use a tool called a filter to split a total acceleration time series into a user acceleration time series and a gravitational acceleration time series. A filter is a tool used in signal processing to remove an unwanted component from a signal. A low-pass filter allows low-frequency signals through, while attenuating signals higher than a set threshold. Conversely, a high-pass filter allows high-frequency signals through, while attenuating signals below a set threshold. Using music as an analogy, a low-pass filter can eliminate treble, and a high-pass filter can eliminate bass. In our situation, the frequency, measured in Hz, indicates how quickly the acceleration is changing. A constant acceleration has a frequency of 0 Hz, while a non-constant acceleration has a non-zero frequency. This means that our constant gravitational acceleration is a 0 Hz signal, while user acceleration is not. For each component, we can pass total acceleration through a low-pass filter, and we'll be left with just the gravitational acceleration time series. Then we can subtract gravitational acceleration from total acceleration, and we'll have the user acceleration time series (Figure 16.7). There are numerous varieties of filters. The one we'll use is called an infinite impulse response (IIR) filter. We've chosen an IIR filter because of its low overhead and ease of implementation. The IIR filter we've chosen is implemented using the formula: \[ output_{i} = \alpha_{0}(input_{i}\beta_{0} + input_{i-1}\beta_{1} + input_{i-2}\beta_{2} - output_{i-1}\alpha_{1} - output_{i-2}\alpha_{2}) \] The design of digital filters is outside of the scope of this chapter, but a very short teaser discussion is warranted. It's a well-studied, fascinating topic, with numerous practical applications. A digital filter can be designed to cancel any frequency or range of frequencies desired. The \(\alpha\) and \(\beta\) values in the formula are coefficients, set based on the cutoff frequency, and the range of frequencies we want to preserve. We want to cancel all frequencies except for our constant gravitational acceleration, so we've chosen coefficients that attenuate frequencies higher than 0.2 Hz. Notice that we've set our threshold slightly higher than 0 Hz. While gravity does create a true 0 Hz acceleration, our real, imperfect world has real, imperfect accelerometers, so we're allowing for a slight margin of error in measurement. Let's work through a low-pass filter implementation using our earlier example. We'll split: We'll initialize the first two values of gravitational acceleration to 0, so that the formula has initial values to work with. 
\[x_{g}(0) = x_{g}(1) = y_{g}(0) = y_{g}(1) = z_{g}(0) = z_{g}(1) = 0\] Then we'll implement the filter formula for each time series. \[x_{g}(t) = \alpha_{0}(x(t)\beta_{0} + x(t-1)\beta_{1} + x(t-2)\beta_{2} - x_{g}(t-1)\alpha_{1} - x_{g}(t-2)\alpha_{2})\] \[y_{g}(t) = \alpha_{0}(y(t)\beta_{0} + y(t-1)\beta_{1} + y(t-2)\beta_{2} - y_{g}(t-1)\alpha_{1} - y_{g}(t-2)\alpha_{2})\] \[z_{g}(t) = \alpha_{0}(z(t)\beta_{0} + z(t-1)\beta_{1} + z(t-2)\beta_{2} - z_{g}(t-1)\alpha_{1} - z_{g}(t-2)\alpha_{2})\] The resulting time series after low-pass filtering are in Figure 16.8. \(x_{g}(t)\) and \(z_{g}(t)\) hover around 0, and \(y_{g}(t)\) very quickly drops to \(-1g\). The initial 0 value in \(y_{g}(t)\) is from the initialization of the formula. Now, to calculate user acceleration, we can subtract gravitational acceleration from our total acceleration: \[ x_{u}(t) = x(t) - x_{g}(t) \] \[ y_{u}(t) = y(t) - y_{g}(t) \] \[ z_{u}(t) = z(t) - z_{g}(t) \] The result is the time series seen in Figure 16.9. We've successfully split our total acceleration into user acceleration and gravitational acceleration! \(x_{u}(t)\), \(y_{u}(t)\), and \(z_{u}(t)\) include all movements of the user, not just movements in the direction of gravity. Our goal here is to end up with a 1-dimensional time series representing user acceleration in the direction of gravity. This will include portions of user acceleration in each of the directions. Let's get to it. First, some linear algebra 101. Don't take that mathematician hat off just yet! When working with coordinates, you won't get very far before being introduced to the dot product, one of the fundamental tools used in comparing the magnitude and direction of \(x\), \(y\), and \(z\) coordinates. The dot product takes us from 3-dimensional space to 1-dimensional space (Figure 16.10). When we take the dot product of the two time series, user acceleration and gravitational acceleration, both of which are in 3-dimensional space, we'll be left with a single time series in 1-dimensional space representing the portion of user acceleration in the direction of gravity. We'll arbitrarily call this new time series \(a(t)\), because, well, every important time series deserves a name. We can implement the dot product for our earlier example using the formula \(a(t) = x_{u}(t)x_{g}(t) + y_{u}(t)y_{g}(t) + z_{u}(t)z_{g}(t)\), leaving us with \(a(t)\) in 1-dimensional space (Figure 16.11). We can now visually pick out where the steps are in \(a(t)\). The dot product is very powerful, yet beautifully simple. We saw how quickly our seemingly simple problem became more complex when we threw in the challenges of the real world and real people. However, we're getting a lot closer to counting steps, and we can see how \(a(t)\) is starting to resemble our ideal sine wave. But, only "kinda, sorta" starting to. We still need to make our messy \(a(t)\) time series smoother. There are four main issues (Figure 16.12) with \(a(t)\) in its current state. Let's examine each one. \(a(t)\) is very "jumpy", because a phone can jiggle with each step, adding a high-frequency component to our time series. This jumpiness is called noise. By studying numerous data sets, we've determined that a step acceleration is at maximum 5 Hz. We can use a low-pass IIR filter to remove the noise, picking \(\alpha\) and \(\beta\) to attenuate all signals above 5 Hz. With a sampling rate of 100, the slow peak displayed in \(a(t)\) spans 1.5 seconds, which is too slow to be a step. 
In studying enough samples of data, we've determined that the slowest step we can take is at a 1 Hz frequency. Slower accelerations are due to a low-frequency component, that we can again remove using a high-pass IIR filter, setting \(\alpha\) and \(\beta\) to cancel all signals below 1 Hz. As a person is using an app or making a call, the accelerometer registers small movements in the direction of gravity, presenting themselves as short peaks in our time series. We can eliminate these short peaks by setting a minimum threshold, and counting a step every time \(a(t)\) crosses that threshold in the positive direction. Our pedometer should accommodate many people with different walks, so we've set minimum and maximum step frequencies based on a large sample size of people and walks. This means that we may sometimes filter slightly too much or too little. While we'll often have fairly smooth peaks, we can, once in a while, get a "bumpier" peak. Figure 16.12 zooms in on one such peak. When bumpiness occurs at our threshold, we can mistakenly count too many steps for one peak. We'll use a method called hysteresis to address this. Hysteresis refers to the dependence of an output on past inputs. We can count threshold crossings in the positive direction, as well as 0 crossings in the negative direction. Then, we only count steps where a threshold crossing occurs after a 0 crossing, ensuring we count each step only once. In accounting for these four scenarios, we've managed to bring our messy \(a(t)\) fairly close to our ideal sine wave (Figure 16.13), allowing us to count steps. The problem, at first glance, looked straightforward. However, the real world and real people threw a few curve balls our way. Let's recap how we solved the problem: We started with total acceleration, . We used a low-pass filter to split total acceleration into user acceleration and gravitational acceleration, \((x_{u}(t), y_{u}(t), z_{u}(t))\) and \((x_{g}(t), y_{g}(t), z_{g}(t))\), respectively. We took the dot product of \((x_{u}(t), y_{u}(t), z_{u}(t))\) and \((x_{g}(t), y_{g}(t), z_{g}(t))\) to obtain the user acceleration in the direction of gravity, . We used a low-pass filter again to remove the high-frequency component of , removing noise. We used a high-pass filter to cancel the low-frequency component of , removing slow peaks. We set a threshold to ignore short peaks. We used hysteresis to avoid double-counting steps with bumpy peaks. As software developers in a training or academic setting, we may have been presented with a perfect signal and asked to write code to count the steps in that signal. While that may have been an interesting coding challenge, it wouldn't have been something we could apply in a live situation. We saw that in reality, with gravity and people thrown into the mix, the problem was a little more complex. We used mathematical tools to address the complexities, and were able to solve a real-world problem. It's time to translate our solution into code. Our goal for this chapter is to create a web application in Ruby that accepts accelerometer data, parses, processes, and analyzes the data, and returns the number of steps taken, the distance travelled, and the elapsed time. Our solution requires us to filter our time series several times. Rather than peppering filtering code throughout our program, it makes sense to create a class that takes care of the filtering, and if we ever need to enhance or modify it, we'll only ever need to change that one class. 
This strategy is called separation of concerns, a commonly used design principle which promotes splitting a program into distinct pieces, where every piece has one primary concern. It's a beautiful way to write clean, maintainable code that's easily extensible. We'll revisit this idea several times throughout the chapter. Let's dive into the filtering code, contained in, logically, a Filter class. Anytime our program needs to filter a time series, we can call one of the class methods in Filter with the data we need filtered: Each class method calls filter, which implements the IIR filter and returns the result. If we wish to add more filters in the future, we only need to change this one class. Note is that all magic numbers are defined at the top. This makes our class easier to read and understand. Our input data is coming from mobile devices such as Android phones and iPhones. Most mobile phones on the market today have accelerometers built in, that are able to record total acceleration. Let's call the input data format that records total acceleration the combined format. Many, but not all, devices can also record user acceleration and gravitational acceleration separately. Let's call this format the separated format. A device that has the ability to return data in the separated format necessarily has the ability to return data in the combined format. However, the inverse is not always true. Some devices can only record data in the combined format. Input data in the combined format will need to be passed through a low-pass filter to turn it into the separated format. We want our program to handle all mobile devices on the market with accelerometers, so we'll need to accept data in both formats. Let's look at the two formats we'll be accepting individually. Data in the combined format is total acceleration in the \(x\), \(y\), and \(z\) directions, over time. \(x\), \(y\), and \(z\) values will be separated by a comma, and samples per unit time will be separated by a semi-colon. \[ x_1,y_1,z_1; \ldots x_n,y_n,z_n; \] The separated format returns user acceleration and gravitational acceleration in the \(x\), \(y\), and \(z\) directions, over time. User acceleration values will be separated from gravitational acceleration values by a pipe. \[ x^{u}_1,y^{u}_1,z^{u}_1 \vert x^{g}_1,y^{g}_1,z^{g}_1; \ldots x^{u}_n,y^{u}_n,z^{u}_n \vert x^{g}_n,y^{g}_n,z^{g}_n; \] Dealing with multiple input formats is a common programming problem. If we want our entire program to work with both formats, every single piece of code dealing with the data would need to know how to handle both formats. This can become very messy, very quickly, especially if a third (or a fourth, or a fifth, or a hundredth) input format is added. The cleanest way for us to deal with this is to take our two input formats and fit them into a standard format as soon as possible, allowing the rest of the program to work with this new standard format. Our solution requires that we work with user acceleration and gravitational acceleration separately, so our standard format will need to be split into the two accelerations (Figure 16.14). Our standard format allows us to store a time series, as each element represents acceleration at a point in time. We've defined it as an array of arrays of arrays. Let's peel that onion. The input into our system will be data from an accelerometer, information on the user taking the walk (gender, stride, etc.), and information on the trial walk itself (sampling rate, actual steps taken, etc.). 
Our system will apply the signal processing solution, and output the number of steps calculated, the delta between the actual steps and calculated steps, the distance travelled, and the elapsed time. The entire process from input to output can be viewed as a pipeline (Figure 16.15). In the spirit of separation of concerns, we'll write the code for each distinct component of the pipeline—parsing, processing, and analyzing—individually. Given that we want our data in the standard format as early as possible, it makes sense to write a parser that allows us to take our two known input formats and convert them to a standard output format as the first component of our pipeline. Our standard format splits out user acceleration and gravitational acceleration, which means that if our data is in the combined format, our parser will need to first pass it through a low-pass filter to convert it to the standard format. In the future, if we ever have to add another input format, the only code we'll have to touch is this parser. Let's separate concerns once more, and create a Parser class to handle the parsing. Parser has a class-level run method as well as an initializer. This is a pattern we'll use several times, so it's worth a discussion. Initializers should generally be used for setting up an object, and shouldn't do a lot of work. Parser's initializer simply takes data in the combined or separated format and stores it in the instance variable @data. The parse instance method uses @data internally, and does the heavy lifting of parsing and setting the result in the standard format to @parsed_data. In our case, we'll never need to instantiate a Parser instance without having to immediately call parse. Therefore, we add a convenient class-level run method that instantiates an instance of Parser, calls parse on it, and returns the instance of the object. We can now pass our input data to run, knowing we'll receive an instance of Parser with @parsed_data already set. Let's take a look at our hard-working parse method. The first step in the process is to take string data and convert it to numerical data, giving us an array of arrays of arrays. Sound familiar? The next thing we do is ensure that the format is as expected. Unless we have exactly three elements per the innermost arrays, we throw an exception. Otherwise, we continue on. Note the differences in @parsed_data between the two formats at this stage. In the combined format it contains arrays of exactly one array: \[ [[[x_1, y_1, z_1]], \ldots [[x_n, y_n, z_n]] \] In the separated format it contains arrays of exactly two arrays: \[[[[x_{u}^1,y_{u}^1,z_{u}^1], [x_{g}^1,y_{g}^1,z_{g}^1]], ... [[x_{u}^n,y_{u}^n,z_{u}^n], [x_{g}^n,y_{g}^n,z_{g}^n]]]\] The separated format is already in our desired standard format after this operation. Amazing. However, if the data is combined (or, equivalently, has exactly one array where the separated format would have two), then we proceed with two loops. The first loop splits total acceleration into gravitational and user, using Filter with a :low_0_hz type, and the second loop reorganizes the data into the standard format. parse leaves us with @parsed_data holding data in the standard format, regardless of whether we started off with combined or separated data. What a relief! As our program becomes more sophisticated, one area for improvement is to make our users' lives easier by throwing exceptions with more specific error messages, allowing them to more quickly track down common input formatting problems. 
Based on the solution we defined, we'll need our code to do a couple of things to our parsed data before we can count steps: We'll handle short and bumpy peaks by avoiding them during step counting. Now that we have our data in the standard format, we can process it to get in into a state where we can analyze it to count steps (Figure 16.17). The purpose of processing is to take our data in the standard format and incrementally clean it up to get it to a state as close as possible to our ideal sine wave. Our two processing operations, taking the dot product and filtering, are quite distinct, but both are intended to process our data, so we'll create one class called a Processor. Again, we see the run and initialize methods pattern. run calls our two processor methods, dot_product and filter, directly. Each method accomplishes one of our two processing operations. dot_product isolates movement in the direction of gravity, and filter applies the low-pass and high-pass filters in sequence to remove jumpy and slow peaks. Provided information about the person using the pedometer is available, we can measure more than just steps. Our pedometer will measure distance travelled and elapsed time, as well as steps taken. A mobile pedometer is generally used by one person. Distance travelled during a walk is calculated by multiplying the steps taken by the person's stride length. If the stride length is unknown, we can use optional user information like gender and height to approximate it. Let's create a User class to encapsulate this related information. At the top of our class, we define constants to avoid hardcoding magic numbers and strings throughout. For the purposes of this discussion, let's assume that the values in MULTIPLIERS and AVERAGES have been determined from a large sample size of diverse people. Our initializer accepts gender, height, and stride as optional arguments. If the optional parameters are passed in, our initializer sets instance variables of the same names, after some data formatting. We raise an exception for invalid values. Even when all optional parameters are provided, the input stride takes precedence. If it's not provided, the calculate_stride method determines the most accurate stride length possible for the user. This is done with an if statement: Note that the further down the if statement we get, the less accurate our stride length becomes. In any case, our User class determines the stride length as best it can. The time spent travelling is measured by dividing the number of data samples in our Processor's @parsed_data by the sampling rate of the device, if we have it. Since the rate has more to do with the trial walk itself than the user, and the User class in fact does not have to be aware of the sampling rate, this is a good time to create a very small Trial class. All of the attribute readers in Trial are set in the initializer based on parameters passed in: Much like our User class, some information is optional. We're given the opportunity to input details of the trial, if we have it. If we don't have those details, our program bypasses calculating the additional results, such as time spent travelling. Another similarity to our User class is the prevention of invalid values. It's time to implement our step counting strategy in code. So far, we have a Processor class that contains @filtered_data, which is our clean time series representing user acceleration in the direction of gravity. 
We also have classes that give us the necessary information about the user and the trial. What we're missing is a way to analyze @filtered_data with the information from User and Trial, and count steps, measure distance, and measure time. The analysis portion of our program is different from the data manipulation of the Processor, and different from the information collection and aggregation of the User and Trial classes. Let's create a new class called Analyzer to perform this data analysis. The first thing we do in Analyzer is define a THRESHOLD constant, which we'll use to avoid counting short peaks as steps. For the purposes of this discussion, let's assume we've analyzed numerous diverse data sets and determined a threshold value that accommodated the largest number of those data sets. The threshold can eventually become dynamic and vary with different users, based on the calculated versus actual steps they've taken; a learning algorithm, if you will. Our Analyzer's initializer takes a data parameter and instances of User and Trial, and sets the instance variables @data, @user, and @trial to the passed-in parameters. The run method calls measure_steps, measure_delta, measure_distance, and measure_time. Let's take a look at each method. Finally! The step counting portion of our step counting app. The first thing we do in measure_steps is initialize two variables: We then iterate through @processor.filtered_data. If the current value is greater than or equal to THRESHOLD, and the previous value was less than THRESHOLD, then we've crossed the threshold in the positive direction, which could indicate a step. The unless statement skips ahead to the next data point if count_steps is false, indicating that we've already counted a step for that peak. If we haven't, we increment @steps by 1, and set count_steps to false to prevent any more steps from being counted for that peak. The next if statement sets count_steps to true once our time series has crossed the \(x\)-axis in the negative direction, and we're on to the next peak. There we have it, the step counting portion of our program! Our Processor class did a lot of work to clean up the time series and remove frequencies that would result in counting false steps, so our actual step counting implementation is not complex. It's worth noting that we store the entire time series for the walk in memory. Our trials are all short walks, so that's not currently a problem, but eventually we'd like to analyze long walks with large amounts of data. Ideally, we'd want to stream data in, only storing very small portions of the time series in memory. Keeping this in mind, we've put in the work to ensure that we only need the current data point and the data point before it. Additionally, we've implemented hysteresis using a Boolean value, so we don't need to look backward in the time series to ensure we've crossed the \(x\)-axis at 0. There's a fine balance between accounting for likely future iterations of the product, and over-engineering a solution for every conceivable product direction under the sun. In this case, it's reasonable to assume that we'll have to handle longer walks in the near future, and the costs of accounting for that in step counting are fairly low. If the trial provides actual steps taken during the walk, measure_delta will return the difference between the calculated and actual steps. The distance is measured by multiplying our user's stride by the number of steps. 
Since the distance depends on the step count, measure_steps must be called before measure_distance. As long as we have a sampling rate, time is calculated by dividing the total number of samples in filtered_data by the sampling rate. It follows, then, that time is calculated in seconds. Our Parser, Processor, and Analyzer classes, while useful individually, are definitely better together. Our program will often use them to run through the pipeline we introduced earlier. Since the pipeline will need to be run frequently, we'll create a Pipeline class to run it for us. We use our now-familiar run pattern and supply Pipeline with accelerometer data, and instances of User and Trial. The feed method implements the pipeline, which entails running Parser with the accelerometer data, then using the parser's parsed data to run Processor, and finally using the processor's filtered data to run Analyzer. The Pipeline keeps @parser, @processor, and @analyzer instance variables, so that the program has access to information from those objects for display purposes through the app. We're through the most labour intensive part of our program. Next, we'll build a web app to present the data in a format that is pleasing to a user. A web app naturally separates the data processing from the presentation of the data. Let's look at our app from a user's perspective before the code. When a user first enters the app by navigating to /uploads, they see a table of existing data and a form to submit new data by uploading an accelerometer output file and trial and user information (Figure 16.18). Submitting the form stores the data to the file system, parses, processes, and analyzes it, and redirects back to /uploads with the new entry in the table. Clicking the Detail link for an entry presents the user with the following view in Figure 16.19. The information presented includes values input by the user through the upload form, values calculated by our program, and graphs of the time series following the dot product operation, and again following filtering. The user can navigate back to /uploads using the Back to Uploads link. Let's look at what the outlined functionality above implies for us, technically. We'll need two major components that we don't yet have: Let's examine each of these two requirements. Our app needs to store input data to, and retrieve data from, the file system. We'll create an Upload class to do this. Since the class deals only with the file system and doesn't relate directly to the implementation of the pedometer, we've left it out for brevity, but it's worth discussing its basic functionality. Our Upload class has three class-level methods for file system access and retrieval, all of which return one or more instances of Upload: Once again, we've been wise to separate concerns in our program. All code related to storage and retrieval is contained in the Upload class. As our application grows, we'll likely want to use a database rather than saving everything to the file system. When the time comes for that, all we have to do it change the Upload class. This makes our refactoring simple and clean. In the future, we can save User and Trial objects to the database. The create, find, and all methods in Upload will then be relevant to User and Trial as well. That means we'd likely refactor those out into their own class to deal with data storage and retrieval in general, and each of our User, Trial, and Upload classes would inherit from that class. 
We might eventually add helper query methods to that class, and continue building it up from there. Web apps have been built many times over, so we'll leverage the important work of the open source community and use an existing framework to do the boring plumbing work for us. The Sinatra framework does just that. In the tool's own words, Sinatra is "a DSL for quickly creating web applications in Ruby". Perfect. Our web app will need to respond to HTTP requests, so we'll need a file that defines a route and associated code block for each combination of HTTP method and URL. Let's call it pedometer.rb. pedometer.rb allows our app to respond to HTTP requests for each of our routes. Each route's code block either retrieves data from, or stores data to, the file system through Upload, and then renders a view or redirects. The instance variables instantiated will be used directly in our views. The views simply display the data and aren't the focus of our app, so we we'll leave the code for them out of this chapter. Let's look at each of the routes in pedometer.rb individually. Navigating to http://localhost:4567/uploads sends an HTTP GET request to our app, triggering our get '/uploads' code. The code runs the pipeline for all of the uploads in the file system and renders the uploads view, which displays a list of the uploads, and a form to submit new uploads. If an error parameter is included, an error string is created, and the uploads view will display the error. Clicking the Detail link for each upload sends an HTTP GET to /upload with the file path for that upload. The pipeline runs, and the upload view is rendered. The view displays the details of the upload, including the charts, which are created using a JavaScript library called HighCharts. Our final route, an HTTP POST to create, is called when a user submits the form in the uploads view. The code block creates a new Upload, using the params hash to grab the values input by the user through the form, and redirects back to /uploads. If an error occurs in the creation process, the redirect to /uploads includes an error parameter to let the user know that something went wrong. Voilà! We've built a fully functional app, with true applicability. The real world presents us with intricate, complex challenges. Software is uniquely capable of addressing these challenges at scale with minimal resources. As software engineers, we have the power to create positive change in our homes, our communities, and our world. Our training, academic or otherwise, likely equipped us with the problem-solving skills to write code that solves isolated, well-defined problems. As we grow and hone our craft, it's up to us to extend that training to address practical problems, tangled up with all of the messy realities of our world. I hope that this chapter gave you a taste of breaking down a real problem into small, addressable parts, and writing beautiful, clean, extensible code to build a solution. Here's to solving interesting problems in an endlessly exciting world.
3
Nichimen Mirai Images Thread – 20 Year Anniversary (2019)
2 posts• Page of 11 Krowbarh Senior Chief Petty Officer Posts: 277 Joined: 24 Jun 2017, 21:20 Type the number ten into the box: 10 Nichimen Mirai Images Thread - 20 Year Anniversary >>> Nichimen Mirai from 20 years ago... Still the BEST volume modeler I have ever used...Nendo is but a small piece of Mirai...Wings3D Has improved greatly but is a mixture of Nendo & Mirai workflow & will never be able to have all the abilities of Mirai... Following images by Bay Raitt, Martin Krol or John Feather i p Krowbarh Senior Chief Petty Officer Posts: 277 Joined: 24 Jun 2017, 21:20 Type the number ten into the box: 10 Re: Nichimen Mirai Images Thread - 20 Year Anniversary >>> A Few more here... Yoda GIF Tut as well as the Yoda above done by Ken Brilliant- Hand GIF Bay Raitt- My Mirai GUI Color Scheme- i p 2 posts• Page of 11 i p
62
Show HN: LIV is a webmail front-end for your personal email server
{{ message }} derek-zhou/liv You signed in with another tab or window. Reload to refresh your session. You signed out in another tab or window. Reload to refresh your session.
5
Red Hat OpenShift now supports both Windows and Linux containers
Containers are largely a Linux technology. But Microsoft, besides supporting Linux containers on Windows 10 and Azure, also has its own Windows-based containers. So it is that many Microsoft-oriented companies run both Linux and Windows containers. After all, these days, there are more Linux virtual machines (VM)s and containers running on Linux on Azure than there are Windows Server VMs. But managing Linux and Windows containers with one interface is not such an easy trick.  So, I expect Red Hat to find many customers for its latest OpenShift Kubernetes feature: The ability to run and manage both Linux and Windows containers from one program. Open Source GitHub vs GitLab: Which program is right for you? The best Linux distros for beginners Feren OS is a Linux distribution that's as lovely as it is easy to use How to add new users to your Linux machine To pull off this trick, Red Hat OpenShift 4.6 uses the Windows Machine Config Operator (WMCO). This is a certified OpenShift operator based on the Kubernetes Operator Framework, which is jointly supported by both Red Hat and Microsoft. Also:  Best Linux Foundation classes OpenShift users can access WMCO via the Operator Hub to begin managing their Windows Containers within the OpenShift console. Kubernetes cluster administrator can add a Windows worker node as a day 2 operation with a prescribed configuration to an installer provisioned OpenShift 4.6 cluster. The prerequisite is an OpenShift 4.6+ cluster configured with hybrid Open Virtual Networking (OVN) Kubernetes networking. On the Windows side, you'll need Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 version 10.0.17763.1457 or newer. In other words, this is not a plug-and-play operation. You'll need to get it set up just right for it to work. So what will this bring you? You'll be able to orchestrate both Red Hat Enterprise Linux (RHEL) and Windows to run as building blocks of applications and supports .NET Core applications, .NET Framework applications, and other Windows applications. Once set up you'll be able to run Windows containers on OpenShift wherever it is supported across the open hybrid cloud. That includes bare-metal servers, Microsoft Azure, AWS, Google Cloud, IBM Cloud and, in the future, VMware vSphere. Specifically, it will enable you to: Move Windows containers to Red Hat OpenShift without needing to completely re-architect or write new code. Lower deployment costs for containerized workloads in heterogeneous IT environments. Improve productivity and DevOps agility by providing cloud-native management through a single platform. Greater portability and survivability of applications across hybrid cloud environments, including new public cloud deployments or edge installations. This new functionality isn't quite ready for primetime yet. Support for Windows Containers in OpenShift won't be available until early 2021.
7
Diagrams.net (was draw.io) – free, open-source diagram drawing tool
Security-first diagramming for teams. Bring your storage to our online tool, or go max privacy with the desktop app. Start Download No login or registration required. Diagram files created in 2005 will load in the app today More features | Example diagrams & templates Our range of draw.io branded integrations Google Workplace and Google Drive Works with Google Drive and Google Workplace (G Suite). Use add-ons for Docs, Sheets and Slides. Sharepoint and OneDrive Works with OneDrive and Sharepoint. Office 365 and Microsoft Teams apps provide tighter integration. Atlassian Highest-rated Confluence app, delivered by us as draw.io. Also for Jira. Git and Dropbox Works with GitHub, GitLab and Dropbox for visual documentation in distributed teams. Desktop Download draw.io Desktop for offline use, or draw.io for VSCode.* Notion Embed and edit diagrams directly with the draw.io for Notion Chrome extension. See more Start Now * Third-party integration.
2
Google Serverless Workflows
Home Products Workflows Send feedback Registration is open for the free IT Heroes Summit on April 19. Boost your AI innovation, cost savings, and security superpowers. Register now. Workflows Combine Google Cloud services and APIs to  build reliable applications, process automation, and data and machine learning pipelines. New customers get $300 in free credits to spend on Workflows. All customers get 5,000 steps and 2,000 external API calls per month, not charged against your credits. Deploy and execute a Workflow that connects a series of services together with this tutorial Reliably automate processes that include waiting and retries for up to one year Implement real-time processing with low-latency, event-driven executions VIDEO Introducing Google Cloud Workflows 4:37 Benefits Simplify your architecture Stateful Workflows allow you to visualize and monitor complex service integrations without additional dependencies. Incorporate reliability and fault tolerance Control failures with default or custom retry logic and error handling even when other systems fail—checkpointing every step to Cloud Spanner to help you keep track of progress. Zero maintenance Scale as needed: There’s nothing to patch or maintain. Pay only when your workflows run, with no cost while waiting or inactive. Key features Key features Reliable workflow execution Call any service, from Cloud Functions or Cloud Run to private and third-party APIs. Connectors make Google Cloud services particularly easy to use by taking care of request formatting, retries and waiting to complete long-running operations. Powerful execution control Use expressions and functions to transform response data and prepare request inputs. Automate conditions based on input and service responses. Specify retry policies and error handling. Wait for asynchronous operations and events with polling and callbacks. Pay per use Only pay when workflows take steps. BLOG Workflows, Google Cloud’s serverless orchestration engine Documentation Documentation Understand Workflows Discover the core concepts and key capabilities of Workflows in this product overview. Workflows quickstarts Learn how to create, deploy, and execute a workflow using the Cloud Console,the gcloud command-line tool, or Terraform. Workflows how-to guides Learn how to control the order of execution in a workflow, invoke services and make HTTP requests, wait using callbacks or polling, and create automated triggers. Syntax overview Learn how to write workflows to call services and APIs, work with response data, and add conditions, retries, and error handling. Use cases Use cases Use case App integration and microservice orchestration Combine sequences of service invocations into reliable and observable workflows. For example, use a workflow to implement receipt processing in an expense application. When a receipt image is uploaded to a Cloud Storage bucket, Workflows sends the image to Document AI. After processing is complete, a Cloud Function determines whether approval is required. Finally, the receipt is made visible to users by adding an entry in a Firestore database. Use case Business process automation Run line-of-business operations with Workflows. For example, automate order fulfillment and tracking with a workflow. After checking inventory, a shipment is requested from the warehouse and a customer notification is sent. The shipment is scanned when departing the warehouse, updating the workflow via a callback that adds tracking information to the order. 
Orders not marked as delivered within 30 days are escalated to customer service. Use case Data and ML pipelines Implement batch and real-time data pipelines using workflows that sequence exports, transformations, queries, and machine learning jobs. Workflows connectors for Google Cloud services like BigQuery make it easy to perform operations and wait for completion. Cloud Scheduler integration makes it simple to run workflows on a recurring schedule. Use case IT process automation Automate cloud infrastructure with workflows that control Google Cloud services. For example, schedule a monthly workflow to detect and remediate security compliance issues. Iterating through critical resources and IAM permissions, send required requests for approval renewal using a Cloud Function. Remove access for any permissions not renewed within 14 days. All features All features Workflows are automatically replicated across multiple zones and checkpoint state after each step, ensuring executions continue even after outages. Failures in other services are handled through default and customizable retry policies, timeouts, and custom error handling. Specify workflows in YAML or JSON with named steps, making them easy to visualize, understand, and observe. These machine-readable formats support programmatic generation and parsing of workflows. Wait for a given period to implement polling. Connectors provide blocking steps for many Google Cloud services with long-running operations. Simply write your steps and know each is complete before the next runs. Workflow executions are low-latency, supporting both real-time and batch processing. Through Eventarc, workflows can be executed when events occur, such as when a file is uploaded to Cloud Storage or when a Pub/Sub message is published. Create unique callback URLs inside your workflow. Then wait (with a configurable timeout of up to one year) for the URL to be called, receiving the HTTP request data in your workflow. Useful for waiting for external systems and implementing human-in-the-loop processes. Workflows run in a sandboxed environment and have no code dependencies that will require security patches. Store and retrieve secrets with Secret Manager. Orchestrate work of any Google Cloud product without worrying about authentication. Use a proper service account and let Workflows do the rest. Fast scheduling of workflow executions and transitions between steps. Predictable performance with no cold starts. Deploy in seconds to support a fast developer experience and quick production changes. Out-of-the-box integration with Cloud Logging with automatic and custom entries provides insight into each workflow execution. Cloud Monitoring tracks execution volume, error rates, and execution time. Pricing Pricing Pay-per-use, with an always-free tier, rounded up to the nearest 1,000 executed steps. Pay only for the executed steps in your workflow; pay nothing if your workflow doesn’t run. Use the Google Cloud Pricing Calculator for an estimate. First 5,000 steps Free Steps 5,000 to 100,000,000 $0.01 per increment of 1,000 steps Steps after 100,000,000 Contact sales for pricing options First 2,000 calls Free Steps 2,000 to 100,000,000 $0.025 per increment of 1,000 calls Steps after 100,000,000 Contact sales for pricing options If you pay in a currency other than USD, the prices listed in your currency on Google Cloud SKUs apply.
2
Is 'Irregardless' a Real Word?
It has come to our attention lately that there is a small and polite group of people who are not overly fond of the word irregardless . This group, who we might refer to as the disirregardlessers, makes their displeasure with this word known by calmly and rationally explaining their position ... oh, who are we kidding ... the disirregardlessers make themselves known by writing angry letters to us for defining it, and by taking to social media to let us know that "IRREGARDLESS IS NOT A REAL WORD" and "you sound stupid when you say that." Pictured: a common response to encountering this word We define irregardless, even though this act hurts the feelings of many. Why would a dictionary do such a thing? Do we enjoy causing pain? Have we abdicated our role as arbiter of all that is good and pure in the English language? These are all excellent questions (well, these are all questions), and you might ask them of some of these other fine dictionaries, all of whom also appear to enjoy causing pain through the defining of tawdry words. Irregardless: Regardless — The American Heritage Dictionary of the English Language, Fifth Edition, 2018 Irregardless: In nonstandard or humorous use: regardless. — The Oxford English Dictionary, 2nd edition, 1976 Irregardless: without attention to, or despite the conditions or situation; regardless — Cambridge Dictionary (dictionary.cambridge.org), 2018 The reason we, and these dictionaries above, define irregardless is very simple: it meets our criteria for inclusion. This word has been used by a large number of people (millions) for a long time (over two hundred years) with a specific and identifiable meaning ("regardless"). The fact that it is unnecessary, as there is already a word in English with the same meaning (regardless) is not terribly important; it is not a dictionary's job to assess whether a word is necessary before defining it. The fact that the word is generally viewed as nonstandard, or as illustrative of poor education, is likewise not important; dictionaries define the breadth of the language, and not simply the elegant parts at the top. We must confess that of the charges leveled against irregardless, the one asserting that it is not actually a word puzzles us most. If irregardless is not a word, then what is it, and why is it exciting so many people who care about words? Of course it is a word. You may, if you like, refer to it as a bad word, a silly word, a word you don't like, or by any one of a number of other descriptors, but to deny that a specific collection of letters used by many people for hundreds of years to mean a definite thing is a word is to deny the obvious. As a way of demonstrating why we enter some words in the dictionary and not others let's look at irregardless's less attractive and less successful cousin, unregardless. This has shown periodic use over the past 150 or so years, and, like irregardless, has appeared in print in a variety of formats. "Allons bon!" cries a passerby, unregardless of the poor man's mishap, "Monsieur est done côté à la Bourse!" — The Morning Chronicle (London, Eng.) 25 Jan. 1859 ...to find even in all that appears most trifling or contemptible, fresh evidence of the constant working of the Divine power "for glory and for beauty," and to teach it and proclaim it to the unthinking and the unregardless.... — John Ruskin, Modern Painters, 1886 Friday—well I gess I will be having to go to skool unregardless of evry thing I can do. — The Neshoba Democrat (Philadephia, MS), 12 Sept. 
1929 So why do we define irregardless, but omit unregardless from our dictionary? One reason is that of scale: for every unregardless found in print there are a hundred or more examples of irregardless. Another reason is consistency of intent: the people writing unregardless do not appear to all have the same meaning in mind. Sometimes it functions as a synonym of regardless, and other times it appears to carry the meaning of "unthinking, or uncaring." If there had just been a few dozen instances of irregardless showing up in print, employed without a consistent meaning, it would not be a word we would enter; however, there are hundreds, even thousands, of citations for this word, all meaning more or less the same thing. If we were to remove irregardless from our dictionary it would not cause the word to magically disappear from the language; we do not have that kind of power. Our inclusion of the word is not an indication of the English language falling to pieces, the educational system failing, the work of the cursed Millennials, or anything else aside from the fact that a lot of people use this word to mean "regardless," and so we define it that way. We can promise you that the decision to enter this word in our dictionary (and in all the other dictionaries you will find it in) was the result of a significant amount of thought and consideration. Lexicographers are concerned with the business of defining language; they are not terribly interested in trolling readers by entering fake words which will upset them (and if we were going to make up fake words we would come up with something a little more exciting than a synonym for "regardless"). If you are a proud and committed disirregardlesser you should feel free to continue writing us angry letters, or post your trenchant and urbane screeds on Twitter whenever someone uses irregardless. It is our hope that this explanation of why we enter this word in our dictionary will mollify you as you do so. We just want you to be happy.
258
Unlearn rotation matrices as rotations
– Hey, Markus! What format is this head-rotation representation in? – It is a rotation matrix, I answer. Right handed, z forward through the nose and x through the left ear. Our young newly graduated colleague nods his/her head. After about 10 minutes I hear my name again. – Markus…. Eh, what order is it? – Oh no! You have opened Wikipedia? Haven’t you? I answer in despair from my desk. It happens time to time that a newly graduated engineer (or summer intern) asks me exactly this question. Almost always with the Wikipedia page open at the screen, which I think is horrible (or even worse, some “Learn OpenGL” tutorial). I steal take a chair to sit down beside the person. This will take a few minutes, we are going to do something that is harder than learning: we are going to unlearn. It is interesting, I get no questions, or only very short questions, on Euler angels, Rodriguez rotations and actually only one recurrent question on quaternions. But very often I get questions on rotation matrices. I think it is a bit odd since rotation matrices are very simple in comparison to many other rotation representations. I think a big reason for this is the Wikipedia page. It looks something like this: \[ R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta \\ 0 & \sin \theta & \cos \theta \end{bmatrix} \] \[ R_y(\theta) = \begin{bmatrix} \cos \theta & 0 & \sin \theta \\ 0 & 1 & 0 \\ -\sin \theta & 0 & \cos \theta \end{bmatrix} \] \[ R_z(\theta) = \begin{bmatrix} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1\\ \end{bmatrix} \] \[ R = \begin{bmatrix} \cos\alpha\cos\beta & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ \sin\alpha\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma \\ -\sin\beta & \cos\beta\sin\gamma & \cos\beta\cos\gamma \\ \end{bmatrix} \] It talks about rotations. Rotation around different axes and their relation to Euler angles. This can be a bit confusing when working with for example a head pose. You can of course think about rotation matrices as if the head rotates around different axes in different order, but it becomes kind of hard to interpret: \[ \begin{bmatrix} -0.9987820 & 0.0348782 & -0.0348995 \\ 0.0283128 & 0.9844193 & 0.1735424 \\ 0.0404086 & 0.1723429 & -0.9842078 \end{bmatrix} \] So to interpret this we need to solve the following equation system: \[ \begin{cases} -\sin(\beta) & = 0.0404086 \\ \cos(\beta)\sin(\gamma) & = 0.1723429 \\ \cos(\alpha)\cos(\beta) & = -0.9987820 \end{cases} \] and then we get an “intrinsic rotation whose Tait–Bryan angles are α, β, γ, about axes z, y, x” to visualize in our head. Its sad because I think rotation matrices are one of the easiest representation to interpret. Don’t think of them as rotations, think of them as a unit vectors of a new coordinate systems. We describe where the coordinate system is located related to another coordinate system (where we rotate from), for example from the camera’s coordinate system perspective (z forward, y upwards). The first column of the rotation matrix is the new x-axis expressed in the old coordinate system, the second column is the y-axis and so on. An identity matrix would yield in no rotation since all unit vectors would be the same as the previous coordinate system. 
\[ R = \begin{bmatrix} X_x & Y_x & Z_x \\ X_y & Y_y & Z_y \\ X_z & Y_z & Z_z \end{bmatrix} \] Lets go back to the example with the head expressed in the camera coordinate system and assume the head position is atfront of the camera. So by interpret the previous matrix, we can look at the new z-axis: \[ Z_{axis} = \begin{bmatrix} Z_x \\ Z_y \\ Z_z \end{bmatrix} = \begin{bmatrix} -0.0348995 \\ 0.1735424 \\ -0.9842078 \end{bmatrix} \] (Remember that z-axis is where the head’s nose is pointing) We can quickly see that z-part of the z-axis is almost -1. This means the nose is pointing at the opposite direction as the camera, eg. towards the camera if the person is sitting at front of it. We can also se that the persons head is rotated a little bit up (positive y component of the z-axis) and is pointing a little bit to the right of the camera (negative x component). And that’s it! Rotation matrices just describe the unit vectors of a new coordinate system. – Hey, Markus! How come this matrix is 4x4? HN discussion
1
The effects of cycling infrastructure and bike-sharing system in Lisbon
p , June 2020, Pages 672-682 Build it and give ‘em bikes, and they will come: The effects of cycling infrastructure and bike-sharing system in Lisbon
4
Go and Rust – objects without class
This article brought to you by LWN subscribers Subscribers to LWN.net made this article — and everything that surrounds it — possible. If you appreciate our content, please buy a subscription and make the next set of articles possible. May 1, 2013 This article was contributed by Neil Brown Since the advent of object-oriented programming languages around the time of Smalltalk in the 1970s, inheritance has been a mainstay of the object-oriented vision. It is therefore a little surprising that both "Go" and "Rust" — two relatively new languages which support object-oriented programming — manage to avoid mentioning it. Both the Rust Reference Manual and The Go Programming Language Specification contain the word "inherit" precisely once and the word "inheritance" not at all. Methods are quite heavily discussed, but inheritance is barely more than a "by the way". This may be just an economy of expression, or it may be an indication of a sea change in attitudes towards object orientation within the programming language community. It is this second possibility which this article will consider while exploring and contrasting the type systems of these two languages. The many faces of inheritance While inheritance is a core concept in object-oriented programming, it is not necessarily a well-defined concept. It always involves one thing getting some features by association with some previously defined things, but beyond that languages differ. The thing is typically a "class", but sometimes an "interface" or even (in prototype inheritance) an "object" that borrows some behavior and state from some other "prototypical" object. The features gained are usually fields (for storing values) and methods (for acting on those values), but the extent to which the inheriting thing can modify, replace, or extend these features is quite variable. Inheriting from a single ancestor is common. Inheriting from multiple ancestors is sometimes possible, but is an even less well-defined concept than single inheritance. Whether multiple inheritance really means anything useful, how it should be implemented, and how to approach the so-called diamond problem all lead to substantial divergence among approaches to inheritance. If we clear away these various peripheral details (important though they are), inheritance boils down to two, or possibly three, core concepts. It is the blurring of these concepts that is created by using one word ("inheritance"), which, it would seem, results in the wide variance among languages. And it is this blurring that is completely absent from Go and Rust. Data embedding The possible third core concept provided by inheritance is data embedding. This mechanism allows a data structure to be defined that includes a previously defined data structure in the same memory allocation. This is trivially achieved in C as seen in: struct kobject {char*name;struct list_head entry;... }; where a struct list_head is embedded in a struct kobject. It can sometimes be a little more convenient if the members of the embedded structure (next and prev in this case) can be accessed in the embedding object directly rather than being qualified as, in this case entry.next and entry.prev. This is possible in C11 and later using "anonymous structures". While this is trivial in C, it is not possible in this form in a number of object-oriented languages, particularly languages that style themselves as "pure" object oriented. 
In such languages, another structure (or object) can only be included by reference, not directly (i.e. a pointer can be included in the new structure, but the old structure itself cannot). Where structure embedding is not possible directly, it can often be achieved by inheritance, as the fields in the parent class (or classes) are directly available in objects of the child class. While structure embedding may not be strong motivation to use inheritance, it is certainly an outcome that can be achieved through using it, so it does qualify (for some languages at least) as one of the faces of inheritance. Subtype polymorphism Subtype polymorphism is a core concept that is almost synonymous with object inheritance. Polymorphic code is code that will work equally well with values from a range of different types. For subtype polymorphism, the values' types must be subtypes of some specified super-type. One of the best examples of this, which should be familiar to many, is the hierarchy of widgets provided by various graphical user interface libraries such as GTK+ or Qt. At the top of this hierarchy for GTK+ is the GtkWidget which has several subtypes including GtkContainer and GtkEditable. The leaves of the hierarchy are the widgets that can be displayed, such as GtkEntry and GtkRadioButton. GtkContainer is an ancestor of all widgets that can serve to group other widgets together in some way, so GtkHBox and GtkVBox — which present a list of widgets in a horizontal or vertical arrangement — are two subtypes of GtkContainer. Subtype polymorphism allows code that is written to handle a GtkContainer to work equally well with the subtypes GtkHBox and GtkVBox. Subtype polymorphism can be very powerful and expressive, but is not without its problems. One of the classic examples that appears in the literature involves "Point and ColorPoint" and exactly how the latter can be made a subtype of the former — which intuitively seems obvious, but practically raises various issues. A real-world example of a problem with polymorphism can be seen with the GtkMenuShell widget in the GTK+ widget set. This widget is used to create drop-down and pop-up menus. It does this in concert with GtkMenuItem which is a separate widget that displays a single item in a menu. GtkMenuShell is declared as a subtype of GtkContainer so that it can contain a collection of different GtkMenuItems, and can make use of the methods provided by GtkContainer to manage this collection. The difficulty arises because GtkMenuShell is only allowed to contain GtkMenuItem widgets, no other sort of child widget is permitted. So, while it is permitted to add a GtkButton widget to a GtkContainer, it is not permitted to add that same widget to a GtkMenuShell. If this restriction were to be encoded in the type system, GtkMenuShell would not be a true subtype of GtkContainer as it cannot be used in every place that a GtkContainer could be used — specifically it cannot be the target of gtk_container_add(myButton). The simple solution to this is to not encode the restriction into the type system. If the programmer tries to add a GtkButton to a GtkMenuShell, that is caught as a run-time error rather than a compile-time error. To the pragmatist, this is a simple and effective solution. To the purist, it seems to defeat the whole reason we have static typing in the first place. This example seems to give the flavor of subtype polymorphism quite nicely. 
It can be express a lot of type relationships well, but there are plenty of relationships it cannot express properly; cases where you need to fall back on run-time type checking. As such, it can be a reason to praise inheritance, and a reason to despise it. Code reuse The remaining core concept in inheritance is code reuse. When one class inherits from another, it not only gets to include fields from that class and to appear to be a subtype of that class, but also gets access to the implementation of that class and can usually modify it in interesting ways. Code reuse is, of course, quite possible without inheritance, as we had libraries long before we had objects. Doing it with inheritance seems to add an extra dimension. This comes from the fact that when some code in the parent class calls a particular method on the object, that method might have been replaced in the child object. This provides more control over the behavior of the code being reused, and so can make code reuse more powerful. A similar thing can be achieved in a C-like language by explicitly passing function pointers to library functions as is done with qsort(). That might feel a bit clumsy, though, which would discourage frequent use. This code reuse may seem as though it is just the flip-side of subtype inheritance, which was, after all, motivated by the value of using code from an ancestor to help implement a new class. In many cases, there is a real synergy between the two, but it is not universal. The classic examination of this issue is a paper by William R. Cook that examines the actual uses of inheritance in the Smalltalk-80 class library. He found that the actual subtype hierarchy (referred to in the paper as protocol conformance) is quite different from the inheritance hierarchy. For this code base at least, subtypes and code reuse are quite different things. As different languages have experimented with different perspectives on object-oriented programming, different attitudes to these two or three different faces have resulted in widely different implementations of inheritance. Possibly the place that shows this most clearly is multiple inheritance. When considering subtypes, multiple inheritance makes perfect sense as it is easy to understand how one object can have two orthogonal sets of behaviors which make it suitable to be a member of two super-types. When considering implementation inheritance for code reuse, multiple inheritance doesn't make as much sense because the different ancestral implementations have more room to trip over each other. It is probably for this reason that languages like Java only allow a single ancestor for regular inheritance, but allow inheritance of multiple "interfaces" which provide subtyping without code reuse. In general, having some confusion over the purpose of inheritance can easily result in confusion over the use of inheritance in the mind of the programmer. This confusion can appear in different ways, but perhaps the most obvious is in the choice between "is-a" relationships and "has-a" relationships that is easy to find being discussed on the Internet. "is-a" reflects subtyping, "has-a" can provide code reuse. Which is really appropriate is not always obvious, particularly if the language uses the same syntax for both. Is inheritance spent? Having these three very different concepts all built into the one concept of "inheritance" can hardly fail to result in people developing very different understandings. 
It can equally be expected to result in people trying to find a way out of the mess. That is just what we see in Go and Rust. While there are important differences, there are substantial similarities between the type systems of the two languages. Both have the expected scalars (integers, floating point numbers, characters, booleans) in various sizes where appropriate. Both have structures and arrays and pointers and slices (which are controlled pointers into arrays). Both have functions, closures, and methods. But, importantly, neither have classes. With inheritance largely gone, the primary tool for inheritance — the class — had to go as well. The namespace control provided by classes is left up to "package" (in Go) or "module" (in Rust). The data declarations are left up to structures. The use of classes to store a collection of methods has partly been handed over to "interfaces" (Go) or "traits" (Rust), and partly been discarded. In Go, a method can be defined anywhere that a function can be defined — there is simply an extra bit of syntax to indicate what type the method belongs to — the "receiver" of the method. So: func (p *Point) Length() float64 {return math.Sqrt(p.x * p.x + p.y * p.y) } is a method that applies to a Point, while: func Length(p *Point) float64 {return math.Sqrt(p.x * p.x + p.y * p.y) } would be a function that has the same result. These compile to identical code and when called as "p.Length()" and "Length(&p)" respectively, identical code is generated at the call sites. Rust has a somewhat different syntax with much the same effect: impl Point {fn Length(&self) -> float { sqrt(self.x * self.x + self.y * self.y)} } A single impl section can define multiple methods, but it is perfectly legal for a single type to have multiple impl sections. So while an impl may look a bit like a class, it isn't really. The "receiver" type on which the method operates does not need to be a structure — it can be any type though it does need to have a name. You could even define methods for int were it not for rules about method definitions being in the same package (or crate) as the definition of the receiver type. So in both languages, methods have managed to escape from existing only in classes and can exist on their own. Every type can simply have some arbitrary collection of methods associated with it. There are times though when it is useful to collect methods together into groups. For this, Go provides "interfaces" and Rust provides "traits". type file interface {Read(b Buffer) boolWrite(b Buffer) boolClose() } trait file {fn Read(&self, b: &Buffer) -> bool;fn Write(&self, b: &Buffer) -> bool;fn Close(&self); } These two constructs are extremely similar and are the closest either language gets to "classes". They are however completely "virtual". They (mostly) don't contain any implementation or any fields for storing data. They are just sets of method signatures. Other concrete types can conform to an interface or a trait, and functions or methods can declare parameters in terms of the interface or traits they must conform to. Traits and interfaces can be defined with reference to other traits or interfaces, but it is a simple union of the various sets of methods. type seekable interface { file Seek(offset u64) u64 } trait seekable : file { fn Seek(&self, offset: u64) -> u64; } No overriding of parameter or return types is permitted. Both languages allow pointers to be declared with interface or trait types. 
These can point to any value of any type that conforms to the given interface or trait. This is where the real practical difference between the Length() function and the Length() method defined earlier becomes apparent. Having the method allows a Point to be assigned to a pointer with the interface type: type measurable interface { Length() float64 } The function does not allow that assignment. Exploring the new inheritance Here we see the brave new world of inheritance. It is nothing more or less than simply sharing a collection of method signatures. It provides simple subtyping and doesn't even provide suggestions of code reuse or structure embedding. Multiple inheritance is perfectly possible and has a simple well-defined meaning. The diamond problem has disappeared because implementations are not inherited. Each method needs to be explicitly implemented for each concrete type so the question of conflicts between multiple inheritance paths simply does not arise. This requirement to explicitly implement every method for every concrete type may seem a little burdensome. Whether it is in practice is hard to determine without writing a substantial amount of code — an activity that current time constraints don't allow. It certainly appears that the developers of both languages don't find it too burdensome, though each has introduced little shortcuts to reduce the burden somewhat. The "mostly" caveat above refers to the shortcut that Rust provides. Rust traits can contain a "default" implementation for each method. As there are no data fields to work with, such a default cannot really do anything useful and can only return a constant, or call other methods in the trait. It is largely a syntactic shortcut, without providing any really inheritance-like functionality. An example from the Numeric Traits bikeshed is trait Eq { fn eq(&self, other: &Self) -> bool { return !self.ne(other) }; fn ne(&self, other: &Self) -> bool { return !self.eq(other) }; } In this example it is clear that the defaults by themselves do not provide a useful implementation. The real implementation is expected to define at least one of these methods to something meaningful for the final type. The other could then usefully remain as a default. This is very different from traditional method inheritance, and is really just a convenience to save some typing. In Go, structures can have anonymous members much like those in C11 described earlier. The methods attached to those embedded members are available on the embedding structure as delegates: if a method is not defined on a structure it will be delegated to an anonymous member value which does define the method, providing such a value can be chosen uniquely. While this looks a bit more like implementation inheritance, it is still quite different and much simpler. The delegated method can only access the value it is defined for and can only call the methods of that value. If it calls methods which have been redefined for the embedding object, it still gets the method in the embedded value. Thus the "extra dimension" of code reuse mentioned earlier is not present. Once again, this is little more than a syntactic convenience — undoubtedly useful but not one that adds new functionality. Besides these little differences in interface declarations, there are a couple of significant differences in the two type systems. One is that Rust supports parameterized types while Go does not. 
This is probably the larger of the differences and would have a pervasive effect on the sort of code that programmers write. However, it is only tangentially related to the idea of inheritance and so does not fit well in the present discussion. The other difference may seem trivial by comparison — Rust provides a discriminated union type while Go does not. When understood fully, this shows an important difference in attitudes towards inheritance exposed by the different languages. A discriminated union is much like a C "union" combined with an enum variable — the discriminant. The particular value of the enum determines which of the fields in the union is in effect at a particular time. In Rust this type is called an enum: enum Shape { Circle(Point, float), Rectangle(Point, Point) } So a "Shape" is either a Circle with a point and a length (center and radius) or a Rectangle with two points (top left and bottom right). Rust provides a match statement to access whichever value is currently in effect: match myshape {Circle(center, radius) => io::println("Nice circle!");Rectangle(tl, br) => io::println("What a boring rectangle"); } Go relies on interfaces to provide similar functionality. A variable of interface type can point to any value with an appropriate set of methods. If the types to go in the union have no methods in common, the empty interface is suitable: type void interface { } A void variable can now point to a circle or a rectangle. type Circle struct {center Pointradius float } type Rectangle struct {top_left, bottom_right Point } Of course it can equally well point to any other value too. The value stored in a void pointer can only be accessed following a "type assertion". This can take several forms. A nicely illustrative one for comparison with Rust is the type switch. switch s := myshape.(type) { case Circle:printString("Nice circle!") case Rectangle:printString("What a boring rectangle") } While Rust can equally create variables of empty traits and can assign a wide variety of pointers to such variables, it cannot copy Go's approach to extracting the actual value. There is no Rust equivalent of the "type assertion" used in Go. This means that the approaches to discriminated union in Rust and Go are disjoint — Go has nothing like "enum" and Rust has nothing like a "type assertion". While a lot could be said about the comparative wisdom and utility of these different choices (and, in fact, much has been said) there is one particular aspect which relates to the topic of this article. It is that Go uses inheritance to provide discriminated unions, while Rust provides explicit support. Are we moving forward? The history of programming languages in recent years seems to suggest that blurring multiple concepts into "inheritance" is confusing and probably a mistake. The approach to objects and methods taken by both Rust and Go seem to suggest an acknowledgment of this and a preference for separate, simple, well-defined concepts. It is then a little surprising that Go chooses to still blend two separate concepts — unions and subtyping — into one mechanism: interfaces. This analysis only provides a philosophical objection to that blend and as such it won't and shouldn't carry much weight. The important test is whether any practical complications or confusions arise. For that we'll just have to wait and see. 
One thing that is clear though is that the story of the development of the object-oriented programming paradigm is a story that has not yet been played out — there are many moves yet to make. Both Rust and Go add some new and interesting ideas which, like languages before them, will initially attract programmers, but will ultimately earn both languages their share of derision, just as there are plenty of detractors for C++ and Java today. They nonetheless serve to advance the art and we can look forward to the new ideas that will grow from the lessons learned today. Index entries for this article GuestArticles Brown, Neil (Log in to post comments) A little OO goes a long way Posted May 1, 2013 19:36 UTC (Wed) by ncm (subscriber, #165) [Link] The fundamental fact thrown out the window in the hysteria of '90s OO marketing was that a little bit of OO goes a long way. Inheritance is roughly as useful as function pointers in C: definitely useful in their place, but most C programs don't use them. Alex Stepanov, of STL fame, has referred to member functions as "OO gook". Member functions implement walled gardens mostly inaccessible to the template system, absent special effort. What made C++ uniquely powerful was not its OO features, but its destructor, combined (later on) with its ML-like template system. Rust is one of very few subsequent languages that have adopted (not to say inherited!) the destructor. h3 Posted May 1, 2013 21:49 UTC (Wed) by b (guest, #27559) [Link] right. the purist languages have a dogmatic allure (everything is an object, pure functional, etc) but tend to become brittle when encountering real-world problems...and also fracture upon contact with the whimsical, fashion-like trends of the programming world. i'm currently very interested in Go, Rust and Racket...but sadly as cool as Racket is, i think the jury is in - types are a good thing. Typed Racket isn't quite pervasive enough yet and i'm not sure it ever will be, the community probably too small to undertake the herculean effort of porting the Racket world to Typed Racket h3 Posted May 1, 2013 23:58 UTC (Wed) by b (subscriber, #165) [Link] Purism has a practical correlate that may account for its perceived value. A language can promise regularities that improve interoperability among libraries. E.g., you can code a destructor in any language, but only the language can guarantee it will be called everywhere it should be. Purism's role is much like that of religion's: religions enforce idiosyncratic behaviors, some of which turn out to have practical value to individuals, or to society, or to ruling powers. Once you identify these and codify them directly, the religions they come from may be left enforcing only irrelevant or actively harmful behaviors. A little OO goes a long way Posted May 5, 2013 15:20 UTC (Sun) by sionescu (subscriber, #59410) [Link] It's funny how you say that "purist languages have a dogmatic allure" immediately followed by "the jury is in - types are a good thing". h3 Posted May 6, 2013 17:00 UTC (Mon) by b (subscriber, #69389) [Link] Just because a language has types doesn't mean the user needs to manage them manually. You can certainly use Haskell without any type decorations (they're useful "test cases" and documentation when given explicitly though). A little OO goes a long way Posted May 3, 2013 3:08 UTC (Fri) by tjc (guest, #137) [Link] > What made C++ uniquely powerful was not its OO features, but its destructor, ... Please elucidate. 
h3 Posted May 3, 2013 7:40 UTC (Fri) by b (guest, #60262) [Link] What isn't clear? Destructors are guaranteed to run so give guaranteed, deterministic clean up of arbitrary resources at scope exit, however the scope is exited. That's something not possible in many, mahy other languages whereas inheritance (for code reuse or subtyping) and overridable function pointers are fairly easy to do with a bit of effort. I was very pleased to learn from one of Stroustrup's talks at the recent ACCU conference that he added destructors to C++ before he added inheritance. He knew what was important and needed. h3 Posted May 3, 2013 9:15 UTC (Fri) by b (guest, #1313) [Link] In many other languages you can use something like try-finally to achieve this. h3 Posted May 3, 2013 11:10 UTC (Fri) by b (subscriber, #307) [Link] when you have some resource external to your program represented by some type (a file, a windows, a proxy to a variable in another machine), you'll *always* want to take steps to destroy a variable of this type. So, for each variable of this type, you'll be typing (and others will be reading, and it will inflation the SLOCcount of your code) "try \n ... finally \n ... ... \n end" for no good reason, and without adding to the intended semantics, and exposing yourself as the programmer to an error introduced by a typo, the forgetting of some condition, or even to the optimization of some condition that was impossible but suddently become possible (e. g., when you wrote the code you knew you were closing some file in every possible path but some maintenance creates a new code path, like some new exception coming from a new version of a library, where the file goes out of scope still open). h3 Posted May 3, 2013 13:00 UTC (Fri) by b (guest, #60262) [Link] Also, you'll be writing the same code everywhere you want to clean up a resource of that type. Why repeat yourself, why not associate the cleanup code with the type and make it run automatically? I'm baffled why any self-respecting programmer would want to duplicate all that cleanup logic in every finally block or why they'd question the advantage of destructors. Python's with statement is the right idea, the object's scope is limited and its type has some predefined cleanup code that runs automatically when the scope ends. Bingo. h3 Posted May 3, 2013 14:18 UTC (Fri) by b (subscriber, #90713) [Link] To bring it back on topic, Go has 'defer'[1] which is a bit of try-finally and with/destructors by putting the deferred call near the construction site. It's still not as good as Python's 'with' in my opinion because you have to be explicit about the cleanup call in Go and in Python it's not cluttering up the code (aside from the indent level). [1] http://golang.org/doc/effective_go.html#defer h3 Posted May 4, 2013 5:37 UTC (Sat) by b (guest, #27876) [Link] And it's by for not as good as destructors: - It puts the burden on the programmer and not the type. - defer only works when leaving the scope of the function calling it. A type's destructors are called when going out of any scope in C++, e.g. when an instance of a class is used as a (non-pointer) member variable. A little OO goes a long way Posted May 3, 2013 15:53 UTC (Fri) by hummassa (subscriber, #307) [Link] Oh, the duplication is the least of the problems. A bigger one would be the combination of optimization by the programmer and external changes. So, the following code:try { f = open(...) 
f.x() f.y() if( z ) f.writeCheckSumAndLastBuffer() else f.writeLastBuffer()} finally { f.close()} generates a hidden bug when f.y(), that calls a w() function from an external library, starts seen some exception and the last buffer is not written. Bonus points if the "if(z)" thing was put by another programming, in the run of the normal maintenance of the program. h3 Posted May 4, 2013 5:45 UTC (Sat) by b (guest, #27876) [Link] I don't see how that is a problem of try/finally. Consider this C++ { Resource someResource(...); resource.x(); resource.y(); // throws resource.writeLastBuffer();} If y() throws here in C++, writeLastBuffer() is never called either. I if you always want to write something, add it to close() or ~Resource(). Also, Java 7 has a nicer try-with-resources statement, like Python's with, for classes that implement AutoClosable: try (Resource r = new Resource(...)) { // Do something with r...} h3 Posted May 4, 2013 10:21 UTC (Sat) by b (subscriber, #307) [Link] It still burdens the "client" programmer on remembering to use the new try thingy. Destructors have zero client programmer overhead. A little OO goes a long way Posted May 4, 2013 10:26 UTC (Sat) by hummassa (subscriber, #307) [Link] > I don't see how that is a problem of try/finally. You are right, of course. But using destructors you have a lot more chances that discovering that some code belong in an destructor and putting it there because in the client code the WriteLastBuffer thing sticks out like a sore thumb. :-D Ah, and once you wrote it, all call sites are correct from now on. A little OO goes a long way Posted May 7, 2013 14:52 UTC (Tue) by IkeTo (subscriber, #2122) [Link] Consider this C++ { Resource someResource(...); resource.x(); resource.y(); // throws resource.writeLastBuffer();} C++ programmers are accustomed to a concept called RAII, Resource Acquisition is Initialization. So if they always want the last buffer written, they tend to write something like: class LastBufferWriter {public: LastBufferWriter(Resource resource): resource_(resource) {} ~LastBufferWriter() { resource_.writeLastBuffer(); }private: Resource resource_;};... { Resource resource(...); LastBufferWriter writer(resource); // Anything below can throw or not throw, we don't care resource.x(); resource.y();} Not to say that everybody like having to define a class for every cleanup, though. But with C++0x lambda expression, the above can easily be automated. A little OO goes a long way Posted May 3, 2013 11:39 UTC (Fri) by jwakely (guest, #60262) [Link] Yes, I know. That's an inferior solution compared to destructors. A little OO goes a long way Posted May 3, 2013 12:52 UTC (Fri) by rleigh (guest, #14622) [Link] try..finally is a very poor alternative. Running the destructor when the object goes out of scope or is deleted gives you strict, deterministic cleanup. Using try..finally means I have to reimplement the same logic, *by hand*, everywhere in the codebase where the object goes out of scope. And if I forget to do this in just one place, I'm now leaking resources. What's the chance that this will happen in a codebase of any appreciable size, especially allowing for changes as a result of ongoing maintenance and refactoring? It is effectively guaranteed. The really great thing about this being done in the destructor is that I can be satisfied that I will never leak resources by default, ever. It's simply not possible. This is the real beauty of RAII; cleanup just happens under all circumstances, including unwinding by exceptions. 
As a relatively recent newcomer to Java from a C++ background, I have to say I find the resource management awful, and this stems directly from its lack of deterministic destruction. While it might do a decent job of managing memory, every other resource requires great care to manage by hand, be it file handles, locks or whatever, and I've seen several serious incidents as a result, typically running out of file handles. And the enforcement of checking all thrown exceptions itself introduces many bugs--you can't just let it cleanly and automatically unwind the stack, thereby defeating one of the primary purposes of having exceptions in the first place--decoupling the throwing and handling of errors. By way of comparison, I haven't had a single resource leak in the C++ program I maintain in 8 years, through effective use of RAII for all resources (memory, filehandles, locks). Regards, Roger Deterministic destruction Posted May 3, 2013 9:46 UTC (Fri) by drothlis (guest, #89727) [Link] http://en.wikipedia.org/wiki/Resource_Acquisition_Is_Init... h3 Posted May 3, 2013 16:34 UTC (Fri) by b (guest, #137) [Link] Thanks for the link. http://www.stroustrup.com/bs_faq2.html#finally Liskov Substitution Principle Posted May 2, 2013 14:08 UTC (Thu) by rriggs (guest, #11598) [Link] A very good overview of the issues that OO has with the concept of inheritance. It would have been good see a mention of Liskov, especially when discussing the issues around GtkMenuShell. That is a clear violation of the LSP. http://en.wikipedia.org/wiki/Liskov_substitution_principle Robert C. Martin's SOLID principles are a must read for any budding (or experienced) programmer. It is where I was first introduced to the concept. http://en.wikipedia.org/wiki/SOLID_%28object-oriented_des... h3 Posted May 2, 2013 16:28 UTC (Thu) by b (subscriber, #19068) [Link] Its nice to have theories and principles, and something like LSP makes a lot of sense in that it gives nice meaning to terms like "subtype" that you can reason about. However, in practice, things like the menu shell example happens. We have a common container class in order to have a common API for traversing the widget tree, and a menu shell has children so it has to be a container. However, a menu lays out its children in a specific way (essentially its a two-column layout where the first column is used for icon/checkbox/radiobutton), so it cannot accept *any* kind of child. The pragmatical solution in Gtk+ is that container has a gtk_container_child_type() method that specifies what kind of children a specific container supports. Then the menu shell can rely on the menu item API to separately position the two columns of its rows. Another possible solution is to make GtkMenuShell a grid-like container and force users to add separate widgets for e.g. the label and the radio button. This is rather bad API for users though, as it splits up a conceptual object like a checkbox menu row into two objects that you have to separately maintain. Another approach is the one this article talks about, i.e. make container an interface rather than a class, so that we avoid talking about subtypes at all. The gtk+ developers generally think that the Gtk+ class hierarchies are overly deep and that if we could break API we would have shallower hierarchies, less code sharing via inheritance, a greater reliance on interfaces to specify common APIs and more use of mixins to share code. 
However, even in such a world I think it makes sense to have some form of inheritance, including a container baseclass (or possibly just merging the container class into widget). How would a widget toolkit in Go or Rust look? Go and Rust — objects without class Posted May 3, 2013 12:14 UTC (Fri) by tshow (subscriber, #6411) [Link] Additional homework: Familiarize yourself with Self and Io. :) Io is pretty neat. I'd be interested to try a production-ready OO language that didn't have serious implementation warts and wasn't wedded to strong typing. I'm hopeful about Go in that regard, and I'm also hopeful that Go continues to treat OO as optional seasoning rather than The Way. Posted May 3, 2013 17:59 UTC (Fri) by b (guest, #27559) [Link] Go has strong typing. It also has type inference, so the compiler can deduce an appropriate type if there isn't a strict annotation, but that's different from a dynamic language that merely assigns types to values, not variables. Go and Rust — objects without class Posted May 9, 2013 21:43 UTC (Thu) by VITTUIX-MAN (guest, #82895) [Link] Well, as I see it, Io and Self seem to belong to the class of languages with the "objects as hash tables" paradigm, which is a kind of heavyweight approach to object orientation with debatable benefits, at least in compiled languages. Just how often do you clone an object and add some properties to it on the fly, at run time? Posted Apr 11, 2014 8:32 UTC (Fri) by b (guest, #25465) [Link] > Well, as I see it, Io and Self seem to belong to the class of languages with the "objects as hash tables" paradigm, which is a kind of heavyweight approach to object orientation with debatable benefits, at least in compiled languages. At least Self implemented several powerful optimizations to remove that cost. Thanks to that work, virtual calls can even be *inlined* in Self/Smalltalk/Java/JavaScript. But since that requires a managed language runtime (to get statistics on the target of the call), that's not supported in most C++ implementations. I mention Self/Smalltalk/Java/JavaScript because the work was done by (some of) the same people — see the history of Strongtalk: http://en.wikipedia.org/wiki/Strongtalk#History. Lars Bak went on from Strongtalk to HotSpot and then to lead Google V8 (IIUC), and I learned some bits of this history first-hand from him in his Aarhus lecture on virtual machines. Go and Rust — objects without class Posted May 21, 2013 6:01 UTC (Tue) by mmaroti (guest, #84368) [Link] The article does not mention the main difference between the Rust and Go object models: 1) In Rust the virtual table pointer is passed separately, alongside the object pointer. So if Rust wants to store a vector of objects implementing the Shape interface, then you have to record both the virtual table pointer and the data pointer for each Shape. However, Rust stores a vector of Circle objects that implement the Shape interface by storing a single virtual table for Circle and a vector of data pointers, one for each Circle. Haskell does the same (there, interfaces are called type classes and implementations are instances). 2) Go stores the virtual tables together with the objects. This is how C++ and Java store objects, so no matter whether you have a vector of Shapes or of Circles, both can be stored as a vector of data pointers. Posted May 22, 2013 8:19 UTC (Wed) by b (subscriber, #359) [Link] Hi. Thanks for your comment. I'm not sure I follow you though. The two type systems look very much the same in this particular respect.
In Rust a pointer to a value is usually just to the value - no vtable is implied. To get a vtable, you use the "as" operator. "mycircle as Shape" becomes a pair of pointers, one to 'mycircle', one to a vtable which implements "Shape" for mycircle. This is described in section "8.1.10 Object types" of the Rust reference manual, and seems to agree with what you said. In Go, a pointer to a value is just to that value, no vtable. To get a vtable you need to convert it to an 'interface' type, such as by "Shape(mycircle)". This will compute (possibly at runtime) the vtable if it doesn't already exist, and will create a pointer-pair, just like in Rust. In Go you don't need the explicit cast. Assigning to an interface type, or passing as a parameter where an interface type is expected, is sufficient. This is a small difference from Rust, where I think the "as" is required (not sure though). More details of the Go approach can be found in http://research.swtch.com/interfaces. This seems quite different from your description of Go. Posted May 22, 2013 20:40 UTC (Wed) by b (guest, #84368) [Link] Hi! Thanks for the pointers and clarification. Yes, as you write, interfaces in Go and Rust are stored essentially the same way: both a pointer to the vtable and a pointer to the data are stored. However, I am under the impression that the vtable is computed dynamically for each cast in Go and statically at compile time in Rust. I am going by these two sources: http://smallcultfollowing.com/babysteps/blog/2012/04/09/r... https://news.ycombinator.com/item?id=3749860 By these accounts you cannot do upcasts in Rust, so the vtables (the actual types of objects) cannot be computed at runtime. In Go, the vtables (actual types) of objects can be computed. From my point of view it is an implementation detail whether a language stores the vtable in the first field of the data object (the C++ way) or passes the vtable pointer together with the data pointers (the Go way). The important point is that you can try to cast from interface{} to any other interface. By the way, does Go have polymorphic arrays, which would ensure that all objects in the array are of the exact same type, and only a single vtable pointer is stored together with a bunch of data pointers? Posted May 22, 2013 23:04 UTC (Wed) by b (subscriber, #359) [Link] Yes, the vtable (referred to in the page I linked as an 'itable') is computed dynamically at runtime in Go. However it is only computed once for a given interface/type pair - it isn't recomputed at each cast. No, casts from an interface to a particular type (I call them downcasts, but you seem to call them upcasts) are not possible in Rust. The article mentions this in that Rust has no equivalent of Go's type assertion. You need to use an 'enum' type in Rust if you want that sort of functionality. I see a couple of possibly-important differences between what you call the "C++ way" and the "Go way". The C++ way doesn't scale well for tiny objects. The stored vtable pointer might be bigger than the rest of the object. The C++ way requires a single vtable. I don't know how multiple interfaces work with that. The Go way uses a different itable for each different interface, so multiple interfaces are trivial. I don't think that Go supports polymorphic arrays as you describe. Posted May 23, 2013 2:25 UTC (Thu) by b (b, #52523) [Link] > The C++ way requires a single vtable. I don't know how multiple interfaces work with that. Reserve the first slot in the vtable for an interface lookup function, kinda like QueryInterface in COM.
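To make the Go half of this exchange concrete, here is a minimal, self-contained sketch. The Shape and Circle names follow the example used in the thread; the itable built for each interface/type pair is internal to the Go runtime and never appears in the source.

package main

import (
	"fmt"
	"math"
)

// Shape is the interface from the thread's example.
type Shape interface {
	Area() float64
}

// Circle is a concrete type that happens to implement Shape.
type Circle struct {
	Radius float64
}

func (c Circle) Area() float64 {
	return math.Pi * c.Radius * c.Radius
}

func main() {
	c := Circle{Radius: 2} // type inferred as Circle

	// No explicit cast is needed: assigning to an interface-typed
	// variable builds the (itable pointer, data) pair described above.
	// The itable is computed at most once per interface/type pair.
	var s Shape = c
	fmt.Println("area:", s.Area())

	// A type assertion is Go's "downcast" from the interface back to
	// the concrete type; the thread notes Rust has no equivalent.
	if circ, ok := s.(Circle); ok {
		fmt.Println("radius:", circ.Radius)
	}

	// Asserting to a different interface also works; the runtime looks
	// up (or builds) the itable for that interface/type pair.
	if _, ok := s.(fmt.Stringer); !ok {
		fmt.Println("Circle does not implement fmt.Stringer")
	}
}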
125
Windows 11 available on October 5
Today, we are thrilled to announce Windows 11 will start to become available on October 5, 2021. On this day, the free upgrade to Windows 11 will begin rolling out to eligible Windows 10 PCs and PCs that come pre-loaded with Windows 11 will start to become available for purchase. A new Windows experience, Windows 11 is designed to bring you closer to what you love. As the PC continues to play a more central role in our lives than ever before — Windows 11 is ready to empower your productivity and inspire your creativity. The Windows Insider community has been an invaluable community in helping us get to where we are today. Since the first Insider Preview Build was released in June, the engagement and feedback has been unprecedented. The team has also enjoyed sharing more behind the scenes stories on the development of Windows 11 in a new series we launched in June, Inside Windows 11. We sincerely appreciate the energy and enthusiasm from this community. The free upgrade to Windows 11 starts on October 5 and will be phased and measured with a focus on quality. Following the tremendous learnings from Windows 10, we want to make sure we’re providing you with the best possible experience. That means new eligible devices will be offered the upgrade first. The upgrade will then roll out over time to in-market devices based on intelligence models that consider hardware eligibility, reliability metrics, age of device and other factors that impact the upgrade experience. We expect all eligible devices to be offered the free upgrade to Windows 11 by mid-2022. If you have a Windows 10 PC that’s eligible for the upgrade, Windows Update will let you know when it’s available. You can also check to see if Windows 11 is ready for your device by going to Settings > Windows Update and select Check for updates*. October 5 is right around the corner — and there are a few things you can do to get ready for Windows 11. First, if you’re in need of a new PC now — don’t wait. You can get all the power and performance of a new Windows 10 PC and upgrade to Windows 11 for free after the rollout begins on October 5**. We’ve worked closely with our OEM and retail partners to bring you powerful Windows 10 PCs today, that will take you into the future with Windows 11. Here are a few to check out. The Acer Swift 5 (SF514-55) ultrathin-and-light notebook marries an uber-stylish design with the latest performance technology. Powered by 11th Gen Intel Core i5 and Intel Core i7 processors and verified to meet the requirements of an Intel Evo platform, the Swift 5 has the power and performance to seamlessly run multiple applications and provides up to 17 hours of battery life for all-day productivity. The touchscreen display is covered with a layer of Antimicrobial Corning Gorilla Glass, and you have the option to further include an antimicrobial solution on the touchpad, keyboard and all covers of the device. Click the link above for more details on the Acer website. The new Acer Swift X (SFX14-41G) notebook represents a new segment within the Swift portfolio, the first of its series to come powered with discrete graphics, all at 3.06 pounds. Up to a NVIDIA GeForce RTX 3050 Ti Laptop GPU, combined with up to an AMD Ryzen 7 5800U Mobile Processor and 16 GB of RAM offers creative professionals such as video editors or photographers plenty of power. True to the Swift family, all this hardware has been fitted into a metal chassis 0.7 in thin. Click the link above for more details on the Acer website. 
Asus Zenbook Flip 13 OLED UX363 has an all-new design that combines ultimate portability with supreme versatility. Its NanoEdge FHD OLED display and 360-degree ErgoLift hinge make it extra compact, and the super-slim 13.9 mm chassis houses a wide range of I/O ports for easy connectivity. Its Intel Core processor gives effortless performance for on-the-go productivity and visual creativity. Asus Zenbook 14 UX425 has an all-new design that's just 13.9mm slim. It has a four-sided NanoEdge display with a 90% screen-to-body ratio for immersive visuals, and there's a complete set of full I/O ports. The latest 11th Gen Intel Core i7 processor and all-new Intel Iris Xe graphics make it a perfect portable companion. Each of Dell Alienware's X-Series laptops includes Alienware Cryo-tech cooling technology, and this generation features a patent-pending Quad Fan design engineered to provide the highest levels of gaming performance. Both the Alienware x15 and Alienware x17 are made of premium materials, including magnesium alloy and CNC-machined aluminum designed for structural rigidity, and are finished with a carefully formulated stain-resistant paint formula. Built for marathon gaming sessions, these laptops feature HyperEfficient voltage regulation technology which is designed to allow the system to perform at the highest levels for hours of gameplay. The Dell XPS 13 is crafted using authentic premium materials, precision cut to achieve a flawless finish in a durable, lightweight design. Designed to create the perfect affinity between aesthetics and functional purpose, it delivers powerful performance and a larger 4-sided InfinityEdge display. If you're looking for something extra special, the HP Spectre x360 14 features a cutting-edge 2-in-1 design and superb performance with the latest Intel Core processors along with all-day battery life. If you're a creator looking for a device that is as flexible as your workflow, the HP ENVY x360 15 is a mobile creative powerhouse, featuring AMD Ryzen or Intel Core processors, and Wi-Fi 6 and Bluetooth 5 for fast connectivity. The Spectre x360 is available at Best Buy and HP.com; the ENVY x360 15 is available at select retailers including Best Buy, Costco, Walmart and HP.com. Meet the versatile 2-in-1 Lenovo Yoga 7 convertible series, available in 14-inch sizes and designed with rounded edges to feel more comfortable in your hands. Immerse yourself in a vibrant Full HD IPS touchscreen display with your choice of either 11th Gen Intel Core or AMD Ryzen 5000 Series mobile processors plus integrated graphics. Available in a Slate Grey hue, go anywhere with a 71WHr battery, a metal chassis that impresses from every angle and thoughtful details such as a webcam privacy shutter. Click the link above for more details on the Lenovo website. Master multitasking with the thin and light Lenovo Yoga Slim 7i Pro series, offering consumers a choice of an LCD or super-vibrant OLED display for greater immersion. Available in a 14-inch size and Light Silver hue, the laptop features either 11th Gen Intel Core or AMD Ryzen 5000 Series mobile processors – both models offer optional NVIDIA GeForce MX450 graphics to boost your content creation. Enjoy the convenience of Windows Hello and an IR camera with a raised notch for easier opening, plus a backlit keyboard. Click the link above for more details on the Lenovo website.
Samsung Galaxy Book Pro and Galaxy Book Pro 360 reshape the PC for mobile-first consumers by bringing together next-generation connectivity, ultra-portable design and elevated performance. Equipped with 11th Gen Intel Core processor, Intel Iris Xe graphics, and AMOLED display within super-thin and light body for increased mobility, the Galaxy Book Pro series let you maximize productivity, enjoy immersive entertainment and unleash creativity. With complete Samsung Galaxy ecosystem integration, the Galaxy Book Pro series is now the ultimate link between your devices, fully connecting your digital world. The Galaxy Book Pro and Galaxy Book Pro 360 are available in 13-inch and 15-inch models with color options ranging from Mystic Navy, Mystic Silver and Mystic Bronze for Galaxy Book Pro 360 and Mystic Blue, Mystic Silver and Mystic Pink Gold for Galaxy Book Pro. Surface Pro 7  is ultra-light and versatile. Whether at your desk, on the couch, or in the yard, get more done your way with the best-selling Surface 2-in-1 that features a laptop-class Intel Core processor, all day battery life, HD cameras and a stunning 12.3-inch PixelSense touchscreen display. It transforms from tablet to laptop with pen and touch input, a built-in Kickstand, an optional removable Type Cover, and it easily connects to multiple monitors. Click the link above to learn more about Surface Pro 7. Surface Laptop 4 offers style and speed. Do it all with the perfect balance of sleek design, speed, immersive audio and significantly longer battery life than before. Stand out on HD video calls backed by Studio Mics. Capture ideas and use your favorite Microsoft 365 applications on the vibrant PixelSense touchscreen display in 13.5-inch or 15-inch models. Choose between 11th Gen Intel Core processors or AMD Ryzen Mobile Processors with Radeon Graphics Microsoft Surface Edition. Click the link above to learn more about Surface Laptop 4, including available color finishes and material options. We’ll be relaunching the PC Health Check app soon, so you can check to see if your current PC will be eligible to upgrade. In the meantime, you can learn more about Windows 11 minimum system requirements here. If you’re preparing for the upgrade and you’re not already using OneDrive, check it out. It’s a simple way to help keep your files secure and make it easier to transition through the upgrade or to a new device. For organizations that are managed by IT, today we announced new capabilities coming in Microsoft Endpoint Manager to help you to assess your readiness for Windows 11 and hybrid work at scale. You can learn more in the Microsoft Endpoint Manager Tech Community blog. For customers who are using a PC that won’t upgrade, and who aren’t ready to transition to a new device, Windows 10 is the right choice. We will support Windows 10 through October 14, 2025 and we recently announced that the next feature update to Windows 10 is coming later this year. Whatever you decide, we are committed to supporting you and offering choice in your computing journey. As Panos shared in June, Windows is more than an operating system; it’s where we connect with people, it’s where we learn, work and play. We can’t wait to see what Windows 11 empowers you to do and create. *Note, certain features require specific hardware; see our Windows 11 specifications page for more information. **The Windows 11 upgrade will start to be delivered to qualifying devices beginning on October 5, 2021 into 2022. Timing varies by device.
3
Apple Killed Our Hopes for USB-C, but They're Preparing Us for Portless Phones
We might never see a universal phone charger as iPhone 12 hints at a wireless future. By Anupam Chugh, Big Tech Talks, Nov 3, 2020. Photo by Mika Baumeister on Unsplash. The release of the iPhone 12 was highly anticipated by the Apple community this year.
224
More than eighty cultures still speak in whistles
Tourists visiting La Gomera and El Hierro in the Canary Islands can often hear locals communicating over long distances by whistling — not a tune, but the Spanish language. “Good whistlers can understand all the messages,” says David Díaz Reyes, an independent ethnomusicologist and whistled-language researcher and teacher who lives in the islands. “We can say, ‘And now I am making an interview with a Canadian guy.’” The locals are communicating in Silbo, one of the last vestiges of a much more widespread use of whistled languages. In at least 80 cultures worldwide, people have developed whistled versions of the local language when the circumstances call for it. To linguists, such adaptations are more than just a curiosity: By studying whistled languages, they hope to learn more about how our brains extract meaning from the complex sound patterns of speech. Whistling may even provide a glimpse of one of the most dramatic leaps forward in human evolution: the origin of language itself. Whistled languages are almost always developed by traditional cultures that live in rugged, mountainous terrain or in dense forest. That’s because whistled speech carries much farther than ordinary speech or shouting, says Julien Meyer, a linguist and bioacoustician at CNRS, the French national research center, who explores the topic of whistled languages in the 2021 Annual Review of Linguistics. Skilled whistlers can reach 120 decibels — louder than a car horn — and their whistles pack most of this power into a frequency range of 1 to 4 kHz, which is above the pitch of most ambient noise. As a result, whistled speech can be understood up to 10 times as far away as ordinary shouting can, Meyer and others have found. That lets people communicate even when they cannot easily approach close enough to shout. On La Gomera, for example, a few traditional shepherds still whistle to one another across mountain valleys that could take hours to cross. Whistled languages work because many of the key elements of speech can be mimicked in a whistle, says Meyer. We distinguish one speech sound, or phoneme, from another by subtle differences in their sound frequency patterns. A vowel such as a long e, for example, is formed higher in the mouth than a long o, giving it a higher sound. “It’s not pitch, exactly,” says Meyer. Instead, it’s a more complex change in sound quality, or timbre, which is easily conveyed in a whistle. Consonants, too, can be whistled. A t, for example, is richer in high frequencies than k, which gives the two sounds a different timbre, and there are also subtle differences that arise from movements of the tongue. Whistlers can capture all of these distinctions by varying the pitch and articulation of their whistle, says Meyer. And the skill can be adapted to any language, even those that have no tradition of whistling. To demonstrate, Meyer whistles English phrases such as “Nice to meet you,” and “Do you understand the whistle?” Learning to whistle a language you already speak is relatively straightforward. Díaz Reyes’s Spanish-language whistling students spend the first two or three months of the course learning to make a loud whistle with different pitches. “In the fourth or fifth month, they can make some words,” he says. “After eight months, they can speak it properly and understand every message.” This articulation of speech within a whistle only works for nontonal languages, where the pitch of speech sounds isn’t crucial to the meaning of the word. 
(English, Spanish and most other European languages are nontonal.) For tonal languages, in contrast, the meaning of a sound depends on its pitch relative to the rest of the sentence. In Chinese, for example, the syllable “ma” said with a steady high pitch means “mother,” but said with a pitch that dips and rises again, it means “horse.” In ordinary tonal speech, the vocal cords make the pitch modulations that form the tones while the front of the mouth forms much of the vowel and consonant sounds. But not so for whistling, which doesn’t use the vocal cords. Whistlers of tonal languages thus face a dilemma: Should they whistle the tones, or the vowels and consonants? “In whistling, you can produce only one of the two. They have to choose,” says Meyer. In practice, almost every whistled tonal language chooses to use pitch to encode the tones. For languages with a complex set of tones — such as Chinantec, a language in southern Mexico with seven tones (high, mid, low, falling high-low, falling mid-low, rising low-mid and rising mid-high), or the equally complex Hmong language — pitch still gives enough information to carry meaning. But for simpler tonal languages — such as Gavião, an Amazonian language Meyer has studied, which has just two tones, low and high — whistlers must confine their conversations to a few stereotyped sentences that are easily recognized. Even for nontonal languages, the whistled version of speech doesn’t contain as much frequency information as ordinary spoken language, but it does carry enough to recognize words. When researchers tested people’s comprehension of whistled Turkish, they found that experienced listeners correctly identified isolated words about 70 percent of the time; for words in common whistled sentences, the context helps to resolve ambiguities and the accuracy rose to approximately 80 to 90 percent. In essence, people listening to whistled speech are piecing together its meaning from fragments of the full speech signal, just as all of us do when listening to someone at a crowded cocktail party. “Regular speech is so complex — there is so much redundant information,” says Fanny Meunier, a psycholinguist at CNRS who studies speech in noisy environments. “If we have noise, then we can choose different types of information that are present in different places in the signal.” Linguists know surprisingly few details about how the brain does this. “We still don’t know what parts of the signal are useful to understand the message,” Meunier says. Most researchers who study this topic do so by deliberately degrading normal speech to see when listeners can no longer understand. But Meunier feels that whistling offers a less artificial approach. “With whistling, it was more like, let’s see what people did naturally to simplify the signal. What did they keep?” she says. The information crucial for understanding speech, she assumes, must lie somewhere within that whistled signal. Meunier and her colleagues are just beginning this work, so she has few results to share yet. So far, they have shown that even people who have never heard whistled speech before can recognize both vowels and consonants with an accuracy well better than chance. Moreover, trained musicians do better than nonmusicians at recognizing consonants, with flute players better than pianists or violinists, Anaïs Tran Ngoc, a linguistics graduate student at the University of the Cote d’Azur, has found. 
Tran Ngoc, herself a musician, speculates that this is because flutists are trained to use sounds like t and k to help articulate notes crisply. “So there’s this link with language that might not be present for other instruments,” she says. Whistled languages excite linguists for another reason, too: They share many features with what linguists think the first protolanguages must have been like, when speech and language first began to emerge during the dawn of modern humans. One of the big challenges of language is the need to control the vocal cords to make the full range of speech sounds. None of our closest relatives, the great apes, have developed such control — but whistling may be an easier first step. Indeed, a few orangutans in zoos have been observed to imitate zoo employees whistling as they work. When scientists tested one ape under controlled conditions, the animal was indeed able to mimic sequences of several whistles. The context of whistled language use also matches that likely for protolanguage. Today’s whistled languages are used for long-distance communication, often during hunting, Meyer notes. And the formulaic sentences used by whistlers of simple tonal languages are a close parallel to the way our ancestors may have used protolanguage to communicate a few simple ideas to their hunting partners — “Go that way,” for example, or “The antelope is over here.” That doesn’t mean that modern whistled speech is a vestigial remnant of those protolanguages, Meyer cautions. If whistling preceded voiced speech, those earliest whistles wouldn’t have needed to encode sounds produced by the vocal cords. But today’s whistled languages do, which means they arose later, as add-ons to conventional languages, not forerunners of them, Meyer says. Despite their interest to both linguists and casual observers, whistled languages are disappearing rapidly all over the world, and some — such as the whistled form of the Tepehua language in Mexico — have already vanished. Modernization is largely to blame, says Meyer, who points to roads as the biggest factor. “That’s why you still find whistled speech only in places that are very, very remote, that have had less contact with modernity, less access to roads,” he says. Among the Gavião of Brazil, for example, Meyer has observed that encroaching deforestation has largely eliminated whistling among those living close to the frontier, because they no longer hunt for subsistence. But in an undisturbed village near the center of their traditional territory, whistling still thrives. Fortunately, there are a few glimmers of hope. UNESCO, the UN cultural organization, has designated two whistled languages — Silbo in the Canary Islands, and a whistled Turkish among mountain shepherds — as elements of the world’s intangible cultural heritage. Such attention can lead to conservation efforts. In the Canary Islands, for example, a strong preservation movement has sprung up, and Silbo is now taught in schools and demonstrated at tourist hotels. “If people don’t make that effort, probably Silbo would have vanished,” says Díaz Reyes. There, at least, the future of whistled language looks bright. Editor’s note: This article was modified on August 17, 2021 to clarify that the whistled Spanish language used in the Canary Islands is found on multiple islands, including El Hierro, and not restricted to the island of La Gomera. In addition, the common name for the language is Silbo, not Silbo Gomero. 
Knowable Magazine is an independent journalistic endeavor from Annual Reviews. Get the latest Science stories in your inbox.
4
The Indifference Engine: An Ecological Characterization of Bitcoin [video]
Wassim Alsindi. As Bitcoin surpasses previous price records and re-enters mainstream consciousness following several wilderness years, the twelve-year-old cryptocurrency appears to have “arrived” in the eyes of the market. The value proposition of an ungoverned, uncensorable digital means of value transfer is clear for all to see…but can humanity and Earth afford the thermodynamic price tag? To maintain the integrity of the transaction record, the Bitcoin network creates a hard boundary to the outside through exacting validation requirements. However it does not possess any feedback mechanism or capacity to respond to the consequences of the thermoeconomic challenges it issues. This insensitivity of ‘mined’ cryptocurrencies to the energy sources used to secure them has led to criticism as to their inability to mitigate their ecological externalities.
1
Red Dead Redemption 2 is your Steam Awards game of the year
Gather round the fire, possemates, the votes for the yearly Steam Awards are in and you lot have decided that Rockstar's cowboy adventure Red Dead Redemption 2 was the game of the year in 2020. Come have a look-see to find out if anything you voted for during the awards wound up lassoing itself a win. RDR2 took the top spot for game of the year, which I can't personally argue with. I may have even voted for it, but I've slept since then. I certainly played a heck of a lot of the online portion of the game in 2020. Despite a rocky launch on multiple storefronts when it arrived on PC at the tail end of 2019, it seems everyone else must have come around on it as well. It also snagged the win for "outstanding story-rich game", which I can't speak to because I've been entirely absorbed by RDO and never touched the singleplayer bit. A crime, I know. Here's Matthew Castle in this here video review to tell you about that part instead. Unsurprisingly, Half-Life: Alyx got the votes for "VR game of the year". I've not played it myself, but it sure seems like it would have been an upset for Valve's own game not to take that one home, right? Graham dubbed it "the Half-Life game you've been waiting for" in his Half-Life: Alyx review so it certainly seems well-earned. As for the other awards, Fall Guys is "better with friends" while The Sims 4 takes home "sit back and relax"—lots of EA games arrived on Steam only last year, remember. Ori And The Will Of The Wisps won top marks for visual style and Doom Eternal snags the soundtrack award. You can spot the winners in the remaining categories over on Steam. By the by, the Steam Winter Sale is just about to wrap up as well. You have until 10am PST / 6pm GMT tomorrow, January 5th to sneak in your last sale purchases. If you need a bit of help deciding on what to snap up, here's what you should buy in the Steam sale.
1
Arizona Senate skips vote on bill that would regulate app stores
The Arizona State Senate was scheduled to vote on an unprecedented and controversial bill Wednesday that would have imposed far-reaching changes on how Apple and Google operate their respective mobile app stores, specifically by allowing alternative in-app payment systems. But the vote never happened, having been passed over on the schedule without explanation. The Verge watched every other bill on the schedule be debated and voted on over the senate’s live stream, but Arizona HB2005, listed first on the agenda, never came up. One notable Apple critic is now accusing the iPhone maker of stepping in to stop the vote, saying the company hired a former chief of staff to Arizona Gov. Doug Ducey to broker a deal that prevented the bill from being heard in the Senate and ultimately voted on. This is after the legislation, an amendment to the existing HB2005 law, passed the Arizona House of Representatives earlier this month in a landmark 31-29 vote. “The big show turned out to be a no show. The bill was killed in mid-air while on the agenda with a backroom deal. Apple has hired the governor’s former chief of staff, and word is that he brokered a deal to prevent this from even being heard,” said Basecamp co-founder David Heinemeier Hansson, a fierce Apple critic who submitted testimony in support of HB2005, on Twitter this afternoon. Apple declined to comment. “The big show turned out to be a no show.” It was well-known prior to today’s scheduled vote that both Apple and Google had hired lobbyists to combat the bill, according to a report from Protocol , because it directly threatened the companies’ industry standard app store commission of 30 percent. If the Arizona bill passed the senate and was signed into law by Ducey, it would have made the state a haven for app makers looking to sidestep the App Store and the Google Play Store’s payment systems, which are the mechanisms the companies use to take their cuts of all app sales and in-app purchases of digital goods. It could have also caused all sorts of additional headaches for both companies by forcing them to either institute a patchwork system of state-specific enforcement, or by potentially forcing them to stop doing business in Arizona altogether while opening the door to lawsuits against the state. In testimony in front of the Arizona House earlier this month, Apple’s chief compliance officer, Kyle Andeer, argued that the App Store provides enough value to developers to justify the 30 percent cut. “The commission has been described by some special interests as a ‘payment processing fee’—as if Apple is just swiping a credit card. That’s terribly misleading. Apple provides developers an enormous amount of value — both the store to distribute their apps around the world and the studio to create them. That is what the commission reflects,” Andeer said in written testimony. “Yet this bill tells Apple that it cannot use its own check-out lane (and collect a commission) in the store we built,” he added. “This would allow billion-dollar developers to take all of the App Store’s value for free — even if they’re selling digital goods, even if they’re making millions or even billions of dollars doing it. The bill is a government mandate that Apple give away the App Store.” A number of Democrats publicly objected to the bill It’s worth noting that the bill also faced considerable opposition in the Arizona House not by big business-loving Republicans, but instead by Democrats. 
A number of Democrats publicly objected to the bill and voted against it on the grounds it was potentially unconstitutional for interfering with interstate commerce and also that it interjected Arizona into a California legal fight between game developer Epic Games and both Apple and Google over the removal of Fortnite from the Android and iOS platforms. The bill, which was primarily sponsored by Rep. Regina Cobb (R-5), is one of many that have popped up in state legislatures around the country challenging Apple's and Google's longstanding policies around the mobile app economy. These bills can be traced back to growing antitrust pressure against Big Tech mounting in both Europe and Washington, DC, and they represent a new local and state front in the ongoing fight over the tech industry's outsize power and whatever methods lawmakers may employ to try and rein it in. Other arenas include California, where Epic launched its own fight, and the European Union, which launched antitrust investigations into the App Store and Apple Pay over anticompetitive claims. Both Apple and Google operate the two most dominant app stores in the world, and while the Google Play Store allows alternative app stores and therefore alternative payment systems, Apple does not. That means all digital purchases on iOS are subject to Apple's mandated 30 percent cut, or in some cases a reduced 15 percent cut, though Apple has been criticized for cutting secret deals, like those it has made with Amazon over Prime Video subscriptions and later in-app purchases, to exempt certain types of purchases when it's strategically convenient. Both companies in the last six months announced changes to the commission structure that allow smaller developers, which represent the vast majority of app makers on both Android and iOS, to claim a reduced 15 percent cut, though that has done little to assuage the app store critics. These state app store bills have become lobbying battlegrounds between Big Tech and its fiercest critics. These antitrust proposals, like HB2005, are largely the work of the Coalition for App Fairness (CAF). The CAF is an industry group formed last year consisting of Epic, Spotify, Tinder parent company Match Group, and dozens of other companies that have grown increasingly dissatisfied with the status quo of the mobile app economy and the app store owners' ironclad developer agreements. Some of these companies, like Spotify, have for years complained of unfair treatment from Apple and have accused the company of prioritizing its own software over competitors through its use of App Store rules and iOS requirements. The CAF began lobbying lawmakers earlier this year, first in North Dakota and now in multiple states including Arizona, to instigate the introduction of bills like HB2005. While the North Dakota bill failed, Arizona's was seen as a more promising alternative because it focused solely on in-app payment systems, while the North Dakota one also mandated that operating system owners allow for alternative app stores, too. "The legislative session is not over. We will continue to push for solutions that will increase choice, support app developers and small businesses and put a stop to monopolistic practices," said Meghan DiMuzio, the CAF's executive director, in a statement to The Verge. Now the bill's fate is in question, and it's not immediately clear what happened. Rep. Cobb, the bill's sponsor, did not respond to a request for comment.
The Arizona governor’s office and the office of the Arizona State Senate Majority Leader Rick Gray (R-21) also did not immediately respond to requests for comment. Update March 24th, 10:12PM ET: Added statement from the Coalition for App Fairness.
5
What you need to learn to become a DevOps
DevOps, like development itself, is a perpetual learning path. If you want to start your DevOps journey, here is a guideline. What you need to know depends on the job you want. If you do YAML all day long, you might not need to know every Linux command. If you spend hours watching dashboards, charts and other monitoring tools, the requirements would not be the same as if you are the only person on the service who does a little of everything. The difficult tradeoff is "broad vs deep": you should know that lots of things exist, without going deeply into every topic. Some of them you are required to be comfortable with. Just imagine those two dialogs. One could be okay, depending on the situation and the expected level. You can even read some documentation later to catch up on the subject. But now imagine the same question about an important topic like Linux or Docker. You cannot catch up by reading some Linux documentation like that.

A few years ago, you could still hear some people say "Docker? yeah, but not for production". Let us face it, they were wrong. Everyone agrees on that now. It is so broadly used in large companies, for heavy production use worldwide, that there is no more discussion about "is it for production?". These days, you need to move code around in production without thinking about whether the servers have the needed dependencies, or even the setup. If the container runs and passes tests, it will run in the production environment, simple as that. Without containerization, there are no microservices, and DevOps would not be where it is today. For DevOps, know how containerization works, what it means, how to debug and log. Learn Docker Compose as it will save you some time. Know what a multi-stage build is. Just play around with some containers.

Orchestration is kind of the next step after containerization. It is how to manage lots of containers on lots of servers. It could be Kubernetes of course, but why not play a little with Docker Swarm. Play with a reverse proxy that does service auto-discovery, like Traefik, you name it. Istio and Envoy could be great but are a more advanced topic. Do not bother now with "how to manage a Kubernetes cluster"; every cloud provider has managed Kubernetes cluster offers. Start playing around with minikube for a start. Play with SSL certificates; nobody serious puts a website in production without a certificate anymore. Let's Encrypt is fine and free. Set one up with Certbot if needed; Traefik can handle Let's Encrypt automatically as well. Get a cheap domain name if needed (like ***.ovh for 2€ a year, or even a .com or .fr if you are French like me, not that expensive).

The cloud: you cannot skip this one, it is too big. I will talk more about it in a separate post later. Just learn GCP, AWS or Azure.

Depending on your work, you might be a Cloud Architect or SRE without ever touching a VM, but let us be honest, in most cases you will need some decent knowledge of Linux. At least know your way around. I guess the more you know, the better it is for you, but there is no need to go too deep. Once again it depends on your job. If you are aiming at CI/CD or monitoring stuff without managing VMs, I guess Linux does not matter that much. Still, you will always at some point be asked to do some Linux. Do not skip this one. There are so many things to learn in Linux; you will find all you need on the web. A good basic introduction to Linux should do it. Basic Linux commands should come without thinking. I'm not saying you have to write one-liners with awk and grep without thinking, but killing a process should be straightforward. Editing a file (with vim of course!)
should be second nature. Want to find some text in a folder? grep -rnw -e 'where_is_my_vhost.com' could be useful to find a configuration file based on a domain name.

I was listening the other day to Sam Newman (author of Building Microservices and Monolith to Microservices, both from O'Reilly) giving a talk, joking: "don't do bash, I mean, don't try this at home kids". I know lots of DevOps people would disagree on this one, especially Ops folks. But Python scripts are just so much more readable, which means they will be much more maintainable. You do not need deep knowledge of object orientation, but functions and modules with imports are nice to have. Here is an example: Python's GitLab module makes some automation easy, and you might refactor the shared setup into some "getInstance" helper in one module (see the sketch at the end of this post). Otherwise, you will end up with copy-paste in all your files, which makes them tricky to modify. I am a developer so my point of view could be biased (well, DevOps now, but... still a developer).

If you have no knowledge of how a web page is built, here is a step-by-step progression you can try for yourself. You do not need to go deep on each subject, but go broad. You might never need it, depending on your responsibilities, but at least you will learn the vocabulary of "what you need to install a modern website". I am using HTTPie all the time, plus wget or curl. But you might be more comfortable with Postman or Insomnia. The idea is to be able to make a request from anywhere (mostly from a server) to anywhere.

Next comes monitoring. It is an important topic, but I will not talk too much about it; you should have enough with everything else. You can check out ELK, or Prometheus with Grafana. I am just dropping some names without any explanation. If you want to dig into monitoring, you will find everything you need.

Infrastructure as code is a big topic as well; check Terraform. Here is a small exercise: the idea is to set up a VM instance from a backup with Terraform. I did this to cut my personal VM cost, as backup storage is much cheaper. It is a practical case, not that useful, but still a good exercise. Learn about some automation tools such as Ansible. Do not start with those, but if you are curious, you might want to check them out.

Still there? Not too scared? Once you learn a little bit about every topic, you should have a better sense of what you need and what you want. If you want to specialize in a role like Cloud Architect, you will know what to do. After that? Keep learning, again and again. It will never stop. It only goes faster and faster.
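The scripting advice above names Python's GitLab module, and the underlying point, building the authenticated API client once in one shared place instead of copy-pasting it into every script, is language-agnostic. Purely as an illustration of that pattern, here is a rough sketch in Go against GitLab's REST v4 API; the GITLAB_URL and GITLAB_TOKEN environment variables and the fields decoded below are assumptions for the example, not something from the original post.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// Project keeps only the fields this example cares about.
type Project struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// newGitlabRequest plays the "getInstance" role from the post: every
// script reuses it instead of repeating the base URL and token handling.
func newGitlabRequest(path string) (*http.Request, error) {
	base := os.Getenv("GITLAB_URL")    // e.g. https://gitlab.example.com
	token := os.Getenv("GITLAB_TOKEN") // a personal access token
	req, err := http.NewRequest("GET", base+"/api/v4"+path, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("PRIVATE-TOKEN", token)
	return req, nil
}

func main() {
	// List the projects the token's user is a member of.
	req, err := newGitlabRequest("/projects?membership=true")
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var projects []Project
	if err := json.NewDecoder(resp.Body).Decode(&projects); err != nil {
		panic(err)
	}
	for _, p := range projects {
		fmt.Println(p.ID, p.Name)
	}
}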
2
The Source Code to Vvvvvv
TerryCavanagh/VVVVVV
114
The source of the e1000e corruption bug (2008)
By Jonathan Corbet, October 21, 2008 When LWN last looked at the e1000e hardware corruption bug, the source of the problem was, at best, unclear. Problems within the driver itself seemed like a likely culprit, but it did not take long for those chasing this problem to realize that they needed to look further afield. For a while, the X server came under scrutiny, as did a number of other system components. When the real problem was found, though, it turned out to be a surprise for everybody involved. Tracking down intermittent problems is hard. When those problems result in the destruction of hardware, finding them is even harder. Even the most dedicated testers tend to balk when faced with the prospect of shipping their systems back to the manufacturer for repairs. So the task of finding this issue fell to Intel; engineers there locked themselves into a lab with a box full of e1000e adapters and set about bisecting the kernel history to identify the patch which caused the problem. Some time (and numerous fried adapters) later, the bisection process turned up an unlikely suspect: the ftrace tracing framework. Developers working on tracing generally put a lot of effort into minimizing the impact of their code on system performance. Every last bit of runtime overhead is scrutinized and eliminated if at all possible. As a general rule, bricking the hardware is a level of overhead which goes well beyond the acceptable parameters. So the ftrace developers, once informed of the bisection result, put in some significant work of their own to figure out what was going on. One of the features offered by ftrace is a simple function-call tracing operation; ftrace will output a line with the called function (and its caller) every time a function call is made. This tracing is accomplished by using the venerable profiling mechanism built into gcc (and most other Unix-based compilers). When code is compiled with the -pg option, the compiler will place a call to mcount() at the beginning of every function. The version of mcount() provided by ftrace then logs the relevant information on every call. As noted above, though, tracing developers are concerned about overhead. On most systems, it is almost certain that, at any given time, nobody will be doing function call tracing. Having all those mcount() calls happening anyway would be a measurable drag on the system. So the ftrace hackers looked for a way to eliminate that overhead when it is not needed. A naive solution to this problem might look something like the following. Rather than put in an unconditional call to mcount(), get gcc to add code like this: if (function_tracing_active) mcount(); But the kernel makes a lot of function calls, so even this version will have a noticeable overhead; it will also bloat the size of the kernel with all those tests. So the favored approach tends to be different: run-time patching. When function tracing is not being used, the kernel overwrites all of the mcount() calls with no-op instructions. As it happens, doing nothing is a highly optimized operation in contemporary processors, so the overhead of a few no-ops is nearly zero. Should somebody decide to turn function tracing on, the kernel can go through and patch all of those mcount() calls back in. Run-time patching can solve the performance problem, but it introduces a new problem of its own.
Changing the code underneath a running kernel is a dangerous thing to do; extreme caution is required. Care must be taken to ensure that the kernel is not running in the affected code at the time, processor caches must be invalidated, and so on. To be safe, it is necessary to get all other processors on the system to stop and wait while the patching is taking place. The end result is that patching the code is an expensive thing to do. The way ftrace was coded was to patch out every mcount() call point as it was discovered through an actual call to mcount(). But, as noted above, run-time patching is very expensive, especially if it is done a single function at a time. So ftrace would make a list of mcount() call sites, then fix up a bunch of them later on. In that way, the cost of patching out the calls was significantly reduced. The problem now is that things might have changed between the time when an mcount() call is noticed and when the kernel gets around to patching out the call. It would be very unfortunate if the kernel were to patch out an mcount() call which no longer existed in the expected place. To be absolutely sure that unrelated data was not being corrupted, the ftrace code used the cmpxchg operation to patch in the no-ops. cmpxchg atomically tests the contents of the target memory against the caller's idea of what is supposed to be there; if the two do not match, the target location will be left with its old value at the end of the operation. So the no-ops will only be written to memory if the current contents of that memory are a call to mcount(). This all seems pretty safe, except that it fell down in one obscure, but important case. One obvious place where an mcount() call could go away is in loadable modules. This can happen if the module is unloaded, of course, but there is another important case too: any code marked as initialization code will be removed once initialization is complete. So a module's initialization function (and any other code marked __init) could leave a dangling reference in the "mcount() calls to be patched out" list maintained by ftrace. The final piece of this puzzle comes from this little fact: on 32-bit architectures, memory returned from vmalloc() and ioremap() share the same address space. Both functions create mappings to memory from the same range of addresses. Space for loadable modules is allocated with vmalloc(), so all module code is found within this shared address space. Meanwhile, the e1000e driver uses ioremap() to map the adapter's I/O memory and NVRAM into the kernel's address space. The end result is this fatal sequence of events: A module is loaded into the system. As part of the module's initialization, a number of mcount() calls are made; these call sites are noted for later patching. Module initialization completes, and the module's __init functions are removed from memory. The address space they occupied is freed up for future use. The e1000e driver maps its I/O memory and NVRAM into the address range recently occupied by the above-mentioned initialization code. Ftrace gets around to patching out the accumulated list of mcount() calls. But some of those "calls" are now, actually, I/O memory belonging to the e1000e device. Remember that the ftrace code was very careful in its patching, using cmpxchg to avoid overwriting anything which is not an mcount() call. 
But, as Steven Rostedt noted in his summary of the problem: The cmpxchg could have saved us in most cases (via luck) - but with ioremap-ed memory that was exactly the wrong thing to do - the results of cmpxchg on device memory are undefined. (and will likely result in a write) The end result is a write to the wrong bit of I/O memory - and a destroyed device. In hindsight, this bug is reasonably clear and understandable, but it's not at all surprising that it took a long time to find. One should note that there were, in fact, two different bugs here. One of them is ftrace's attempt to write to a stale pointer. But the other one was just as important: the e1000e driver should never have left its hardware configured in a mode where a single stray write could turn it into a brick. One never knows where things might go wrong; hardware should never be left in such a vulnerable state if it can be helped. The good news is that both bugs have been fixed. The e1000e hardware was locked down before 2.6.27 was released, and the 2.6.27.1 update disables the dynamic ftrace feature. The ftrace code has been significantly rewritten for 2.6.28; it no longer records mcount() call sites on the fly, no longer uses cmpxchg, and, one hopes, is generally incapable of creating such mayhem again. Index entries for this article Kernel Ftrace Kernel Releases/2.6.27 The source of the e1000e corruption bug Posted Oct 23, 2008 2:26 UTC (Thu) by modernjazz (guest, #4185) [Link] There seem to be other bugs that were fixed by disabling CONFIG_DYNAMIC_FTRACE: see, e.g., https://bugs.launchpad.net/ubuntu/+source/linux/+bug/263059 It's interesting that this was discovered by studying what might be the scariest case (bricking the hardware), rather than in a much "easier" case of studying hangs-on-boot. It goes to show you, intense motivation can overcome a lot of the barriers of inconvenience! Posted Oct 23, 2008 3:31 UTC (Thu) by b (subscriber, #11875) [Link] Another issue was that ftrace was not a suspect at the time. I corrected any bugs that were passed on to me. We were designing a new (more robust) version of ftrace in the linux-tip tree. This new version does not have the problems that the old version (in 2.6.27) had. But since the new version was a new design, we held off pushing it to Linus. Unfortunately, all our testing of the old design never showed any of these issues. It took going out to a larger audience to have them appear. The source of the e1000e corruption bug Posted Oct 23, 2008 9:03 UTC (Thu) by alonz (subscriber, #815) [Link] Wouldn't it be better to simply dump the entire contents of the mcount buffer whenever any code is unmapped, instead of just disabling this (useful) optimization in a kernel that is likely to have a long life? Posted Oct 23, 2008 12:11 UTC (Thu) by b (subscriber, #11875) [Link] Wouldn't it be better to simply dump the entire contents of the mcount buffer whenever any code is unmapped, instead of just disabling this (useful) optimization in a kernel that is likely to have a long life? From a safety point of view, no. Anything other than disabling it was unacceptable in the stable release. If we found a simple bug (off by one, or array out of bounds) then we could have fixed it. But the bug was a design issue (which has changed in 2.6.28). How would we know for sure that we got every place that kernel text was freed? How do we know that we don't add more bugs with this "dump the mcount on release".
Now if you would like to have dynamic ftrace in 2.6.27, it would not be hard for me to port the new design. I've already ported it to 2.6.24-rt. Just do not expect this backport to show up in the stable branch. 64 bit archs safe? Posted Oct 23, 2008 8:52 UTC (Thu) by zdzichu (subscriber, #17118) [Link] The final piece of this puzzle comes from this little fact: on 32-bit architectures, memory returned from vmalloc() and ioremap() share the same address space. So those of us running x86_64 kernels were safe? What about other 64bit architectures with PCI-Express bus? h3 Posted Oct 23, 2008 12:03 UTC (Thu) by b (subscriber, #11875) [Link] So those of us running x86_64 kernels were safe? Yes, those running on x86_64 were safe from this bug because the init code and the NVM never shared the same address space. What about other 64bit architectures with PCI-Express bus? I will not say yes for sure. But most likely. The mapping of iospace is arch specific. But I don't see why a 64bit address space arch would share the iospace with anything els. Non-pessimal patching is possible Posted Oct 23, 2008 14:39 UTC (Thu) by jreiser (subscriber, #11027) [Link] Extreme caution is required. Yes, but "live" patching can be done, perhaps including this case. I have done it when all writes are naturally aligned, when the updated code makes sense after any subset of individual writes, and when the requirements for multi-processor synchronization can be postponed (as for Read-Copy-Update). In the particular case of x86, "call mcount" is five bytes: the one-byte opcode 0xe8, followed by four bytes of displacement. With a one-byte write, this can be changed to "test $displ,%eax" [opcode 0xe9] or "cmp $displ,%eax" [opcode 0x3d]. In this case both of these are equivalent to a no-op because of the software convention that the condition code is not busy (either as input or as output) at call. So, as long as mcount does not care about "extra" or "missing" calls [from caches or other processors] during a patch update, then live patching works and can be done inexpensively. Depending on the instruction-stream decoder, surrounding instructions, cache-line boundaries, etc., then the average time cost per patch site is most likely 0, 1/3, or 1/2 cycle; the maximum is 1 cycle. h3 Posted Oct 23, 2008 15:35 UTC (Thu) by b (subscriber, #26289) [Link] Actually, those instructions wouldn't be really noops : - they would take time to be executed - they update the flags Such instructions wouldn't be accepted as nop replacements (at least I wouldn't) h3 Posted Oct 23, 2008 17:59 UTC (Thu) by b (guest, #38647) [Link] Updating flags would be fine, because the compiler has already assumed that the flags would get messed up by the original function call. Non-pessimal patching is possible Posted Oct 23, 2008 19:07 UTC (Thu) by nevets (subscriber, #11875) [Link] Talking with Intel, they told me that updating code that might be running on another CPU is dangerous. Even in my tests, I found that the other CPU would take an GPF if it was executing code that changed. Basically they told me "don't do that". Modifying code on the fly is out of the question. Luckily we do not need to do that anymore. The nop patching is now done on system boot up before SMP is even initialized. The dynamic code now only updates .text section that never leaves once it is there (except for module unloading). In the case of module unloading, we now have a hook to remove the references in the ftrace table. 
We still check on code patching if what we modify is what we expect. If we fail here, we print a nasty warning and disable the function tracer. So far in my testing, I have not hit this warning. If anyone sees a warning coming out of the ftrace code, I hope they report it ASAP. And please CC me (rostedt@goodmis.org). Note: Some of this code is still in the queue to be pulled. Posted Nov 2, 2008 17:37 UTC (Sun) by b (guest, #11842) [Link] Do you check the pointers for validity before they are inserted in the table? If the pointer is not from the static kernel code or from module code, then it is worth investigating. The source of the e1000e corruption bug Posted Oct 23, 2008 18:02 UTC (Thu) by jimparis (guest, #38647) [Link] > So the no-ops will only be written to memory if the current contents of that memory are a call to mcount(). > This all seems pretty safe, except that it fell down in one obscure, but important case There's another bad case: when the memory was freed and reallocated with vmalloc, then filled with non-code data that includes the same byte sequence as the original call to mcount(). No I/O remapping required, and now you've corrupted something. Although the chance of having some random data in kernel memory exactly match the pattern that was there before is probably vanishingly small, it's still there. I'm glad to hear the ftrace code has been reworked to not do this anymore. Posted Oct 23, 2008 18:34 UTC (Thu) by b (subscriber, #11875) [Link] Why do you think I stressed in my quote: The cmpxchg could have saved us in most cases (via luck) - but with ioremap-ed memory that was exactly the wrong thing to do - the results of cmpxchg on device memory are undefined. (and will likely result in a write) ;-) Posted Oct 23, 2008 19:37 UTC (Thu) by b (guest, #38647) [Link] Aah, I did misread that, as "if we're lucky then the memory was not ioremapped and so the cmpxchg saves us". Nevermind, carry on :) The source of the e1000e corruption bug Posted Oct 24, 2008 9:45 UTC (Fri) by NAR (guest, #1313) [Link] if (function_tracing_active) mcount(); But the kernel makes a lot of function calls, so even this version will have a noticeable overhead; Exactly how noticeable? I was wondering, because the Erlang VM also has a similar trace capability that can be turned on and off at runtime. I don't know how it's implemented, but I doubt there's NOOP-ing of instructions involved - still, it is used in fairly performance-critical applications. I just can't help the feeling that NOOP-ing was done because modifying code on-the-fly is sexy, not because it's that much faster. Posted Oct 24, 2008 12:42 UTC (Fri) by b (subscriber, #11875) [Link] Our first version was not to replace the calls by nops, but by jmps (jmp three bytes forward: two bytes for the jmp call, three nops to skip). This itself showed a 1 or 2% overhead. Not much, but enough to make it unacceptable. Now adding a branch to the equation will definitely bring the overhead up. Remember, this is called at every function call inside the kernel. The source of the e1000e corruption bug Posted Oct 24, 2008 12:47 UTC (Fri) by madhatter (subscriber, #4665) [Link] This is a fascinating writeup, and so clear even I can understand it. It's now my second-favourite account of "how we delved the technical depths of a nasty problem" after the one at http://www.justpasha.org/folk/rm.html (though there the delving is done while fixing the problem, and here the delving is done while understanding it). Nicely done, Jon; thanks.
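The checks discussed in this thread boil down to one idea: before touching a call site, verify that the five bytes there really encode a call to mcount() (the 0xe8 opcode followed by a 32-bit displacement relative to the next instruction), and refuse to patch anything else. The kernel does this in C and with arch-specific NOP encodings; the following Python sketch is purely illustrative, with made-up addresses, and just shows the arithmetic and the verify-before-write discipline.

```python
import struct

CALL_REL32 = 0xE8                       # x86 near-call opcode, then a signed 32-bit displacement
NOP5 = bytes.fromhex("0f1f440000")      # one common 5-byte x86 NOP encoding (nopl 0x0(%rax,%rax,1))

def is_call_to(code: bytes, site_addr: int, target_addr: int) -> bool:
    """True if the 5 bytes at site_addr encode `call target_addr`.

    The displacement is relative to the address of the *next* instruction
    (site_addr + 5), which is how x86 rel32 calls are encoded.
    """
    if len(code) < 5 or code[0] != CALL_REL32:
        return False
    (disp,) = struct.unpack("<i", code[1:5])
    return site_addr + 5 + disp == target_addr

def patch_if_expected(code: bytearray, site_addr: int, mcount_addr: int) -> bool:
    """Replace the call with a 5-byte NOP only if the expected call is present."""
    if not is_call_to(bytes(code[:5]), site_addr, mcount_addr):
        return False        # bytes are not what we expect: leave them alone
    code[:5] = NOP5
    return True

# Toy example with hypothetical addresses: a call at 0x1000 whose target is 0x2000.
site, mcount = 0x1000, 0x2000
buf = bytearray(struct.pack("<Bi", CALL_REL32, mcount - (site + 5)))
assert patch_if_expected(buf, site, mcount)
```

The same check run against stale or reallocated memory would simply return False, which is the behaviour the reworked ftrace code aims for: warn and back off rather than write blindly.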
Posted Oct 25, 2008 19:35 UTC (Sat) by b (guest, #46433) [Link] This is a big reason why I subscribe to LWN.net. I'm more of a graphics and «user» person, and not so much a developer (although I'm thoroughly fascinated by it). With Jonathan Corbet's extremely nice, easy to follow and oh-so-nice-and-technical articles I'm always getting a better understanding of computer tech and how the Linux kernel works. It's very interesting. Just two bugs? Posted Nov 2, 2008 17:54 UTC (Sun) by kasperd (guest, #11842) [Link] I would count hardware being able to brick itself as a bug as well.
5
Brave Search
Brave Search doesn’t track you or your queries. Ever. Private, independent, and transparent, Brave Search is the real alternative to Google. On mobile, desktop, and anywhere the web takes you. Search private. Search with confidence.
254
What Peng Shuai reveals about one-party rule
The Economist
1
Make your own personal CRM (for free in 20 minutes)
tl;dr: don't miss important personal moments with your own personal CRM build time: 20 minutes (MVP) Update August 2020: the trick that enabled easy data import has been disabled. you can still use this build, but it will be more manual to set up if you came here from tech twitter, you're almost certainly aware of the ever present request for startup/meme about personal CRMs. If you came here from outside that bubble, the short version is you could use software to be a better friend¹. today we'll be building a personal CRM. You don't need to know how to code to do so - though we will be using a couple snippets I've written once set up, it will have the following features: #2 relies on you keeping a relatively up-to-date calendar. If that doesn't sound like you, you can choose to not implement that portion (or skip this entire exercise). there used to be a way to get every Facebook connection's birthday onscreen at once (via facebook.com/events/birthdays) the most recent UI update has limited the data to the next 7 days, making it mostly useless. I have not found a workaround on desktop, mobile app, or mobile web. their data export tools also do not include birthday data if you think you have a workaround that re-enables this hack, email me: alec@contextify.io #2.1 - let's go to LinkedIn and export that too. go to their member data portal, select Connections, and hit Request Archive #2.2 - there's a 10 minute wait because LinkedIn is also intentionally unhelpful. go make a coffee or feel free to start the next section. eventually, they will email you a link, it will take you back to the original page, and you'll click Download Archive. #3.0 - if you have multiple personal calendars (e.g. on one iCal and one on Google Calendars), I recommend consolidating them to one Google Calendar first #3.1 - great let's go to Calendar Settings select Import & Export on the sidebar, and select the Export button #3.2 - now we have a goofy .ics file. let's convert it to a useful .csv you can use my script. download it with: git clone git://github.com/alecbw/Google-Calendar-to-CSV && cd Google-Calendar-to-CSV move the .ics into that folder and then execute it with: python3 convert_ics_to_csv.py the script will convert the timestamps to your local timezone. it does not account for daylight savings or days you were traveling, because those things are hard. nice nice nice now you should have the two code-parsed CSVs plus the one downloaded from LinkedIn. let's smoosh them all together. If the article tag wasn't a giveaway, the final product lives in Google Sheets. I've created a template you can use here You'll want to copy and paste the three CSVs into the respectively named tabs, aligning them with the existing headers (you can CMD+A and CMD+C select the entirety of the CSV and paste into the cell labeled Paste Into This Cell) In the main tab, add some Full Names and watch the rest of the columns populate! Past the first two rows, you'll need to copy and paste (or drag down) the formulas once you get past the green/orange cells. The VLOOKUPs are mostly populated by Full Name; the Nicknames column is used for VLOOKUP’ing the Saw Last / Events data. ok so you've got your sweet looking personal CRM all set up. but wait. you're not done. let's make sure future calendar events are added. #6.1 - let's setup the Zapier Google Calendar -> PRM connection [it fits in the free tier if you have <100 events per months after filtering] Set the trigger to Event Ended in Google Calendar. 
Optionally: add a filter if you want to ignore recurring events (e.g. daily workout times). Events that don't trigger the filter won't count toward your Zap total. #6.2 - field values and formulas Below I've included the values to put in each field for the write to Zapier. Some require formulas to be pasted in; they will run automatically after the Zap is set up. You can copy and paste the below Field Values: ¹ There are plenty of valid reasons why using software to manage your friendships is non-ideal (or, as described by others, unempathetic). I am of the opinion that it is worthwhile to have something to counterbalance my forgetfulness. To each, their own. Thanks for reading. Questions or comments? 👉🏻 alec@contextify.io
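For the curious, the .ics-to-.csv step above (the author's convert_ics_to_csv.py) can be approximated in a few lines of standard-library Python. This is only a rough sketch: it assumes one property per line, ignores time zones, recurrence, and line folding, and the file names calendar.ics and events.csv are placeholders, not part of the original build.

```python
import csv

def ics_to_rows(ics_path: str):
    """Yield (start, summary, attendees) tuples from a .ics calendar export."""
    event, attendees = {}, []
    with open(ics_path, encoding="utf-8") as f:
        for raw in f:
            line = raw.strip()
            if line == "BEGIN:VEVENT":
                event, attendees = {}, []
            elif line == "END:VEVENT":
                yield (event.get("DTSTART", ""), event.get("SUMMARY", ""),
                       "; ".join(attendees))
            elif line.startswith("ATTENDEE"):
                attendees.append(line.split(":", 1)[-1])   # usually mailto:<email>
            elif ":" in line:
                key, value = line.split(":", 1)
                event[key.split(";", 1)[0]] = value        # drop parameters like ;TZID=

with open("events.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["start", "summary", "attendees"])
    writer.writerows(ics_to_rows("calendar.ics"))
```

The resulting CSV can be pasted into the "Saw Last / Events" tab of the template just like the output of the original script.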
2
What Is the Risk of Catching the Coronavirus on a Plane?
Florida Gov. Ron DeSantis tried to alleviate fears of flying during the pandemic at an event with airline and rental car executives.”The airplanes have just not been vectors when you see spread of the coronavirus,” DeSantis said during a discussion at Fort Lauderdale-Hollywood International Airport on Aug. 28. “The evidence is the evidence. And I think it’s something that is safe for people to do.” Is the evidence really so clear? DeSantis’ claim that airplanes have not been “vectors” for the spread of the coronavirus is untrue, according to experts. A “vector” spreads the virus from location to location, and airplanes have ferried infected passengers across geographies, making COVID-19 outbreaks more difficult to contain. Joseph Allen, an associate professor of exposure assessment science at Harvard University called airplanes “excellent vectors for viral spread” in a press call. In context, DeSantis seemed to be making a point about the safety of flying on a plane rather than the role airplanes played in spreading the virus from place to place. When we contacted the governor’s office for evidence to back up DeSantis’ comments, press secretary Cody McCloud didn’t produce any studies or statistics. Instead, he cited the Florida Department of Health’s contact tracing program, writing that it “has not yielded any information that would suggest any patients have been infected while travelling on a commercial aircraft.” Florida’s contact tracing program has been mired in controversy over reports that it is understaffed and ineffective. For instance, CNN called 27 Floridians who tested positive for COVID-19 and found that only five had been contacted by health authorities. (The Florida Department of Health did not respond to requests for an interview.) In the absence of reliable data, we decided to ask the experts about the possibility of contracting the virus while on a flight. On the whole, airplanes on their own provide generally safe environments when it comes to air quality, but experts said the risk for infection depends largely on policies airlines may have in place regarding passenger seating, masking and boarding time. According to experts, the risk of catching the coronavirus on a plane is relatively low if the airline is following the procedures laid out by public health experts: enforcing mask compliance, spacing out available seats and screening for sick passengers. “If you look at the science across all diseases, you see few outbreaks” on planes, Allen said. “It’s not the hotbed of infectivity that people think it is.” Airlines frequently note that commercial planes are equipped with HEPA filters, the Centers for Disease Control-recommended air filters used in hospital isolation rooms. HEPA filters capture 99.97% of airborne particles and substantially reduce the risk of viral spread. In addition, the air in plane cabins is completely changed over 10 to 12 times per hour, raising the air quality above that of a normal building. Because of the high air exchange rate, it’s unlikely you’ll catch the coronavirus from someone several rows away. However, you could still catch the virus from someone close by. “The greatest risk in flight would be if you happen to draw the short straw and sit next to or in front, behind or across the aisle from an infector,” said Richard Corsi, who studies indoor air pollution and is the dean of engineering at Portland State University. 
It’s also important to note that airplanes’ high-powered filtration systems aren’t sufficient on their own to prevent outbreaks. If an airline isn’t keeping middle seats open or vigilantly enforcing mask use, flying can actually be rather dangerous. Currently, the domestic airlines keeping middle seats open include Delta, Hawaiian, Southwest and JetBlue. The reason for this is that infected people send viral particles into the air at a faster rate than the airplanes flush them out of the cabin. “Whenever you cough, talk or breathe, you’re sending out droplets,” said Qingyan Chen, professor of mechanical engineering at Purdue University. “These droplets are in the cabin all the time.” This makes additional protective measures such as mask-wearing all the more necessary. Chen cited two international flights from earlier stages of the pandemic where infection rates varied depending on mask use. On the first flight, no passengers were wearing masks, and a single passenger infected 14 people as the plane traveled from London to Hanoi, Vietnam. On the second flight, from Singapore to Hangzhou in China, all passengers were wearing face masks. Although 15 passengers were Wuhan residents with either suspected or confirmed cases of COVID-19, the only man infected en route had loosened his mask mid-flight and had been sitting close to four Wuhan residents who later tested positive for the virus. Even though flying is a relatively low-risk activity, traveling should still be avoided unless absolutely necessary. “Anything that puts you in contact with more people is going to increase your risk,” said Cindy Prins, a clinical associate professor of epidemiology at the University of Florida College of Public Health and Health Professions. “If you compare it to just staying at home and quick trips to the grocery store, you’d have to put it above” that level of risk. The real danger of traveling isn’t the flight itself. However, going through security and waiting at the gate for your plane to dock are both likely to put you in close contact with people and increase your chances of contracting the virus. In addition, boarding — when the plane’s ventilation system is not running and people are unable to stay distanced from one another — is one of the riskiest parts of the travel process. “Minimizing this time period is important to reduce exposure,” wrote Corsi. “Get to your seat with your mask on and sit down as quickly as possible.” All in all, it’s too early to determine how much person-to-person transmission has occurred on plane flights. Julian Tang, an honorary associate professor in the Department of Respiratory Sciences at the University of Leicester in England, said he is aware of several clusters of infection related to air travel. However, it is challenging to prove that people have caught the virus on a flight. “Someone who presents with COVID-19 symptoms several days after arriving at their destination could have been infected at home before arriving at the airport, whilst at the airport or on the flight — or even on arrival at their destination airport — because everyone has a variable incubation period for COVID-19,” Tang said. Katherine Estep, a spokesperson for Airlines for America, a U.S.-focused industry trade group, said the CDC has not confirmed any cases of transmission onboard a U.S. airline. The absence of confirmed transmission is not necessarily evidence that fliers are safe. Instead, the lack of data reflects the fact that the U.S. 
has a higher infection rate relative to other countries, said Chen. Since the U.S. has so many confirmed cases, it’s more difficult to determine exactly where somebody contracted the virus.
1
Turn any image into a professional product image in one click
The best eCommerce tool Fully automatic professional product photo editor Turn any image into a professional product image in one click AI-powered photo edit Whitepxer does not require any manual editing; it is fully automatic and completes within a few seconds Smart adjustment You can adjust the details of the product photo to make it even better - Shadow effect Slide shadow variables to easily create product shadow effects - Color Slide color variables to easily adjust color - Eraser Easily erase excess items - Rotate Easily adjust product angle Stunning Quality Made for all major eCommerce platforms They Love Us, You Will Too. "Your software is really easy to use. We are an e-commerce company. Before using Whitepxer, we had to make product images all night through." Amy E-commerce Amazon/eBay 5/5 "Awesome tool! I will definitely recommend it to my friends.” David E-commerce Amazon 4.7/5 "This website is absolutely amazing! Of course this incredible AI background create product image must be collected and enjoyed hahahaha!” Drone E-commerce shopify 5/5 Kiss your photo editing headaches goodbye Don’t waste another ounce of energy struggling to get your photos edited under a tight deadline. Try for free.
4
Africa may have reached the pandemic's holy grail
Africa may have reached the pandemic's holy grail toggle caption Joseph Mizere/Xinhua News Agency/Getty Images Joseph Mizere/Xinhua News Agency/Getty Images When the results of his study came in, Kondwani Jambo was stunned. He's an immunologist in Malawi. And last year he had set out to determine just how many people in his country had been infected with the coronavirus since the pandemic began. Jambo, who works for the Malawi-Liverpool-Wellcome Trust Clinical Research Programme, knew the total number of cases was going to be higher than the official numbers. But his study revealed that the scale of spread was beyond anything he had anticipated — with a huge majority of Malawians infected long before the omicron variant emerged. "I was very shocked," he says. Most important, he says, the finding suggests that it has now been months since Malawi entered something akin to what many countries still struggling with massive omicron waves consider the holy grail: the endemic stage of the pandemic, in which the coronavirus becomes a more predictable seasonal bug like the flu or common cold. In fact, top scientists in Africa say Malawi is just one of many countries on the continent that appear to have already reached — if not quite endemicity — at least a substantially less threatening stage, as evidenced by both studies of the population's prior exposure to the coronavirus and its experience with the omicron variant. To understand how these scientists have come to hold this view, it helps to first consider what the pandemic has looked like in a country such as Malawi. Before the omicron wave, Malawi didn't seem to have been hit too hard by COVID-19. Even by July of last year, when Malawi had already gone through several waves of the coronavirus, Jambo says it appeared that only a tiny share of Malawians had been infected. "Probably less than 10% [of the population], if we look at the number of individuals that have tested positive," says Jambo. The number of people turning up in hospitals was also quite low — even during the peak of each successive COVID-19 wave in Malawi. Jambo knew this likely masked what had really been going on in Malawi. The country's population is very young — it has a median age of around 18, he notes. This suggests most infections prior to omicron's arrival were probably asymptomatic ones unlikely to show up in official tallies. People wouldn't have felt sick enough to go to the hospital. And coronavirus tests were in short supply in the country and therefore were generally used only for people with severe symptoms or who needed tests for travel. Goats and Soda Opinion: 5 steps we must take to vaccinate the world's vulnerable—and end the pandemic So to fill in the true picture, Jambo and his collaborators turned to another potential source of information: a repository of blood samples that had been collected from Malawians month after month by the national blood bank. And they checked how many of those samples had antibodies for the coronavirus. Their finding: By the start of Malawi's third COVID-19 wave with the delta variant last summer, as much as 80% of the population had already been infected with some strain of the coronavirus. "There was absolutely no way we would have guessed that this thing had spread that much," says Jambo. Similar studies have been done in other African countries, including Kenya, Madagascar and South Africa, adds Jambo. 
"And practically in every place they've done this, the results are exactly the same" — very high prevalence of infection detected well before the arrival of the omicron variant. Jambo thinks the findings from the blood samples in Malawi explain a key feature of the recent omicron wave there: The number of deaths this time has been a fraction of the already low number during previous waves. Less than 5% of Malawians have been fully vaccinated. So Jambo says their apparent resistance to severe disease was likely built up as a result of all the prior exposure to earlier variants. "Now we have had the beta variant — we have had the delta variant and the original," notes Jambo. "It seems like a combination of those three has been able to neutralize this omicron variant in terms of severe disease." toggle caption Rajesh Jantilal/AFP via Getty Images Rajesh Jantilal/AFP via Getty Images And now that the omicron wave has peaked across Africa, country after country there seems to have experienced the same pattern: a huge rise in infections that has not been matched by a commensurate spike in hospitalizations and death. Shabir Madhi is a prominent vaccinologist at the University of the Witwatersrand in South Africa. "I think we should draw comfort from the fact that this has been the least severe wave in the country," he says. The most likely reason, he says, is that — like Malawi — South Africa gained immunity through prior infections, he says. One difference is that in South Africa's case, this immunity came at a high price. South Africa's population is substantially older than Malawi's, and during the delta wave last summer, hospitals in the country were swamped. Still, the upshot, says Mahdi, is that "we've come to a point where at least three-quarters — and now after omicron, probably 80% — of South Africans have developed immunity and at least protection against severe disease and death." Of course, whether Africa is truly now in a less dangerous position depends on a "key question," says Emory University biologist Rustom Antia. "How long does the immunity that protects us from getting ill last?" Antia has been studying what would need to happen for the coronavirus to become endemic. But Mahdi says there's reason to be optimistic on this front. Research suggests this type of protection could last at least a year. So Mahdi says in African countries — and likely in many other low- and middle-income countries with similar experiences of COVID-19 — the takeaway is already clear: "I think we've reached a turning point in this pandemic. What we need to do is learn to live with the virus and get back to as much of a normal society as possible." Goats and Soda Welcome to the era of omicron rules and regs What does that look like? For one thing, says Mahdi, "we should stop chasing just getting an increase in the number of doses of vaccines that are administered." Vaccination efforts should be more tightly targeted on the vulnerable: "We need to ensure that at least 90% of people above the age of 50 are vaccinated." Similarly, when the next variant comes along, Mahdi says, it will be important not to immediately panic over the mere rise in infections. This rise will be inevitable, and any policy that's intended to stop it with economically disruptive restrictions, such as harsh COVID-19 lockdowns, isn't just unnecessarily damaging — "it's fanciful thinking." Instead, officials should keep an eye out for the far more unlikely scenario of a rise in severe illness and death.
8
Pfizer Shot Provides Partial Omicron Shield in Early Study
To continue, please click the box below to let us know you're not a robot.
1
Personal Finance for Engineers
Session 1: Introduction Date: September 15, 2020 Session 2: Behavioral Finance Date: September 22, 2020 Session 3: Compensation Date: September 29, 2020 Session 4: Saving & Budgeting Date: October 6, 2020 Session 5: Assets & Net Worth Date: October 13, 2020 Session 6: Debt Date: October 20, 2020 Session 7: Investing Date: October 27, 2020 Session 8: Financial Planning & Goals Date: November 3, 2020 Session 9: Real Estate Date: November 10, 2020 Session 10: Additional Topics (VC/PE, Derivatives, Crypto) Date: November 17, 2020
45
The $15B jet dilemma facing Boeing’s CEO
[1/3] A Boeing 737 MAX airplane lands after a test flight at Boeing Field in Seattle, Washington, U.S. June 29, 2020. REUTERS/Karen Ducey Summary Companies Boeing debates timing of next jet in strategy puzzle -sources Planemaker seeks to recover leading role after 737 MAX crisis Development cost and efficiency gains to be weighed in decision Airbus CEO says doesn't see any hurry for Boeing to replace MAX SEATTLE/PARIS, June 2 (Reuters) - Boeing Co CEO Dave Calhoun faces a multibillion-dollar dilemma over how to rebuild sales in its core airliner business that has sparked an internal debate and put the future of the largest U.S. exporter on the line, industry insiders say. Boeing is reeling from a safety scandal following crashes of its 737 MAX airliner and an air travel collapse caused by the pandemic. Those crises have overshadowed a deeper, longer term risk to the company's commercial passenger jet business. Boeing's share of the single-aisle jetliner market - where it competes in a global duopoly with Airbus - has faded from some 50% a decade ago to roughly 35% after the 737 MAX's lengthy grounding, according to Agency Partners and other analysts. Airbus' (AIR.PA) single-aisle A321neo has snapped up billions of dollars of orders in a recently booming segment of the market, as the largest MAX variants struggled to block it. Without a perfectly timed new addition to its portfolio, analysts warn America risks ceding to Europe a huge portion of that market - valued by planemakers at some $3.5 trillion over 20 years. But Boeing is not yet ready to settle on a plan to develop a new plane to counter the A321neo, and two leading options - press ahead now or wait until later - come with financial and strategic risks, several people briefed on the discussions said. "I'm confident that over a longer period of time, we'll get back to where we need to get to and I'm confident in the product line," Calhoun said in April as Boeing won new MAX orders. Asked about the company's discussions and options over a potential new airplane, a Boeing spokesman said it had no immediate comment beyond Calhoun's remarks to investors. A weakened Boeing has little margin for error, especially as it tackles industrial problems hobbling other airliners. Boeing's first option is to strike relatively quickly, bringing to market by around 2029 a 5,000-mile single-aisle jet with some 10% more fuel efficiency. That could potentially be launched for orders in 2023. "There is no better way to fix their image than invest in the future now, pure and simple," said Teal Group analyst Richard Aboulafia. A new single-aisle jet would replace the out-of-production 757 and fill a void between the MAX and larger 787, confirming a twist to earlier mid-market plans as reported by Reuters in April last year. The idea took a backseat early in the pandemic, before regaining attention. It would also be an anchor for an eventual clean-sheet replacement of the 737 family. An alternative option is to wait for the next leap in engine technology, not expected until the early 2030s. That could involve open-rotor engines with visible blades using a mixture of traditional turbines and electric propulsion. Wary of letting short-term product decisions drive strategy, Boeing is also prioritizing a deeper dive into investments or business changes needed to regain the No.1 spot, analysts say. Both approaches carry risks. If it moves too quickly, Boeing may face a relatively straightforward counter-move. 
Airbus' preference is do nothing and preserve a favorable status quo, European sources say. But it has for years harbored studies codenamed "A321neo-plus-plus" or "A321 Ultimate" with more seats and composite wings to repel any commercial attack. Such an upgrade might cost Airbus some $2-3 billion, but far less than the $15 billion Boeing would spend on a new plane. For Boeing, a premature tit-for-tat move runs the risk of merely replicating the strategic spot it finds itself in now. If it moves too slowly, however, investors may have to bear a decade of perilously low market share in the single-aisle category, the industry's profit powerhouse. Those urging restraint, including soon-departing finance chief Greg Smith, have a simple argument, insiders say. Boeing has amassed a mountain of debt and burned $20 billion in cash lurching from crisis to crisis. "It's a different world," one insider said. "How could you possibly be thinking about a new airplane?" However, some engineers at Boeing's Seattle commercial home are crying out for a bold move to reassert its engineering dominance following the worst period in its 105-year history. "That should be a priority for Boeing right now," said Tom McCarty, a veteran former Boeing avionics engineer. "To get back in clear leadership of advancing technology." As it weighs up when to act, Boeing has sought initial technical data from engine makers Rolls-Royce (RR.L), Pratt & Whitney (RTX.N) and the General Electric-Safran (GE.N), (SAF.PA) tie-up CFM International, industry sources say. A firm competition is not expected for a year or more, they add, a delay that illustrates Boeing's bind. Rolls, which has most to gain as it tries to re-enter the lucrative single-aisle market, said last month it would be ready for any new product. Watching Boeing's decision from the sidelines is China, where state manufacturer COMAC is working on a C919 narrowbody in a potential challenge to the cash-cow 737 and A320 families. Sitting on $7 billion in net cash and a second-mover's advantage, analysts say Airbus appears most comfortable, though it also faces its share of industrial headaches. A wild card in the deliberations is growing environmental pressure, mirrored in the priorities of each planemaker. Airbus has pledged to introduce the first hydrogen-powered small commercial plane in 2035. The "zero-emission" agenda reflects its CEO's conviction that disruptive technology will play a role in next-generation jets. But industry sources say it is no coincidence that such rhetoric also steers Boeing away from launching an interim jet. Boeing has emphasized quicker gains from sustainable aviation fuel (SAF). Any new 757-style jet would feature the ability to run 100% on SAF, people familiar with the plan said. While backing the drop-in fuel for technical reasons, Boeing has left itself enough room to argue that a relatively early new plane would still fit the industry's environmental objectives. Airbus has meanwhile kept up pressure with proposals last week to almost double single-aisle output within four years. While some suppliers questioned how quickly the plan was achievable, one industry executive noted it sent a "message that Airbus exits the crisis as No.1 and intends to stay there". One risk is that anything that looks like a grab for market share could trigger the very Boeing jet Airbus hopes to avoid. 
Asked whether he thought Airbus's expansion plans might provoke Boeing into launching a new plane, Airbus CEO Guillaume Faury played down the prospect of a new industry arms race. “If they trust the MAX with the pent-up demand they see for single-aisle then I don’t see why they would be in a hurry to replace the MAX. If they are in a different situation they might come to other conclusions,” Faury told Reuters. Reporting by Eric M. Johnson in Seattle, Tim Hepher in ParisAdditional reporting by Ankit Ajmera in BangaloreEditing by Mark Potter Our Standards: The Thomson Reuters Trust Principles.
47
Game of Life running on Penrose tiles
There's a fantastic feature in the New York Times about "The Lasting Lessons of John Conway's Game of Life" — well worth reading on its own, since they solicited short reflections from big thinkers on why Conway's famous cellular-automata gewgaw remains so fascinating, decades after its invention. Me, I first got Life running via a BASIC version in an early-80s computer magazine (like this one, in Byte). It fried my noodle; I was accustomed to programs deterministically doing things you expected, but not deterministically doing things you didn't. Apparently, as the Times reports, Conway grew to hate his invention, to the point of shouting "I hate Life!" when someone mentioned it. But the most intriguing note in that piece, for me, was learning how the computer scientist Susan Stepney got Life running on Penrose tiles. Penrose tiles are nonrepeating, so working out the ruleset is a tricky affair. In this paper, she and Nick Owens describe their algorithm, and show off how they did it. Here's an example of them figuring out the possible neighbors for individual tiles … The original Game of Life has figures that become stable — they don't evolve any more because the position of their tiles prevents any new ones being born, or any from dying. (Conway called these "Still Lifes".) Stepney and Owens found a bunch of still lifes in Penrose tiling, too … One of the fun parts of the original Life was its stable oscillating patterns, ones that got stuck in an evolutionary loop, reproducing the same shape like an animated gif. They found some of these in Penrose too — including these ones, which, given the jagged geometry of Penrose tiling, they delightfully called "bats" … Here's what one of the bats looks like, in its fluctuations! Another famous construct in Life was the "glider" — an oscillating set of tiles that moved diagonally each time it looped around, so it flies off eternally into Lifespace. In 2012 some academics figured out how to make a glider in Penrose Life; there's video in this New Scientist story. There's more stuff if you poke around online a bit — here's another scholarly paper on Penrose Life, and some video of Penrose Life on Youtube.
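The key move in getting Life onto Penrose tiles is to stop assuming a square grid and instead define the rules over a neighbourhood graph, where each tile simply knows which tiles touch it. Here is a small Python sketch of that generalisation using the ordinary B3/S23 rules on an arbitrary neighbour graph; Stepney and Owens' actual Penrose rules and neighbourhood definitions differ in detail, so treat this as the idea rather than their algorithm.

```python
def step(alive: set, neighbours: dict) -> set:
    """One generation of Life on an arbitrary adjacency graph.

    `neighbours` maps each cell id to the set of cell ids touching it;
    on a square grid this reduces to the classic 8-neighbour rule.
    """
    counts = {}
    for cell in alive:
        for n in neighbours[cell]:
            counts[n] = counts.get(n, 0) + 1
    # Born with exactly 3 live neighbours, survive with 2 or 3 (B3/S23).
    return {c for c, k in counts.items()
            if k == 3 or (k == 2 and c in alive)}

# Tiny demo on a 5x5 square grid: a "blinker" oscillates with period 2.
cells = [(x, y) for x in range(5) for y in range(5)]
neigh = {c: {(c[0] + dx, c[1] + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0) and (c[0] + dx, c[1] + dy) in cells}
         for c in cells}
blinker = {(1, 2), (2, 2), (3, 2)}
print(step(blinker, neigh))   # -> {(2, 1), (2, 2), (2, 3)}
```

Swap the `neighbours` dictionary for one derived from a Penrose tiling's adjacency (and adjust the birth/survival counts as the paper does) and the same `step` function applies unchanged.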
1
Git ls-files is Faster Than Fd and Find
In the Linux Git repository, running hyperfine --export-markdown <results-file> --warmup 10 'git ls-files' 'find' 'fd --no-ignore' shows that git ls-files is more than 5 times faster than both fd --no-ignore and find! In my editor I changed my mapping to open files from fd to git ls-files and I noticed it felt faster after the change. But that's intriguing, given fd's goal to be very fast. Git, on the other hand, is primarily a source code management system (SCM); its main business is not to help you list your files! Let's run some benchmarks to make sure. Is git ls-files actually faster than fd or is that just an illusion? In our benchmark, we will use hyperfine to compare git ls-files, fd, and find. We run the benchmarks with the disk cache filled; we are not measuring the cold-cache case. That's because in your editor, you may use the commands mentioned multiple times and would benefit from the cache. The results are similar for an in-memory repo, which confirms cache filling. Also, you work on those files, so they should be in cache to a degree. We also make sure to be on a quiet PC, with CPU power-saving deactivated. Furthermore, the CPU has 8 cores with hyper-threading, so fd uses 8 threads. Last but not least, unless otherwise noted, the files in the repo are only the ones committed; for instance, no build artifacts are present. We first need a Git repository. I've chosen to clone the Linux kernel repo because it is a fairly big one and a reference for Git performance measurements. This is important to ensure searches take a non-trivial amount of time: as hyperfine rightfully points out, short run times (less than 5 ms) are more difficult to accurately compare. git clone --depth 1 --recursive <linux-kernel-repo-url> We want to evaluate git ls-files versus fd and find. However, getting exactly the same list of files is not a trivial task. After some more tries, it turns out that this command gives exactly the same output as git ls-files: fd --no-ignore --hidden --exclude .git --type file --type symlink. It is a fairly complicated command, with various criteria on the files to print, and that could translate to an unfair advantage to git ls-files. Consequently, we will also use the simpler variants. Hyperfine is a great tool to compare various commands: it has colored and markdown output, attempts to detect outliers, tunes the number of runs… Here is an asciinema showing its output. For our first benchmark, on an SSD with btrfs, with commit ad347abe4a… checked out, we run: hyperfine --export-markdown <results-file-1> --warmup 10 'git ls-files' 'find' 'fd --no-ignore' 'fd --no-ignore --hidden' 'fd' 'fd --no-ignore --hidden --exclude .git --type file --type symlink' The results confirm what was mentioned in the TL;DR: git ls-files is at least 5 times faster than its closest competitor! Let's find out why that is. To try to understand where this performance advantage of git ls-files comes from, let's look into how files are stored in a repository. This is a quick overview; you can find more details about Git's storage internals in this section of the Pro Git book. Git builds its own internal representation of the file system tree in the repository: Internal Git representation of the file system tree. (From the Pro Git book, written by Scott Chacon and Ben Straub and published by Apress, licensed under the Creative Commons Attribution Non Commercial Share Alike 3.0 license, copyright 2021.) In the figure above, each tree object contains a list of folder or file names and references to these (among other things).
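To make the tree-object description above concrete, here is a small sketch (run it inside any Git checkout) that asks Git to pretty-print the tree object behind the current commit with git cat-file. Each line shows a mode, an object type (blob for files, tree for sub-folders), a hash, and a name, which is exactly the "list of names plus references" the figure describes. The example output in the comment is hypothetical.

```python
import subprocess

def show_tree(rev: str = "HEAD") -> None:
    """Pretty-print the tree object behind `rev` using git cat-file."""
    out = subprocess.run(
        ["git", "cat-file", "-p", f"{rev}^{{tree}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines()[:10]:           # the first few entries are enough
        mode_type_hash, name = line.split("\t", 1)
        mode, otype, obj_hash = mode_type_hash.split()
        print(f"{otype:4s} {name:20s} -> {obj_hash[:12]}")

show_tree()   # e.g. 'tree Documentation -> a1b2c3...' in the kernel repo
```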
This representation is then stored by its hash in the .git folder, under .git/objects/<first two characters of the hash>/<rest of the hash>. As a result, to list the content of a folder, it seems Git has to access the corresponding tree object, stored in a file inside a folder named after the beginning of the hash. But doing that for the currently checked-out files all the time would be slow, especially for frequently used commands like git status. Fortunately, Git also maintains an index for files in the current working directory. This index lists (among other things) each file in the repository with file-system metadata like the last modification time. More details and examples are provided here. So, it seems that the index has everything ls-files requires. Let's check that it is actually used by ls-files: let's ensure that ls-files uses only the index, without scanning many files in the repo or the .git folder. That would explain its performance advantage, as reading a single file is cheaper than traversing many folders. To this end, we'll use strace like so: strace -e <syscall filter> git ls-files > /dev/null 2> <trace file>. It turns out the .git/index is read, and we are not reading objects in the .git folder or files in the repository. A quick check of Git's source code confirms this. We now have an explanation for the speed git ls-files displays in our benchmarks! However, listing files in a fully committed repository is not the most common case when you work on your code: as you make changes, a larger portion of the files are changed or added. How does git ls-files compare in these other scenarios? When there are changes to some files, we shouldn't see any significant performance difference: the index is still usable directly to get the names of the files in the repository; we don't really care whether their content changed. To check this, let's change all the C files in the kernel sources (using some fish shell scripting): for f in (fd -e c); echo 1 >> $f; end, then confirm the number of modified files with git status | wc -l, and re-run the benchmark: hyperfine --export-markdown <results-file-2> --warmup 10 'git ls-files' 'find' 'fd --no-ignore' 'fd --no-ignore --hidden --exclude .git --type file --type symlink' We see the same numbers as before, and it is again consistent with the ls-files source code. Run git checkout -f @ after this to remove the changes made to the files. With yet-uncommitted files, there are two subcases: new files that have already been added to the index behave like committed ones, while untracked files require the -o flag. So the only case that needs further investigation is the use of -o. Since we don't have baseline results yet for -o, let's first see how it compares without any unadded new files. When we haven't added any new files to the repository: hyperfine --export-markdown <results-file-3> --warmup 10 'git ls-files' 'git ls-files -o' 'find' 'fd --no-ignore' 'fd --no-ignore --hidden --exclude .git --type file --type symlink' That suggests that git ls-files -o is performing some more work besides "just" reading the index. With strace (strace -e <syscall filter> git ls-files -o > /dev/null 2> <trace file>), we see lines like: openat(AT_FDCWD, "Documentation/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 4 newfstatat(4, "", {st_mode=S_IFDIR|0755, st_size=1446, ...}, AT_EMPTY_PATH) = 0 getdents64(4, ..., 32768) = 3032 Let's add some files now: for f in (seq 1 1000); touch $f; end And compare with our baseline: hyperfine --export-markdown <results-file-4> --warmup 10 'git ls-files' 'git ls-files -o' 'find' 'fd --no-ignore' 'fd --no-ignore --hidden --exclude .git --type file --type symlink' There is little to no statistically significant difference from our baseline, which highlights that much of the time is spent on things relatively independent of the number of files processed. It's also worth noting that there is relatively little speed difference between git ls-files -o and fd --no-ignore --hidden --exclude .git --type file --type symlink.
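Before comparing the strace outputs in more detail, here is a minimal sketch that reads the header of the .git/index file this whole argument rests on. The on-disk format starts with the 4-byte signature "DIRC", a 4-byte version number, and a 4-byte entry count, all big-endian; parsing the per-entry records is left out to keep the sketch short. Run it inside a repository to see how many paths ls-files can answer from the index alone.

```python
import struct

def index_summary(path: str = ".git/index") -> tuple:
    """Return (version, entry_count) from a Git index file header."""
    with open(path, "rb") as f:
        header = f.read(12)                      # 4-byte signature + 2 big-endian u32s
    signature, version, entries = struct.unpack(">4sII", header)
    if signature != b"DIRC":
        raise ValueError(f"{path} does not look like a Git index file")
    return version, entries

version, entries = index_summary()
print(f"index version {version}, {entries} tracked files")
```

One read of a single file gives the full list of tracked paths, which is exactly why ls-files never needs to walk the working tree.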
Using strace, we can establish that all commands but git ls-files were reading all files in the repository. By comparing the strace outputs of git ls-files -o and fd --no-ignore --hidden --exclude .git --type file --type symlink (the two commands that print the same file list), we can see that they make similar system calls for each file in the repository. How to explain the (small) time difference between the two? I haven't found convincing reasons in the Git source code for this case. It might be that the use of the index gives ls-files a head start. I'm now using git ls-files in my keyboard-driven text editor instead of fd or find. It is faster, although the perceived difference described in the Introduction is probably due to spikes in latency on a cold cache. The selection of files is also narrowed down with ls-files to the ones I care about. That said, I've still kept the fd-based file listing as a fallback, as sometimes I'm not in a Git repository. After all, Git is already building an index, so why not use it to speed up your jumping from file to file!
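The fallback behaviour described in the conclusion (prefer git ls-files inside a repository, fall back to fd or a plain directory walk elsewhere) is easy to sketch. The following Python version is only an illustration: the exact commands and the final os.walk fallback are reasonable defaults, not what the author's editor actually runs.

```python
import os
import subprocess

def list_files(root: str = ".") -> list:
    """Prefer `git ls-files`, then `fd`, then a plain os.walk, in that order."""
    for cmd in (["git", "ls-files"], ["fd", "--type", "file"]):
        try:
            out = subprocess.run(cmd, cwd=root, capture_output=True,
                                 text=True, check=True)
            return out.stdout.splitlines()
        except (FileNotFoundError, subprocess.CalledProcessError):
            continue   # not a Git repo, or the tool is not installed
    return [os.path.relpath(os.path.join(d, f), root)
            for d, _, files in os.walk(root) for f in files]

print(len(list_files()))   # number of files the editor would offer to open
```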
108
We Built a C++ Rendering Engine for the Web
Open Design is a developer toolkit that allows you to read and display data from designs using code. Though Open Design was first released in early 2021, the technology behind it has been powering Avocode's design handoff tool for six years. This is the story of how it all came to be. Back in 2014, there was a lot of pain involved with designers handing off UI designs to developers. Many designers were still using Photoshop and developers didn't have a great way to get what they needed from those designs. Avocode 1.0 was released to address those problems. At launch, it targeted teams using Photoshop and we started hearing that they saw their design → code process get faster and more accurate. It was a success! But we were just getting started. After our launch, we listened closely to feedback from our customers. We also did some UX testing with our designer and developer friends in Prague to learn more about the pain points in the design handoff workflow. In the middle of 2016, we turned this feedback into an action plan. We focused our entire engineering team on building two interconnected features: This was quite a lot to take on at once, but we were confident that we could deliver and it would've killed us not to at least give it a try. The first task was building parsers that could extract data from Photoshop and Sketch files without relying on the design tool itself. These parsers would need to deconstruct the file and convert the contents to JSON. So the team got to work cracking open Photoshop's notoriously opaque binary format and Sketch's SQLite database with binary plist blobs (this was before Sketch 43's new format was released). After a lot of hard work and lots of trial and error, we had the designs converted into a readable JSON document that we called a "Source JSON". But we weren't finished yet. If we had stopped here, the team working on the Avocode app would have to implement all of the app's functions (measuring distances, extracting text, etc.) twice - once for Photoshop and once for Sketch. On top of that, if a design tool update was pushed out that modified the format, the app team would have to support both the old and new version of the format (since we can't expect that every file will be saved in the newest version). This sounded like a major headache, so we decided to go a step further. What we needed was a stable API between the parsing team and the app team. So we took the best ideas from the Photoshop and Sketch formats and created a spec for a new JSON-based design format. This spec turned into Octopus 1.0. The Octopus printing machine we created for a marketing campaign. Then the team built converters that mapped values from the Source JSON into the new Octopus format. Some values could be mapped directly and some needed to be converted (for example, coordinate systems were normalized). The obvious advantage of this approach was that the app team could focus on building a great product without worrying about whether a design tool update would break a feature. They were building on something stable and reliable. Meanwhile, the parsing team could focus on testing new design tool updates and making sure that they still translated into Octopus. With Octopus, we checked off the first task. The next one to tackle was being able to actually display the design to the user. Previously, we used our Photoshop plugin to export a bitmap of every layer and placed them in a grid and at the specified position. 
Since we could no longer rely on the original design tool, we needed a new approach for generating these bitmaps. So we decided to stop thinking like a handoff tool and to start thinking like a design tool. Photoshop and Sketch both had their own rendering engines, and it made sense for us to build one as well. So the team set out to build Render, a rendering engine that could faithfully reproduce the design using nothing but the data in the Octopus format. We built the first version of Render in about 6 months. It wasn't perfect, but it handled the basics well and surprisingly rendered real-world designs decently well. It proved that we were on the right track, but the most glaring problem we saw was its performance. The team used JavaScript since they were already experienced with it but after some benchmarks, it was clear that we were hitting some language-specific limits. Rewriting a project in a new language usually isn't the smartest thing to do, but we decided to risk it. We hired a few brilliant C++ developers and they got to work implementing the logic. C++ is faster than JavaScript for most operations, but the real speed boost was gained by using OpenGL for hardware acceleration. We knew beforehand that designers were (and still are!) very intentional with where they put pixels. If this project was going to be successful, we would have to make sure that Render's output was really really close to the original design tool. To measure precision, we built an internal tool that generated a bitmap of each artboard using both Render and the original design tool. It then compared the differences between the two and highlighted problem areas for our developers to look into. Especially later in the process, text rendering turned out to be a challenging thing to get pixel-perfect. After about a year of really hard work, we attained an average of 99% rendering precision. In addition, the whole thing was lightning fast. In the summer of 2018, we launched Avocode 3.0 and billed it as the "world’s first truly cross-platform design handoff tool". For the first time, users could drag and drop a Photoshop or Sketch file directly into Avocode (no software or plugins necessary) and in just a minute or two, they could see and inspect the design. This is what we had been working towards for the last 2 years! Oh, by the way, we delivered support for Adobe XD and Figma in this update as well. Once we had the foundation of Octopus + Render to work with, adding support for new formats was far simpler than it used to be. Launch campaign for Avocode 3.0 Subscribe to get the latest Open Design news by email Send We rode on the coattails of the Avocode 3.0 launch for a few weeks, fixing bugs and watching how all of the new features were being used. One metric we paid special attention to was the amount of time it took between uploading a design and actually being able to see it. It was pretty fast for small Photoshop designs, but it could take a few minutes for a multi-artboard Sketch design. In that processing workflow, we performed four steps: As we were thinking about optimizing steps 3 and 4, we had an idea. Instead of rendering every layer server-side and then having the app download all of the bitmaps, could we do the rendering directly in the browser? We started exploring a cutting-edge technology called Emscripten that directly converted native C++ programs into code that could be run on the web. 
With this, we could just download the Octopus file in the app and progressively render tiles of the design, resulting in much faster load times. And, as the user zoomed in, the design could remain sharp instead of getting pixelated. These improvements would elevate the user experience, so we gave it a shot. After learning the ropes of Emscripten, learning tons about the limitations of web technologies, and making some optimizations to Render, the first version of View was looking really good. In the spring of 2019, we launched Avocode 3.7 with these changes included. Designs were opening up to 3x faster and customers were loving that their designs remained sharp even when they zoomed in to 1000%. It was a success! How parts of the design are rendered when the user pans around We were really excited to have these core technologies - Octopus, Render, and View - finally integrated into Avocode. We even built free tools using these technologies to drive traffic to our site. The first of these, Photoshop → Sketch Converter, garnered a ton of praise from the community and saw lots of engagement. From this and feedback from a pre-launch campaign, we realized that the core technologies we built could have use cases beyond just Avocode. If we externalized these and turned them into a product, would anyone use it? We were determined to find out. In the middle of 2020, we started taking this initiative seriously. We started by creating a REST API to import designs and get Octopus back. Our engineers worked on cleaning up the Octopus format, creating TypeScript types, writing documentation, creating a new public-facing API, and more. In January 2021, we launched Open Design. Our main goal was to learn more about how potential customers wanted to use this tech in their applications. We hopped on calls with around 40 different companies and kept hearing the same two things: Open Design turns designs files into something usable. We also heard that the raw API endpoints were an obstacle to getting started. That's why, a few weeks ago, we launched Open Design SDK, which is a Node.js library for interacting with Open Design. We also included Render so that anyone can export high-quality bitmaps and vectors from their designs. We're still listening closely to learn more about different use cases as well as how to make it easier to adopt Open Design into your toolchain. We're excited to finally put the technology that we've spent years working on into the hands of people that can really use it. We believe that by making design more open and accessible, we can help make design tools smarter and teams work together better. If you have any questions or want to learn more, we'd love to hear from you! Contact us or tweet at me. Do these challenges sound interesting to you? If so, we'd love for you to join the team.
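The precision measurement described earlier (render the same artboard with both Render and the original design tool, diff the bitmaps, and report how close they are) can be approximated in a few lines. This sketch uses Pillow and NumPy and is illustrative only: the tolerance, the metric, and the file names are assumptions, not Avocode's actual methodology or the tool that produced the 99% figure.

```python
import numpy as np
from PIL import Image

def render_precision(path_a: str, path_b: str, tolerance: int = 8) -> float:
    """Fraction of pixels whose channels all differ by at most `tolerance` (0-255)."""
    a = np.asarray(Image.open(path_a).convert("RGBA"), dtype=np.int16)
    b = np.asarray(Image.open(path_b).convert("RGBA"), dtype=np.int16)
    if a.shape != b.shape:
        raise ValueError("bitmaps must have the same dimensions")
    matching = (np.abs(a - b) <= tolerance).all(axis=-1)   # per-pixel channel check
    return float(matching.mean())

# Hypothetical file names: one bitmap from the original design tool, one from Render.
score = render_precision("artboard_photoshop.png", "artboard_render.png")
print(f"{score:.2%} of pixels match within tolerance")
```

Highlighting the mismatching pixels (for example by saving `~matching` as a mask image) gives developers the "problem areas to look into" that the article mentions.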
3
He Designed a Smartwatch App to Help Stop His Dad's PTSD Nightmares
Mental Health He Designed A Smartwatch App To Help Stop His Dad's Nightmares toggle caption Carmen Ferderber Carmen Ferderber Tyler Skluzacek remembers his dad as a fun, outgoing man before he left to serve in Iraq. When Patrick Skluzacek came home in 2007, says his son, he had changed. Patrick was being consumed by nightmares. At night his dreams took him back to Fallujah, where he had served in the U.S. Army as a convoy commander. He sweated profusely and thrashed around in his sleep, sometimes violently. The nightmares were so vivid and so terrible that he feared closing his eyes. The only way he could get to sleep was with vodka and pills, he says. Patrick's life began to unwind. His marriage fell apart. "[I] pretty much lost everything," he says, fighting back tears. "My house, everything, my job, everything went." It's not an unfamiliar story for those who have served in war zones. According to the Department of Veterans Affairs, 52% of combat soldiers with post-traumatic stress disorder have nightmares fairly often, compared with 3% of the general public. They take a toll — not just on soldiers, but on their families. Patrick's son, however, would give the story a different ending. A hackathon to help those with PTSD Tyler was a senior at Macalester College in Saint Paul, Minn., in 2015 when he heard about a computer hackathon being held in Washington, D.C. Developers come together over an intense few days to build prototypes to tackle a specific problem. This particular hackathon focused on developing mobile applications to help people with PTSD. Tyler scraped together his on-campus job earnings and bought a ticket to Washington. During the hackathon, he put together a team to program a smartwatch to detect the onset of night terrors based on the wearer's heart rate and movement. The idea, Tyler says, was to use technology to imitate something service dogs were already doing — recognizing a traumatic nightmare and then nudging or licking the person to disrupt the bad dream. He thought the smartwatch could do this with a gentle vibration. The tricky part was to provide "just enough stimulus to pull them out of the deep REM cycle and allow the sleep to continue unaffected," Tyler says. Dad as guinea pig Getting the app to actually work — to recognize a nightmare and respond with just the right touch — would require a lot of trial and error. But what better test subject than your own dad? Patrick was game, but the experiment got off to a rocky start. In the early trials, the zapping watch spooked Patrick awake. And because he initially wore the watch around-the-clock, there were some startling readings. toggle caption NightWare The two break up laughing when they remember what happened when Patrick wore the watch while using an air hammer. "I still remember you had me wearing it full time," says Patrick, who lives in Blaine, Minn. "You thought I was having a heart attack because I had the watch on, and you thought my heart rate was 6,000 beats per minute." "I was terrified," says Tyler, who is now a graduate student in computer science at the University of Chicago. "Watching someone's data 24/7, I feel like is a lot like having a baby. I don't have a baby. But you're suddenly very concerned at all hours." With constant fine-tuning as his dad slept in the next room, Tyler eventually perfected the algorithm. "Having someone that close to you and knowing exactly when those nightmares happen was super important to training a model like that," Tyler says. 
"Little miracles" For Patrick, once they got the formula right, the watch was life-changing. "It was night and day when I put that watch on and it started working." The vibrations, he says, were "little miracles." After years of suffering, Patrick finally found relief. He was able to get his life back. He has remarried and he's working as a mechanic again. There are the occasional bad dreams, but they no longer rule his life. More people will soon be able to benefit from Tyler's invention. An investor purchased the rights to the app and started a company called NightWare. Last month, the Food and Drug Administration approved the app, which works with an Apple Watch, to treat PTSD-related nightmare disorders. It will soon be available by prescription through the VA.
15
Martin Fowler – Kinesis Advantage2 – Review after three years of use
About three-and-a-half years ago, I decided to pay a rather large sum for a rather unusual computer keyboard - the Kinesis Advantage 2. -- https://twitter.com/martinfowler/status/815974878675353600 $300 is rather a lot to pay for a keyboard, but as it's something I'm using all day when I'm at home, I'm happy to pay for something comfortable. But this is not a usual keyboard, as the photo indicates. The most notable statement on the product page is that it's an ergonomic keyboard. Hence when I mentioned it, the common reaction was that I was getting it to combat RSI. But that isn't the case. I did suffer from RSI, but that was in the 1990's and I solved it with some combination of a Microsoft Ergonomic Keyboard and a palm rest for my mousing hand. But comfort is more than combating RSI. A good keyboard is just a more pleasant experience when you're typing all day. And reading about the Kinesis on Avdi Grimm's blog intrigued me with its other features. The keyboard has a concave shape, to make it more comfortable to type with (once you're used to it). A couple of other things also mark it out. The first one is obvious from the photo - there's no spacebar. On a regular keyboard the only thing your thumbs do is insert a space. While that's a common occurrence, it seems a shame to have two of your strongest digits only do that one thing. The Kinesis instead has thumb clusters with several useful keys: return, backspace, space, and forward delete. I find it so helpful to be able to hit these keys without taking my hands off the home keys. (Although I actually don't use forward-delete; I'm used to that being <CTRL>-D.) The other unusual element of my setup is a set of foot pedals: I'm sitting at my desk all day, why shouldn't I use my feet? The natural thing to use feet for is modifier keys, and I've set it up so that the left pedal is CTRL and my right pedal is ⌘. I like being able to use my feet to avoid tangling my hands in awkward key combinations. I still have the keys on the keyboard (with CTRL in the caps-lock position, of course) but most of the time I let my feet do the work. I also have the middle pedal set for Alt, but I find it too awkward to use, so I hardly ever touch it. (The pedals were an extra $100.) I mention "setting it up", and that's because another serious feature of this keyboard is its extreme customizability. I can set any key to do any action, and there's a special modifier key to make this easier, allowing the user to insert common phrases or any other key combination. I haven't used any of this, since my editor is Emacs, and thus I already have as much customizability as anyone could ever need. I might mention my Emacs configuration is a little unusual. I learned Emacs back in the 80's, and learned it using Esc as the Meta key. I prefer that, as I generally prefer prefix keys to modifiers. So for the Kinesis, I configured one of the thumb cluster keys to Esc, and that suits my setup perfectly. Again my hands don't leave the keyboard to type Esc. If you look at the photo you'll see that the navigation arrow keys are also placed where I can easily reach them with my hands in their normal places. I only have to move my hands for the mouse or the function keys. With such an unusual setup, I knew that it would take a while to get used to typing with the new configuration. Not only is the spacebar gone, but the keys are laid out in a subtly different way - more comfortable, but ready to do a number on my touch-typing.
The first week was very frustrating - I certainly wasn't used to getting confused between typing a space and a carriage return. But after that week, things were mostly ok, and I steadily got better. One worry I had was that I'd have difficulty typing on a regular keyboard once I was used to the Kinesis. Thankfully that hasn't happened. It certainly isn't as nice working on the laptop keyboard, and it's another reason to miss my home setup. But I have had no difficulty switching from one to the other (although I do wonder if my extra-long period at home due to covid-19 might make it harder once I disconnect the laptop from its dock again). Not long after I bought it, the keyboard developed some faults. Kinesis's customer support was excellent: they sent me replacement parts, and all has been well since. Would I change anything? Oddly enough I do miss having a big spacebar - not when I'm typing, but because it's often used as a control for other things, such as the mute/unmute on Zoom. I've also wondered if something could be done with that space between the keys, perhaps a trackpad, or some apple-ribbon-like display? But these are nits; all in all I've been really happy with buying it, and consider it well worth its high price. (Avdi's review has a lot more detail on the keyboard, including explanations of its unusual key layout.)
2
Do Drift's “conversational landing pages” work?
Do Drift's "conversational landing pages" work? I'd like to hear from people who have tried Drift's "conversational landing pages". Did they work? What was your use case? Status: Open - Mar 28, 2019 - 07:50 AM - Ecommerce Reviews. 1 answer, Apr 02, 2019 - 09:52 AM: What Are Conversational Landing Pages? With conversational landing pages, Drift intends to make conversations (driven by chatbots, and later by your team) the focal point of landing pages. It's an alternative to having chat sidebars or pop-ups that might annoy, or get ignored by, customers. Conversational landing pages hinge on the idea that engaging with customers immediately will drive the conversions businesses want. If the chatbot says the right things, the customers should ideally move on to the next step in the sales funnel. The Value of Drift's Conversational Landing Pages Although Drift isn't cheap (many desired features are behind a paywall), it can pay off for some brands. The chat function works beautifully across devices, which is key in today's mobile-optimized world. It's easy to install and use, so there won't be a big learning curve for your team. If you don't have chat on your site yet, this is an easy way to add it. And if the chat feature you have isn't getting conversions, Drift's landing page approach is definitely worth a try. As people get burned out on popups and "contact us" forms, there's value in a more interactive method. The best use cases for this feature are in lead generation, and specifically, sorting out qualified leads from the rest of the pool. It's also helpful if you want to streamline your web navigation, such as by avoiding sending people to a "Contact Us" page separate from the landing page. You can see several demos on Drift's site. If your site has a high bounce rate with no contact information given, Drift might be a good solution. The chatbot option can free up time for your reps by pointing only qualified leads in their direction, so your live team doesn't have to sort through the whole chat pool. For example, the chatbot can send a lead to your rep once a certain point in the conversation has been reached, such as when the lead asks how much a product costs. One nice feature is that when a visitor leaves your site, Drift will send the chat to their email. This can give them the necessary push to convert if the chat alone didn't do it. The platform also gathers information on leads so you can contact them again. Finally, Drift offers a good support team, so you won't have trouble reaching someone if you have an issue. Where Conversational Landing Pages Won't Work One problem Drift chat runs into is a lack of organization. When you start getting lots of chat traffic, you may want a way to sort your customer chat logs (by importance or another category), a feature that Drift doesn't provide. Also, if you already have a chat feature on your site that's working well, there's no reason to replace it with Drift's landing page. And for companies that only sell low volumes and don't intend to scale up, lead generation is probably manageable enough without this service. You should also consider your audience before creating a landing page based around chat. If you have a more old-fashioned audience that doesn't spend much time online, they might find being thrust immediately into a chat creepy, not cool. Chat works well for many customers, but it's still not the best across-the-board solution for everyone.
1
Making 100€ per month as an indie hacker and a big F-U to the tech industry
100€ MRR as an Indie Hacker - May 2021 I am a (painfully) average developer. I do not make millions. I have not created complex trading algorithms, programs or businesses. I have a good yet average-paying job. My group of friends is (in the nicest way possible) as average as I am: none of us has reached massive success, some might have stocks, but we work for a living. My clique is however heavily skewed towards tech people (developers, managers, product designers), and it blows my mind how inflated our dreams and egos are. We sit in high towers where most of us are not happy with our high salaries and comfortable lifestyle. We chase the latest news to invest in the stock market, we jump companies every couple of years trying to become millionaires. The market and the tech ecosystem have ruined us. Last year I launched CI Demon, and whenever I tell my tech friends I make 100€ per month they literally laugh about it. You can tell they are thinking "that's funny, you need to market it! you need to sell more!". However, when I tell my non-tech friends, the reaction is completely different: "wow! that's amazing!" or "that's great, and you don't have to work much to make that!". What does 100€ per month mean to tech people? Nothing, because they have been brainwashed into looking at computers as a million dollar business. The crazy thing is: they cash a normal salary like me, yet they only consider businesses with returns of multiple zeroes as something worth chasing. That is incredibly dumb, and here is why: 100€/month is 1200€/year that goes straight into my pocket. It means a great vacation. It means visiting my family in my home country. 1200€ for non-tech people is a good salary bump, for which many still have to work 40+ hour weeks. Even for tech workers 1200€ is still a salary bump you need to ask your boss for. All I can say is, it makes me super happy: I get to create something, which is fun for me, and it also generates side income with very little time spent. Wake up people, the chances of you becoming a billionaire are slim to none; it's much better to be happy and have enough instead of chasing money every waking hour.
1
The Wisdom of Plants and the Future of Fashion
The relationship between plants and fashion has a very long history. From the earliest Egyptian linens made from flax fiber, to culturally specific techniques of pounding bark into cloth in Asia, Africa, and the Pacific, plant-based textiles have been part of our evolution as humans. In this digital age, information on the plant characteristics that permeate clothing made from natural textiles such as linen, cotton, ramie, and bamboo is at our fingertips. We can learn about the durability, moisture absorption, bacteria resistance, and breathability of the textiles we wear. Our curiosity can lead us to make healthy choices about what we put next to the largest organ of our body, our skin. At the same time, our search will show that clothing today carries a toxic burden. With the exception of California's Proposition 65, there is little regulation of chemicals in clothing in the United States. Both natural and synthetic textiles are infused with chemicals that do not biodegrade and can be absorbed by our skin. In the nineteenth century, synthetic azo dyes that do not require a mordant in the dye process replaced natural dyes in manufacturing. Today, the proliferation of cheap and plentiful mass-produced clothing has made the fashion industry one of the top polluters on the planet. Over-production of clothing is another key issue, along with chemical finishes such as waterproofing, anti-wrinkle treatment, fire retardants, herbicides, and pesticides. No matter how much we recycle individually, we find ourselves complicit in the toxic mix that eventually ends up in landfills. The Environmental Protection Agency has stated that the volume of clothing sent to landfills has skyrocketed, with 17 million tons sent to landfills in 2018. If clothing takes more than 200 years to decompose while millions of tons are added each year, recycling cannot catch up to it. Choosing between two options – natural plant-based textiles or synthetic ones – is important, but it may not be enough. We need to shift our thinking about fashion. We need to ask ourselves, what if clothing design was responsive to climate change and social issues like surveillance, and addressed our need for protection, comfort, health, and beauty? Can we program clothing to work for our changing needs? One part of the answer may reside in forming a new relationship with plants. We can find wisdom in the weeds under our feet if only we care to look more deeply. We have learned about plants, but we can also learn from plants. There are many species of buttercup common across the United States; the California Buttercup (Ranunculus californicus) has been found to have exceptional optical properties that could be biomimicked for clothing. (Photo: Calibas, CC BY-SA.) If clothing is our interface with the world beyond our body, its design shows little strategic intelligence, even as we drift deeper into climate unpredictability. In a climate-challenged time, we can learn by understanding how plants collaborate with their environment to survive. One example is the buttercup. It has yellow pigment in the petals' surface layer, but other layers use air just beneath their surface to reflect light like a mirror.
The glowing phenomenon provides a strong visual signal to insect pollinators and directs the reflection of sunlight to the center of the flower in order to heat the reproductive organs. The ingenious surface structure of the petals created by a buttercup is an example of strategic intelligence for species survival. Another example is the bumpy waxy surface of a lotus water lily leaf. Water rolls off the structure, which in effect makes it water repellent and helps its other functions. Plants also adjust to high UV levels by creating a structural sunscreen of translucent layers to protect those layers that need light for photosynthesis. Clothing designers have begun to address their footprint on the planet. Ethical considerations are driving companies such as Eileen Fisher, Patagonia and Stella McCartney to lead the way toward responsible fashion. The summer light feels brighter, and a search will verify that NASA found ultraviolet radiation from the sun increasing significantly over the last 30 years. Textile structures modeled on how plants deal with our shared environment might deserve exploration. The Selfie moment ushered in by social media has overshadowed the ecosystem that we depend on. We no longer notice the background. Not too long ago, we had definite boundaries between private and public space. We are trapped in the Greek cautionary tale of Narcissus looking at his reflection in a pool of water, captivated by his self-image and not able to turn away even at the cost of his life. Our planet is a conversation between all organic life in which we can work together for mutual benefit. Perhaps we will come around and use the power at our fingertips to search so that we can learn and imagine how to navigate a planet in crisis. Humankind can and must adapt to the Anthropocene era, and nature's models in designs and processes can provide abundant inspiration for an industry with a global scale of unconscionable environmental and health impacts. Daria Dorosh, PhD, is an artist, educator, activist, designer and researcher. An FIT professor emeritus, she is a pioneering advocate of sustainable fashion. Her company, Fashion Lab in Process (F.L.i.P), creates multidisciplinary models for an ethical fashion future. https://www.researchgate.net/figure/Fig-S1-Imaging-scatterometry-of-the-buttercup-Ranunculus-acris-and-the-kingcup-Caltha_fig2_314251397 https://www.transparency-one.com/regulating-reducing-chemicals-fashion-industry/ https://www.twosistersecotextiles.com/pages/fabric-and-impact
2
Curl Installations per Capita
I’ve joked with friends and said that we should have a competition to see who among us has the largest number of curl installations in their home. This is of course somewhat based on my claim that there are more than ten billion curl installations in the world. That’s more installations than humans. How many curl installations does an average person have? Amusingly, someone also asked me this question at a curl presentation I did recently. I decided I would count my own installations to see what number I could possibly come up with, ignoring the question of whether I actually could be considered “average” in this regard or not. This counting includes a few assumptions and estimates, but this isn’t a game we can play with complete knowledge. But no crazy estimates, just reasonable ones! I decided to count my entire household’s amount just to avoid having to decide exactly which devices to include or not. I’m counting everything that is “used regularly” in my house (things that haven’t been used within the last 12 months don’t count). We’re four people in my household: me, my wife and my two teenage kids. Okay. Let the game begin. This is the Stenberg household count of October, 2021. 4: My two kids have one computer each at home, one Windows 10 and one macOS. They also have one ChromeOS laptop each for school. 3: My wife has no less than three laptops with Windows 10 for work and for home. 3: I have three computers I use regularly: one Windows 10 laptop and two Debian Linux machines (laptop + desktop). 1: We have a Windows 10 NUC connected to the living room TV. Subtotal: 11 full-fledged computers. Tricky. On the Linux machines, the curl installation is often shared by all users, so just because I use multiple tools (like git) that use curl doesn’t increase the installation count. Presumably this is also the case for most macOS and ChromeOS apps. On Windows however, applications that use libcurl use their own private build (as Windows itself doesn’t provide libcurl, only the curl tool), so they would count as additional installations. But I’m not sure how much curl is used in the applications my family use on Windows. I don’t think my son, for example, plays any of those games in which I know they use curl. I do however have (I counted!) 8 different VMs installed on my two primary development machines, running Windows, Linux (various distros for curl testing) and FreeBSD, and they all have curl installed in them. I think they should count. 2: Android phones. curl is part of AOSP and seems to be shipped bundled by most vendor Androids as well. 2: iPhones. curl has been part of iOS since the beginning. 6 * 5: YouTube, Instagram, Spotify, Netflix and Google Photos are installed on all of the mobile devices. Lots of other apps and games also use libcurl of course; I’ve decided to count low. Subtotal: 30–40. Yeah, the mobile apps really boost the amount. 1: an LG TV. This is tricky since I believe the TV operating system itself uses curl and I know individual apps do, and I strongly suspect they run their own builds, so more or less every additional app on the TV runs its own curl installation… 1: An ASUS wifi router I’m “fairly sure” includes curl. 1: A Synology NAS I’m also fairly sure has curl. 1: My printer/scanner is an HP model. I know from “sources” that pretty much every HP printer made has curl in it. I’m assuming mine does too. I have half a dozen wifi-enabled power plugs in my house, but to my disappointment I’ve not found any evidence that they use curl.
I have a Peugeot e2008 (electric) car, but there are no signs of curl installed in it and my casual Google searches have also failed me. This could be one of the rarer car brands/models that don’t embed curl? Oh the irony. I have a Fitbit Versa 3 watch, but I don’t think it runs curl. Again, my googling doesn’t show any signs of that, and I’ve found no traces of my Ember coffee cup using curl. My fridge, washing machine, dishwasher, stove and oven are all “dumb”, not network connected and not running curl. Gee, my whole kitchen is basically curl naked. We don’t have game consoles in the household, so we’re missing out on those possible curl installations. I also don’t have any Blu-ray players or dedicated set-top/streaming boxes. We don’t have any smart speakers, smart lightbulbs or fancy networked audio players. We have a single TV, a single car, and have stayed away from lots of other “smart home” and IoT devices that could be running lots of curl. Subtotal: lots of future potential! 11 + 8 + 6 + (30 to 40) + (4 to 9) = 59 to 74 CIPH (curl installations per household). If we go with the middle estimate, that means about 66, or 16.5 CIPC (curl installations per capita). If the over 16 curl installations per person in just this household is any indication, my existing “ten billion installations” estimate may be rather on the low side… If we say 10 is a fair average count and there are 5 billion Internet-connected users, yeah, then we’re at 50 billion installations…
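For anyone who wants to play the same game, the tally is easy to script. A back-of-the-envelope sketch that simply mirrors the post's per-category estimates (they are the author's estimates, not anything you can measure programmatically):

```python
# Rough tally of curl installations in one household, mirroring the post's estimates.
counts = {
    "full-fledged computers": (11, 11),
    "VMs on the development machines": (8, 8),
    "mobile device OS installs": (6, 6),
    "mobile apps using libcurl": (30, 40),
    "TV, router, NAS, printer and friends": (4, 9),
}

household_size = 4
low = sum(lo for lo, _ in counts.values())
high = sum(hi for _, hi in counts.values())

print(f"CIPH (curl installations per household): {low} to {high}")
print(f"CIPC (curl installations per capita):    {low / household_size:.1f} to {high / household_size:.1f}")
# The post takes roughly the middle of the range: about 66 CIPH, about 16.5 CIPC.
```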
46
Matrix Calculus for Deep Learning
Brought to you by explained.ai Terence Parr and Jeremy Howard (Terence is a tech lead at Google and ex-Professor of computer/data science in University of San Francisco's MS in Data Science program. You might know Terence as the creator of the ANTLR parser generator. For more material, see Jeremy's fast.ai courses and University of San Francisco's Data Institute in-person version of the deep learning course.) Please send comments, suggestions, or fixes to Terence. Printable version (This HTML was generated from markup using bookish). A Chinese version is also available (content not verified by us). This paper is an attempt to explain all the matrix calculus you need in order to understand the training of deep neural networks. We assume no math knowledge beyond what you learned in calculus 1, and provide links to help you refresh the necessary math where needed. Note that you do not need to understand this material before you start learning to train and use deep learning in practice; rather, this material is for those who are already familiar with the basics of neural networks, and wish to deepen their understanding of the underlying math. Don't worry if you get stuck at some point along the way—just go back and reread the previous section, and try writing down and working through some examples. And if you're still stuck, we're happy to answer your questions in the Theory category at forums.fast.ai. Note: There is a reference section at the end of the paper summarizing all the key matrix calculus rules and terminology discussed here. Contents: Introduction; Review: Scalar derivative rules; Introduction to vector calculus and partial derivatives; Matrix calculus; Generalization of the Jacobian; Derivatives of vector element-wise binary operators; Derivatives involving scalar expansion; Vector sum reduction; The Chain Rules; The gradient of neuron activation; The gradient of the neural network loss function; The gradient with respect to the weights; The derivative with respect to the bias; Summary; Matrix Calculus Reference; Gradients and Jacobians; Element-wise operations on vectors; Scalar expansion; Vector reductions; Chain rules; Notation; Resources. Most of us last saw calculus in school, but derivatives are a critical part of machine learning, particularly deep neural networks, which are trained by optimizing a loss function. Pick up a machine learning paper or the documentation of a library such as PyTorch and calculus comes screeching back into your life like distant relatives around the holidays. And it's not just any old scalar calculus that pops up—you need differential matrix calculus, the shotgun wedding of linear algebra and multivariate calculus. Well... maybe need isn't the right word; Jeremy's courses show how to become a world-class deep learning practitioner with only a minimal level of scalar calculus, thanks to leveraging the automatic differentiation built into modern deep learning libraries. But if you really want to understand what's going on under the hood of these libraries, and grok academic papers discussing the latest advances in model training techniques, you'll need to understand certain bits of the field of matrix calculus. For example, the activation of a single computation unit in a neural network is typically calculated using the dot product (from linear algebra) of an edge weight vector w with an input vector x plus a scalar bias (threshold): z(x) = w · x + b. Function z(x) is called the unit's affine function and is followed by a rectified linear unit, which clips negative values to zero: activation(x) = max(0, w · x + b).
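As a concrete sketch of that unit in NumPy (the weight, input, and bias values here are made up purely for illustration):

```python
import numpy as np

# A single "artificial neuron": affine function z = w . x + b, then ReLU.
w = np.array([0.5, -1.2, 2.0])   # edge weights (illustrative values)
x = np.array([1.0, 0.3, 0.7])    # input vector
b = 0.1                          # scalar bias / threshold

z = np.dot(w, x) + b             # affine function z(x) = w . x + b
activation = np.maximum(0.0, z)  # rectified linear unit: clip negatives to zero

print(z, activation)
```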
Such a computational unit is sometimes referred to as an "artificial neuron" (pictured as a diagram in the original article). Neural networks consist of many of these units, organized into multiple collections of neurons called layers. The activation of one layer's units becomes the input to the next layer's units. The activation of the unit or units in the final layer is called the network output. Training this neuron means choosing weights w and bias b so that we get the desired output for all N inputs x. To do that, we minimize a loss function that compares the network's final activation(x) with the target (the desired output of x) for all input x vectors. To minimize the loss, we use some variation on gradient descent, such as plain stochastic gradient descent (SGD), SGD with momentum, or Adam. All of those require the partial derivative (the gradient) of the loss with respect to the model parameters w and b. Our goal is to gradually tweak w and b so that the overall loss function keeps getting smaller across all x inputs. If we're careful, we can derive the gradient by differentiating the scalar version of a common loss function (mean squared error): loss = (1/N) Σ_x (target(x) − activation(x))² = (1/N) Σ_x (target(x) − max(0, w · x + b))². But this is just one neuron, and neural networks must train the weights and biases of all neurons in all layers simultaneously. Because there are multiple inputs and (potentially) multiple network outputs, we really need general rules for the derivative of a function with respect to a vector and even rules for the derivative of a vector-valued function with respect to a vector. This article walks through the derivation of some important rules for computing partial derivatives with respect to vectors, particularly those useful for training neural networks. This field is known as matrix calculus, and the good news is, we only need a small subset of that field, which we introduce here. While there is a lot of online material on multivariate calculus and linear algebra, they are typically taught as two separate undergraduate courses, so most material treats them in isolation. The pages that do discuss matrix calculus often are really just lists of rules with minimal explanation or are just pieces of the story. They also tend to be quite obscure to all but a narrow audience of mathematicians, thanks to their use of dense notation and minimal discussion of foundational concepts. (See the annotated list of resources at the end.) In contrast, we're going to rederive and rediscover some key matrix calculus rules in an effort to explain them. It turns out that matrix calculus is really not that hard! There aren't dozens of new rules to learn; just a couple of key concepts. Our hope is that this short paper will get you started quickly in the world of matrix calculus as it relates to training neural networks. We're assuming you're already familiar with the basics of neural network architecture and training. If you're not, head over to Jeremy's course and complete part 1 of that, then we'll see you back here when you're done. (Note that, unlike many more academic approaches, we strongly suggest first learning to train and use neural networks in practice and then study the underlying math. The math will be much more understandable with the context in place; besides, it's not necessary to grok all this calculus to become an effective practitioner.) A note on notation: Jeremy's course exclusively uses code, instead of math notation, to explain concepts since unfamiliar functions in code are easy to search for and experiment with.
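In that code-first spirit, here is a minimal sketch of the training loop just described: a single ReLU neuron fit by plain SGD on a mean-squared-error loss. The data, initialization, and learning rate are invented for illustration, and the subgradient of max(0, z) at z = 0 is taken to be 0:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # N input vectors x (made-up data)
true_w, true_b = np.array([1.0, -2.0, 0.5]), 0.3
targets = np.maximum(0.0, X @ true_w + true_b)     # desired output for each x

w, b, lr = 0.1 * rng.normal(size=3), 0.1, 0.1
for step in range(1000):
    z = X @ w + b                                  # affine outputs
    a = np.maximum(0.0, z)                         # ReLU activations
    err = a - targets
    active = (z > 0).astype(float)                 # derivative of max(0, z) w.r.t. z
    grad_w = 2.0 * (err * active) @ X / len(X)     # gradient of the MSE loss w.r.t. w
    grad_b = 2.0 * np.mean(err * active)           # gradient of the MSE loss w.r.t. b
    w -= lr * grad_w                               # gradient-descent updates
    b -= lr * grad_b

print(w, b)   # should drift toward true_w and true_b (roughly)
```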
In this paper, we do the opposite: there is a lot of math notation because one of the goals of this paper is to help you understand the notation that you'll see in deep learning papers and books. At the end of the paper, you'll find a brief table of the notation used, including a word or phrase you can use to search for more details. Hopefully you remember some of these main scalar derivative rules. If your memory is a bit fuzzy on this, have a look at Khan academy vid on scalar derivative rules. There are other rules for trigonometry, exponentials, etc., which you can find at Khan Academy differential calculus course. When a function has a single parameter, , you'll often see and used as shorthands for . We recommend against this notation as it does not make clear the variable we're taking the derivative with respect to. You can think of as an operator that maps a function of one parameter to another function. That means that maps to its derivative with respect to x, which is the same thing as . Also, if , then . Thinking of the derivative as an operator helps to simplify complicated derivatives because the operator is distributive and lets us pull out constants. For example, in the following equation, we can pull out the constant 9 and distribute the derivative operator across the elements within the parentheses. That procedure reduced the derivative of to a bit of arithmetic and the derivatives of x and , which are much easier to solve than the original derivative. Neural network layers are not single functions of a single parameter, . So, let's move on to functions of multiple parameters such as . For example, what is the derivative of xy (i.e., the multiplication of x and y)? In other words, how does the product xy change when we wiggle the variables? Well, it depends on whether we are changing x or y. We compute derivatives with respect to one variable (parameter) at a time, giving us two different partial derivatives for this two-parameter function (one for x and one for y). Instead of using operator , the partial derivative operator is (a stylized d and not the Greek letter ). So, and are the partial derivatives of xy; often, these are just called the partials. For functions of a single parameter, operator is equivalent to (for sufficiently smooth functions). However, it's better to use to make it clear you're referring to a scalar derivative. The partial derivative with respect to x is just the usual scalar derivative, simply treating any other variable in the equation as a constant. Consider function . The partial derivative with respect to x is written . There are three constants from the perspective of : 3, 2, and y. Therefore, . The partial derivative with respect to y treats x like a constant: . It's a good idea to derive these yourself before continuing otherwise the rest of the article won't make sense. Here's the Khan Academy video on partials if you need help. To make it clear we are doing vector calculus and not just multivariate calculus, let's consider what we do with the partial derivatives and (another way to say and ) that we computed for . Instead of having them just floating around and not organized in any way, let's organize them into a horizontal vector. We call this vector the gradient of and write it as: So the gradient of is simply a vector of its partials. Gradients are part of the vector calculus world, which deals with functions that map n scalar parameters to a single scalar. 
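The worked function in this passage appears to be f(x, y) = 3x²y (the "3, 2, and y" remark suggests as much); treating that as an illustrative reconstruction, SymPy reproduces the partials and the gradient-as-horizontal-vector idea:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 3 * x**2 * y                        # the section's example function (reconstructed)

df_dx = sp.diff(f, x)                   # treat y as a constant -> 6*x*y
df_dy = sp.diff(f, y)                   # treat x as a constant -> 3*x**2
gradient = sp.Matrix([[df_dx, df_dy]])  # gradient = horizontal vector of the partials

print(df_dx, df_dy)
print(gradient)
```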
Now, let's get crazy and consider derivatives of multiple functions simultaneously. When we move from derivatives of one function to derivatives of many functions, we move from the world of vector calculus to matrix calculus. Let's compute partial derivatives for two functions, both of which take two parameters. We can keep the same from the last section, but let's also bring in . The gradient for g has two entries, a partial derivative for each parameter: Gradient vectors organize all of the partial derivatives for a specific scalar function. If we have two functions, we can also organize their gradients into a matrix by stacking the gradients. When we do so, we get the Jacobian matrix (or just the Jacobian) where the gradients are rows: Note that there are multiple ways to represent the Jacobian. We are using the so-called numerator layout but many papers and software will use the denominator layout. This is just transpose of the numerator layout Jacobian (flip it around its diagonal): So far, we've looked at a specific example of a Jacobian matrix. To define the Jacobian matrix more generally, let's combine multiple parameters into a single vector argument: . (You will sometimes see notation for vectors in the literature as well.) Lowercase letters in bold font such as x are vectors and those in italics font like x are scalars. xi is the element of vector x and is in italics because a single vector element is a scalar. We also have to define an orientation for vector x. We'll assume that all vectors are vertical by default of size : With multiple scalar-valued functions, we can combine them all into a vector just like we did with the parameters. Let be a vector of m scalar-valued functions that each take a vector x of length where is the cardinality (count) of elements in x. Each fi function within f returns a scalar just as in the previous section: For instance, we'd represent and from the last section as It's very often the case that because we will have a scalar function result for each element of the x vector. For example, consider the identity function : So we have functions and parameters, in this case. Generally speaking, though, the Jacobian matrix is the collection of all possible partial derivatives (m rows and n columns), which is the stack of m gradients with respect to x: Each is a horizontal n-vector because the partial derivative is with respect to a vector, x, whose length is . The width of the Jacobian is n if we're taking the partial derivative with respect to x because there are n parameters we can wiggle, each potentially changing the function's value. Therefore, the Jacobian is always m rows for m equations. It helps to think about the possible Jacobian shapes visually: The Jacobian of the identity function , with , has n functions and each function has n parameters held in a single vector x. The Jacobian is, therefore, a square matrix since : Make sure that you can derive each step above before moving on. If you get stuck, just consider each element of the matrix in isolation and apply the usual scalar derivative rules. That is a generally useful trick: Reduce vector expressions down to a set of scalar expressions and then take all of the partials, combining the results appropriately into vectors and matrices at the end. Also be careful to track whether a matrix is vertical, x, or horizontal, where means x transpose. Also make sure you pay attention to whether something is a scalar-valued function, , or a vector of functions (or a vector-valued function), . 
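To see the stacked-gradient picture concretely, here is a SymPy check. f is the reconstructed function from the previous section, while g is only an illustrative stand-in (the text's second example did not survive extraction); the point is the shape of the result: m rows, one gradient per function, n columns, one per parameter, in numerator layout. The identity function's square Jacobian comes out as the identity matrix, as derived above:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = 3 * x1**2 * x2          # first scalar-valued function (reconstructed earlier)
g = 2 * x1 + x2**8          # second function: an illustrative stand-in

F = sp.Matrix([f, g])                         # vector of m = 2 functions
J = F.jacobian(sp.Matrix([x1, x2]))           # m x n Jacobian, gradients stacked as rows
print(J)                                      # Matrix([[6*x1*x2, 3*x1**2], [2, 8*x2**7]])

# The identity function f(x) = x has a square Jacobian equal to I:
print(sp.Matrix([x1, x2]).jacobian(sp.Matrix([x1, x2])))   # Matrix([[1, 0], [0, 1]])
```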
Element-wise binary operations on vectors, such as vector addition , are important because we can express many common vector operations, such as the multiplication of a vector by a scalar, as element-wise binary operations. By “element-wise binary operations” we simply mean applying an operator to the first item of each vector to get the first item of the output, then to the second items of the inputs for the second item of the output, and so forth. This is how all the basic math operators are applied by default in numpy or tensorflow, for example. Examples that often crop up in deep learning are and (returns a vector of ones and zeros). We can generalize the element-wise binary operations with notation where . (Reminder: is the number of items in x.) The symbol represents any element-wise operator (such as ) and not the function composition operator. Here's what equation looks like when we zoom in to examine the scalar equations: where we write n (not m) equations vertically to emphasize the fact that the result of element-wise operators give sized vector results. Using the ideas from the last section, we can see that the general case for the Jacobian with respect to w is the square matrix: and the Jacobian with respect to x is: That's quite a furball, but fortunately the Jacobian is very often a diagonal matrix, a matrix that is zero everywhere but the diagonal. Because this greatly simplifies the Jacobian, let's examine in detail when the Jacobian reduces to a diagonal matrix for element-wise operations. In a diagonal Jacobian, all elements off the diagonal are zero, where . (Notice that we are taking the partial derivative with respect to wj not wi .) Under what conditions are those off-diagonal elements zero? Precisely when fi and gi are contants with respect to wj , . Regardless of the operator, if those partial derivatives go to zero, the operation goes to zero, no matter what, and the partial derivative of a constant is zero. Those partials go to zero when fi and gi are not functions of wj . We know that element-wise operations imply that fi is purely a function of wi and gi is purely a function of xi . For example, sums . Consequently, reduces to and the goal becomes . and look like constants to the partial differentiation operator with respect to wj when so the partials are zero off the diagonal. (Notation is technically an abuse of our notation because fi and gi are functions of vectors not individual elements. We should really write something like , but that would muddy the equations further, and programmers are comfortable overloading functions, so we'll proceed with the notation anyway.) We'll take advantage of this simplification later and refer to the constraint that and access at most wi and xi , respectively, as the element-wise diagonal condition. Under this condition, the elements along the diagonal of the Jacobian are : (The large “0”s are a shorthand indicating all of the off-diagonal are 0.) More succinctly, we can write: where constructs a matrix whose diagonal elements are taken from vector x. Because we do lots of simple vector arithmetic, the general function in the binary element-wise operation is often just the vector w. Any time the general function is a vector, we know that reduces to . For example, vector addition fits our element-wise diagonal condition because has scalar equations that reduce to just with partial derivatives: That gives us , the identity matrix, because every element along the diagonal is 1. 
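These diagonal results are easy to verify mechanically. A small SymPy check (three-element vectors chosen arbitrarily) that the Jacobian of the element-wise product with respect to w is diag(x), and that vector addition gives the identity matrix derived above:

```python
import sympy as sp

w = sp.Matrix(sp.symbols('w1 w2 w3'))
x = sp.Matrix(sp.symbols('x1 x2 x3'))

hadamard = sp.Matrix([wi * xi for wi, xi in zip(w, x)])   # y = w (*) x, element-wise product
addition = w + x                                          # y = w + x

print(hadamard.jacobian(w))   # diag(x1, x2, x3): off-diagonal partials vanish
print(addition.jacobian(w))   # the identity matrix: ones on the diagonal
```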
I represents the square identity matrix of appropriate dimensions that is zero everywhere but the diagonal, which contains all ones. Given the simplicity of this special case, reducing to , you should be able to derive the Jacobians for the common element-wise binary operations on vectors: The and operators are element-wise multiplication and division; is sometimes called the Hadamard product. There isn't a standard notation for element-wise multiplication and division so we're using an approach consistent with our general binary operation notation. When we multiply or add scalars to vectors, we're implicitly expanding the scalar to a vector and then performing an element-wise binary operation. For example, adding scalar z to vector x, , is really where and . (The notation represents a vector of ones of appropriate length.) z is any scalar that doesn't depend on x, which is useful because then for any xi and that will simplify our partial derivative computations. (It's okay to think of variable z as a constant for our discussion here.) Similarly, multiplying by a scalar, , is really where is the element-wise multiplication (Hadamard product) of the two vectors. The partial derivatives of vector-scalar addition and multiplication with respect to vector x use our element-wise rule: This follows because functions and clearly satisfy our element-wise diagonal condition for the Jacobian (that refer at most to xi and refers to the value of the vector). Using the usual rules for scalar partial derivatives, we arrive at the following diagonal elements of the Jacobian for vector-scalar addition: Computing the partial derivative with respect to the scalar parameter z, however, results in a vertical vector, not a diagonal matrix. The elements of the vector are: The diagonal elements of the Jacobian for vector-scalar multiplication involve the product rule for scalar derivatives: The partial derivative with respect to scalar parameter z is a vertical vector whose elements are: Summing up the elements of a vector is an important operation in deep learning, such as the network loss function, but we can also use it as a way to simplify computing the derivative of vector dot product and other operations that reduce vectors to scalars. Let . Notice we were careful here to leave the parameter as a vector x because each function fi could use all values in the vector, not just xi . The sum is over the results of the function and not the parameter. The gradient ( Jacobian) of vector summation is: (The summation inside the gradient elements can be tricky so make sure to keep your notation consistent.) Let's look at the gradient of the simple . The function inside the summation is just and the gradient is then: Because for , we can simplify to: Notice that the result is a horizontal vector full of 1s, not a vertical vector, and so the gradient is . (The T exponent of represents the transpose of the indicated vector. In this case, it flips a vertical vector to a horizontal vector.) It's very important to keep the shape of all of your vectors and matrices in order otherwise it's impossible to compute the derivatives of complex functions. As another example, let's sum the result of multiplying a vector by a constant scalar. If then . The gradient is: The derivative with respect to scalar variable z is : We can't compute partial derivatives of very complicated functions using just the basic matrix calculus rules we've seen so far. 
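Before moving on to the chain rule, the scalar-expansion and sum-reduction results can be checked the same way (again with an arbitrary three-element vector). Note that the gradient of sum(x) comes out as a horizontal vector of ones, while the derivatives with respect to the scalar parameter are vertical vectors:

```python
import sympy as sp

x = sp.Matrix(sp.symbols('x1 x2 x3'))
z = sp.symbols('z')                             # a scalar that does not depend on x

add_scalar = sp.Matrix([xi + z for xi in x])    # x + z (scalar expanded to a vector)
mul_scalar = sp.Matrix([xi * z for xi in x])    # x * z

print(add_scalar.jacobian(x))                   # identity matrix
print(mul_scalar.jacobian(x))                   # z on the diagonal, i.e. z*I
print(mul_scalar.diff(z))                       # vertical vector [x1, x2, x3] w.r.t. the scalar z

y = sum(x)                                      # sum reduction: y = x1 + x2 + x3
print(sp.Matrix([y]).jacobian(x))               # [[1, 1, 1]]: a horizontal vector of ones
```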
For example, we can't take the derivative of nested expressions like directly without reducing it to its scalar equivalent. We need to be able to combine our basic vector rules using what we can call the vector chain rule. Unfortunately, there are a number of rules for differentiation that fall under the name “chain rule” so we have to be careful which chain rule we're talking about. Part of our goal here is to clearly define and name three different chain rules and indicate in which situation they are appropriate. To get warmed up, we'll start with what we'll call the single-variable chain rule, where we want the derivative of a scalar function with respect to a scalar. Then we'll move on to an important concept called the total derivative and use it to define what we'll pedantically call the single-variable total-derivative chain rule. Then, we'll be ready for the vector chain rule in its full glory as needed for neural networks. The chain rule is conceptually a divide and conquer strategy (like Quicksort) that breaks complicated expressions into subexpressions whose derivatives are easier to compute. Its power derives from the fact that we can process each simple subexpression in isolation yet still combine the intermediate results to get the correct overall result. The chain rule comes into play when we need the derivative of an expression composed of nested subexpressions. For example, we need the chain rule when confronted with expressions like . The outermost expression takes the sin of an intermediate result, a nested subexpression that squares x. Specifically, we need the single-variable chain rule, so let's start by digging into that in more detail. Let's start with the solution to the derivative of our nested expression: . It doesn't take a mathematical genius to recognize components of the solution that smack of scalar differentiation rules, and . It looks like the solution is to multiply the derivative of the outer expression by the derivative of the inner expression or “chain the pieces together,” which is exactly right. In this section, we'll explore the general principle at work and provide a process that works for highly-nested expressions of a single variable. Chain rules are typically defined in terms of nested functions, such as for single-variable chain rules. (You will also see the chain rule defined using function composition , which is the same thing.) Some sources write the derivative using shorthand notation , but that hides the fact that we are introducing an intermediate variable: , which we'll see shortly. It's better to define the single-variable chain rule of explicitly so we never take the derivative with respect to the wrong variable. Here is the formulation of the single-variable chain rule we recommend: To deploy the single-variable chain rule, follow these steps: The third step puts the “chain” in “chain rule” because it chains together intermediate results. Multiplying the intermediate derivatives together is the common theme among all variations of the chain rule. Let's try this process on : Introduce intermediate variables. Let represent subexpression (shorthand for ). This gives us: The order of these subexpressions does not affect the answer, but we recommend working in the reverse order of operations dictated by the nesting (innermost to outermost). That way, expressions and derivatives are always functions of previously-computed elements. Compute derivatives. Combine. Substitute. 
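Here is that three-step process carried out on the section's example, d/dx sin(x²), with SymPy used to confirm that the chained result matches direct differentiation:

```python
import sympy as sp

x, u = sp.symbols('x u')

# Step 1: introduce an intermediate variable for the nested subexpression.
inner = x**2                 # u = x^2
outer = sp.sin(u)            # y = sin(u)

# Step 2: compute the intermediate derivatives in isolation.
du_dx = sp.diff(inner, x)    # 2*x
dy_du = sp.diff(outer, u)    # cos(u)

# Step 3: chain the pieces together, then substitute u back in.
dy_dx = (dy_du * du_dx).subs(u, inner)
print(dy_dx)                                 # 2*x*cos(x**2)
print(sp.diff(sp.sin(x**2), x))              # same answer, computed directly
```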
Notice how easy it is to compute the derivatives of the intermediate variables in isolation! The chain rule says it's legal to do that and tells us how to combine the intermediate results to get . You can think of the combining step of the chain rule in terms of units canceling. If we let y be miles, x be the gallons in a gas tank, and u as gallons we can interpret as . The gallon denominator and numerator cancel. Another way to to think about the single-variable chain rule is to visualize the overall expression as a dataflow diagram or chain of operations (or abstract syntax tree for compiler people): Changes to function parameter x bubble up through a squaring operation then through a sin operation to change result y. You can think of as “getting changes from x to u” and as “getting changes from u to y.” Getting from x to y requires an intermediate hop. The chain rule is, by convention, usually written from the output variable down to the parameter(s), . But, the x-to-y perspective would be more clear if we reversed the flow and used the equivalent . Conditions under which the single-variable chain rule applies. Notice that there is a single dataflow path from x to the root y. Changes in x can influence output y in only one way. That is the condition under which we can apply the single-variable chain rule. An easier condition to remember, though one that's a bit looser, is that none of the intermediate subexpression functions, and , have more than one parameter. Consider , which would become after introducing intermediate variable u. As we'll see in the next section, has multiple paths from x to y. To handle that situation, we'll deploy the single-variable total-derivative chain rule. As an aside for those interested in automatic differentiation, papers and library documentation use terminology and (for use in the back-propagation algorithm). From a dataflow perspective, we are computing a forward differentiation because it follows the normal data flow direction. Backward differentiation, naturally, goes the other direction and we're asking how a change in the output would affect function parameter . Because backward differentiation can determine changes in all function parameters at once, it turns out to be much more efficient for computing the derivative of functions with lots of parameters. Forward differentiation, on the other hand, must consider how a change in each parameter, in turn, affects the function output . The following table emphasizes the order in which partial derivatives are computed for the two techniques. Automatic differentiation is beyond the scope of this article, but we're setting the stage for a future article. Many readers can solve in their heads, but our goal is a process that will work even for very complicated expressions. This process is also how automatic differentiation works in libraries like PyTorch. So, by solving derivatives manually in this way, you're also learning how to define functions for custom neural networks in PyTorch. With deeply nested expressions, it helps to think about deploying the chain rule the way a compiler unravels nested function calls like into a sequence (chain) of calls. The result of calling function fi is saved to a temporary variable called a register, which is then passed as a parameter to . 
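As an aside on the forward-differentiation view, here is a toy dual-number sketch of forward-mode automatic differentiation. This is not the paper's code, only an illustration of how a derivative can be pushed forward through a chain of operations alongside the value, exactly in the register-by-register spirit described above:

```python
import math

class Dual:
    """Toy forward-mode AD value: carries (value, derivative) through each operation."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val, self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(d):
    # sin rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(d.val), math.cos(d.val) * d.dot)

# Differentiate y = sin(x^2) at x = 3 by seeding dx/dx = 1 and pushing it forward.
x = Dual(3.0, 1.0)
y = sin(x * x)
print(y.val, y.dot)                   # value and dy/dx at x = 3
print(2 * 3.0 * math.cos(3.0 ** 2))   # matches the closed form 2x*cos(x^2)
```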
Let's see how that looks in practice by using our process on a highly-nested equation like : Here is a visualization of the data flow through the chain of operations from x to y: At this point, we can handle derivatives of nested expressions of a single variable, x, using the chain rule but only if x can affect y through a single data flow path. To handle more complicated expressions, we need to extend our technique, which we'll do next. Our single-variable chain rule has limited applicability because all intermediate variables must be functions of single variables. But, it demonstrates the core mechanism of the chain rule, that of multiplying out all derivatives of intermediate subexpressions. To handle more general expressions such as , however, we need to augment that basic chain rule. Of course, we immediately see , but that is using the scalar addition derivative rule, not the chain rule. If we tried to apply the single-variable chain rule, we'd get the wrong answer. In fact, the previous chain rule is meaningless in this case because derivative operator does not apply to multivariate functions, such as among our intermediate variables: Let's try it anyway to see what happens. If we pretend that and , then instead of the right answer . Because has multiple parameters, partial derivatives come into play. Let's blindly apply the partial derivative operator to all of our equations and see what we get: Ooops! The partial is wrong because it violates a key assumption for partial derivatives. When taking the partial derivative with respect to x, the other variables must not vary as x varies. Otherwise, we could not act as if the other variables were constants. Clearly, though, is a function of x and therefore varies with x. because . A quick look at the data flow diagram for shows multiple paths from x to y, thus, making it clear we need to consider direct and indirect (through ) dependencies on x: A change in x affects y both as an operand of the addition and as the operand of the square operator. Here's an equation that describes how tweaks to x affect the output: Then, , which we can read as “the change in y is the difference between the original y and y at a tweaked x.” If we let , then . If we bump x by 1, , then . The change in y is not , as would lead us to believe, but ! Enter the “law” of total derivatives, which basically says that to compute , we need to sum up all possible contributions from changes in x to the change in y. The total derivative with respect to x assumes all variables, such as in this case, are functions of x and potentially vary as x varies. The total derivative of that depends on x directly and indirectly via intermediate variable is given by: Using this formula, we get the proper answer: That is an application of what we can call the single-variable total-derivative chain rule: The total derivative assumes all variables are potentially codependent whereas the partial derivative assumes all variables but x are constants. There is something subtle going on here with the notation. All of the derivatives are shown as partial derivatives because f and ui are functions of multiple variables. This notation mirrors that of MathWorld's notation but differs from Wikipedia, which uses instead (possibly to emphasize the total derivative nature of the equation). We'll stick with the partial derivative notation so that it's consistent with our discussion of the vector chain rule in the next section. 
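The cautionary example here appears to be y = x + u with the intermediate variable u = x². A quick numeric check makes the point: the partial derivative that holds u fixed misses the indirect dependence of y on x, while the total derivative 1 + 2x agrees with a finite-difference estimate:

```python
def y(x):
    u = x ** 2          # intermediate variable that also depends on x
    return x + u

x0, h = 1.0, 1e-6
finite_diff = (y(x0 + h) - y(x0)) / h   # numerical estimate of dy/dx at x0

partial_only = 1.0                      # dy/dx with u (wrongly) treated as a constant
total = 1.0 + 2.0 * x0                  # dy/dx + (dy/du)(du/dx) = 1 + 2x

print(finite_diff, partial_only, total)   # ~3.0, 1.0, 3.0
```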
In practice, just keep in mind that when you take the total derivative with respect to x, other variables might also be functions of x so add in their contributions as well. The left side of the equation looks like a typical partial derivative but the right-hand side is actually the total derivative. It's common, however, that many temporary variables are functions of a single parameter, which means that the single-variable total-derivative chain rule degenerates to the single-variable chain rule. Let's look at a nested subexpression, such as . We introduce three intermediate variables: where both and have terms that take into account the total derivative. Also notice that the total derivative formula always sums versus, say, multiplies terms . It's tempting to think that summing up terms in the derivative makes sense because, for example, adds two terms. Nope. The total derivative is adding terms because it represents a weighted sum of all x contributions to the change in y. For example, given instead of , the total-derivative chain rule formula still adds partial derivative terms. ( simplifies to but for this demonstration, let's not combine the terms.) Here are the intermediate variables and partial derivatives: The form of the total derivative remains the same, however: It's the partials (weights) that change, not the formula, when the intermediate variable operators change. Those readers with a strong calculus background might wonder why we aggressively introduce intermediate variables even for the non-nested subexpressions such as in . We use this process for three reasons: (i) computing the derivatives for the simplified subexpressions is usually trivial, (ii) we can simplify the chain rule, and (iii) the process mirrors how automatic differentiation works in neural network libraries. Using the intermediate variables even more aggressively, let's see how we can simplify our single-variable total-derivative chain rule to its final form. The goal is to get rid of the sticking out on the front like a sore thumb: We can achieve that by simply introducing a new temporary variable as an alias for x: . Then, the formula reduces to our final form: This total-derivative chain rule degenerates to the single-variable chain rule when all intermediate variables are functions of a single variable. Consequently, you can remember this more general formula to cover both cases. As a bit of dramatic foreshadowing, notice that the summation sure looks like a vector dot product, , or a vector multiply . Before we move on, a word of caution about terminology on the web. Unfortunately, the chain rule given in this section, based upon the total derivative, is universally called “multivariable chain rule” in calculus discussions, which is highly misleading! Only the intermediate variables are multivariate functions. The overall function, say, , is a scalar function that accepts a single parameter x. The derivative and parameter are scalars, not vectors, as one would expect with a so-called multivariate chain rule. (Within the context of a non-matrix calculus class, “multivariate chain rule” is likely unambiguous.) To reduce confusion, we use “single-variable total-derivative chain rule” to spell out the distinguishing feature between the simple single-variable chain rule, , and this one. Now that we've got a good handle on the total-derivative chain rule, we're ready to tackle the chain rule for vectors of functions and vector variables. 
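Before the vector case, here is the scalar total-derivative chain rule run end to end. The nested expression used in the text did not survive extraction, so assume something of the same shape, f(x) = sin(x + x²), purely for illustration; the point is that each intermediate derivative sums a direct term and the contributions flowing through earlier intermediates:

```python
import sympy as sp

x, u1, u2 = sp.symbols('x u1 u2')

# Assumed illustration: f(x) = sin(x + x^2), with intermediates
#   u1 = x^2,  u2 = x + u1,  y = sin(u2)
exprs = {u1: x**2, u2: x + u1}
y = sp.sin(u2)

du1_dx = sp.diff(exprs[u1], x)                                     # 2*x
du2_dx = sp.diff(exprs[u2], x) + sp.diff(exprs[u2], u1) * du1_dx   # direct 1 + indirect 2*x
dy_dx  = sp.diff(y, u2) * du2_dx                                   # cos(u2) * (1 + 2*x)

# Substitute the intermediates back in and compare with direct differentiation.
dy_dx = dy_dx.subs(u2, exprs[u2]).subs(u1, exprs[u1])
print(sp.simplify(dy_dx - sp.diff(sp.sin(x + x**2), x)))           # 0
```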
Surprisingly, this more general chain rule is just as simple looking as the single-variable chain rule for scalars. Rather than just presenting the vector chain rule, let's rediscover it ourselves so we get a firm grip on it. We can start by computing the derivative of a sample vector function with respect to a scalar, , to see if we can abstract a general formula. Let's introduce two intermediate variables, and , one for each fi so that y looks more like : The derivative of vector y with respect to scalar x is a vertical vector with elements computed using the single-variable total-derivative chain rule: Ok, so now we have the answer using just the scalar rules, albeit with the derivatives grouped into a vector. Let's try to abstract from that result what it looks like in vector form. The goal is to convert the following vector of scalar operations to a vector operation. If we split the terms, isolating the terms into a vector, we get a matrix by vector multiplication: That means that the Jacobian is the multiplication of two other Jacobians, which is kinda cool. Let's check our results: Whew! We get the same answer as the scalar approach. This vector chain rule for vectors of functions and a single parameter appears to be correct and, indeed, mirrors the single-variable chain rule. Compare the vector rule: with the single-variable chain rule: To make this formula work for multiple parameters or vector x, we just have to change x to vector x in the equation. The effect is that and the resulting Jacobian, , are now matrices instead of vertical vectors. Our complete vector chain rule is: The beauty of the vector formula over the single-variable chain rule is that it automatically takes into consideration the total derivative while maintaining the same notational simplicity. The Jacobian contains all possible combinations of fi with respect to gj and gi with respect to xj . For completeness, here are the two Jacobian components in their full glory: where , , and . The resulting Jacobian is (an matrix multiplied by a matrix). Even within this formula, we can simplify further because, for many applications, the Jacobians are square ( ) and the off-diagonal entries are zero. It is the nature of neural networks that the associated mathematics deals with functions of vectors not vectors of functions. For example, the neuron affine function has term and the activation function is ; we'll consider derivatives of these functions in the next section. As we saw in a previous section, element-wise operations on vectors w and x yield diagonal matrices with elements because wi is a function purely of xi but not xj for . The same thing happens here when fi is purely a function of gi and gi is purely a function of xi : In this situation, the vector chain rule simplifies to: Therefore, the Jacobian reduces to a diagonal matrix whose elements are the single-variable chain rule values. After slogging through all of that mathematics, here's the payoff. All you need is the vector chain rule because the single-variable formulas are special cases of the vector chain rule. The following table summarizes the appropriate components to multiply in order to get the Jacobian. We now have all of the pieces needed to compute the derivative of a typical neuron activation for a single neural network computation unit with respect to the model parameters, w and b: (This represents a neuron with fully connected weights and rectified linear unit activation. 
There are, however, other affine functions such as convolution and other activation functions, such as exponential linear units, that follow similar logic.) Let's worry about max later and focus on computing and . (Recall that neural networks learn through optimization of their weights and biases.) We haven't discussed the derivative of the dot product yet, , but we can use the chain rule to avoid having to memorize yet another rule. (Note notation y not y as the result is a scalar not a vector.) The dot product is just the summation of the element-wise multiplication of the elements: . (You might also find it useful to remember the linear algebra notation .) We know how to compute the partial derivatives of and but haven't looked at partial derivatives for . We need the chain rule for that and so we can introduce an intermediate vector variable u just as we did using the single-variable chain rule: Once we've rephrased y, we recognize two subexpressions for which we already know the partial derivatives: The vector chain rule says to multiply the partials: To check our results, we can grind the dot product down into a pure scalar function: Hooray! Our scalar results match the vector chain rule results. Now, let , the full expression within the max activation function call. We have two different partials to compute, but we don't need the chain rule: Let's tackle the partials of the neuron activation, . The use of the function call on scalar z just says to treat all negative z values as 0. The derivative of the max function is a piecewise function. When , the derivative is 0 because z is a constant. When , the derivative of the max function is just the derivative of z, which is : An aside on broadcasting functions across scalars. When one or both of the arguments are vectors, such as , we broadcast the single-variable function across the elements. This is an example of an element-wise unary operator. Just to be clear: For the derivative of the broadcast version then, we get a vector of zeros and ones where: To get the derivative of the function, we need the chain rule because of the nested subexpression, . Following our process, let's introduce intermediate scalar variable z to represent the affine function giving: The vector chain rule tells us: which we can rewrite as follows: and then substitute back in: That equation matches our intuition. When the activation function clips affine function output z to 0, the derivative is zero with respect to any weight wi . When , it's as if the max function disappears and we get just the derivative of z with respect to the weights. Turning now to the derivative of the neuron activation with respect to b, we get: Let's use these partial derivatives now to handle the entire loss function. Training a neuron requires that we take the derivative of our loss or “cost” function with respect to the parameters of our model, w and b. For this example, we'll use mean-squared-error as our loss function. Because we train with multiple vector inputs (e.g., multiple images) and scalar targets (e.g., one classification per image), we need some more notation. Let where , and then let where yi is a scalar. Then the cost equation becomes: Following our chain rule process introduces these intermediate variables: Let's compute the gradient with respect to w first. Then, for the overall gradient, we get: To interpret that equation, we can substitute an error term yielding: From there, notice that this computation is a weighted average across all x i in X. 
The weights are the error terms, the difference between the target output and the actual neuron output for each x i input. The resulting gradient will, on average, point in the direction of higher cost or loss because large ei emphasize their associated x i. Imagine we only had one input vector, , then the gradient is just . If the error is 0, then the gradient is zero and we have arrived at the minimum loss. If is some small positive difference, the gradient is a small step in the direction of . If is large, the gradient is a large step in that direction. If is negative, the gradient is reversed, meaning the highest cost is in the negative direction. Of course, we want to reduce, not increase, the loss, which is why the gradient descent recurrence relation takes the negative of the gradient to update the current position (for scalar learning rate ): Because the gradient indicates the direction of higher cost, we want to update w in the opposite direction. To optimize the bias, b, we also need the partial with respect to b. Here are the intermediate variables again: We computed the partial with respect to the bias for equation previously: For v, the partial is: And for the partial of the cost function itself we get: As before, we can substitute an error term: The partial derivative is then just the average of the error or zero, according to the activation level. To update the neuron bias, we nudge it in the opposite direction of increased cost: In practice, it is convenient to combine w and b into a single vector parameter rather than having to deal with two different partials: . This requires a tweak to the input vector x as well but simplifies the activation function. By tacking a 1 onto the end of x, , becomes . This finishes off the optimization of the neural network loss function because we have the two partials necessary to perform a gradient descent. Hopefully you've made it all the way through to this point. You're well on your way to understanding matrix calculus! We've included a reference that summarizes all of the rules from this article in the next section. Also check out the annotated resource link below. Your next step would be to learn about the partial derivatives of matrices not just vectors. For example, you can take a look at the matrix differentiation section of Matrix calculus. b. We thank Yannet Interian (Faculty in MS data science program at University of San Francisco) and David Uminsky (Faculty/director of MS data science) for their help with the notation presented here. The gradient of a function of two variables is a horizontal 2-vector: The Jacobian of a vector-valued function that is a function of a vector is an ( and ) matrix containing all possible scalar partial derivatives: The Jacobian of the identity function is I. Define generic element-wise operations on vectors w and x using operator such as : The Jacobian with respect to w (similar for x) is: Given the constraint (element-wise diagonal condition) that and access at most wi and xi , respectively, the Jacobian simplifies to a diagonal matrix: Here are some sample element-wise operators: Adding scalar z to vector x, , is really where and . The partial derivative of a vector sum with respect to one of the vectors is: For and , we get: Vector dot product . Substituting and using the vector chain rule, we get: The vector chain rule is the general form as it degenerates to the others. 
When f is a function of a single variable x and all intermediate variables u are functions of a single variable, the single-variable chain rule applies. When some or all of the intermediate variables are functions of multiple variables, the single-variable total-derivative chain rule applies. In all other cases, the vector chain rule applies. Lowercase letters in bold font such as x are vectors and those in italics font like x are scalars. xi is the element of vector x and is in italics because a single vector element is a scalar. means “length of vector x.” The T exponent of represents the transpose of the indicated vector. is just a for-loop that iterates i from a to b, summing all the xi . Notation refers to a function called f with an argument of x. I represents the square “identity matrix” of appropriate dimensions that is zero everywhere but the diagonal, which contains all ones. constructs a matrix whose diagonal elements are taken from vector x. The dot product is the summation of the element-wise multiplication of the elements: . Or, you can look at it as . Differentiation is an operator that maps a function of one parameter to another function. That means that maps to its derivative with respect to x, which is the same thing as . Also, if , then . The partial derivative of the function with respect to x, , performs the usual scalar derivative holding all other variables constant. The gradient of f with respect to vector x, , organizes all of the partial derivatives for a specific scalar function. The Jacobian organizes the gradients of multiple functions into a matrix by stacking them: The following notation means that y has the value a upon and value b upon . Wolfram Alpha can do symbolic matrix algebra and there is also a cool dedicated matrix calculus differentiator. When looking for resources on the web, search for “matrix calculus” not “vector calculus.” Here are some comments on the top links that come up from a Google search: https://en.wikipedia.org/wiki/Matrix_calculus The Wikipedia entry is actually quite good and they have a good description of the different layout conventions. Recall that we use the numerator layout where the variables go horizontally and the functions go vertically in the Jacobian. Wikipedia also has a good description of total derivatives, but be careful that they use slightly different notation than we do. We always use the notation not dx. http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html This page has a section on matrix differentiation with some useful identities; this person uses numerator layout. This might be a good place to start after reading this article to learn about matrix versus vector differentiation. https://www.colorado.edu/engineering/CAS/courses.d/IFEM.d/IFEM.AppC.d/IFEM.AppC.pdf This is part of the course notes for “Introduction to Finite Element Methods” I believe by Carlos A. Felippa. His Jacobians are transposed from our notation because he uses denominator layout. http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html This page has a huge number of useful derivatives computed for a variety of vectors and matrices. A great cheat sheet. There is no discussion to speak of, just a set of rules. https://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf Another cheat sheet that focuses on matrix operations in general with more discussion than the previous item. 
https://www.comp.nus.edu.sg/~cs5240/lecture/matrix-differentiation.pdf To learn more about neural networks and the mathematics behind optimization and back propagation, we highly recommend Michael Nielsen's book. For those interested specifically in convolutional neural networks, check out A guide to convolution arithmetic for deep learning. We reference the law of total derivative, which is an important concept that just means derivatives with respect to x must take into consideration the derivative with respect x of all variables that are a function of x.
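As a closing check of the neuron derivation worked through above, here is a compact end-to-end sketch that trains a single ReLU unit with mean-squared error using the derived partials. The data, learning rate, and the sign convention for the error term (here e_i = y_i − activation(x_i)) are illustrative choices of this sketch, not code from the article.

```python
# End-to-end sketch: one ReLU neuron trained with MSE via the derived gradients.
import numpy as np

def act(w, b, X):
    return np.maximum(0.0, X @ w + b)               # activation_i = max(0, w.x_i + b)

def gradients(w, b, X, y):
    active = (X @ w + b) > 0                        # d/dz max(0, z) is 0 where z <= 0
    e = y - act(w, b, X)                            # error term: target minus output
    grad_w = -(2.0 / len(y)) * (X.T @ (e * active)) # dC/dw, zero where the max clips
    grad_b = -(2.0 / len(y)) * np.sum(e * active)   # dC/db
    return grad_w, grad_b

def cost(w, b, X, y):
    return np.mean((y - act(w, b, X)) ** 2)         # mean squared error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.maximum(0.0, X @ np.array([0.5, -1.0, 2.0]) + 0.3)   # synthetic targets

w, b, eta = rng.normal(scale=0.1, size=3), 0.1, 0.05
print(round(cost(w, b, X, y), 3))
for _ in range(500):
    gw, gb = gradients(w, b, X, y)
    w, b = w - eta * gw, b - eta * gb               # step against the gradient
print(round(cost(w, b, X, y), 3))                   # the loss should shrink substantially
```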
1
Cyca v0.5, the Highlights Update
I know I said there would be no update this weekend, but I just couldn’t resist… I added a cool feature to Cyca which allows you to highlight text in feed items. It’s very simple in principle but could be very powerful. You start by going to the new page added to your account, named “Highlights”. There, you create your highlights by entering an expression to find in feed items and assigning it a color. You can create as many highlights as you want, and highlights are only shown to you, not to other users. They are always sorted by expression. Expressions can be a single word, or any sequence of words or characters you want highlighted in your feed items. Now, every time you list feed items, they will be appropriately highlighted: you can see how “Cyca”, “Docker” and “Dev” appear on this particular list of feed items. Of course, it works on both read and unread items, which means you can browse your whole feed item history and everything will be highlighted. As the cherry on the cake, the foam on the beer, the chocolate sprinkles on the ice cream, the text color automatically adapts to the highlight color so it stays readable in any circumstances. This newborn feature already screams for expansion, and maybe I will indeed expand its possibilities soon. I really think it’s a must-have, and it’s, as always with Cyca, extra simple to use.
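The post doesn't say how the readable text color is chosen. A common approach, and purely a guess at what Cyca might do rather than a description of its actual code, is to compare the highlight color's relative luminance against a threshold and pick black or white text accordingly. A rough sketch of that idea:

```python
# Hypothetical sketch of contrast-aware text color selection, using a
# WCAG-style relative-luminance threshold; Cyca's real implementation may differ.
def channel(c: int) -> float:
    s = c / 255.0
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def text_color_for(highlight_hex: str) -> str:
    r, g, b = (int(highlight_hex.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    luminance = 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)
    return "#000000" if luminance > 0.179 else "#ffffff"

print(text_color_for("#ffcc00"))  # "#000000": dark text on a light highlight
print(text_color_for("#3b0764"))  # "#ffffff": light text on a dark highlight
```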
2
Facebook to pay up to $14.25 mln to settle U.S. employment discrimination claims
WASHINGTON, Oct 19 (Reuters) - Facebook Inc (FB.O) has agreed to pay up to $14.25 million to settle civil claims by the U.S. government that the social media company discriminated against American workers and violated federal recruitment rules, U.S. officials said on Tuesday. The two related settlements were announced by the Justice Department and Labor Department and confirmed by Facebook. The Justice Department last December filed a lawsuit accusing Facebook of giving hiring preferences to temporary workers including those who hold H-1B visas that let companies temporarily employ foreign workers in certain specialty occupations. Such visas are widely used by tech companies. Kristen Clarke, assistant U.S. attorney general for the Justice Department's Civil Rights Division, called the agreement with Facebook historic. "It represents by far the largest civil penalty the Civil Rights Division has ever recovered in the 35-year history of the Immigration and Nationality Act's anti-discrimination provision," Clarke said in a call with reporters, referring to a key U.S. immigration law that bars discrimination against workers because of their citizenship or immigration status. The case centered on Facebook's use of the so-called permanent labor certification, called the PERM program. The U.S. government said that Facebook refused to recruit or hire American workers for jobs that had been reserved for temporary visa holders under the PERM program. It also accused Facebook of "potential regulatory recruitment violations." Facebook will pay a civil penalty under the settlement of $4.75 million, plus up to $9.5 million to eligible victims of what the government called discriminatory hiring practices. "While we strongly believe we met the federal government's standards in our permanent labor certification (PERM) practices, we've reached agreements to end the ongoing litigation and move forward with our PERM program," a Facebook spokesperson said, adding that the company intends to "continue our focus on hiring the best builders from both the U.S. and around the world." The settlements come at a time when Facebook is facing increasing U.S. government scrutiny over other business practices. Facebook this month faced anger from U.S. lawmakers after former company employee and whistleblower Frances Haugen accused it of pushing for higher profits while being cavalier about user safety. Haugen has turned over thousands of documents to congressional investigators amid concerns that Facebook has harmed children's mental health and has stoked societal divisions. The company has denied any wrongdoing. In Tuesday's settlements, the Justice Department said that Facebook used recruitment practices designed to deter U.S. workers such as requiring applications to be submitted only by mail, refusing to consider American workers who applied for positions and hiring only temporary visa holders. The Labor Department this year conducted audits of Facebook's pending PERM applications and uncovered other concerns about the company's recruitment efforts. "Facebook is not above the law," U.S. Solicitor of Labor Seema Nanda told reporters, adding that the Labor Department is "committed to ensuring that the PERM process is not misused by employers - regardless of their size and reach." Reporting by Sarah N. Lynch; Editing by Will Dunham
21
The Quickest Antenna Design of the Year
During a recent weekend, I found myself with some parts lying around which I hadn't used in quite a while. First in the bin was an old Orange Pi One (an underpowered SBC similar to the Raspberry Pi) which I was going to use to run a video conferencing screen, but which proved to be unable to run the graphical Linux installation well enough to be usable. My RTL-SDR stick also happened to be out (I can't actually remember why I grabbed it a few weeks ago for the first time in years). As I cleaned everything up and got ready to put it away, I realized that I was probably never going to use this stuff again, so instead of trashing it I decided to put it to good use! The Idea I'm often interested in looking at real-time aviation data, especially when a plane or helicopter is flying nearby. I've even used my RTL-SDR to capture and decode ADS-B packets in the past. This time, I decided to create a networked ADS-B receiver node and feed the data into websites such as Flightaware and Flightradar24. The only problem was, I did not want to spend a bunch of time on it - not much more than it would have taken to throw everything into the garbage! Mechanical and Software Design I knew that in an effort to minimize the installation time, this thing was going in the attic - no way was I going to expend the effort required to run cables outside the house nor entertain the idea of weatherproofing the device. I already had power available and a router with an open Ethernet port right in the middle of the attic (a consequence of running an old WRT54G with enough power to get Wi-Fi anywhere in my woods). Not only would time not permit me to run cabling external to the house, but until I petition for wifely approval of an antenna tower or convert a tall tree, there's no higher point to mount the antenna than the attic. One easy method to mount a bare PCB to the attic rafters was a 3D printed frame with points to screw the board in, along with an almond-shaped hole to support the entire thing. Figure 1. SBC mounting/hanging frame. The software was one of the easiest parts thanks to the quick scripts available. The Orange Pi was running the latest Armbian in a headless, non-graphical mode. Both sets of instructions for Flightaware and FR24 were quick to configure. Since I saw a warning about high CPU temperature during bench testing, and because attic temperatures will be quite high, a quick and dirty heat sink consisting of thermal paste, pennies and super glue was constructed. Antenna Design Here's the part of the project where the Cadence Clarity 3D Workbench was so helpful. For the sake of simplicity, I realized that I could simply slide a piece of wire into the center conductor of the 75Ω coaxial connector. This would certainly prove to be the easiest (and quickest) method of finishing the project, and the results would be acceptable for my use-case. Additional care could be taken to improve the ground plane with radials extending out from the antenna, but that would detract from the speediness requirement. Of course, we could use any quarter-wave vertical monopole calculator, such as the number one result from Google, which says to use a 2.6" wire for ADS-B's 1090 MHz frequency. I figured that without radials, I'd do my best to estimate the antenna properties into a 75Ω simulation port with the same geometry as the RTL-SDR's coaxial connector. After some measurements with the calipers, my base model was done in about 5 minutes. Figure 2. Clarity 3D Workbench CAD model.
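As a quick cross-check of the calculator's 2.6-inch figure: the free-space quarter wavelength at 1090 MHz is about 2.7 inches, and applying the roughly 5% shortening factor that monopole calculators commonly use (an assumption about that particular calculator, not something stated in the article) lands right on 2.6 inches.

```python
# Quarter-wave length check for ADS-B at 1090 MHz. The 0.95 shortening factor is an
# assumption about what typical online monopole calculators apply.
c = 299_792_458                          # speed of light, m/s
f = 1090e6                               # ADS-B frequency, Hz
quarter_wave_in = c / f / 4 * 39.3701    # quarter wavelength, metres -> inches

print(round(quarter_wave_in, 2))         # ~2.71 in, free-space quarter wave
print(round(quarter_wave_in * 0.95, 2))  # ~2.57 in, i.e. the calculator's ~2.6"
```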
I started with the 2.6" recommendation from online calculators which showed the frequency was a tad bit too high. Two more iterations showed that 3" was the ideal length to extend from the coaxial connector. I pulled the bare center conductor from a piece of RG6 that was sitting on my bench, inserted it into the RTL-SDR, measured my 3" and cut. Figure 3. Simulated antenna S11 amplitude. As a result of the fact that this quick and dirty antenna design had zero ground plane to speak of, it was obvious that we were never going to get anywhere near an ideal VSWR. As shown in Fig. 3 above, the return loss at resonance was about 5dB giving an SWR of around 3.5. In the future, a proper antenna design would certainly improve this but today's engineering trade-off valued time over performance. Figure 4. 3D plot of antenna gain at operating frequency (dB). One final check in Clarity was to verify the far field radiation plot at 1090 MHz. The result was textbook for a quarter-wave monopole with a missing ground plane, and showed a peak gain of about 4dBi, which would certainly be higher with a better ground plane. Installation and Performance All that was left now was to put the system up in the attic. A climb up through the joists to the highest point within cabling distance of the router, then a quick nail is all that was needed. The final device mounting is shown below in Fig. 5 (but please excuse the photo quality - it was tough to get a decent shot while suspended on the joists with non-ideal lighting). Figure 5. Attic installation of ADS-B receiver. Upon plugging the device in and navigating to its internal web page (by default, available at "{ip address}:8080") it was encouraging to see some packets being received, with position data and everything! At least it worked enough to show basic functionality. Figure 6. Real-time received ADS-B data. After watching the data for a few days, it was obvious that I was receiving packets much better from the West (especially Southwest). On one hand, it was obvious that the antenna design and location was not optimal (in addition to using what may be literally the cheapest 1090 MHz radio receiver ever). At the same time, I was receiving good data out to over 50 nm in the best directions. Figure 7. ADS-B collected statistics showing range and direction of received data. This leads me to one additional conclusion: the impact of the local geography. I'm situated just north of a long ridge which runs in a NE/SW direction. The ridge isn't terribly high, but it's high enough to be called "mountains" here in New Jersey. Closer still, I'm directly adjacent to a local peak which blocks essentially everything to my East. These issues could be mitigated somewhat with the aforementioned antenna tower, a better receiver or of course a better designed and mounted antenna. All things considered though, this project went from junk parts to working device in a matter of a couple of hours. Perhaps the antenna would be unnoticeably different had I just used a quarter-wave calculation instead of a field solver, but using Clarity 3D Workbench certainly added confidence that the final antenna length was ideal for this application. Like everything in engineering, this was a "cost, performance, time: pick two" situation and by that metric I'm extremely happy with the results. Project File If you'd like to play around with the design, the 3DEM file for use in Cadence Clarity 3D Workbench is available for download here: ant.3dem Project File
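For anyone who wants to reproduce the return-loss-to-SWR arithmetic mentioned above (about 5 dB of return loss corresponding to an SWR near 3.5), the conversion is a couple of lines:

```python
# Convert the simulated ~5 dB return loss into reflection coefficient and VSWR.
return_loss_db = 5.0
gamma = 10 ** (-return_loss_db / 20)       # |Gamma| ~= 0.562
vswr = (1 + gamma) / (1 - gamma)
print(round(gamma, 3), round(vswr, 2))     # 0.562 3.57 -- "around 3.5", as stated
```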
5
Tesla already 'biggest short in the market' as Burry piles on: S3 Partners
The family office of Michael Burry had a big bearish bet on Tesla as of the end of the first quarter. The investor, who gained fame from the book and movie "The Big Short," is far from alone. "Tesla is, by far, the biggest short in the market," Ihor Dusaniwsky, managing director of predictive analytics at S3 Partners, told Yahoo Finance Live. "It's been the largest worldwide short for several years now." Tesla's short interest stood at $22.5 billion as of May 13, according to S3 data. As Dusaniwsky pointed out, that's almost as much as the short interest pegged to Amazon and Microsoft combined. Put most simply, shorting a stock allows a trader to bet it will move lower. For a fee, the trader borrows a share to sell, then eventually "covers" or buys it — ideally at a lower price — and collects the difference. When the stock doesn't move lower as expected, the trader sometimes finds it too expensive to hold the short, and/or loses patience. That could be enough for the trader to throw in the towel and "buy to cover" the trade. But that buying among multiple short sellers can feed on itself. If new bulls are buying the stock, the price can go parabolic in a classic "short squeeze." (See our Yahoo U explainer on the short squeeze phenomenon.) That's what had happened to Tesla shares over the past several years: short interest climbed but then the shares skyrocketed, forcing waves of short squeezes that pushed them even higher. That cycle came to at least a temporary pause this year. Tesla's stock has fallen nearly 30% from its record high on Jan. 8. Even as prices for high-momentum names have moved lower, though, short bets have continued. "People are shorting into this downward movement" in tech, said Dusaniwsky. "So they're actually keeping their bets up by shorting more stock as the stock price goes down." He estimates the total value of short interest on the stocks he tracks is $1.1 trillion, up from $990 billion at the end of last year. Julie Hyman is the co-anchor of Yahoo Finance Live, weekdays 9am-11am ET.
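To make the borrow-sell-cover mechanics described above concrete, here is a toy calculation with made-up numbers; nothing here reflects Tesla's actual prices or borrow fees.

```python
# Toy short-sale arithmetic: borrow a share, sell it, buy it back later, pay the borrow fee.
sell_price  = 600.00    # price received when the borrowed share is sold (hypothetical)
cover_price = 550.00    # price paid later to buy the share back (hypothetical)
borrow_fee  = 3.50      # cost of borrowing the share for the holding period (hypothetical)

pnl = (sell_price - cover_price) - borrow_fee
print(pnl)              # 46.5; if the stock rises instead, the same formula goes negative
```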
2
First Geekbench Score for MacBook Pro M1 Max: 2x Faster Multi-Core Performance
Just after Apple's event introducing the new MacBook Pro models with M1 Pro and M1 Max chips, the first benchmark for the high-end M1 Max chip with 10-core CPU and 32-core GPU appears to have surfaced. The chip features a single-core score of 1749 and a multi-core score of 11542, which offers double the multi-core performance of the M1 chip that's in the 13-inch MacBook Pro machine. Based on these numbers, the M1 Max outperforms all Mac chips with the exception of the Mac Pro and iMac models equipped with Intel's high-end 16 to 24-core Xeon chips. The 11542 multi-core score is on par with the late 2019 Mac Pro that is equipped with a 12-core Intel Xeon W-3235. The machine with the chip in question is running macOS 12.4, which we have seen in our analytics, and Geekbench's John Poole believes the result is legitimate. He initially said there was an issue with the frequency estimation, but he believes that this is an issue with Geekbench and not the processor. We should be seeing additional M1 Max and M1 Pro Geekbench results in the coming days, as the new MacBook Pro models are expected to arrive to customers next Tuesday and media review units will be going out even sooner than that.
4
1990: LambdaMOO
LambdaMOO by Pavel Curtis Early Contributors: Tim Allen, Roger Crew, Judy Anderson, and Erik Ostrom Launched: October 30, 1990 [beta]; February 5 1991 [officially announced] Language: C [server]; MOO [world] Platform: Telnet LambdaMOO is a new kind of society, where thousands of people voluntarily come together from all over the world. What these people say or do may not always be to your liking; as when visiting any international city, it is wise to be careful who you associate with and what you say. Content note: the article discusses a violation of player consent online and its effect on the LambdaMOO community, without getting into details of the event. *** Connected ***The Coat ClosetThe closet is a dark, cramped space. It appears to be very crowded in here; you keep bumping into what feels like coats, boots, and other people (apparently sleeping). One useful thing that you've discovered in your bumbling about is a metal doorknob set at waist level into what might be a door. open door You open the closet door and leave the darkness for the living room, closing the door behind you so as not to wake the sleeping people inside.The Living RoomIt is very bright, open, and airy here, with large plate-glass windows looking southward over the pool to the gardens beyond. On the north wall, there is a rough stonework fireplace. The east and west walls are almost completely covered with large, well-stocked bookcases. An exit in the northwest corner leads to the kitchen and, in a more northerly direction, to the entrance hall. The door into the coat closet is at the north end of the east wall, and at the south end is a sliding glass door leading out onto a wooden deck. There are two sets of couches, one clustered around the fireplace and one with a view out the windows.You see Cockatoo, README for New MOOers, Welcome Poster, a fireplace, The Daily Whale, Helpful Person Finder, The Birthday Machine, a map of LambdaHouse, and lag meter here.Hagbard, Rusty (distracted), Porcupine (asleep), Primate's_Stick, and Purple_Guest are here. say Hello, world. You say, "Hello, world."Purple_Guest laughs. It had all started at the end of the ‘70s with MUD , the original multi-user Dungeon , which successfully demonstrated the incredible appeal of sharing a virtual world with other people. By the end of the ‘80s, text-based MUDs had become an established genre. As more and more university students gained access to computers and large quantities of unmetered Internet time, they created at first dozens, then hundreds and hundreds of MUD clones. The earliest were simple knock-offs of the original, but an increasing number were evolving into more and more sophisticated simulations of fantastical other worlds. This complexity had largely taken the form of increasingly elaborate rules: for combat, skill advancement, magic spells and items, or world simulation. Some MUDs offered dozens of character classes to choose from, each with complex progressions of skill trees; player-run guilds with arcane hierarchies of power; hundreds of unique weapons and monsters; and complex weather systems or day/night cycles. But some MUDders had begun to grow bored with the endless grind of combat and leveling. A few had started to wonder if it might be possible to base a virtual world around a different central conceit. In many MUDs, the ultimate goal was to rise to the highest experience level and become a wizard. 
To reward such long-term engagement with a community, wizards were often granted special powers and responsibilities: the power to teleport, for instance, or to enforce order by resolving disputes or banishing troublemakers. But the most tantalizing wizard ability of all was the power of creation. Some MUDs gave wizards access to new verbs that let them literally reshape the world, creating new rooms and connections, unique monsters and objects, and original puzzles and quests. Becoming a wizard could take tremendous effort—hundreds and hundreds of hours of playtime, not to mention the social skills necessary to ingratiate yourself with the existing wizard community—but what a reward to look forward to! Near the end of 1988, a short-lived MUD called Monster had launched with a simple but intriguing idea: what if you didn’t have to rise through the ranks and earn your wizardhood to help create the world? What if ultimate power was given out to everyone? While it hadn’t been the first game to experiment with this notion, Monster caught the attention of a CMU grad student named James Aspnes, who ended up streamlining and rewriting the popular package AberMUD into a version that stripped out all the extensive combat, magic, skill and advancement rules and gave all players the generous building permissions of a wizard. He called his engine TinyMUD , and hoped the lack of traditional content would force players to start building their own, giving rise to new kinds of virtual spaces that didn’t center on combat and skill trees. He suggested the “D” in MUD didn’t have to stand for Dungeon. What about Domain, or Dimension? While a good many MUDders shrugged their shoulders at the weird experiment and kept slaying virtual orcs, some found the concept wildly intriguing, so much so that within a few months the original TinyMUD had to shut down for exceeding the limitations of its host computer: user-made content had completely overwhelmed it. But other TinyMUDs sprung up to take its place, and soon spin-offs like TinyMUCK and TinyMUSH were everywhere. In mid-1990, a Canadian student named Stephen White released a package called MOO , which stood for “MUD, Object-Oriented.” White had realized that for players to truly be creative in a virtual world, the power to make new rooms and objects wasn’t enough. They would need the ability to create new rules and systems, too. But that would require a true programming language capable of altering the very world its user was immersed in, and a consistent ontology allowing that world to be changed in a simple and consistent manner. Object-oriented programming was becoming increasingly popular, so White decided to build a system where everything in the world—from players, to items, to rooms, to the exits connecting those rooms—was represented as an object that could be created or modified by special commands. Objects could have associated properties and routines: a property like “description” might be common to all objects, as well as code specifying that only the creator of an object could modify it. But object-oriented programming also allowed for a system of inheritance, which could let an object “descend” from an ancestor to gain its qualities while also acquiring new ones of its own. Useful boilerplate objects like Room or Person might be created first, defined with all the code and properties each needed to function. A Room might inherent a generic object's "description" property, and add one storing a list of possible exits. 
Then one might make a more specific Room called Outdoor Room , inheriting the features of its parent but adding new behaviors to simulate external areas where sun and sky were visible. Finally one might add a child of Outdoor Room called In the Meadow to simulate one such room in particular. Inheritance provided a simple and well-understood conceptual framework and technological underpinning, allowing for reuse of code and keeping the universe’s fundamental structure orderly and predictable. Crucially, it also made it easy for players to build on each others’ contributions. White demonstrated his MOO codebase with an alpha world which he never widely publicized. But among those who discovered it was Pavel Curtis, a researcher at the famous Xerox PARC lab in Palo Alto, California. PARC had become well-enshrined as one of the preeminent institutions of forward-thinking computer research: the lab had birthed innovations like the graphical user interface, the laser printer, the Ethernet protocol, and some of the first object-oriented programming languages. Curtis, researching language design and interested in the challenges of teaching programming to kids, had recently stumbled across MUD culture, and when he found White’s MOO he was intrigued by its enormous potential. Could this be one of the Next Big Things for PARC—shared virtual environments that any user could help design? White’s enthusiasm for the project had flagged, so with his permission, and free rein from the PARC bosses to set up a long-term virtual worlds research project, Curtis took over work on a revised second version of MOO . His character on White’s server was named Lambda (a term important in Curtis’s favorite programming languages like Lisp and Scheme), so he decided to give both his software and the first test world running it the name LambdaMOO . Like many first-time interactive fiction authors, the first thing Curtis built was his own house. As he invited in the first wave of friends and colleagues to help stress test the system, he encouraged them to extend the environment but keep it thematically consistent. One of his earliest collaborators was Judy Anderson, an ex he’d remained on good terms with and a former resident of the real Lambda House. Judy, whose avatar in this mirror-world was called yduJ, took to the role of possibility-architect with gusto, and soon began programming interesting objects throughout the house for players to interact with, like an interactive hot tub with working jets and temperature controls, and the game’s first puzzle (disabling an obnoxious burglar alarm). yduJ and others in the first wave of residents also extended the house beyond its original modest footprint, creating new wings, hallways, and rooms with the @dig command and new objects to fill them via @create . By early February 1991, when Curtis opened LambdaMOO to the public with an announcement on the Usenet group rec.games.mud, the house had swelled to the point it was already easy to get lost in. Unlike some other TinyMUD-likes with no enforcement of a consistent universe—a wizard’s castle might be one room south of a bustling spaceport, or adjacent to a recreation of a real Chicago dive bar— LambdaMOO took pains to enforce a consensus reality for its virtual space, a fictional framework that nevertheless might allow almost any kind of contribution: LambdaMOO takes place inside and on the grounds of a large, sprawling mansion. 
...The house is also very large, so large in fact that the current occupants themselves have only ever explored a tiny portion of it. What may be going on in other parts of the house is anybody's guess. ...With nobody having the means or inclination to patrol the whole place, almost anything could be squatting here. South of the occupied part of the house lie the palatial gardens. Many parts of the gardens are still being tended and cared for... Of course, there are other parts of the gardens that have become quite overgrown and wild, sheltering who knows what. The land underneath the house is also full of strange tunnels, odd caverns, perhaps a forgotten mine, and other amusements. Of course, except for the wine cellars, the current occupants are completely unaware of such developments. LambdaMOO grew slowly at first, but it grew. After a year of building, with more and more eager creators joining every day, the dimensions of Lambda House had taken on dizzyingly fractal qualities. The grounds outside extended past lawns and gardens through thickets and rolling hills, eventually stretching to distant beaches and lands beyond. Pocket dimensions sprung up within the house itself, like the Looking-Glass Tavern which could be visited by gazing into a mirror in the foyer; or an entire miniature town built into a model railroad layout in the house’s guest bedroom, which one could magically shrink down to explore. A nightclub in Tiny Town became one of the MOO’s most happening hangout spots, and the town’s residential district a popular place to build a virtual home. Another common place to put down roots was the lavish hotel found inside a red plastic piece in the working Monopoly set in the dining room. Treehouses, rooftop observatories, hidden underground grottos, crawlspaces between the walls: the house and its grounds had become a wonderland of creative architecture and inspired world-building. In this place of “ pure communication , where looks don’t matter and only the best writers get laid,” descriptions were often richly evocative: Unlike in single-player IF, these were spaces designed for lingering, for inhabiting: stages for conversations and seductions and meetings with friends. Increasingly, both on LambdaMOO and elsewhere, these worlds were referred to as virtual realities. Contrary to popular conceptions of VR as requiring cutting-edge graphics or full immersion body-suits—fundamentally a thing of the future—proponents of MOO-VR saw text as a far superior (and already available) way to directly engage the imagination and experience a sense of immersive transportation. And the key to that immersion was collaboration: not only between the people who were playing but between them and the simulation itself. Good players could of course emote in ways that referenced the room they occupied and the objects within it: :collapses on the old couch, putting his feet up on the creaking end table. Kelvin collapses on the old couch, putting his feet up on the creaking end table. But the ability to program from within the virtual world let the software become a collaborative partner in the project of maintaining its fictional consistency : The living room’s description mentioned a couch (two sets of couches, actually) for the longest time. Then someone built an actual VR couch. You can sit on it, shove people off, stuff things into it, jostle it, reupholster it, search for things, and (occasionally) fall in. 
From under the couch cushions, you can shout, or return something that falls in (from someone else’s pockets, to be sure). These behaviors were created through the straightforward but powerful MOO programming language . For instance, a popular in-game coding tutorial would teach you how to create your own pet rock. To program the ability to pet your pet rock, you needed to type in three commands at the prompt: commands no different, from the system’s perspective, than any other player input like look or go north : The words this none none in the first line would define the specification of the pet verb from the rock’s perspective: it takes a single direct object this (the rock) and no preposition or indirect object. rxd indicates the verb is readable by others (anyone can pet the rock), callable by other verbs like a function, and will show a traceback if its program crashes. The dot at the end of the third line indicates the program being entered is finished. A player might weave these instructions into a stream of chatting with friends and interacting with the existing environment: programming the world turned into just another fundamental part of existing within it. MOO programs could become surprisingly complex, able to interface with nearly any aspect of the simulation they ran within, calling functions to query the game state or pipe messages to other parts of the virtual world. Programs could be typed in line-by-line as individual commands, or via a special editing environment—which was itself just a custom-made room one could enter, with a set of specialty verbs for manipulating the text stored inside its buffer. And the objects created became more and more fascinating and complex. Advanced programmers were soon creating toys like the helicopter on the west lawn, which had over twenty custom verbs and included extensive help text: Object inheritance led to a culture of reuse and sharing. The creator of a useful object could set a “fertile” flag that would let others create child objects from it, and soon whole catalogs of useful parent objects were available in the house’s library: objects with names like Simple Lockable Thing, Generic Amplifiable Musical Instrument, Generic Programmable Puppet, or Generic Aircraft (from which the helicopter descended). Improved children could themselves be made fertile, leading to long chains of iterative refinements and ever-increasing functionality. Generic Aircraft descended from Generic Magnetic Portable Secure Seated Integrated Detail Room, itself a very distant descendant of the basic Room object provided originally by Pavel Curtis. The platonic Room, the undescribed ur-location from which all others descended, became another popular hangout spot once people realized that it was like any other room object, and they could teleport inside it. Programmer-players created camcorders that could record real-time logs of MOO happenings in the room they occupied, saving them to the viewable text buffer of a child of Generic Videocassette; elaborate recreations of real-world pastimes like board games or laser tag arenas; even LambdaMOO ports of classic text games, like a Super Star Trek in which each player would be whisked to the bridge of their own starship to issue commands that moved it through a three-dimensional grid. 
Certain stock NPC classes became useful to architects building clubs or hangouts, such as a Waiter who could show up when a group claimed a table, take drink orders, and return minutes later to distribute beverage objects with drink verbs and simulations of fullness. Players could even adjust the inheritance tree of their own avatar, the object representing their digital self. One popular generic player class provided verbs to adjust how the system described your appearance and actions, so for instance one could morph into a dragon that would “thunder” rather than “say” any words spoken aloud. Another player class included a range of helpful features for cybersex, including ways to write descriptions of oneself at various states of undress, and grant other players permission to use certain verbs to uncover them. Within this rich world—one Pavel Curtis called a “prose-based reality”—a village was born. Nearly all MUDs spawned communities, of course, often strong ones with bonds that spilled out into the real world. But some combination of LambdaMOO ’s appeal to older players less interested in pure gameplay, its consistent fictional frame that made suspension of disbelief easier to sustain, and its ability for players to alter that frame and reshape it to make the world they most wanted to live in: whatever the reason, LambdaMOO rapidly evolved a proper society, steeped in a dense network of ideas, friendships, romances, and, soon enough, rivalries. Within a year of launch, there were thousands of registered players and often more than a hundred online at peak hours, enough to sustain a vibrancy of discourse that few other virtual spaces had yet achieved. A hundred simultaneous users might seem small by today’s standards, but was in many ways ideal: the size of a large party that never ended, with conversations spilling out into various wings and back porches, and a healthy network of friendly faces, rival cliques, and shared social spaces. It was the perfect size for a single community. The MOO had arrived at a pivotal moment in the growing cultural awareness of the Internet: journalist Julian Dibbell would later place LambdaMOO ’s ascendancy as taking place “about halfway between the first time you heard the words information superhighway and the first time you wished you never had.” Attracting attention from first specialist and then mainstream media, LambdaMOO seemed a dizzyingly immediate example of the coming future, the opening of a digital utopia where anyone could have the powers of a god. This heady vision was intoxicating and often addictive. “They were the seductions natural to any world built from the stuff of books and maps,” Dibbell later wrote, “the siren song of possibility.” Nearly every article or book on MUDs and MOOs written in the 1990s included a warning of some kind: one had a four-page section asking whether virtual worlds were “a hobby or an addiction,” noting that many a college student had dropped out of classes to spend hour after hour in the computer lab, living inside them. Some joked that the acronym stood for “Multi-Undergraduate Destroyers.” A Wired reporter assigned to write a feature story on “Why Playing MUDs is becoming the addiction of the ’90s” ended up becoming addicted himself, racking up huge bills with his Internet service provider and writing in the article’s conclusion, somewhat desperately, that “weeks have gone by and I find myself unable to stop MOOing.” He compared LambdaMOO to LSD. 
One estimate guesses that in 1993, MUDs made up 10% of all traffic on the Internet. A rumor spread that Australia had established a continent-wide ban on MUDding lest it clog up the country’s connection to the rest of the real world with descriptions of virtual ones: the rumor proved unfounded, but had seemed entirely plausible. In later years the graphical MMOs that descended from MUDs would prove equally compelling, but the fact that even their prose-based ancestors had been so hypnotic suggests the notion of a textual virtual reality was no naïve oxymoron. MUDs had been attracting increasing attention from academics and journalists throughout the early ’90s, but LambdaMOO was thrust onto the national stage with a Village Voice article in 1993 called “A Rape in Cyberspace.” Written by Julian Dibbell, the article described an incident that had taken place on the MOO earlier that year. An object called a voodoo doll had been created that could be “reshaped” to look like a particular character, and then manipulated to make it seem as if that character was taking actions their player had not initiated. One night in the very public space of the Living Room, a player dressed as a perverted clown used a voodoo doll to make two women appear to do disturbing, violent, and sexual things to each other and to onlookers, much to their players’ horror and distress. The incident had sparked a blaze of discussion in the previously laissez-faire community about standards of behavior and where disciplinary power should be vested, and Dibbell’s article struck a nerve that would prove resonant across future decades, musing about the morality of communities where words literally instantiated consensus reality. Noting that while no physical crime had occurred in the real world, the women involved still felt violated, Dibbell began to question sharp lines between words and action he had once held firm to: “the more seriously I took the notion of virtual rape, the less seriously I was able to take the notion of freedom of speech, with its tidy division of the world into the symbolic and the real.” The incident came in the midst of a remarkable transition of power on LambdaMOO . A few months earlier, Curtis had announced that he and the other system administrators—exhausted by the constant stream of player disputes and moderation requests—were “pulling out of the discipline/manners/arbitration business; we’re handing the burden and freedom of that role to the society at large.” But in lieu of any formal replacement for sysadmin fiat, the question of who, if anyone, now had the power to ban a virtual rapist went unanswered. Eventually one admin banned the perpetrator on his own initiative, but this too proved controversial: for people who had come to think of Lambda House as a second home—and often one where their most intense social connections were centered—the notion that an arbitrary whim could expel you from it forever was anathema. In the wake of public outcry, Curtis set up a formal in-game system for petitions and balloting that let any player whose idea captured two-thirds of the popular vote deliver a mandate to the devs. They would implement any passing proposal that met certain legal and feasibility standards: from banning a specific player, to reprogramming core systems, to even shutting the whole thing down, if that was what the player base wanted. This high-minded experiment—the admins as obedient servants of the people’s will—inspired a flurry of activity both inside and outside the community. 
Petitions on all kinds of topics from trivial to world-breaking were circulated, discussed, and debated endlessly; poli-sci, law, and sociology academics descended on the MOO in swarms to observe a civilization pulling itself out of anarchy from first principles. Curtis’s PARC experiment into advancing the evolution of virtual worlds seemed to be bearing real fruit. But perhaps predictably, the petition system led to an increasingly vitriolic environment, accompanied by all the hostility and bitterness that comes with real-world politicking, moral crusades, and battles for ideological survival. While the MOO’s population continued to grow, many of its core community members slowly stopped logging in. All the fun had been leached out of their virtual playground, replaced by something that smacked far too much of reality. Curtis would later reflect : We see these communities form whenever technology changes. Every time we give people another mechanism to communicate, they latch onto it. And then we see human nature happen again. People. Some of them will be assholes, some of them will care an enormous amount. Some will be beautiful and wonderful and some will be hateful and awful. There’s such a hunger for these kinds of systems, [but] then human nature does what we expect it to do if we’re paying attention at all, and there will always be people who are disappointed because they thought, this time—this time it is pure. MUDding never really died, but its player base became subsumed by far larger crowds attracted to the graphical MMOs emerging by the late 1990s. Games like Ultima Online (1997), EverQuest (1999), and Star Wars Galaxies (2003) were more than mere spiritual successors to MUDs: many were designed by teams of former MUDders, and often adopted concepts, rulesets, and lingo whole cloth from their textual ancestors. But few graphical MUDs dared give players the powers of a MOO: neither the wizardly tools of creation, nor the radically democratic notions of a self-guided community. Perhaps the most famous exception came with Linden Lab's Second Life (2003), which at first attracted similar hype as had LambdaMOO a decade earlier for its dreams of a self-made world. But the increased challenge of making 3D objects over textual ones meant its user creations often seemed amateurish, and its focus on commercial transactions turned vast swaths of its landscape into virtual strip malls, soulless and exploitative. For many years Second Life was held up as the exception that proved the rule: giving players too much creative power was as difficult as it was dangerous, and in most online spaces all a user could really change about the world was their own appearance—within the carefully curated limits of a nose-length slider or a set of pre-approved skin tones. Yet in the last decade new seeds of player creativity have grown from the soil of games like Minecraft and platforms like Roblox . On the MOO in the early nineties, hundreds of people who had never considered themselves coders or writers discovered the joy of creating something strange or beautiful or funny or functional and sharing it with friends. Today millions have found the same kind of thrill in new virtual spaces that embrace player creativity: cobbling together JavaScript or Lua instead of MOO-code, wrangling voxels or textures instead of words. 
These new games too have become places for community and connection, in part because they actually are places, not the dimensionless abstractions of social media, message boards, or chat rooms. Like LambdaMOO, they are places you choose when to enter and when to leave, filled with people you can approach or stay away from, and a virtual body that can do more than merely speak. The last few years have accelerated awareness of something that seemed obvious to many MOOers thirty years ago: there’s more satisfaction in a conversation that happens somewhere you can pretend is real. LambdaMOO is still running circa 2021, but is a strange place to visit: both heavy with the dust of ages and as fresh and functional as the day its code first ran. A public bulletin board in the library exhibits surreal temporal collapse: an ad for a long-defunct BBS with a high-speed 14.4k modem sits alongside a note from a lonely Italian in quarantine with COVID-19. The last official news bulletin dates from 2004, yet the @who command still shows a dozen or so active players at any given moment squirreled away in odd corners of the map, still @digging. The hundreds of useful generic objects created over the decades remain just as fertile as they were in 1991, their code ready for reuse in a new generation’s projects. Each player on LambdaMOO is given a fixed quota of disk space, a rationing that prevents the community as a whole from exceeding the means of its hardware. As new players register characters, inactive players and objects are “reaped” to make room, in reverse order of how recently they’ve logged in or been used. During the MOO’s height of popularity, reaping could happen to inactive players as soon as six weeks after their last login. Today, one can avoid the reaper far, far longer—but not indefinitely. Within the house you can find an auction block where soon-to-be-reaped objects, rooms, and generic classes are up for grabs, transferable to any active players who might want to claim them. Perusing these hundreds of digital discards provokes a strange ennui. A tank missile, a bucket, a pair of angelic handcuffs, a skull-topped staff, a galaxy; rooms called Secluded Jungle Hot Tub, Generic Shower Stall, the Library of Rosecliff, or Under a Starry Sky. Which, if any, are worth saving? Once a player has been logged off for too long, their avatar appears to be sleeping: wandering the map today can feel like exploring an enchanted kingdom of sleeping beauties, some of whom have been sleeping for decades. Yet Lambda House still intrigues. Exploration remains perpetually magical: unlike in a single-author text game, here you never find the limits of the world model or the edges of the map. The next object might always have a new verb programmed into it, and behind any corner might lie a new domain awaiting fresh explorers. Listening to a seashell in a gazebo transports you to a lazy tropical paradise; winding a music box in a hidden glade summons ghostly figures to enact a tableau from Keats. Rooms with dynamic descriptions responding to the seasons and the time of day keep cycling through the hours, virtual moons moving through their phases above. Even with most of the people gone, the code they left behind still keeps Lambda House alive. Next week: the forgotten stress reliever of dialing in to your local BBS to blow up your neighbor’s Corellian Battleship. You can still connect to LambdaMOO, and the many historical documents and other ephemera found within were massively useful in preparing this article.
An index of other active MUDs and MOOs is at mudstats.com . LambdaMOO is one of the better-documented MOOs of the early nineties, many of which have since vanished without a trace: especially helpful for research were Julian Dibbell’s book My Tiny Life: Crime and Passion in a Virtual World , and the self-published books Yib’s Guide to MOOing and Whereis Mineral: Adventures in MOO . Thanks also to Lynn Cherny for helpful feedback on a draft. Pavel Curtis offers physical and literary logic puzzles at pavelspuzzles.com .
1
Stored Procedures – love or leave 'em?
A stored procedure is a set of SQL statements that is stored on the database server and is available to be executed by name. Stored procedures are the cause of “religious wars” in the world of relational databases where some DB users live and die by them and other DB users consider them an anti-pattern. They are certainly a tool that has the potential to be mis-used and create performance issues and blockers to scalability. As a DBA/Developer/Architect, should you be using them or not? Let’s dive in and try to answer that question. When I see stored procedures used in databases, they fall into one of the following categories: There are various arguments for and against the use of stored procedures. Let’s examine a few: When you write a stored procedure, the query execution plan is stored/cached on the server which saves time when the stored proc is executed. This has become less of a factor over the years as DB technology has progressed. SQL optimizers have gotten better at storing plans for “dynamic” SQL and storing those plans for re-use. By making use of prepared statements in data access code, the same benefit of execution plan caching can often be gained. There are opportunities to grant an application or user permissions to execute a stored procedure but restrict access to the underlying table(s). Or, you could grant access to a procedure that INSERTs data into a table but deny access to a procedure that UPDATEs or DELETEs data from a table. If every CRUD operation performed against the database is contained in stored procedures, it’s relatively easy to have an understanding of the query patterns used in the database. As a DBA, getting your arms around the query patterns (both reads and writes) that are being executed against a database is a huge step in being able to manage and optimize a database -- especially when inheriting a new system with which you have little familiarity. SQL Injection Attacks are a classic security vulnerability. If you’re not familiar with them, google “little bobby tables” and check out the famous XKCD comic that pokes fun at them. Because stored procedures are typically parameterized, they provide a level of protection against unsanitized SQL inputs. To be fair, I have also seen stored procedures exploited with SQL Injection attacks; they’re not a magic fix for SQL injection attacks, and smart developers still take precautions to sanitize data and use parameterization correctly. For certain database operations that require several roundtrips from the app to the database, there can be a performance boost by planting all the necessary logic within a stored procedure to handle the entire operation without ever “leaving the database.” By storing SQL statements together in an encapsulated, named element which can be executed by various processes, there is an opportunity to re-use logic by application code, reporting services, and other DB clients that may be simpler than encapsulating that logic in an application tier (for example, in a microservice). In a system that is embracing microservices, this argument becomes largely irrelevant. There is a subtle temptation when writing stored procedures to add little bits of logic that really don’t belong in the database. When there is business logic shared between the services code and the database (in stored procedures), it is harder to have a holistic view of a software system’s business logic; things can easily be missed which can lead to unexpected bugs. 
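To make the parameterization point above concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the users table and the injection payload are invented for illustration, and the same idea applies to prepared statements in any driver or to a stored procedure’s own parameters.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable: the input is spliced directly into the SQL text.
unsafe = conn.execute(
    "SELECT id, name FROM users WHERE name = '%s'" % user_input
).fetchall()  # returns every row; the injection worked

# Safer: the driver binds the value as a parameter, never as SQL text,
# which is the same protection a stored procedure's parameters provide.
safe = conn.execute(
    "SELECT id, name FROM users WHERE name = ?", (user_input,)
).fetchall()  # returns no rows; the payload is treated as plain data

print(unsafe, safe)

As noted earlier, the parameterized form can often also let the database cache and reuse the statement’s plan, capturing much of the execution-plan benefit without a stored procedure.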
In the case of 10,000-line monsters mentioned above, this embedding of business logic is taken to an extreme. This leads to highly unwieldy and difficult to maintain code. This is probably the most infamous use of stored procedures and one of the anti-patterns dissenters love to cite. The code used in stored procedures often moves away from standard ANSI-SQL and into implementation-specific syntax. This makes it harder to move between DB systems. If the interaction between applications and databases is all plain-vanilla SQL, it is relatively easy to migrate an app from using any database system to any other database system. While it is good practice to keep database schemas (tables, indexes, views, etc.) in source control, when the schema contains stored proc definitions, the management of the schema becomes much more complex. Often, changes to stored procedures are tightly bound to Data Access code, so managing versions of schemas and their appropriate code builds can be challenging. And, doing seamless upgrades of applications can cause complexities as well. An ORM (Object Relational Mapper) is a piece of software designed to abstract away the manual mapping of domain objects (used in code) with the corresponding table structures (used in databases). By employing an ORM, you (in theory) save yourself a lot of trouble writing boring, repetitive data access layer code. ORMs typically write SQL statements dynamically and execute them directly. It is often hard, or at least involves a decent investment in time and effort, to get ORMs to use stored procedures which subtracts from their intended utility. Note: I’m not actually a huge fan of ORMs, but I understand their value. My personal rule when deploying ORMs is to do so in a way where it’s easy to let the ORM do the data mapping when it’s efficient, but to allow the insertion of manual data-access logic for cases when the ORM is making poor choices. In other words, if the ORM makes your life easier 80% of the time, then leverage it for that 80% -- but don’t tie yourself to the ORM for that other painful 20%. I admit that in the part of my career where I was writing apps using the Microsoft stack and SQL Server that I was a huge proponent and user of stored procs. I was also a huge Microsoft snob and had no plans of moving off of SQL Server to any other database platform. Fifteen years later, with a more mellow attitude and a little more hard-won experience under my belt, I am much more of a believer in building systems that use the right tools for the right jobs. The database is there to provide durability, handle concurrency, provide consistency, and generally take away the stress of storing data. Put the data in the database. Put logic that surrounds the data access in a data API and expose that to your applications. In general, I believe that the trend regarding stored procedures is to move away from them -- especially when architecting systems that need to be highly available and massively scalable. The trends you will find around these types of architectures are the use of microservices, the adoption of various types of horizontally scalable data platforms (Distributed SQL, NoSQL, ElasticSearch, Spark, Snowflake, etc.), and a general desire to decouple business logic from data operations. 
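As a rough illustration of the 80/20 approach to ORMs described above, here is a hedged sketch assuming SQLAlchemy 1.4+ and an invented orders table; the model and queries are examples, not a prescription.

from sqlalchemy import create_engine, text, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    status = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    # The easy 80%: routine CRUD, let the ORM generate the SQL.
    session.add(Order(status="open"))
    session.commit()
    open_orders = session.query(Order).filter_by(status="open").all()

    # The painful 20%: a query you want full control over, written by hand.
    rows = session.execute(
        text("SELECT status, COUNT(*) AS n FROM orders GROUP BY status")
    ).all()
    print(len(open_orders), rows)

The hand-written escape hatch keeps that logic in the application tier rather than in a stored procedure, which fits the decoupling trend discussed above.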
I also believe that the potential benefits of stored procedures can be gained in other ways -- through the use of microservices and through good coding practices; however, the potential cons are much harder to overcome. I am hesitant to go so far as to call stored procedures a “crutch.” But I do think that if a system architect were to adopt a policy of explicitly barring the use of stored procedures in favor of other mechanisms that support reuse and the decoupling of data and logic concerns, it would be a decision I would consider healthy. Disclaimer: I work for Cockroach Labs, and CockroachDB does not support stored procedures. The discussion around whether we should support stored procedures comes up fairly often. It would certainly make data migrations easier. And, for some of the reasons noted above in this article, stored procedures can certainly add some value. But for the most part, we take the stance that most modern systems are not adopting stored procedures, and so this feature never seems to get high enough prioritization to get done. Instead, we tend to take on features that enable adoption of the more forward-thinking trends (cloud native, microservices, containerization, serverless). We may support them in the future, and if we do, I hope we will put some guardrails around their usage to gently nudge users away from using them in potentially limiting ways. Since the use of stored procedures is, as noted above, a bit of a religious war, I’d love to hear any comments you have on the pros and cons of stored procs and whether you think they are a tool that has a place in modern application architectures.
3
Show HN: Mltype – Typing Practice for Programmers
jankrepl/mltype on GitHub
2
Implications of Outlawing Bitcoin
Bitcoin is complicated and scary. Just like fire, electricity, computers, and every other ground-breaking invention before it. It is complicated and scary because most people do not understand how it works and why it might be useful. Once you begin to understand how it works, you will begin to understand why it is so useful to people around the world. And I hope that once you truly understand its basic operational principles, you will begin to understand why outlawing Bitcoin is a foolish proposition. In light of recent comments by legislators and politicians, we must not forget what Bitcoin does and how it does what it does. Bitcoin is text. Bitcoin is speech. Bitcoin is math. Bitcoin has no jurisdiction, just like 2+2=4 has no jurisdiction. Bitcoin knows no borders. Bitcoin is everywhere and nowhere, and if used and secured properly, bitcoin is as confiscatable as a thought. No amount of legalese or otherwise complicated language will change these facts. Using Bitcoin does not require any special equipment. We use software and specialized hardware to use Bitcoin more efficiently, and in a more secure manner, but in theory, Bitcoin can be run on pen and paper. The following statements are and will always be true: These statements will sound strange to you if you don’t know how Bitcoin operates, but they are true nonetheless. Thankfully, Bitcoin is an open system, which means that everyone can learn the operational details of the network. I encourage you to do that and, if you can, educate others. We must not forget what politicians are implying when they are musing about “banning wallets” and making up nonsensical and disingenuous adjectives such as “self-hosted” and “non-custodial.” A wallet is nothing special; it can be just some words in your head. You don’t need specialized equipment to generate a secure wallet. A coin or some dice is all you need. To interact with the Bitcoin network, you need a wallet, which is to say you need a private key. While conventional concepts do not apply well to Bitcoin, one could argue that creating a public-private key pair is akin to creating an account. And since public keys are derived from private keys, we only have to answer one question: what are private keys, and how are they created? A private key is a 256-bit number. That’s it. End of story. So, what exactly is a 256-bit number? Well, as the name suggests, a 256-bit number is a number that, when represented using zeros and ones, is 256 binary digits long. In other words: it’s a really big number. ⚠️ Warning: The private keys shown on this page are real private keys. Do not send bitcoins to or import/use any sample keys; you will lose your money. I repeat: YOU WILL LOSE YOUR MONEY. The following is a 256-bit number: These zeroes and ones - or, more accurately, the information contained in these zeroes and ones - are a valid Bitcoin private key. You could use this information to receive and send transactions on the Bitcoin network. Why is this important? It is important because I can create a Bitcoin wallet by sitting in my room, flipping a coin 256 times. If you want to outlaw “anonymous wallets,” you will have to outlaw this activity, along with all other techniques to create random numbers: rolling dice, drawing cards, measuring optical or atmospheric turbulence, and so on. Further, since these zeros and ones are just information, you can represent them in countless different ways.
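As a minimal sketch of that claim, the snippet below stands in for 256 coin flips with Python’s secrets module and prints the same key in binary and hexadecimal. The only extra detail is the well-known secp256k1 range check that a real wallet would also perform; everything else is just writing the same number down in different ways.

import secrets

# secp256k1 group order: a valid private key must fall in the range [1, N-1].
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

flips = [secrets.choice("01") for _ in range(256)]  # stand-in for a real coin
bits = "".join(flips)
key = int(bits, 2)

if not (1 <= key < N):
    raise ValueError("flip again; astronomically unlikely, but possible")

print(bits)                 # the key written as zeros and ones
print(format(key, "064x"))  # the same key written in hexadecimal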
The information does not change; just the representation of the information changes. The hexadecimal version 12e188aeb7c9aeb0eef7fac7c89e3b9b535a30b2ce8d6b74b706fa6f86b061e4 represents the same private key as the zeros and ones above. As does the following mnemonic code, which can be learned by heart with some practice: concert, frozen, pull, battle, spend, fancy, orient, inside, quiz, submit, scare, mechanic, awake, mercy, lock, inside, language, tag, dash, control, borrow, hip, print, absorb Remember: this information, this 256-bit number, is all you need to interact with the Bitcoin network. You do not need an ID, or passport, or utility bill, or proof of residency. You do not even need to be human. If you are in control of a Bitcoin private key, you can send and receive transactions. If you want to understand Bitcoin, you will have to understand that a wallet can be created by flipping a coin 256 times. You flipped a coin 256 times. Now what? Time to earn some money! To receive sats, you need an address, which can be derived from your private key. Grab a pen and paper, go to your standing desk, and calculate your public key according to BIP32. If you are short on time or bad with math, don’t despair. There are online tools that will do the math for you. But remember that these software tools do nothing weird or magical. It’s just math, and you can do it yourself using nothing but a pen and paper. The outcome of all that math will produce a number that, when encoded as a bitcoin address, will look something like this: Share your address with someone else, and you are ready to receive your first sats. Keep in mind that you don’t have to share the address in this exact format. You can encode it as a QR code, as a number, as emojis, as an audio file, or as braille. You can put it in your invoice, display it on your homepage, in your profile, send it via a messaging application, or tattoo it on your body. It is just information. It can be represented in countless ways. Also, keep in mind that you do not have to be online to receive sats. The sats won’t be sent to you directly. Someone will sign a message that will transfer the sats to your name - if you excuse my imprecise language. It’s not your name, of course, since Bitcoin doesn’t know any names. But that would be one way to think about it. Someone just broadcast a transaction that includes your address as an output, which means that you will receive your first sats soon. Now what? Time to create a transaction and pass them on. If you have a private key, you can create a transaction. Remember that a private key is just a large number. What can you do with numbers? You guessed it: math! In its simplest form, a bitcoin transaction is a message that says something like the following: I, Alice, hereby transfer 21 sats to Bob. Signed, Alice. Real transactions might have multiple senders and multiple recipients as well as various other tweaks and efficiencies, but the essence remains the same. What is important to note is that nothing is secret in a transaction. All transactions are broadcast across the whole network, viewable and verifiable by everyone. All transactions are plain text. Nothing is encrypted. To write “Alice sends 21 sats to Bob” in a way that makes sense to the Bitcoin network, a special, more precise format has to be used. Don’t get confused by the format of the message or how the message is encoded. It doesn’t matter if the language is English or something that is easier to understand for computers. 
The meaning of the message remains the same. I could write the above as [A]--21-->[B] and sign it with the private key that corresponds to A, and it would essentially be the same thing. This brings us to the important part: the signature. Hand-written signatures are not very useful in a digital world, which is why mathematicians and cryptographers came up with a modern equivalent: digital signatures. I will not go into detail explaining how they work, but the important part is this: it’s all just math and numbers. Your private key acts as a large secret number that is used to perform mathematical operations. The result of these mathematical operations is a digital signature (another number) that can be checked using your public key, which is, again, a number. Math is what makes public-key cryptography work. The beauty of this math is that you can verify that the sender is in control of a secret number without revealing the secret number. This is what cryptographic signatures do. Let’s look at an example. The following is a valid transaction: You can use various tools to decode and inspect it. These tools help us humans to make sense of it all, but the underlying reality remains: it’s numbers all the way down. To reiterate, the following is all you have to do to interact with the Bitcoin network: Outlawing any of these three steps is ridiculous. It is ridiculous because of the peculiar nature of information. If you outlaw certain kinds of information, you implicitly outlaw strong representations of this information: text, speech, images, emojis, QR codes, sign language, interpretive dance, and so on. And since all information can be represented as a number - including math and computer code itself, it boils down to making numbers illegal. Although banning numbers is as ridiculous as it sounds, it has happened in the past. Illegal numbers and illegal primes are a thing precisely because some people tried to outlaw certain kinds of information. Society and law makers will have to grapple with the fact that Bitcoin wallets and transactions are just information, as is everything else in Bitcoin. Because a Bitcoin transaction is just information, sending sats to someone is propagating that information, or, in other words: sending a message. You don’t even have to send the message to a particular person. Base-layer transactions are broadcast transactions. They are sent to everyone on the network. Keep in mind that any communications channel can be used to send and receive information. The internet is simply the most efficient communications tool we currently have. But there is no reason why you couldn’t use a  satellite connection or ham radio, which people are and have been using, be it out of fun or necessity. The fact that spending sats is sending a message doesn’t change on higher layers. Nodes on the Lightning Network are doing the same thing: they are sending messages back and forth. Nothing more, nothing less. This hides two truths about Bitcoin in plain sight: Messages might be sent through an encrypted communications channel, but the messages of the protocol are and will always be plain text. They have to be. The whole point of Bitcoin is that everything is easily verifiable by everyone. Outlawing Bitcoin implies outlawing messaging. Keep in mind that we are dealing with pure information. Information can be encoded in virtually endless ways: different formats, same meaning. And herein lies the crux: you can not outlaw the meaning of a message. 
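As a concrete aside, here is a hedged sketch of the signing math described above, using the third-party Python ecdsa package purely for illustration; the toy message below is not a real Bitcoin transaction, which adds its own hashing and encoding rules on top.

from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

sk = SigningKey.generate(curve=SECP256k1)  # the private key: a big number
vk = sk.get_verifying_key()                # the public key, derived from it

message = b"[A]--21-->[B]"                 # the toy transfer from the text
signature = sk.sign(message)               # arithmetic on numbers, nothing more

# Anyone holding the public key can check the signature without ever
# learning the private key.
assert vk.verify(signature, message)

The keys, the signature, and the message are all just numbers; whether their meaning registers depends entirely on knowing the protocol.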
If you do not know the protocol, the meaning of the message will elude you. If you do not speak the language, you don’t know what is said or why it is being said. This brings us to the last piece of the Bitcoin protocol puzzle: mining. Most people do not understand what Bitcoin mining is and how it works. They do not speak the language of Bitcoin, failing to understand both what is spoken and why it is spoken in the first place. Bitcoin miners aren’t doing anything special, just like computers aren’t doing anything special. They are crunching numbers. Not too long ago, when you said “computer,” you were referring to a person. It was a job description, not a thing. The most efficient way to run the numbers was to pay a person to sit down and do the math. Over time, we’ve built ever more efficient contraptions to do the math for us. Today, when we say “computer,” we mean something that uses microchips instead of brains to do the computing. But the underlying reality has not changed: computers crunch numbers. They do not do anything special, or devious, or magical. The same is true for ASICs. Bitcoin, like NASA software before it, can run on a human substrate as well. We do not need ASICs to mine bitcoin. We could do it by hand. We could use our brains. It’s slow, and cumbersome, and inefficient. But we could absolutely do it. Just like you could use pigeons instead of computers to run the internet, you could use humans instead of silicon chips to run bitcoin. It would be highly inefficient, yes, but it would work just the same. As Ken Shirriff showed in his 2014 video, SHA-256 is simple enough to be computed with pen and paper. He managed to do one round of SHA-256 in 16 minutes and 45 seconds, which works out to a hash rate of 0.67 hashes per day. I am showing you all this to make it explicit what the basic building blocks of Bitcoin are: numbers, math, and the exchange of messages. This is true for all processes in Bitcoin. It doesn’t matter if you create a private key, derive a public key, generate a Bitcoin address, mine a block, sign a transaction, or open a Lightning channel. All you are doing is coming up with or finding large numbers, manipulating these numbers via mathematical equations, and sending the result of these equations to your peers. That’s it. Communication does not lose constitutional protection as “speech” simply because it is expressed in the language of computer code. Mathematical formulae and musical scores are written in “code,” i.e., symbolic notations not comprehensible to the uninitiated, and yet both are covered by the First Amendment. If someone chose to write a novel entirely in computer object code by using strings of 1’s and 0’s for each letter of each word, the resulting work would be no different for constitutional purposes than if it had been written in English. Once you understand that Bitcoin is information - and that computers and the internet are just the best substrates to transform and transmit this information - the implications of outlawing Bitcoin should become clear. You can put Bitcoin in a book, which means you would have to ban the publication of books. You can speak bitcoin by uttering 12 words, which means you would have to ban speech. You can mine bitcoin with pen and paper, which means you would have to outlaw math, or thinking, or writing. You can store bitcoin in your head, which means, of course, that having certain thoughts is illegal if “holding bitcoin” is illegal. 
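To make the mining point above concrete, here is a toy sketch using only Python’s hashlib; the header string and the difficulty are made up, and real Bitcoin headers and targets are encoded differently, but the work is the same: repeated SHA-256.

import hashlib

header = b"made-up block header: previous hash, merkle root, timestamp"
target_prefix = "0000"  # toy difficulty; real targets are far stricter

nonce = 0
while True:
    candidate = header + nonce.to_bytes(4, "little")
    digest = hashlib.sha256(hashlib.sha256(candidate).digest()).hexdigest()
    if digest.startswith(target_prefix):
        break
    nonce += 1

print(nonce, digest)  # a nonce that makes the double hash start with zeros

Even at this toy difficulty, the loop is nothing but arithmetic that a patient person could, in principle, work through on paper.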
If having 12 words in your head is illegal, something is terribly wrong with the law. If the police storm your building because you are finding or creating a random number in the privacy of your own home, something is terribly wrong with the police. If the peaceful exchange of messages is seen as dangerous or immoral by society, something is terribly wrong with society. If speaking or knowing numbers becomes a criminal act, I don’t want to be a law-abiding citizen in the first place. Bitcoin is pure information. It utilizes the properties of information as well as the transformation of information - computation - to build up a shared construct that we can independently agree upon and verify. It is nothing but math and numbers. Zeros and ones, sent back and forth by voluntary participants who want to send and receive messages in peace. A Bitcoin private key is a large number. When represented as words, this number can be stored in your head. A private key is all that is required to send and receive payments. You can sign and verify transactions with pen and paper. You can mine bitcoin with pen and paper. Bitcoin is just a messaging protocol that does these operations efficiently and automatically. Understanding Bitcoin from first principles will make it obvious that the idea of banning “anonymous crypto wallets” is not feasible. You would have to outlaw the generation of entropy, the act of coming up with random numbers. You would have to surveil everyone at all times, kicking in their door and arresting them once they sit down and start flipping a coin or rolling some dice. You would have to pass legislation that criminalizes thought itself since creating an “anonymous bitcoin wallet” is nothing more than coming up with 12 random words. Dear legislators, I ask you earnestly: Are you prepared to outlaw thought itself? Should math be illegal? Do you genuinely believe that outlawing speech is a good idea? I hope that we can all agree that thought and speech are paramount to a free and prosperous society. And I hope that, as more and more people understand how Bitcoin operates, citizens and legislators alike will realize that Bitcoin is just that: thought and speech. This article is largely based on two chapters of my upcoming book 21 Ways. Want to help? Add a translation!
1
The implicit/recent differences between the meanings of “follow” and “subscribe”
James Cridland, writing on Podnews: Apple Podcasts will no longer use the word “subscribe” in a few weeks. Listeners will be invited to “follow” their favourite podcasts instead. The new wording will be in iOS 14.5, which should be released later this month (and is available in beta). We expect Apple to communicate further with creators, and listeners, when this version of iOS is released. For me it will feel weird for a few months, but in many ways, this change makes sense, as the word “follow” sounds easier, less intimidating, and surely less of an obstacle than the word “subscribe,” which sounds like it requires more engagement and effort from the user. When you subscribe to a newsletter, a service, or a magazine, you may have to enter your email, sign up, log in, and sometimes even have to pay, whereas a podcast subscription is usually one click away, and usually free. Spotify already uses the word “follow” for podcasts, and if I wonder what impact this change of vocabulary will have on RSS readers — do you subscribe to a feed or do you follow a feed? — I have no doubt that the other podcast platforms and podcast players still using the word “subscribe” will eventually adopt “follow” too. If “follow” implies easy and free, and “subscribe” implies more commitment and maybe the involvement of a payment, I wonder if subscriber-only podcasts like Dithering will keep using the word “subscribe” (maybe this is another thing we will see on Apple Podcasts?), and if Twitter’s upcoming feature Super Follow — essentially a subscription service — will end up being called “Twitter Subscribe” to avoid confusion.
89
Plan the Sprint, Not the Project
So much of what we call agile is waterfall but with sprints. One of my favorite principles from Lean is to decide as late as possible. Wait to decide until you can’t wait any longer, because that’s when you have the most information. Apply this to everything: technical architecture, prioritization, requirements, whatever. In project planning, deciding as late as possible means that you plan only the next sprint. That way you can be as confident as possible that it’s the most valuable thing to do next. Don’t decide what’s in scope for the whole project before the project even begins. Shut up and plan the sprint. If you focus on the sprint, then the project works itself out. Make sure every sprint has a sprint goal so you’re always doing the most valuable thing at that moment. And limit your backlog to 1.5 to 2 sprints’ worth of tickets. You’ll change requirements or priority of anything beyond that before you get around to building it anyway. In the timeless words of the Agile Manifesto: Customer collaboration over contract negotiation. Responding to change over following a plan. “But my boss (or client or stakeholder or whoever) wants to know how much it will cost!” Sure, and you can tell them a price and a timeline, but you can’t guarantee a scope of work. Fixed price, variable scope. This is for their own good. What seemed valuable when we started will change. Help them be agile. Help them understand that responding to change brings more value than following a plan. Talk to them about how collaboration brings more value than negotiating a scope of work. Help them learn to decide as late as possible. So don’t plan the entire project before it starts. And don’t create tickets for things you won’t work on for months, if ever. Shut up and plan the sprint. Thanks for reading! Subscribe via email or RSS, follow me on Twitter, or discuss this post on Reddit!
1
From Pickup Artist to Pariah (2016)
Jared Rutledge has been called a sociopath. Strangers have picketed outside his coffee shop, calling for his castration. People he thought were his friends won’t return his texts. There are a few places he still feels safe: weekly lunches with his grandma, his therapist’s office, the meetings of his peer-facilitated men’s support circle. At night, he reads fantasy books and loses himself in a universe with societal rules unlike the ones he broke here in Asheville, North Carolina. The scandal that resulted in Jared’s self-imposed house arrest started in August, when an anonymous bare-bones Wordpress blog entitled “Jared and Jacob Said” appeared online. For a few weeks, no one noticed. Then, on a Friday in September, someone Jared didn’t know posted the link on Facebook. That night, C., a redheaded woman in her late 20s, saw the link to the blog. She glanced at it long enough to understand that it had something to do with Waking Life Espresso, a popular West Asheville coffee shop owned by Jared Rutledge and Jacob Owens. C. and Jared had had a six-month fling in 2012; she’d just gotten out of an abusive relationship, and Jared, with his brown curls and his philosophy major’s curiosity, seemed like the perfect candidate for some strings-free fun. Her experience with him had been such a refreshing example of no-drama casual sex that when several friends asked her about Jared after matching with him on Tinder, she told them to go for it. Her friends went on to sleep with him, too. The next morning, C. scrolled through the blog on her phone, trying to make sense of what she was reading. It seemed that Jared, with Jacob as his wingman and sidekick, had a secret online life as a member of the pickup-artist community. The pickup artist’s most familiar incarnation is Neil Strauss, author of the 2005 best seller The Game: Penetrating the Secret Society of Pickup Artists . But since The Game’s publication a decade ago, it’s evolved into a thriving constellation of blogs and sub-Reddits offering obsessive overanalysis of dates, economics-inflected gender theory, and men’s-rights rants. The sites are known, collectively, as the “manosphere.” One of the central tenets of the manosphere is that there is a “red pill/blue pill” dichotomy permeating the world. The image comes from The Matrix , when Morpheus challenges Neo thusly: “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want to believe. You take the red pill, you stay in wonderland, and I show you how deep the rabbit hole goes.” You can either unquestioningly accept society’s fictions — blue-pill thinking — or grasp the true power that comes from taking the red pill and facing the painful truths that most people deny. In the manosphere, the red-pill truth is that men are victimized by a contemporary culture that is biased toward the female perspective. Reddit’s Red Pill community, which has more than 100,000 subscribers, is devoted to “discussion of sexual strategy in a culture increasingly lacking a positive identity for men.” There’s also a Red Pill Women group, with 12,500 members, which sums up its philosophy as “Find a good man and defer to him.” At their most innocuous, these sources provide men with tips for becoming more self-confident and assertive; at their most toxic, they include casual racism, rants against age-of-consent laws, and blog posts entitled “Why Women Shouldn’t Work.” Rutledge had become enamored with the manosphere after reading The Game in 2013. 
The following year, he started an anonymous blog and Twitter account under the handle Holistic Game (tagline: “Putting the sweet D in the tender V since 2013”). He also recruited Owens, his friend and business partner, to start a podcast. The format was simple: The two men sat on a leather couch in Jared’s living room, drinking whiskey, riffing on their dating experiences, and offering advice to less-skilled men. Jared’s blog posts offered dating tips, braggy descriptions of sexual encounters, angry venting about women who’d flaked on him, and pseudointellectual analysis of gender relations, all aimed at an audience of like-minded men. Even as Jared tried to build a following in the manosphere, he hid Holistic Game from his friends and customers, with the exception of Jacob. Then, last August, someone — it’s still not clear who — created “Jared and Jacob Said,” which meticulously built the case that the guys behind Holistic Game were Jacob and Jared from Waking Life and then proceeded to lay out the men’s most offensive social-media musings. (Most of the comments were Jared’s; Jacob participated in the podcast but didn’t write any of the tweets or blog posts.) It was difficult for C. to reconcile the respectful, bearded philosophy major she’d dated with Tweets like “I hate girls ‘I do everything but fuck on the 1st date’ rules. Hard to hide my disdain. My cock was in your mouth, why not the pussy bitch.” They must have the wrong guy. Then she started reading a post titled “A Breakdown of All My Lays.” “I’m going to analyze my own experience with women in order to shed some light on what women are really like,” Jared had written. What followed was a list of his sexual conquests, evaluated with a numerical score that ranked each woman’s face, body, and personality, as well as a brief description. There was something horrifyingly familiar about number four: “Frisky little redhead, early twenties. Not very hot and talked too much … I bailed on her because I wasn’t that into it. I see her from time to time, and she’s letting herself go a little.” C. screamed so loudly that her boyfriend jumped out of the shower to see what was wrong. Word of Jared and Jacob’s double life online was spreading across town. Women who’d slept with Jared, including C.’s three friends, found themselves picked apart on his list: “damaged goods”; “headed towards cat lady status”; “not very hot.” Sarah Winkler, a coffee-shop employee who had worked for Jared and Jacob for two and a half years, read their analysis of “female behaviors” and assertions that “logic is not a woman’s strong suit” and immediately quit. Asheville is a small, tight-knit community; pretty much everyone I spoke to knew someone on Jared’s list. “I was so upset, because I had vouched for him,” C. said. “And then it’s like, ‘Who were you? Who are you?’” I had assumed that Jared’s online persona was a kind of garden-variety lame-dude sexism, something that was more disappointing than threatening. But the night before I met him in person, I read all the blog posts and tweets together, one after another, and started to feel queasy. (Jacob did not agree to meet with me.) There was a casual aggression mixed with a strong undercurrent of contempt: “There are few things that give me more sadistic pleasure than witnessing the ever-increasing neuroses of a woman hitting the wall,” he tweeted. (“Hitting the wall” is manosphere-speak for aging.) And: “I grow increasingly weary of women’s utter inability to be self-aware and communicate like an adult. 
I’m just going to start yelling at them.” And: “I’d care way less about Mexican immigration if their women didn’t look like sunburnt cane toads. More Spanish, less indigenous.” Jared lives in a tidy West Asheville house he owns. Tallish and good-looking in a generic way, with weary eyes, he had on jeans and the blue-checked button-down that New York had just that week named “the shirt that every man owns.” A painting of his great-great-great-great-grandmother, prim and Puritan in a starched collar, looked down on us as we talked in the living room. “I indulged in cynicism and bitterness,” he told me right away. “And that’s what I’ve been working through for the last month and a half. What are the causes of it inside myself?” He spoke with the careful words of someone who has been recently therapized. “What made me —” he’d start to say and then catch himself. “That’s not the right way to say it, because I chose it. When I talk about causes, I don’t want it to sound like a justification.” And: “I should use I statements.” “All of my life I have looked for certainty and attempted to make sense of the world,” he said. That search for an explanatory framework led him to the manosphere, where his tendency toward judgment was amplified and given direction. “It’s like a rut in your brain. I’ve had this my whole life, in different arenas. Anger and judgment. Road rage or being mad at a customer who annoyed me. The actual thing to fix is why do I feel the need to — mentally or verbally or on Twitter — punish someone with words because they slighted me. The root of it is the need to judge the world because it doesn’t meet my expectation.” Jared grew up in West Asheville. On his blog, he described his father as “reasonably beta”: “He has long struggled with feelings of rejection and worthlessness, and had no idea how to teach me to be successful, cool, secure, or charismatic. The outcome-independence, charm, and confidence I’ve been slowly learning through game was completely lacking in my upbringing.” The family attended a Baptist church until Jared was in eighth grade, when they switched to a Pentecostal church where religious ecstasy was encouraged — Jared spoke in tongues during more than one service — but sex was taboo. In ninth grade, he and some friends discovered how to download porn on school computers. They sent some of their favorites to the principal using an anonymous email address but were ratted out by a classmate. Jared was suspended for a week. “Walking into chapel the next Wednesday was hellaciously shaming,” he wrote in his public apology after the Holistic Game scandal broke. “It felt white hot. To know that everyone in that gymnasium was disappointed and disgusted in me was almost unbearable. But I’d brought it on myself, and there was nothing for it.” In college at the University of North Carolina–Asheville, he majored in philosophy and confidently told atheist friends they were going to hell. He was still a virgin, but he diligently studied sexual how-to videos. After college, he spent two years in Australia, studying music in a Christian creative-arts school, learning about coffee, and finally losing his virginity at age 24. In 2009, after moving back to Asheville, Jared opened Waking Life Espresso, a coffee shop meant to double as an intellectual meeting space; it’s named after Richard Linklater’s trippy philosophical movie. Informal philosophy-discussion groups regularly met in the back room. Jared was pedantic and exacting about his coffee. 
Nearly everyone I interviewed, even those who are fantasizing about terrible things befalling Jared, spoke wistfully of Waking Life’s coffee. Within a few years, Waking Life was selling its popular flash-chilled iced coffee through many local retailers, including Whole Foods, and planning to open a second location. (Jacob became a co-owner in 2014.) In 2012, Jared simultaneously split up from his long-term girlfriend and lost his faith in Christianity after a family tragedy that he prefers not to talk about. Freed of constraints, he was eager to experience all that the world of sex had to offer, but he kept striking out. Then he discovered The Game. Strauss’s book detailed the Pick Up Artist’s (PUA’s) carefully honed techniques for seducing women, including the “neg” (mildly insulting a woman to lower her self-esteem so she’s more receptive to your advances) and “peacocking” (wearing an ostentatious accessory to attract attention and project confidence). The book was a revelation. “It wasn’t that I read it and was like, ‘Oh my gosh, I want to do magic tricks in front of women,’” he said. “It was more like, ‘This is figure-out-able.’ I didn’t have the skill set to achieve the outcomes that I wanted, but I could learn it.” Attempting Game techniques in the real world didn’t always go smoothly, such as the first time he attempted a neg on a date. “This woman mentioned that she was the youngest, and I said, ‘Oh, that makes sense. You are kind of entitled and princessy.’ She got very offended and it totally blew up the date, and that was that.” But as Jared continued to explore The Game and the online world it inspired, he found plenty that was useful. “You learn how to flirt and you learn how to talk to women and you learn how to say things in ways that don’t make them feel uncomfortable anymore. I hit on a tremendous amount of girls where, if I had done it three years ago” — before learning about The Game — “they would have been like, meh. But they weren’t that way when I hit on them,” he said. I spoke with a number of women who were involved with Jared during these years — some as one-night stands, others for as long as six months. None of them wanted an exclusive relationship, and all of them felt okay about how things had gone with Jared before they read his descriptions of them online. “Men [in Asheville] in their 30s have, like, two part-time jobs and four roommates,” one told me. “They don’t grow up.” In this context, Jared stood out. “He had his own business,” another woman told me, “and that was something I liked about him.” Before long, Jared’s sex life was like a part-time job. While some PUAs try to rack up as many one-night stands as possible, Jared was after a series of regular sex partners, what’s known in the Red Pill world as a harem. He hit on customers and friends, suppliers and strangers, women on Tinder and women on OKCupid. He had sex with women in the apartment above the coffee shop and in the garage out back. He created a spreadsheet that he updated with each conquest, color-coded based on how he met them and how the relationship ended. As soon as a new partner walked out the door, he’d rush to the computer to add her to the list, the thrill of quantification merging with the thrill of the chase. In 2012, he slept with three women; in 2013, 17; in 2014, 22. In manosphere terms, he was spinning plates — keeping multiple casual relationships going at once. The most popular PUAs have online empires with e-books, coaching programs, and tens of thousands of newsletter subscribers.
Jared had vague hopes of monetizing his newly acquired skills when he started Holistic Game in 2014. “I didn’t write those blog posts or tweets for women,” he said. “I wrote them for men. I wrote them for other men in this corner of the internet to validate me and make me feel good.” Jared intended Holistic Game to be a positive, thoughtful contribution to the Red Pill universe. Early on, he wrote posts with titles like “Baudrillard’s Hyperreality and the Manosphere” and pointedly countered some of the worst Red Pill tropes about how women are sinister creatures who only want to humiliate men. He told me that the more hateful and racist parts of the manosphere disturbed him; he just appreciated the dating advice: “You eat the meat and spit out the bones, you know?” If Jared had studied his foundational text more closely, he might’ve been able to predict what happened next. By the end of The Game, Strauss has a revelation: The systematic, quantified pursuit of women tends to make men bitter and resentful. (Of course, Strauss apparently didn’t internalize his own revelation either. Earlier this year, he published a new memoir, The Truth: An Uncomfortable Book About Relationships , detailing his reimmersion in sexual conquest and shallow relationships, which also ends with redemption and lessons learned.) And so, even as Jared was getting what he purportedly wanted — plenty of sex with plenty of women — he became increasingly bitter and judgmental. Over time, his anger became directed not at a particular woman who flaked on him but at women as a group: “The hardest thing in game,” he tweeted, “is not hating women for how fucking stupid they can be.” Asheville doesn’t seem like an obvious place to encounter the manosphere’s particular flavor of aggrieved masculinity. The town, nestled at the foothills of the Blue Ridge Mountains, was deemed “America’s new freak capital” by Rolling Stone in 2000, thanks to its high proportion of crystal healers, anarchists, and various inflections of hippie. It’s a place where you can find flax milk in the coffee shop and have your pick of Wiccan covens. At the organic vegetarian teahouse, I heard a man tell a woman, sincerely, “It’s like the yoga of eating.” As the news of Jared and Jacob’s secret life continued to spread on Facebook, the men issued ham-fisted apologies (Jared: “Most of my life I’ve struggled with insecurities around dating”; Jacob: “I love women”) that only seemed to make things worse. They proclaimed that they would donate the coffee shop’s profits for the rest of the year to Our Voice, a local rape crisis center; Our Voice rejected the money and issued a statement saying that the organization “is not in a position of absolving them for their misogyny as it perpetuates a culture of danger to all women and girls.” Some Ashevillians took a boys-will-be-boys attitude, arguing that Jared and Jacob were just talking the way men talk when women aren’t around. But for the most part, the judgment against them was swift. The owner of a local gallery who had been friendly with the two men stopped selling their iced coffee. Another small-business owner called Whole Foods and suggested it do the same. A few dozen people picketed outside Waking Life, holding signs that read I AM A WOMAN, NOT A PLATE and DON'T BUY THEIR COFFEE OR THEIR APOLOGY. Jared had uncomfortable conversations about his personal life with his family members. 
“You know,” his grandmother told him, “we’re women too.” The online discussion got so heated that one local yoga instructor created a special series of poses to enable forgiveness and shared it on Facebook. But, said Jared, “Nobody reached out to us to say, ‘What do you need to heal, to be better men?’ — except Trey.” Trey Crispin is 45, with a graying man-bun and a gentle manner. He’s also the size of a fullback. Back when he was a professional snowboarder, he regularly snapped his boards in half with the force of his moves. Later, he learned how to control his own strength through the practice of t’ai chi. Having worked on his own anger issues for years, he says he is precisely attuned to other people’s aggression and sensitive to when it seems misplaced. Like most other people in Asheville, Trey found out about Jared and Jacob via Facebook. He looked through the blog and the tweets and felt repulsed. He had never met the men or been to their coffee shop, but for some reason the story stuck in his head. I met Trey at his house in downtown Asheville. His aesthetic could be summed up as “enlightened masculinity”: wind chimes on the front porch, the thick smell of nag champa in the air, Indian ragas playing on Pandora. He made us green tea and steel-cut oats with almonds and turmeric. Within ten minutes, he was teaching me chi exercises. A couple days after Jared and Jacob were outed, Trey heard that, as he put it, “an angry mob” of protesters had gathered outside Waking Life. The idea of his community tearing itself apart troubled him. He sat at his wooden dinner table, musing, as the sun set. What would it take for men to use such abusive language? Where was their rage coming from? Was there a way to turn this scandal into an opportunity for growth? “I’ve made mistakes,” he told me. “Big ones. And I was taught what community means here. I have an ecstatic-dance practice in this town. I contra-dance in this town. I play music in this town. This town taught me what community is — how, when you fuck up and you bring it to a group of your contemporaries and listen with an open mind to the other minds that you share your space with, you can work it the fuck out.” That evening, Trey tried to dream up a way forward that would honor the harm that had been done but prioritize healing over vengeance. In Trey’s vision, the coffee shop would stay open, with all profits going to nonprofits that combated violence against women. The Waking Life men would organize community circles — groups of men and women who would get together and talk, vent their frustrations, and, he hoped, learn from one another. He contacted Jacob and Jared with his idea through Facebook, and the two men agreed to meet with him at the coffee shop the next evening. It had been three days since the scandal had broken, and the two men looked feral, hunted. They hadn’t been eating; they smelled bad. Trey told them to sit down and asked them to explain what they’d done and why. Every time they offered up an excuse or rationalization, Trey filled the room with his shouting. “He’s a big guy,” Jared told me. “I’d never been sat down and yelled at like that by a man before.” By the end of the evening, Trey believed that Jared and Jacob were sincerely interested in the hard work of self-examination and earning forgiveness. “That night, I felt like I had hope for the first time in about 72 hours,” Jared said. But when the men floated their plan publicly, they found that the community was not receptive. 
Trey approached a number of local organizations, but none wanted to be part of his gender forums. Protesters continued to congregate outside the shop. By this point, all of Waking Life’s employees had quit; two of them said in a statement on Facebook that “money can not be used to mend broken trust, absolve one of accountability, or assuage the weight of personal guilt.” Trey says he received threats for trying to help Jared and Jacob; he took them seriously enough that he didn’t sleep at home for a few days. On October 5, Jared and Jacob announced that they were closing Waking Life Espresso for good. “If you’re going to say you’re a loving, supportive community and then just kick out everybody that does something fucked up — I think that’s wrong,” Jared told me. “You don’t get to say, ‘We’re loving and supportive and inclusive’ and not put in the work to be that. ” The men found themselves shunned from their other community as well: The manosphere turned on them for their public apologies. “They revealed themselves as fearful little men scrambling to be PC and do damage control,” one poster wrote on the Red Pill sub-Reddit. “The apology was pathetic and he tried to paint himself as a victim. It’s like if your girlfriend caught you masturbating would you scramble to hide the evidence and apologize or would you look her straight in the eyes, without stopping and ask if she’s going to help you finish or go away?” Jared told me that the day before I first met with him, he saw a very attractive woman in the grocery store. He knew exactly how he’d approach, the joke he’d make about the wine label, the first steps in establishing a rapport. “I knew that if I went and talked to her, it would probably be well received from the way she looked at me,” he said. “But I didn’t, because I’m terrified of saying anything to a woman in a flirtatious way in Asheville right now.” Still, having spent weeks in the depths of self-examination, he wondered when he’ll be able to reemerge in town as the new, humbled version of himself — and on what terms. He admitted repeatedly that what he did was wrong: “I used fucking nasty language. I used hurtful and violent language. I shared things I should never have shared about lovers and I objectified women and broke them down to box scores in a way that is objectifying and gross.” But he doesn’t want to apologize for sins he doesn’t think he’s guilty of. Some of his blog posts and tweets discuss sex acts tinged with violence — choking, belt-whipping — but Jared said everything was consensual and mutually pleasurable. “I don’t want to perpetuate rape in any way,” he said. “But I can’t neuter myself along the way. I can’t toss away my sexuality in the same way that Christianity taught me to do.” He is also still trying to figure out which Red Pill principles he can keep. The manosphere helped him gain confidence and release him from feeling ashamed of his desires. He still talks about The Game fondly, dropping the definite article as if it were an old friend: “I’m not going to throw away the good parts of game. I’m not going to throw away the fact that now I know how to flirt, I know how to teach, I know how to have fun.” I asked him if he still wanted to follow the plan he’d written about in his pre-reflection-and-repentance era: fuck around as much as possible until age 38, then marry a 24- or 25-year-old. “Yeah,” he said without hesitation. “Derek Jeter’s doing it.” I must have looked incredulous. “It’s kind of a double standard, right?” he said. 
“Because everyone’s okay with him doing it, nobody has a problem with that.” “Why do you want to marry a 25-year-old?” I asked. I pointed out that 25-year-olds eventually become 35-year-olds, and then 45-year-olds. “Here’s the thing,” he said. “There’s scientific evidence that when a male commits to somebody, in his brain they’re always as attractive — I’ll send you the study.” You can’t tell from the recording, but I’m pretty sure I put my head in my hands. “Why does that provoke such an emotional response in you?” Jared asked. I wasn’t even sure how to begin. All the PUA dictates and manosphere Reddit threads were crowding my head with their stupid acronyms and their reductive explanations of evolutionary biology. Manosphere principles presume that heterosexual relationships are a zero-sum power game where men and women are always operating at cross-purposes. I was starting to wonder if maybe they were right. “I’m open to change,” Jared was saying. “Maybe I’ll be 38 and I’ll marry a beautiful 50-year-old CFO that can just kick my ass and ties me up. I’m open to new information always. I think that’s where I got off the mark a little bit with this. I thought I had the answer — but the answers I thought I had hurt a lot of people and hurt me.” On my last night in Asheville, I met four women at a downtown bar. All of them were on Jared’s List of Lays. Over cocktails and ramen, the women told me about Jared’s sexual habits, his occasional flakiness, his black-and-white worldview. (They also asked me not to use their names.) They seemed most troubled by just how fine he had been to date. “I really liked him,” said W. “And that’s what makes me feel so gullible.” He hadn’t tricked them by cheating or falsely professing love; they’d all hooked up with him knowing the relationship was casual. “I knew he was dating other people. I was, too,” said W. Jared would ask her to parse mixed messages from other women: This girl had bailed on him twice; what should he do? “He wanted to talk about dating strategy all the time,” said L. “One time I asked him something about texting my ex-boyfriend,” agreed K. “He emailed me a 40-page PDF about men and women.” If anything, they said, it was Jared who wasn’t able to take the sex casually. Now they know that when a woman turned him down or canceled a date or otherwise didn’t live up to his expectations, he lashed out online. And in some ways this betrayal was worse than anything he could have dished out had their encounters been full-blown love affairs. “Having my heart broken by someone I never had an emotional investment in — it’s awful,” said C. Several of the women told me they didn’t leave their rooms for days after the List of Lays came out and that their relationships with other men in their lives suffered. C. stopped sleeping with her boyfriend temporarily, feeling self-conscious about what Jared had said about her: “[The blog] says that I’d really let myself go, which brought up little bits left over from having an eating disorder. It’s like, don’t fuck with my life.” And K.’s on-again, off-again boyfriend broke up with her, she thinks because of what was said about her online, and she’s stopped dating entirely. “I’m mad at all men right now,” she told me. It didn’t help that the women now realized that there was a whole teeming internet full of Jareds out there. “I didn’t even know this sphere existed in humanity,” said K. 
After the scandal broke, several of the women started a private Facebook group to collectively process the disorienting experience of seeing their private lives — including one of the women’s first experience with anal sex — put up for public consumption. A week or so after finding each other online, some of the women got together in person. Many had never met before. “You’d think that meeting up with a bunch of people who’ve had sex with the same person you have would be awful,” said C. “But instead it was like, look at all these wonderful ladies.” The scandal had left W. feeling ashamed and embarrassed for being susceptible to Jared’s manipulations. “I started realizing that he said so many of the same things — things I thought were genuine — to so many of us.” Meeting the other women helped her realize she wasn’t a fool: “It was comforting to see that so many good, reasonable people fell for the same thing.” Some of them have continued to meet; these days, they talk about their boyfriends and their jobs just as much as they talk about Jared. “It’s so hard to make friends after college,” L. said. “So in a weird way, it’s been a kind of gift.” The other positive thing to come from the scandal is that Our Voice, Asheville’s rape crisis center, has received donations from around the country and implemented a new program to combat sexism in the service industry. “There’s so much misogyny in the world, particularly in the service industry,” said Sarah Winkler, a former Waking Life barista who is opening a coffee shop of her own this year. “This was clear evidence of that. All women were affected by it, whether your name was on the list or not.” Winkler worked with Jared for years; if they weren’t friends, they were at least friendly. When I asked her about Jared’s attempts to become a better man and be reaccepted by the community, she sighed and looked sad. “There needs to be time for the community to heal, and time for the healing within them … They needed a lot of growth.” The women who dated Jared are less interested in his evolution. “He doesn’t deserve a fresh start,” said W. “I never want to see his face again.”
Image: Edward Hopper (American, 1882-1967), Sunlight in a Cafeteria, 1958, oil on canvas. Courtesy of Yale University Art Gallery, Bequest of Stephen Carlton Clark, B.A. 1903.
2
Pirates: So You Want to Join ‘The Scene’?
Anyone involved in the piracy ecosystem could stake claim to being ‘in the scene’ but for those with a discerning interest in pirate matters, terminology is all important. After decades of existence, The Scene has attained mythical status among pirates. It’s not a site, a place, a person, or a group. ‘The Scene’ is all of these things, combined in a virtual world to which few people ever gain access. In basic terms, The Scene is a collection of both loose and tight-knit individuals and groups, using Internet networks as meeting places and storage vessels, in order to quickly leak as much pirated content as possible. From movies, TV shows and music, to software, eBooks and beyond. Almost anything digital is fair game for piracy at the most elite level. These people – “Sceners” – are as protective of ‘their’ content as they are meticulous of their privacy but that doesn’t stop huge volumes of ‘their’ material leaking out onto the wider Internet. And occasionally – very occasionally – one of their members breaking ranks to tell people about it. TorrentFreak recently made contact with one such individual who indicated a willingness to pull back the veil. However, verifying that ‘Sceners’ are who they say they are is inherently difficult. In part, we tackled this problem by agreeing for a pre-determined character string to be planted inside a Scene release. With a fairly quick turnaround and as promised, the agreed characters appeared in a specific release. That the release had been made was confirmed by the standard accompanying text-based NFO file, which collectively are both widely and publicly available. In respect of the group’s identity, we were asked to say that it has been active since 2018, but nothing more. We can confirm, however, that it already has dozens of releases thus far in 2019. Our contact, who we will call “Source”, also claims to work with groups involved with so-called WEB releases, such as video content obtained and decrypted using sources including Netflix and Prime Video. For security reasons, he wasn’t prepared to prove membership of that niche in the same fashion. However, the information he provided on those activities (to be covered in an upcoming part 2 of this article) is very interesting indeed. But first, an introduction to the basics, for those unfamiliar with how The Scene operates. Basics of ‘The Scene’ – “Source’s” summary (in his own words) Becoming a member of The Scene Despite “Source’s” own group being relatively new, he says his history with The Scene dates back three years. Intrigued at the possibility of becoming a member but with no prior experience, he contacted a Scene group using an email address inside an NFO, offering his coding skills. “I was able to convince the group to slowly adopt me into The Scene by providing them scripts and tools to make their job easier and faster, alongside other programming related tasks. The thing with Scene groups is that they don’t trust outsiders,” he explains. Given that not granting access to the wrong people is fundamental to the security of The Scene, we asked how this “vetting” took place. “Source” explained that it was conducted over a period of time (around four months), with a particular Scene group carrying out its own investigations to ensure he wasn’t lying about himself or his abilities. “The groups who vet new members also often try their best to dox the recruit, to make sure that the user is secure. 
If you’re able to be doxed (based on the info you give, your IP addresses, anything really) you will lose your chances to join. The group won’t actually do anything with your personal info,” he adds, somewhat reassuringly. Once the group was satisfied with his credentials, “Source” gained access to his very first topsite, which he describes as small and tight-knit. Topsites often use IRC (Internet Relay Chat) for communications, so from there it was a matter of being patient while simultaneously attempting to gain the trust of others in the channel. “Most Sceners are very cautious of new users, even after being vetted in, due to the risk of a user still being insecure, an undercover officer or generally unwanted in terms of behavior. Once you’ve been idling in the chats and such for months, you slowly start gaining some basic recognition and trust,” he says. Once he’d gained access via the first topsite, “Source” says he decided to branch out on his own by creating his own Scene group and gathering content to release. From there he communicated with other users on the topsite in an effort to gain access to additional topsites as an affiliate. As mentioned earlier, his own releases via his own group (the name of which we aren’t disclosing here) number in the dozens over the past several months alone. They are listed on publicly available ‘pre-databases’, which archive NFO files and other information related to Scene releases. However, his own group isn’t the only string to the Source bow. Of particular interest is his involvement with so-called WEB releases, i.e. pirate releases of originally protected video content obtained from platforms like Netflix and Prime Video. “Content for WEB releases is obtained by downloading the source content. Whenever you stream a video online, you are downloading chunks of a video file to your computer. Sceners simply save that content and attempt to decrypt it for non-DRM playback later,” Source explains. “Streams from these sites are protected by DRM. The most common, and hard to crack DRM is called Widevine. The way the Scene handles WEB-releases is by using specialized tools coded by The Scene, for The Scene.” This is a particularly sensitive area, not least since Source says he’s acted as a programmer for multiple Scene groups making these releases. He’s understandably cautious, so until next week (when we’ll continue with more detail specifically about WEB content) he leaves an early cautionary note for anyone considering joining The Scene. “You can become Sceners with friends, but not friends with Sceners,” he concludes.
1
People ARE AWESOME 2021-PART 2
2
Interactive Arcade Fire Music Video (2010)
An interactive film by Chris Milk, featuring "We Used To Wait". Built in HTML5. This film does not work on mobile as it requires pop-ups. Please visit the site from a desktop browser.
1
Innovation in the web without sacrificing Accessibility
In the past decades, we have pushed the boundaries of what is possible to create on the web platform. If you think about it, it’s quite astounding what web developers have managed to create using mounds of non-semantic <div> elements which were never intended for that purpose. In our quest to represent anything that we want, we’ve decided to use the element which does not inherently represent anything. By choosing non-semantic HTML, we’ve removed any contextual information which might be useful for assistive technologies. It’s basically like cutting a book into sentences, removing any page numbers or information about the chapters, and placing them in a line on the floor. To read the book, you then have to walk next to each sentence and read it sequentially (guessing where any paragraph breaks might have taken place). And at any given time, it is possible that a door will shut right in front of your face, cutting you off from the rest of the sentences that you haven’t read. In the worst case scenario, four walls will drop down from the ceiling, trapping you in a dark room with five sentences from which you cannot escape, no matter how hard you try. This metaphor is not even that far fetched. By not using semantic HTML, your web application is basically like a soup of words and sentences, without structure and without navigability. Without navigability and without elements which are clearly interactive (e.g. a link or a button), most content will simply not be accessible for users of assistive technologies (like a door shut right in front of their face). And if we are messing around with focus without knowing what we are doing, we can move users to somewhere that they don’t want to be and don’t understand how to get out (like a booby trap that unexpectedly drops from the ceiling). For a classic web application, e.g. a web application which consists primarily of content with a few interactive elements which can be implemented using links (<a>) or forms (<form>), we have absolutely no excuse: we should use semantic HTML elements everywhere. To ensure that our app is accessible, we can test it using different assistive technologies and apply a few tips and tricks to improve the accessibility, but having semantic HTML as the basis will get us most of the way there. I would say that an overwhelming majority of web applications fall in this category. But what about those outliers? Those applications which really stretch the boundaries for what used to be possible in the web? In the current climate of remote work we are always looking for new and great tools which allow us to be productive and work collaboratively. In order to do this, we tried to recreate our physical workplaces in a digital setting. We try to find web apps for digital whiteboarding, or apps that create a virtual event where we can move our avatar around and talk to different people. There is one issue here that we’ve completely overlooked: When recreating our physical workspaces on the web, we forgot that our physical workspaces are also not really accessible for many different people. Whiteboards are not accessible for people who have poor eyesight, or who have motor disabilities. Event venues and parties may be difficult for people who cannot see, hear, or who are easily overstimulated by too much noise or motion. When we transfer these experiences onto the internet, can we then be surprised when the applications that we create are also not accessible? I’m not surprised, but I am saddened. 
In many well-designed tools, I am quite impressed with the different solutions that teams have come up with. We have really pushed the boundaries of what is possible. However, I feel like these solutions have achieved a clean interface design while also sacrificing HTML, the skeleton of the web. We are missing out on an opportunity to create tools for an under-served market (the number of users of assistive technologies is huge). But we are also missing out on the opportunity to work on challenging problems that are interesting and could really truly help people. Just think about it for a minute. It is true that there is no exact semantic HTML element for everything that we would ever want to visualize in our UI. But if we are trying to think outside of the box anyway, why can’t we think outside of the box here as well? If we were to build an app for a virtual event where we can move an avatar around a virtual room and talk with colleagues, why couldn’t we try to use modern technology to automatically generate closed captions for users who cannot hear (or cannot hear well due to their current situation or surroundings)? It might be difficult for some users to move through the virtual room using a mouse, so why couldn’t we make sure that users can move their avatar using their keyboard and additionally provide a navigation element (<nav>) that would allow the users to view a list of other users in the room and navigate directly to them to talk to them? Let’s also consider the example of the digital whiteboard. Traditionally, working on a whiteboard is not very accessible for a lot of people, but if we were creating a digital whiteboard where all of the text is already digital, we would have the opportunity to present that information in a form which would be accessible for screen readers and other assistive technologies. If we were visually grouping different notes on a pinboard inside of our app, might we not actually have an unordered list (<ul>) of text elements that we simply style and position differently using CSS? If we have different pinboards in our app with different names, why don’t we use different sections (<section>) which each have a heading element (<h2> - <h6>) so that they can be accessed from the accessibility tree, even if we do end up positioning them absolutely on an unlimited canvas? I’m not sure how you could best translate information about the visual proximity of information on a digital whiteboard (i.e. what other information is near what I am currently looking at) into a format which would be accessible, but isn’t it an interesting problem to think about? There are a lot of fascinating problems out there. I just wish more of us were working on them.
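To make the whiteboard idea more concrete, here is a minimal markup sketch of what such a pinboard could look like. This is only an illustration under my own assumptions: the board titles, class names, and the --x/--y custom properties are hypothetical, and the actual visual placement on the canvas would be handled entirely in CSS.

```html
<!-- Hypothetical sketch of a digital whiteboard: each pinboard is a named <section>
     with a heading, and its notes are plain list items. The --x/--y custom properties
     are assumptions; CSS would use them to position notes absolutely on the canvas
     without changing the reading order exposed to assistive technologies. -->
<main aria-label="Whiteboard">
  <nav aria-label="Pinboards">
    <ul>
      <li><a href="#went-well">What went well</a></li>
      <li><a href="#to-improve">What to improve</a></li>
    </ul>
  </nav>

  <section id="went-well" class="pinboard" style="--x: 80px; --y: 120px">
    <h2>What went well</h2>
    <ul class="notes">
      <li class="note" style="--x: 16px; --y: 24px">Release shipped on time</li>
      <li class="note" style="--x: 180px; --y: 60px">Pairing sessions helped</li>
    </ul>
  </section>

  <section id="to-improve" class="pinboard" style="--x: 560px; --y: 120px">
    <h2>What to improve</h2>
    <ul class="notes">
      <li class="note" style="--x: 16px; --y: 24px">Too many meetings</li>
    </ul>
  </section>
</main>
```

A screen reader would announce this as labeled regions, each with a heading and a list of notes, while sighted users still see the notes scattered freely across the canvas; the visual layout and the accessible structure no longer have to be the same thing.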
1
Global VPN for Opera for Security Is Out
Global VPN Adblocker Proxy
Global VPN is an amazing VPN product, with an inexpensive payment plan, as well as a free version. We are the best VPN browser extension proxy with your privacy and peace of mind covered. GlobalVPN is the best paid browser extension proxy VPN, proudly made in America. We love our users and are always looking for a way to improve and expand our products. Our product has fast, high network priority countries, and can be used for privacy, unblocking sites, or getting an alias country for sites like Netflix legally. GlobalVPN also includes a useful adblocker, saving you from annoying popups and ads, even on your search engine! Other options featured in GlobalVPN include themes, stats, and an amazing user agent changer! Using our VPN can save you a lot of trouble and provide privacy on the web, for an easy, cheap one-time payment that is completely worth it. VPNs divert traffic from your computer to our server, routing it to unblock sites (•ㅅ•), to free yourself from censorship ( ͡° ͜ʖ ͡°)╭∩╮ and give yourself privacy (☞ ͡° ͜ʖ ͡°)☞ and freedom of mind ᕦ( ͡͡~͜ʖ ͡° )ᕤ from censors and blockers, with servers in 25+ countries from Budapest to Japan, and is perfect for this year's Olympics! Global VPN: Is fast! (▀̿ĺ̯▀̿ ̿)ᕗ Has 25+ locations around the world! ( ͡°( ͡° ͜ʖ( ͡° ͜ʖ ͡°)ʖ ͡°) ͡°) Is made in the USA ( ͡~ ͜ʖ ͡° ) We ♥ our customers. Really ( ͡♥ ͜ʖ ͡♥) How much more could you ask for? ᕕ( ՞ ᗜ ՞ )ᕗ How GlobalVPN is better than our competitors: (⌐▀͡ ̯ʖ▀)=/̵͇̿̿/’̿’̿̿̿ ̿ ̿̿ Hola: We do not use your computer as a proxy server, putting you at legal and technical risk. Zenmate, Touch VPN, Proxy Free VPN DEEPRISM, VeePN, Free Avira Phantom VPN, uVPN, and Whoer VPN: We have many more than the 1-4 servers provided free. PrimeVpn: We have more countries and better servers, as well as a richer interface with more options. NucleusVPN, FreeLy Vpn, Prime VPN, VPN.s, FREE vpn proxy, Free OpenVPN, iNinja, Browsec, VPN Master, Free OpenVPN, Hide My IP and MyBrowser VPN: We have a nicer interface and do not require technical skills. HotVPN, WeVPN, Hoxx Vpn, Thunder VPN, VirtualShield, Private Internet Access, BelkaVPN, and Dot VPN: We don't require a sign-up. Onion VPN: We are faster and do not use the problematic TOR network. FilmFinder, Turbo Button, Private Internet Access Extension, IP Address & Geolocation, NordPass, WebRTC Protect, WebRTC Control, Spot Shopping - Coupons & Deals, SAASPASS Single Sign-On: We are actually a VPN service. And more..... Other similar usage products: uBlock Origin, 360 Internet Protection, Adguard, Mattrhorn Adblocker, Adguard, DuckDuckGo, Avast Online Security, Ghostery, HTTPS Everywhere. Created by a few youths in the USA. ( ͡° ͜ʖ ͡°) Contact arcangel27@tutanota.com to get information or discuss buying this extension. Madness? THIS IS SPARTA!!! This product is not to be used in China as per communist rulers (☭ ͜ʖ ☭) ( ͡° ͜ʖ ͡°)╭∩╮
1
Operation Brace Yourself
Press Release South Florida Man Pleads Guilty To Consecutive Health Care Fraud Conspiracies Thursday, October 7, 2021 Share For Immediate Release U.S. Attorney's Office, Middle District of Florida Tampa, FL – Patsy Truglia (53, Parkland) has pleaded guilty to two counts of conspiracy to commit health care fraud and one count of making a false statement in a matter involving a health care benefit program. He faces a maximum penalty of 15 years in federal prison. A sentencing date has not yet been set. According to the plea agreement and other court documents, beginning in January 2018 and continuing into April 2019, Truglia and other conspirators, including co-defendant Ruth Bianca Fernandez (who worked under Truglia’s supervision), generated medically unnecessary physicians’ orders via their telemarketing operation for certain orthotic devices—i.e., knee braces, back braces, wrist braces, and other braces—referred to as durable medical equipment (“DME”). Through the telemarketing operation, federal health care program beneficiaries’ (i.e., Medicare beneficiaries’) personal and medical information was harvested to create the unnecessary DME brace orders. The brace orders were then forwarded to purported “telemedicine” vendors that, in exchange for a fee, paid illegal bribes to physicians to sign the orders, often without ever contacting the beneficiaries to conduct the required telehealth consultations. The fraudulent, illegal brace orders were then returned to Truglia’s telemarketing operation, which used the orders as support for millions of dollars in false and fraudulent claims that were submitted to the Medicare program. To avoid Medicare scrutiny, Truglia and Fernandez spread the fraudulent claims across five DME storefronts operated under Truglia’s ownership and control, and Fernandez’s day-to-day management. In all, through their five storefronts, Truglia, Fernandez, and other conspirators caused approximately $25 million in fraudulent DME claims to be submitted to Medicare, resulting in approximately $12 million in payments. On April 9, 2019, multiple federal law enforcement agencies participated in a nationwide action referred to as “Operation Brace Yourself.” The Operation targeted ongoing schemes, such as Truglia’s, in which companies were paying illegal bribes to secure signed physicians’ DME brace orders for use as support for fraudulent claims that were submitted to the federal programs. In the Middle District of Florida, the Operation included, among other efforts, the execution of search warrants at several of Truglia’s DME storefronts and a civil action which, among other ramifications, enjoined Truglia and (by extension) his five storefronts from engaging in any further health care fraud conduct. Undeterred by this action, beginning in or around April 2019, and continuing into July 2020, Truglia and other conspirators—some who had worked with Truglia in the earlier conspiracy, as well as some new conspirators—carried out a similar conspiracy using three new DME storefronts and different “telemedicine” vendors. Through this conspiracy, Truglia and his conspirators caused an additional approximately $12 million in fraudulent DME claims to be submitted to Medicare, resulting in approximately $6.3 million in payments. This case was investigated by U.S. 
Department of Health and Human Services – Office of Inspector General, the Federal Bureau of Investigation, the Department of Veterans Affairs – Office of Inspector General, and the Internal Revenue Service Criminal Investigation, Tampa Field Office. The criminal case is being prosecuted by Assistant United States Attorneys Jay G. Trezevant, Tiffany E. Fields, and James A. Muench. The civil action is being handled by Assistant United States Attorneys Carolyn B. Tapie and Sean P. Keefe.
Updated April 19, 2023
Attachments: Second Superseding Information; Plea Agreement
Topics: Financial Fraud; Health Care Fraud
Component: USAO - Florida, Middle
1
A dynamic stability design strategy for lithium metal solid state batteries
Extended Data Fig. 1 SEM images of LPSCl, LGPS and LSPS particles. a–c, SEM images of LPSCl (a), LGPS (b) and LSPS (c) particles. Extended Data Fig. 2 Electrochemical voltage profiles, optical and SEM images of lithium discharged asymmetric batteries with different electrolytes. Asymmetric batteries with Li/G as anode (lithium capacity loading = 3 mAh cm−2), stainless steel (SS) current collector as cathode, and solid electrolytes as separator were assembled. Lithium was enforced to deposit on the surface of solid electrolytes at 0.25 mA cm−2. Different electrochemical behaviours and surface information were observed. a, Short-circuiting happened immediately after lithium was deposited on the surface of pure LPSCl pellet. A metallic colour (silver or grey) was observed from the optical image and a small level of cracking was observed from the SEM image. b, Voltage ramping up quickly after lithium was deposited on the surface of pure LGPS pellet in a few hours. Decomposition (dark black) was observed from the optical image and no crack was observed from the SEM image. c, Voltage ramping up gradually, reaching cut-off voltage after lithium was fully deposited on the surface of the LGPS separated LPSCl pellet. Metallic colour (silver or grey) on large area was observed from the optical image and cracks were observed from the SEM image. Extended Data Fig. 3 XPS characterization on the dark region of LGPS and LPSCl after lithium discharge. a–c, XPS data of the black region on the LGPS surface after lithium discharging at 0.25 mA cm−2 (shown in Supplementary Fig. 2b) with the chemical information of S (a), P (b) and Ge (c). d–f, XPS data of the silver region on the LPSCl surface after lithium discharging (shown in Supplementary Fig. 2c) with the chemical information of S (d), P (e) and Cl (f). The beam size of XPS is 400 μm. Extended Data Fig. 4 Performance difference of LSPS as the single layer and the central layer of multilayer in symmetric battery configurations. a, Symmetric battery with Li9.54Si1.74(P0.9Sb0.1)1.44S11.7Cl0.3 (LSPS) as electrolyte and graphite covered lithium (Li/G) as electrodes. b, Symmetric battery with the combination of Li9.54Si1.74(P0.9Sb0.1)1.44S11.7Cl0.3 (LSPS) and Li5.5PS4.5Cl1.5 (LPSCl) in the configuration of LPSCl–LSPS–LPSCl as electrolyte and graphite covered lithium as electrodes. Extended Data Fig. 5 Cycling performance of symmetric batteries with LGPS or LPSCl as the single solid electrolyte layer. a, High rate (10 mA cm−2) cycling for Li10Ge1P2S12 (LGPS) symmetric battery with Li/G as electrodes. The over potential starts from 0.6 V and quickly ramp up to over 1.5 V in the first few cycles. b, High rate (15 mA cm−2) cycling for LGPS symmetric battery with Li/G as electrodes. The over potential ramping up to over 5 V in the first cycle. c, Symmetric battery with LPSCl as electrolyte and Li/G as electrodes, cycling at 0.25 mA cm−2. Short-circuiting shows up in the first two cycles. Extended Data Fig. 6 Optical image, XRD and XPS of cross-sections of symmetric batteries before and after cycling. a, Optical image of cross-section of Li/G-LPSCl-LGPS-LPSCl-G/Li after 300 h cycling at 0.25 mA cm−2 at room temperature, showing another region without decomposition. b, Post-treated image of a in only black and white. c, Optical image of cross-section of Li/G–LPSCl–LGPS–LPSCl–G/Li after 30 cycles at 20 mA cm−2 at 55 °C. XPS spot size is marked in the black region for a comparison with the size of the black region. 
d, Optical image of the cross-section of the LPSCl–LGPS–LPSCl pellet before cycling. e, Optical image of the cross-section of the LPSCl–LGPS–LPSCl pellet after 300 h cycling at 0.25 mA cm−2 (e1) and 30 cycles at 20 mA cm−2 (e2). The images in d and e are from the same pellet in a and c in a larger view, which were taken by an optical microscope in the glovebox. f, XRD of LGPS before and after cycling at 0.25 mA cm−2 for 300 h, with features of XRD peaks shown in g1–g5. h–j, XPS measurement of S 2p (h), P 2p (i) and Ge 3d (j) on the black region in the cross-section of the sandwich pellet after battery cycling at 0.25 mA cm−2 for 300 h. The beam size of the XPS is 70 μm. Extended Data Fig. 7 Morphology difference of LGPS and LPSCl before and after cycling. SEM images of the solid electrolytes before cycling (first row), and after cycling for 100 h (second row) and 300 h (third row) in the region of LPSCl, LGPS, and their transition areas. The fourth column: LPSCl side with a 10-μm scale bar. The SEM images were from the symmetric battery in the configuration of Li/G–LPSCl–LGPS–LPSCl–G/Li. Extended Data Fig. 8 Half-battery cycling performance using pure LGPS and/or LPSCl as electrolytes. a, The discharging profiles of graphite covered Li paired with LiNi0.8Mn0.1Co0.1O2 (Li/G-NMC811) batteries, using Li5.5PS4.5Cl1.5 (LPSCl, green 10C), Li9.54Si1.74(P0.9Sb0.1)1.44S11.7Cl0. 3 (LSPS, blue 10C), and multilayer LPSCl-LSPS-LPSCl configuration (purple 10C, black 1C, red 0.1C) as the electrolyte. The batteries were first charged at 0.1C and then discharged at various rates at room temperature. b, c, The cycling performance of the same multilayer battery at 5C (b) and 10C (c) in the range of 2.5–4.3 V in the environment without humidity control (55 °C). d, e, The first charge and discharge profiles of Li-LiCoO2 (Li-LCO) batteries with (d) Li5.5PS4.5Cl1.5 (LPSCl) and (e) Li10Ge1P2S12 (LGPS) as the electrolyte. Uncoated LCO and LiNbO3-coated LCO is applied for LPSCl and LGPS, respectively. f, g, The first charge and discharge profiles of graphite covered Li paired with LiNi0.8Mn0.1Co0.1O2 (Li/G-NMC811) batteries with LPSCl as the electrolyte at (f) 0.3C and (g) 0.5C; along with the cycling performance at (h) 0.3C (LCO at 0.1C is also shown) and (i) 0.5C. All batteries in d–i were tested at room temperature. The battery configuration and materials used are summarized in Supplementary Table 2. j, Cycling performance of solid-state battery with multilayer electrolytes at different Li/graphite capacity ratios of 10:1, 5:1 and 2.5:1. k, Cycling performance of solid-state battery with multilayer electrolytes under different operating pressures of 50–75 MPa, 150 MPa and 250 MPa. l, Cycling performance of solid-state battery with thin multilayer: Li/G–LPSCl (100 μm)–LSPS (50 μm)–LPSCl (50 μm)–NMC811. m, High-power voltage profile of the Li/G–LPSCl–LSPS–LPSCl–NMC811 battery at 100C–500C at 55 °C with a cut-off voltage of 2–4.3 V. Red, blue and pink curves are from batteries first charged at 0.5C and then discharged at high C rates, and black curves are at 100C charge and discharge. 1C = 0.43 mA cm−2. Extended Data Table 1 Mechanical and (electro)chemical properties of different electrolytes Extended Data Table 2 Battery configurations and materials ratios applied in this work
1
A Look at Tesla's Financials
Startup Sapience | Jul 13, 2020
Here is the video from this transcript: [YouTube embed]
Tesla recently overtook Toyota to become the world's most valuable carmaker, despite selling 30 times fewer cars than Toyota. Investors are confident in Tesla's ability to dominate the industry, as the firm started delivering consecutive quarters of profitability. But let's see what's behind their financials. Before taking a look, I laid out a visual representation of their business. Tesla divides its business into the automotive division and the energy generation and storage division. I am sure most of you have heard about their Model S, Roadster and Cybertruck. These, along with other models, form part of their automotive division. Under their energy storage division, they provide their Powerwall and Powerpack products to residential and commercial customers who can store energy for later use. Tesla also provides their Solar Roof, a solar energy system that converts sunlight into electrical current, to complement their Powerwall. And I think about Tesla's Supercharger network as a bridge between their automotive and energy storage divisions outside the home setting. The Supercharger network involves a collection of high-speed chargers designed to recharge Tesla vehicles quickly using renewable power. Now, a car is in itself a piece of technology. But Tesla plays on how it uses advanced tech in their cars, such as their powertrain system, autopilot hardware, full self-driving hardware, and neural net. All of this, combined with the lower maintenance and other ownership costs, is designed to make people switch to Tesla vehicles. Now, let's look at their revenues split by division. Automotive is obviously the biggest component of sales at over 20 billion dollars for fiscal year 2019. I have to say that Tesla did a great job at generating worldwide brand awareness through media coverage. I doubt that sales would have increased by that much if it was not for the well-played marketing strategy. The revenues derived from the energy generation segment are mostly a result of the SolarCity acquisition in 2016. Now, I will draw your attention to a sales component named Services and Other. This consists of after-sales services, sales of used vehicles, vehicle insurance and the like. The reason I brought attention to this is because gross margin is negatively impacted by that small segment. It is really just a cost center designed to support the main automotive business. Tesla lumps automotive with services and other to get to a blended gross margin of around 17%. We can observe a decline in overall gross margin. This is due to lower Model S and X margins from the lower selling prices. Now, most of you might wonder why gross margins might seem low. Well, they're not really low. Other car manufacturers like Toyota and Ford report similar gross margin levels, or lower. But one could argue that Tesla manufactures high-end cars and thus should command a higher margin. Thing is, investors are banking on the good old economies of scale to drive margins up in the future. And it's a really simple concept to get behind. Look at it this way. Cost of sales embeds depreciation. So, when a production line is started, it is probably not running at full capacity, producing, let's say, 3,000 cars per week. But the depreciation of the facility is still impacting the profit and loss.
When the production facility reaches a sustainable utilization percentage, let’s say 10,000 cars per week, the higher number of cars sold eventually outpaces the depreciation cost. In my opinion, a gross margin of around 25 to 30% is achievable for Tesla in the long term. Tesla factories have a production capacity of nearly 700,000 cars, but it produced only around half of that for trailing twelve months Q1 2020. Now, let’s turn to research and development. This component has been going down as a percentage of sales, due to operational efficiencies and process automation. Tesla has also been working to reduce its selling, general and admin expenses through cost optimization. But mind you, deriving efficiencies comes at a cost, called restructuring costs. Tesla has booked a fair amount of those in recent years. This includes things such as termination fees from laying off employees and abandonment of research developments. After deducting cost of sales, operating and non operating expenses, net income is negative, and has been so for some years. This is why Tesla has a negative effective tax rate, and can carry over operating losses to reduce taxes in future years. But Tesla might be done with losses. The firm reported consecutive profitable quarters since Q3 2019. The business seems to be scaling in the right direction. Let’s take a look at Tesla’s cash flow situation. Cash from operating activities includes revenues from their vehicles and other sales, offset by cost of sales, operating expenses and interest payments. All of this is also adjusted for working capital. Now, see how operating cash flow turns positive after 2017. That’s a sign that Tesla can generate sufficient cash to operate business activities. Cash from investing activities pertain to capital expenditures in connection with their Gigafactory construction, business acquisitions, as well as new product lines installation. As for cash from financing activities, it includes things such as inflows and outflows from debt and stock. And Tesla has been tapping into debt markets to finance most of its capital expenditures. Another way to look at cash flow is subtracting capital expenditures from operating cash flow to get to free cash flow. This is essentially a measure of sustainability. Negative free cash flow means a company needs outside sources of financing to grow operations. Positive free cash flow means the company has enough cash on hand to even repay creditors and issue dividends to shareholders. Not surprisingly, Tesla has been posting mostly negative values for a long time. It just means that Tesla has been investing more in capital expenditures. But the negative values have been less frequent recently. I have to point out that being free cash flow negative for a long time is not that bad. Do you know why? We can look at other metrics such as EBITDA, Adjusted EBITDA, Operating Margin and so on. But keep in mind that the stock price of a company reflects investors’ expectation. So, the present situation is not really the final destination. That’s why you see stocks dive or skyrocket after they either miss or beat earnings estimates. Tesla’s stock price is up over 7,000% since its IPO. And a lot of people are saying that Tesla’s current financials do not warrant its current valuation. Just keep in mind that investors are expecting a lot of growth from Tesla. If reported metrics show anything to the contrary, the stock price will adjust accordingly. I hope you enjoyed this article and learned something new. 
Don’t hesitate to tell us what you think about Tesla’s financials. Do you believe in the company’s long-term prospects? What should they do to maintain their edge? Let us know.
1
Albert is hiring to democratize financial advice for all (Remote US)
Competitive pay + equity: We take financial wellness seriously by offering competitive salaries, annual bonuses, and meaningful equity for employees.
Comprehensive health plans: We offer comprehensive and robust medical, dental, and vision plans. We also cover premiums for you and your family.
Free daily meals, coffee, snacks, and beer: Employees enjoy free lunch on weekdays, plus snacks, coffee, and happy hours at the office.
Paid time off: Recharge with paid holiday, sick, and vacation days, plus 1 month to work from anywhere, from Thailand to home.
401(k): Plan for your future from Day 1. We offer 401(k) plans with company matching.
Wellness: Get a monthly stipend to improve your mental and physical health. Plus, a free Headspace subscription if you're on our insurance plan.
1
Green Fast Keto
187
Work on interesting problems. Not interesting tech
When I first began learning programming with BASIC and Pascal, our computer class started teaching us Java. As a 10-year-old, it was hard for me to grasp the concept of object-oriented programming. It took me another 10 years to partially grasp OOP, and another 10 years to completely understand it. However, Java left a bad taste in my mouth and I decided I'd never ever use Java for a project. And I hadn't used Java for any of my projects until recently, when I had to work on a project that was written entirely in Java. I had no other choice. I picked up the project because I was genuinely interested in it, and because it was extremely challenging. So, out of interest in the project, I had to learn some Java. I hadn't done anything with Java in years, and I felt like that 10-year-old learning Java for the first time. So I picked up a Udemy course and started learning Java all over again. I refreshed my knowledge, and now I feel very comfortable working in Java, and every day I try to contribute more to the project with complex tasks. The project has 20 people, and I was a contributor as well as playing a project-manager role. Since I had to coordinate around 20 people, I began writing a simple project management tool to scratch my own itch. It has now grown into a decent project management tool with Kanban boards and more. So the point I have to emphasize is that the best way to learn something, to do something new, or to build something interesting is to work on interesting ideas, or ideas that you are genuinely interested in. When you work on what you like, you enjoy what you're doing. Even though I hated Java, the project made me learn Java. If I had not been working on an interesting project, then learning Java would have been a pain. Working on things that interest you will develop skills that you might not get by learning an interesting technology. For example, you might learn K8s or Cassandra, but that will not help you solve problems if you haven't worked on real problems, or at least you might not get the chance to solve a problem in real life using these technologies. It's good to learn a cool technology, but at the end of the day you will become a person who knows a few buzzwords and some cool technology but doesn't have the skill to solve a real problem. I wrote this not because I'm perfect, but because I come across people who know some cool tech but don't know how to solve a real problem. I'd rather hire or work with someone who has strong problem-solving skills than a person who knows some tech but has no problem-solving skills. I never thought this post would reach the front page of HN. I wrote it in a hurry, and reading it again I see I made a lot of mistakes (I was using my freaking iPad to write; it's painfully difficult to type a blog post on an onscreen keyboard).
1
Tesla Video Data Processing Supercomputer ‘Dojo’
Elon Musk tweeted last week that Tesla is recruiting AI and chip talent for the company’s neural network training supercomputer project, “Dojo.” Musk boasted the “beast” will be able to process a “truly vast amount of video data,” and that “the FSD [full self-driving] improvement will come as a quantum leap, because it’s a fundamental architectural rewrite, not an incremental tweak.” Musk added his own use case: “I drive the bleeding edge alpha build in my car personally. Almost at zero interventions between home & work.” The Dojo talent recruitment drive reflects Musk’s determination to achieve full (L5) autonomy for his vehicles. The plan is to grow Tesla’s Autopilot capabilities by upgrading dimensional comprehension to a 4D infrastructure from the current systems, which Musk has pegged at “about 2.5D.” Musk mused on self-driving dimensions and milestones during last month’s Q2 2020 Tesla Earnings Call: “The actual major milestone that is happening right now is really transition of the autonomy systems of cars, like AI, if you will, from thinking about things like 2.5D, things like isolated pictures… and transitioning to 4D, which is videos essentially. That architectural change, which has been underway for some time… really matters for full self-driving. It’s hard to convey how much better a fully 4D system would work. This is fundamental, the car will seem to have a giant improvement. Probably not later than this year, it will be able to do traffic lights, stops, turns, everything.” Why is this upgrade from 2.5D to 4D so critical for the next self-driving breakthrough? UC Berkeley researcher Dr. Fisher Yu spoke with Synced to provide some context. Yu explains that when humans see objects, even with occluded views, we can naturally recognize their semantic categories and predict their underlying 3D structures. On the road, this would entail, for example, a driver understanding the geometry of other vehicles from the partial views provided by their own rear-view mirrors. Yu credits British neuroscientist and physiologist David Marr with initiating one of the most promising theories of vision in the 1970-80s, when he asserted that recognition involved several intermediate representations and steps. Humans can infer the surface layout of objects from 2D images, aka a 2.5D representation that adds specific visuospatial properties, which is then processed into a 3D volumetric representation with depth and volume perception of the object. “In the field of autonomous driving, the process of 2.5D to 3D has already offered a lot of information,” Yu notes, “for example, when given the 2.5D representations and the speed of the vehicle, it is easy to predict when to brake to avoid collision with a car in front — 2.5D representations are sufficient here.” Yu says, however, that 3D representations are required to achieve more robust systems, as 3D information such as the dimensions of other cars can be leveraged for generating safer driving routes and can even be used to infer vehicle functionality, such as where and how doors can be opened. Moving from 2.5D to 3D increases the capabilities of self-driving systems beyond obtaining and processing information about surrounding obstacles and the speed of vehicles, etc. “It also enables the systems to think like humans and predict the intention of a certain object and potential interaction with it,” says Yu. “It is still challenging to predict 3D information accurately only based on video feeds.
If we ask people to estimate the exact distance of a car, it is easy to say it is 20 meters away. But it is hard to imagine someone can confidently say ‘the car is 24.3 meters away’.” “Introducing temporal information can bring out many very specific benefits for developing autonomous driving systems to be safer and more comfortable,” says Yu. “For instance, to predict the potential routes a car can take, it is critical to consider temporal information such as previous routes and speed through referencing past frames of the videos.” Obtaining the required quality of temporal information is possible due to the massive amount of large-scale video data currently being collected by robots and intelligent vehicles. The recent Tesla patent Generating Ground Truth for Machine Learning from Time Series Elements provides further insights into what Musk envisions with the move toward 4D: “As one example, a series of images for a time period, such as 30 seconds, is used to determine the actual path of a vehicle lane line over the time period the vehicle travels. The vehicle lane line is determined by using the most accurate images of the vehicle lane over the time period. Different portions (or locations) of the lane line may be identified from different image data of the time series. As the vehicle travels in a lane alongside a lane line, more accurate data is captured for different portions of the lane line. In some examples, occluded portions of the lane line are revealed as the vehicle travels, for example, along a hidden curve or over a crest of a hill. The most accurate portions of the lane line from each image of the time series may be used to identify a lane line over the entire group of image data. Image data of the lane line in the distance is typically less detailed than image data of the lane line near the vehicle. By capturing a time series of image data as a vehicle travels along a lane, accurate image data and corresponding odometry data for all portions of the corresponding lane line are collected.” In his November 2019 talk PyTorch at Tesla, Tesla Senior AI Director Andrej Karpathy said the goal of the Dojo training supercomputer is to increase performance by orders of magnitude at a lower cost. If the ambitious development of the Dojo supercomputer and the Autopilot system’s architectural change to 4D all go well, it would give Tesla vehicles a huge lead in the race to the self-driving L5 finish line. Elon Musk has hinted that the 4D Autopilot FSD upgrades will be a limited public release in “6 to 10 weeks.” For those interested in joining the team that wants to make history, the main Dojo recruitment engineering locations are Palo Alto, Austin, and Seattle. Tesla says working remotely would be acceptable for “exceptional candidates.” Synced will update readers when additional information becomes available.
Reporter: Fangyu Cai | Editor: Michael Sarazen
1
YouTube Allows Videos to Be Sampled by Default
YouTube video creators are automatically opted in to allowing other creators to sample their content for YouTube Shorts videos. YouTube Shorts is a relatively new form of video similar to TikTok videos. YouTube Shorts allows content creators to sample original content from other YouTube videos, which are automatically opted in. YouTube is automatically opting in creators' videos and making it difficult to opt out of giving away their content to other creators. At this point anyone can take original content and use it for YouTube Shorts. Dark Patterns are a form of user interface manipulation designed to force a user to take a desired action. Dark Patterns are used by big tech companies and even politicians to manipulate users into performing unintended actions, such as giving away their privacy or making larger donations or purchases than intended. For example, user interfaces on Google or Facebook products are said to take an extra five clicks to opt out of giving away one's privacy, whereas giving it all away is the default and takes zero action. YouTube appears to be using Dark Patterns to make it difficult to opt out of content sharing and easy to opt in (by doing nothing). A creator needs to click the "Content" link to get to the creator back end, then click an edit icon on an individual video, then scroll to the bottom of the page and click a "Show All" link, and then scroll again to the bottom of the page to reach the opt-out selection. YouTube appears not to have sent a formal notice that YouTube creators are free to sample videos. YouTube creators are reacting with shock because they do not recall having been asked to opt in to giving away their content. One person tweeted: "This is pretty concerning – there is new permission added to YouTube allowing other creators to sample your videos for Shorts. I naively assumed this would only apply to new videos, but it's auto-opted in all my content." YouTube hides the opt-out menu under a "Show All" link and forces a user to scroll all the way to the bottom of the page to elect to opt out of sharing original content with YouTube Shorts creators. A problem with how YouTube is handling opting out is that there appears to be no way to opt out an entire channel of videos. That means each video must be opted out of automatic sharing one by one, which can be problematic for users with thousands of videos. If it takes one minute to opt out one video, it will take over 16 hours to opt out one thousand videos. "There doesn't seem to be a channel setting to opt out of this, so the only solution currently seems to be to manually untick this on every single video. Is there another solution to this @YouTubeCreators @YouTube?" — Luke Sherran (@lsherran) May 2, 2021. The YouTube Shorts program is not currently monetizable; YouTube Shorts creators can't show ads on them. Also, the short-form nature of Shorts videos prevents a creator from using an entire video, which is probably why the use of someone else's content is called sampling. All of those factors might cause the sampling to be viewed as fair use. That said, there appears to be no benefit from having one's content sampled by another content creator with a huge audience. What do you think? Is it fair?
2
Fifth of countries at risk of ecosystem collapse, analysis finds
One-fifth of the world’s countries are at risk of their ecosystems collapsing because of the destruction of wildlife and their habitats, according to an analysis by the insurance firm Swiss Re. Natural “services” such as food, clean water and air, and flood protection have already been damaged by human activity. More than half of global GDP – $42tn (£32tn) – depends on high-functioning biodiversity, according to the report, but the risk of tipping points is growing. Countries including Australia, Israel and South Africa rank near the top of Swiss Re’s index of risk to biodiversity and ecosystem services, with India, Spain and Belgium also highlighted. Countries with fragile ecosystems and large farming sectors, such as Pakistan and Nigeria, are also flagged up. 40% of world’s plant species at risk of extinction Countries including Brazil and Indonesia had large areas of intact ecosystems but had a strong economic dependence on natural resources, which showed the importance of protecting their wild places, Swiss Re said. “A staggering fifth of countries globally are at risk of their ecosystems collapsing due to a decline in biodiversity and related beneficial services,” said Swiss Re, one of the world’s biggest reinsurers and a linchpin of the global insurance industry. “If the ecosystem service decline goes on [in countries at risk], you would see then scarcities unfolding even more strongly, up to tipping points,” said Oliver Schelske, lead author of the research. Jeffrey Bohn, Swiss Re’s chief research officer, said: “This is the first index to our knowledge that pulls together indicators of biodiversity and ecosystems to cross-compare around the world, and then specifically link back to the economies of those locations.” The index was designed to help insurers assess ecosystem risks when setting premiums for businesses but Bohn said it could have a wider use as it “allows businesses and governments to factor biodiversity and ecosystems into their economic decision-making”. The UN revealed in September that the world’s governments failed to meet a single target to stem biodiversity losses in the last decade, while leading scientists warned in 2019 that humans were in jeopardy from the accelerating decline of the Earth’s natural life-support systems. More than 60 national leaders recently pledged to end the destruction. The Swiss Re index is built on 10 key ecosystem services identified by the world’s scientists and uses scientific data to map the state of these services at a resolution of one square kilometre across the world’s land. The services include provision of clean water and air, food, timber, pollination, fertile soil, erosion control, and coastal protection, as well as a measure of habitat intactness. Those countries with more than 30% of their area found to have fragile ecosystems were deemed to be at risk of those ecosystems collapsing. Just one in seven countries had intact ecosystems covering more than 30% of their country area. Among the G20 leading economies, South Africa and Australia were seen as being most at risk, with China 7th, the US 9th and the UK 16th. 
Alexander Pfaff, a professor of public policy, economics and environment at Duke University in the US, said: “Societies, from local to global, can do much better when we not only acknowledge the importance of contributions from nature – as this index is doing – but also take that into account in our actions, private and public.” Pfaff said it was important to note that the economic impacts of the degradation of nature began well before ecosystem collapse, adding: “Naming a problem may well be half the solution, [but] the other half is taking action.” Swiss Re said developing and developed countries were at risk from biodiversity loss. Water scarcity, for example, could damage manufacturing sectors, properties and supply chains. Bohn said about 75% of global assets were not insured, partly because of insufficient data. He said the index could help quantify risks such as crops losses and flooding.
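To make the 30% rule described above concrete, here is a minimal sketch of how such a threshold classification could be applied. This is not Swiss Re's actual methodology, and the country figures below are invented placeholders, not data from the report.

```python
# Illustrative sketch of the threshold rule described in the article:
# a country counts as "at risk" when more than 30% of its land area
# has fragile ecosystem services. The figures are invented placeholders.
def at_risk(fragile_area_km2: float, total_area_km2: float, threshold: float = 0.30) -> bool:
    return fragile_area_km2 / total_area_km2 > threshold

sample_countries = {
    "Country A": (350_000, 1_000_000),  # 35% fragile -> at risk
    "Country B": (120_000, 1_000_000),  # 12% fragile -> not at risk
}

for name, (fragile, total) in sample_countries.items():
    status = "at risk" if at_risk(fragile, total) else "not at risk"
    print(f"{name}: {fragile / total:.0%} fragile -> {status}")
```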
37
Etna is erupting
3
Darter Pro from System76: A Lightweight Linux Laptop, Full Review
The Darter Pro… the newly revised notebook from System76. It’s aimed at those who carry a laptop with them every day, being sleek, slim, and easy to carry. While I can’t recommend it for AAA gaming, it certainly is fast enough to handle all of your indie or 2D games, and the Tiger Lake CPU boasts fast compile times. Note: review unit provided by System76. According to 9to5Linux, the Darter Pro was revised just a few weeks ago and brings the following upgrades: Let’s dive in and see what we got! The Darter Pro comes in the same packaging as the Serval WS: the white box with “system76” printed on top, the ceramic wrap covering the laptop itself, with the charger tucked into a separate bag, and the same welcome letter and stickers. The only difference, as far as I could tell, was that this box came with a cleaning cloth; the Serval WS didn’t. However, the Serval had a System76 pin; this doesn’t. The laptop comes in a black, half-magnesium alloy, half-plastic finish (the lid and bottom panels consist of the former, while the bezel and the palm rest are made of the latter). I don’t know what the Serval WS’s material is, but it felt slightly smoother. It uses the same keyboard as the Serval, with the same function keys, font, numpad, keyboard backlights, etc. As such it features the same quirks, with a small right Shift key and a lack of LED indicators when Caps or Num Lock is on. The trackpad is a little different: it’s simply a large rectangle with no buttons. Pressing it mimics left-click, lightly pressing with two fingers mimics right-click, and pages can be scrolled by dragging two fingers up or down. Like the Serval’s, the trackpad is located slightly more to the left than center. The base weight is 3.84 pounds (1.74 kg), and it measures 14.06″ × 8.68″ × 0.78″ (35.7 × 22.05 × 1.99 cm). Users who are looking for a sleek, slim laptop will definitely find this appealing. For the sake of comparison, let’s have a look at the T15 laptop from Lenovo. Lenovo claims it has somewhere between 10 and 15 hours of battery life. It weighs almost exactly the same as the Darter Pro, at 3.86 pounds (1.75 kg), and has the same screen size and resolution. A configuration with similar hardware specs is priced at $1,223 and has the following components: The only advantage I see with the Lenovo model is the supposedly longer battery life. Otherwise, at a reduced cost, the Darter Pro’s processor is one generation ahead, with a faster clock speed and faster RAM. Plus, the Darter Pro has Linux pre-installed; you can’t go wrong there. But more on the actual specs of the Darter Pro later. The speakers are set on the bottom of the device, one on the left and the other on the right. Testing the saturation level on full volume while listening to some tunes, I can confirm there’s basically no saturation (no clipping or distortion) and it can be pretty loud, even louder than the built-in speakers on my desktop monitor. On the left side of the laptop, we have: That’s it. Nothing in the front or back, as those sides are far too slim to offer anything else. Opening the laptop generally requires two hands. As far as the hinges are concerned, since they are made of magnesium alloy, I would imagine they will last a while. However, the problem is that when opening the laptop, the hinges sort of “bang” against a solid surface, so if you’re looking to extend the hinges’ life, you’re probably better off opening it on your lap, then setting it on the table.
When turning the laptop on for the first time, it goes through a similar process as the Serval: configure language and keyboard layout, optionally set up encryption, reboot, then configure user account info. So, onto the specs. My particular unit has: When ordering this laptop, you can add as much as 64 GB of RAM and up to 4 TB of storage via two M.2 slots. You can also choose a lower-level CPU (i5-1135G7) if you’re looking to keep costs down. The base price is $1,099, but it can go as high as $2,701 by maxing out the best hardware and getting the most storage/memory. My configuration came to $1,648. As with the Serval WS or any other System76 product, you can finance and pay a monthly rate, add accessories, or increase the warranty period at an additional cost. As a comparison, the Lenovo T15 with almost the same kind of configuration is about the same price ($1,705 at the time of writing), but the Darter Pro has a newer i7 and higher-clocked RAM. Lenovo keeps offering rebates on their configurations so prices are always changing from one month to the next, but overall the Darter Pro is at a competitive price point. Even on the highest configuration of the Darter Pro, however, it’s not possible to choose a different screen resolution. While laptops these days are following a trend of 1440p or higher at the same screen size, users can only get 1080p with this one. Not a terrible thing though, as I still use a 1080p monitor for my desktop. It is not possible to add a dedicated GPU to this particular unit; instead, it uses Intel Iris Xe graphics. As you’ll see later on, though, the Iris graphics are fairly impressive! The screen itself is 15.6″, or 39.624 cm, capable of 1080p resolution. There’s full, crisp color here no matter what angle you’re looking at it; it uses an IPS panel and has a maximum refresh rate of 120 Hz. I can confirm external monitor connectivity via the HDMI port works just fine. Wi-Fi 6 and Bluetooth 5 are included. The webcam is situated at the top above the screen, is 1 MP, and is capable of capturing photos/videos at 720p; you’ll know it’s on when the white LED indicator that sits just a little to the right of it lights up. Concerning whether the webcam LED is hardwired or controlled by software, I have reached out to System76 about this and am currently awaiting a response. I will update this review once I get the answer. The battery is 73 Wh (watt-hours) with 4 cells. Like the Serval unit I reviewed, Pop!_OS 20.10 is pre-installed on my review unit. I ran a quick test of the built-in microphone; it works just fine: Like with any of System76’s products, you can find documentation on their website concerning parts and repairs. You’re free to add your own memory, storage, cooling system, battery, etc. As many laptop manufacturers are opposed to users taking apart their machines — and, as such, make them increasingly complicated to tear down — it’s awesome to see a company that not only permits this but also makes it easy and provides instructions on how to do so! One big advantage the Darter Pro has over the Serval WS is that it uses open-source firmware, created by none other than the team at System76. You can freely peruse the source code over on their GitHub page. Honestly, I have never seen a computer in my life that used open-source firmware until now; I can’t describe in words how far we’ve come in terms of open-source software, and the impact this will have on computers in generations to come.
Though Objective-C/C/C++ is over my head, any C guru out there can take a look at the source code, see what’s going on behind the scenes, and make changes for everyone’s benefit. The Darter Pro isn’t the only device that has this benefit; several other System76 products have this: As to why the Serval WS isn’t taking advantage of this, I honestly don’t know. The number of System76 devices that are controlled by open-source firmware is growing, so I imagine it will only be a matter of time before the workstation gets its chance to shine. Another advantage of open-source firmware is there’s no middleware crap going on while your computer is booting; its only job is to do what you want, rather than checking all of this other stuff first before proceeding. As such, boot times are supposedly decreased. I recorded the time it took to get to the login screen: it was about 10 seconds. After putting in the password for my account, it took 3 seconds to get to the desktop. Not that much faster than most modern laptops or desktops with SSD/NVMe storage, but still fairly impressive. Accessing the BIOS while turning the Darter Pro on is accomplished by pressing the Escape key. It’s a very simple interface, with simple options: So, you can choose what partition to boot from, change the default boot partition, change boot order, and find out more information concerning the firmware configuration. That’s it! Here’s a request to anyone out there who knows how to work with code: if we could get an option to overclock the processor, that’d be great (although, it might not be a good idea, what with how thin this laptop is…). It doesn’t look like we can set up a supervisor password. That would be another good option to have. The beauty of open-source! Upgrading the firmware, whether the device is using open-source firmware or not, can be done through the Firmware panel in GNOME’s Settings menu: If there was an upgrade available for my system, there would be a green “Update” button. Then the firmware file would be downloaded, and the system would restart and flash the new firmware. Firmware can also be upgraded through the command line. For more information on the benefits of using open-source firmware, check out System76’s blog post. Let’s start off with some compiling benchmark times/frames. Compiling the Linux kernel took 177 seconds: That currently ranks around the 32nd percentile on openbenchmarking.org and matches very closely with the AMD Ryzen 3 Pro 4350G. Next, let’s see how well this laptop handles x264 encoding: 48 frames per second. That ranks at the 39th percentile and again nearly matches the performance of the Ryzen 3. Not bad! Heh…you’re probably wondering why I even bothered running gaming benchmarks on a machine with integrated graphics. Intel’s GPU performance is notorious for being sucky. However, I was surprised by the results here. It still does a crap job, but it did better than I expected. Want to play Shadow of the Tomb Raider ? You could, so long as: I’m not lying about this; just take a look at the benchmark (Note: I set the CPU governor to “performance”): Playing at 720p is probably the biggest drawback, as the graphics won’t look as crisp as at 1080p, but I’m surprised this game can even run at all, and at an okay framerate at that! So I dug a little further into this integrated graphics experiment, and I’ve included the results of running F1 2017 at 720p on the lowest preset: Look at that! Over 60 FPS on average! How about 1080p?
These are some of the last ports Feral has worked on, but they’re still relatively new. So, given that you can play a modern AAA title on ultra-low settings at 720p with an average of 30 FPS, I could argue this could be used as an on-the-go gaming device, while having somewhat better battery life than the Serval WS. There are a few compromises here, yes, but I’m still pretty impressed, considering the Darter Pro is using the Iris graphics. At any rate, it’s a great device for handling some of the less-demanding titles out there, including Slay the Spire and Monster Train . The only time I heard fan noise was while gaming or running benchmarks. And it’s not that bad in terms of loudness. The laptop is nearly silent when doing web browsing or pretty much anything else. System76 claims the battery life lasts 9 hours. Average battery life on my end was between six and six-and-a-half hours with Wi-Fi disabled. Bear in mind: Of course, there are a wide variety of factors that come into play here, including screen brightness, Wi-Fi, Bluetooth, CPU governor, USB devices, what’s on the screen, etc. Running the experiment a second time with Wi-Fi enabled, it lasted four-and-a-half hours: I could barely get two hours of playtime while playing Ocean’s Heart , with the same screen brightness, battery plan, and CPU governor. Even with a pixelated game like this, the iGPU is pretty much going to be maxed out the entire time, so keep your gaming sessions short if the laptop isn’t conveniently located next to a charger. Yeah, I don’t know where System76 got the nine hours from, but you’re definitely not going to get as much battery life as that. You’d need to have the lowest brightness, keyboard backlight off, Wi-Fi and Bluetooth disabled, and the “Battery Life” battery plan, but even then, my guess would be about eight hours max, unless the computer is idle the whole time. Since the keyboard is the same as the Serval WS’s, it suffers from the same problems — no Caps Lock or Num Lock LEDs, no saving of the keyboard backlight color on restart, a small right Shift key, etc. Even if there isn’t a physical LED indicator for Caps or Num Lock, Pop!_OS should at least have a software-based solution on the taskbar: an icon that lights up when they’re on and greys out when they’re off. When opening up the laptop, the bottom half is thin enough that the hinges from the top half collide with the surface the laptop is sitting on, and will cause the bottom half to rise slightly. Perhaps that’s an intentional design, but I’m a bit worried the collision can cause a little bit of damage to the hinges. While it’s a huge advantage that the Darter Pro is controlled by open-source firmware, the BIOS leaves a lot to be desired. Specifically, there’s no administrative password that you can set, as far as I can tell, and no way to manage the various hardware peripherals, like their clock speeds. Perhaps the firmware is a bit too early in its development stages to be complaining about this, and for all I know it could already be on System76’s to-do list to add these features, but it’s just something I want to make potential buyers aware of right now. The Darter Pro is a nice laptop that shares the same screen size as the Serval WS, while still being more compact and much thinner. Even without a dedicated GPU, the Darter Pro can do some basic gaming, and even AAA-style 3D games if you can withstand 720p resolution and ultra-low graphics settings.
The biggest advantage I see with a laptop like this is that everything that controls it is open-source; you can browse the source code to see what is actually going on as your computer is booting, reap the benefits of fast boot times, enjoy the simplicity of the BIOS, and add your own features if you understand the code. The battery isn’t as long-lasting as I expected. Six hours is okay, but it definitely doesn’t match up to System76’s 9-hour claim. The BIOS could benefit from more advanced features, but I really do like the simplicity of it so far. The hinges are a bit of a concern to me when opening up the laptop, and the keyboard has no Caps or Num Lock LEDs. If you want something that’s: The Darter Pro is a good choice, albeit with a 1080p display, which may not be enough for some users. And at a few dollars cheaper than the T15 with specs similar to my review unit, it has a lot more benefits, including a faster processor, faster memory, open-source firmware, and a pre-installed Linux distribution.
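As a back-of-the-envelope check on the battery figures quoted in the review (a 73 Wh pack, a 9-hour claim, and roughly 6.25 and 4.5 hours observed), dividing capacity by runtime gives the average power draw each figure would imply. This is a rough estimate, not a measurement.

```python
# Rough estimate only: average power draw implied by the 73 Wh battery
# at the claimed and observed runtimes quoted in the review.
BATTERY_WH = 73.0

runtimes_hours = {
    "System76 claim": 9.0,
    "Observed, Wi-Fi off": 6.25,
    "Observed, Wi-Fi on": 4.5,
}

for label, hours in runtimes_hours.items():
    print(f"{label}: ~{BATTERY_WH / hours:.1f} W average draw")
```

That works out to roughly 8 W for the claim versus 12-16 W observed, which is consistent with the review's conclusion that the 9-hour figure only seems reachable with the screen dimmed, radios off, and the machine mostly idle.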
1
What Is My Phone Number
How to Find Your Phone Number on Android, iOS or any other phone? Use Web, Chrome, Windows, iPhone or Android app to find your number! whatismynumber.io When you’re new to your phone, you may not know the phone number to the device. Heck! You never have to use your own phone number, right? Fortunately, there is a way to view the phone number assigned to your phone using whatismynumber.io. How does it work: We’ll give you a phone number to call, detect your number and show it on this page. Or download the “What Is My Number (whatismynumber.io)” app for Android, iOS to try to read your phone number from the SIM card as well. You'll have an easy app to show you your phone number in one touch. Let’s try making a free call to our test number: Come back to this app after making the call to see the number you were calling from. * Please check with your mobile operator if unanswered calls are billed in your plan.
1
Rejuvenate Pogoplug v4 with Debian 10(buster)
Supratim Samanta | Geek Culture | Jun 21, 2021. Remember this one? Pogoplug v4. A handy device able to drive three hard disks and run a NAS. I bought it about 7 years back and have been using it in my home network for a long time. After the company went down, I flashed Debian and was able to extend the lifespan of this handy device even further.
7
EVGA explains how Amazon's MMO bricked 24 GeForce RTX 3090s
An analysis of dead EVGA GeForce RTX 3090 cards that failed while playing Amazon’s New World game indicates that a rare soldering issue limited to a small batch of cards is responsible, a company spokesman told PCWorld. EVGA said it received about two dozen dead GeForce RTX 3090 cards believed to have failed from playing the New World beta. All of the cards were earlier production run cards manufactured in 2020. Under an X-ray analysis, they appear to have “poor workmanship” on soldering around the card’s MOSFET circuits that powered the impacted cards. Gamers went into a minor panic in early July when reports started coming in that the popular beta version of Amazon’s New World MMO was killing graphics cards. At the time, the bulk of the reports revolved around EVGA’s GeForce RTX 3090 GPUs, although there were unconfirmed reports of cards dying from different manufacturers and different makes of cards as well, including both Radeon and older GeForce cards. EVGA said all of the failures reported to the company were confined to GeForce RTX 3090 cards. The company said it immediately shipped replacement cards to affected gamers rather than wait for the failed cards to be returned. The company declined to say how many GeForce RTX 3090 cards it has sold, but did characterize this small batch as significantly less than 1 percent of the total. Customers told EVGA that the cards died while in the game’s menu system or while loading the game. No customers have reported issues to EVGA since Amazon added frame rate limiters to the game. Both EVGA and Nvidia worked with Amazon Games to obtain the version of the game that bricked the cards to conduct further testing. EVGA said it could not replicate the issue, but it was working to add a method to screen for the particular power profile the beta version created. The New World beta in question did not implement a frame rate limiter in the menu system, which many released games include. That could cause a GPU to suddenly go from an in-game frame rate of 100 fps to roaring along at 800 fps. It’s akin to driving a car at full throttle going uphill and accidentally putting the car into neutral. If you don’t lift your foot, the engine revs into its redline. Early theories that EVGA’s cooling system was to blame are also incorrect, the spokesman said. “In no way shape or form, is it related to the fan controller,” he said. (The EVGA GeForce RTX 3080 Ti FTW3 Ultra features the same iCX cooling technology as the 3090 version.) In that theory, EVGA’s temperature monitoring hardware was being blamed for not being able to keep up with the load of RTX 30-series GPUs, causing gaps that sparked the bricking. EVGA said that’s not the case, however. The fan controller’s micro-controller could indeed look like it wasn’t working under the extreme swings of New World, but EVGA said the issue there is related to noise on the i2c bus, causing popular third-party monitoring tools such as HWInfo and GPU-Z to incorrectly report the noise as the fan controller failing. Its own EVGA Precision X1 tool would screen out the noise and correctly report it, EVGA officials said. The company has since issued a micro-controller update that, when paired with updated versions of the third-party tools, will correctly show the fan controller working properly.
To obtain the latest micro-controller update, EVGA said customers should download the latest version of EVGA Precision X1, which should indicate if a micro-controller update is required for the GPU or not.
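To illustrate the frame-rate-limiter point above, here is a minimal, generic sleep-based frame cap of the kind the article says the New World beta's menu lacked. It is only a sketch of the concept, not code from Amazon, Nvidia, or EVGA.

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds allotted to each frame

def render_frame() -> None:
    pass  # placeholder for the real rendering work

def run(frames: int = 300) -> None:
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()
        elapsed = time.perf_counter() - start
        # If the frame finished early (as a near-empty menu scene will),
        # sleep out the rest of the budget so the GPU is never asked to
        # render hundreds of frames per second.
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)

if __name__ == "__main__":
    run()
```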
2
Uber proposes California-style gig work reforms in Europe
Uber urged EU policymakers to implement reforms that protect drivers and couriers operating through an app, without reclassifying them as employees. The ride-hailing giant floated a model similar to Prop 22 in California, which exempted its drivers from employee status while still entitling them to some benefits. The move comes ahead of a review from the European Commission on Feb. 24, which aims to lay the groundwork for regulation of gig economy platforms. LONDON — Uber called on the European Union to introduce a framework for gig economy workers, floating a model similar to that adopted by California after a contentious fight over the employment status of its drivers. The U.S. ride-hailing giant shared a "white paper" with EU competition chief Margrethe Vestager, jobs commissioner Nicolas Schmit and other officials. It urged policymakers to implement reforms that protect drivers and couriers operating through an app, without reclassifying them as employees. It's a thorny issue for Uber and other companies in the so-called gig economy that favor temporary, flexible working models over full-time employment. Last year, Uber, Lyft and other firms successfully fought against proposals in California which would have given their drivers the status of employees rather than independent contractors. Californian voters approved Proposition 22, a measure that would allow drivers for app-based transportation and delivery companies to be classified as independent contractors while still entitling them to new benefits like minimum earnings and vehicle insurance. "We're calling on policymakers, other platforms and social representatives to move quickly to build a framework for flexible earning opportunities, with industry-wide standards that all platform companies must provide for independent workers," Uber CEO Dara Khosrowshahi said in a blog post Monday. "This could include introducing new laws such as the legislation recently enacted in California," he added. Uber said the EU could alternatively set new principles through a "European model of social dialogue" between platform workers, policymakers and industry representatives. Uber has warned that, by treating its drivers as employees, authorities would give the firm no choice but to increase costs — and that those costs would be passed down to customers. Uber envisions a "third way" for gig economy employment status that offers drivers some protections while still allowing them the flexibility of contract work. In the U.S., the firm suggested benefits funds that can be used by workers for things like health insurance and paid time off. The company's European white paper calls for new rules that create an "industry-wide level playing field" and set a "consistent earnings baseline" for workers across different platforms. The move comes ahead of a review from the European Commission on Feb. 24, which aims to lay the groundwork for regulation of gig economy platforms. It also arrives at a time when food delivery is booming while taxi-hailing services have been severely impacted by coronavirus lockdowns in Europe. Companies like Uber and Deliveroo faced criticism for failing to provide drivers with a safety net during the pandemic. Meanwhile, drivers are making demands of their own on Uber's business practices across Europe.
In the U.K., the Supreme Court is set to deliver a ruling on whether Uber's drivers should be classified as workers entitled to protections like a minimum wage and holiday pay. Elsewhere, Uber drivers in the Netherlands are demanding the company reveals how its algorithms manage their work. It's not the first time Uber has faced scrutiny in Europe. In 2017, the European Court of Justice dealt Uber a major setback by ruling it was a transportation firm rather than a digital company, paving the way for stricter regulation of the firm. And London twice banned the app from operating in the U.K. capital over safety concerns. Uber was issued a temporary London license in September.
3
Naomi Osaka withdraws from French Open amid row over press conferences
Naomi Osaka has announced her withdrawal from Roland Garros one day after she was fined $15,000 by the French Open and warned that she could face expulsion from the tournament following her decision not to speak with the press during the tournament. Osaka, 23, who won her first match against Patricia Maria Tig and was scheduled to face Ana Bogdan in the second round, had released a statement last Wednesday stating her intention to skip her media obligations during Roland Garros because of the effects of her interactions with the press on her mental health. In a statement on Monday announcing her withdrawal from the event, Osaka said she was leaving the tournament so that the focus could return to tennis after days of attention and widespread discussion. “This isn’t a situation I ever imagined or intended when I posted a few days ago,” Osaka wrote on social media. “I think now the best thing for the tournament, the other players and my well-being is that I withdraw so that everyone can get back to focusing on the tennis going on in Paris. “I never wanted to be a distraction and I accept that my timing was not ideal and my message could have been clearer. More importantly I would never trivialise mental health or use the term lightly.” In her original statement, Osaka said she expected to be fined, and Gilles Moretton, the French Tennis Federation (FFT) president, said last Thursday that his organisation would penalise Osaka. However, the organisation offered no official response until the lengthy statement signed by the four grand slam tournaments on Sunday after Osaka’s first-round win. Their heavy-handed approach to Osaka has been criticised as a disproportionate response, forcing Osaka to choose between either risking significant punishment or else resuming the press duties that trigger her anxiety. The attention Osaka has received was only compounded by the announcement of her fine and possible default. On Thursday evening Osaka’s older sister, Mari, attempted to support her sister by providing further context of her struggles in a post on Reddit. She said Osaka had been hurt by frequent questioning about her ability on clay and that she felt she was being “told that she has a bad record on clay.” After losing in the first round of the WTA tournament in Rome, Mari Osaka said her sister was “not OK mentally.” After some criticism, Mari Osaka deleted her post. In her withdrawal statement, the four-time grand slam champion said she has suffered from “long bouts of depression” since the 2018 US Open final. Osaka defeated Serena Williams then to win her first grand slam title in a controversial match that similarly led to significant attention and queries from the media. “Anyone that knows me knows I’m introverted, and anyone that has seen me at the tournaments will notice that I’m often wearing headphones as that helps dull my social anxiety,” Osaka wrote. Osaka concluded her statement by saying she suffers “huge waves of anxiety” before speaking with the media. “So here in Paris I was already feeling vulnerable and anxious so I thought it was better to exercise self‑care and skip the press conferences. I announced it preemptively because I do feel like the rules are quite outdated in parts and I wanted to highlight that,” she wrote.
Osaka has received support from numerous public figures since her announcement. “Stay strong. I admire your vulnerability,” wrote Coco Gauff in response. Billie Jean King added on Twitter: “It’s incredibly brave that Naomi Osaka has revealed her truth about her struggle with depression. Right now, the important thing is that we give her the space and time she needs. We wish her well.” Martina Navratilova tweeted her best wishes, saying: “I am so sad about Naomi Osaka. I truly hope she will be OK. “As athletes we are taught to take care of our body, and perhaps the mental and emotional aspect gets short shrift. This is about more than doing or not doing a press conference. Good luck Naomi - we are all pulling for you! “And kudos to Naomi Osaka for caring so much about the other players. While she tried to make a situation better for herself and others, she inadvertently made it worse. Hope this solution, pulling out, as brutal as it is will allow her to start healing and take care of her SELF.” Two hours after Osaka’s announcement, Moretton conducted a press conference in which he read out a statement in French and English, calling Osaka’s withdrawal “unfortunate” and wishing her “the quickest possible recovery.” He left without fielding any questions from the press.
1
Trump's Election Fraud Washington DC
Observers say the United States is in danger of a systemic crisis, as US President Donald Trump is still trying to overturn his electoral defeat by threatening Georgia's Secretary of State Brad Raffensperger just days before the formal vote count by the Electoral College. Chinese analysts said Monday that while challenges from Trump and Republican politicians may not affect the final outcome, they do undermine the US electoral system and the authority of Western democracies. According to an audio recording released by the Washington Post, Trump flattered, begged and threatened the Republican Raffensperger in an attempt to get 11,780 votes [Biden won by 11,779 votes in Georgia]. Carl Bernstein, a veteran journalist who uncovered the 1972 Watergate scandal that led to the resignation of then-President Richard Nixon, recently called Trump a "subversive president" who is willing to undermine the electoral system and to act illegally, inappropriately and immorally "to provoke a revolt," the Huff Post reported. Regarding the involvement of the armed forces, former US defense secretaries on Sunday jointly announced their opposition to Trump's attempts to influence the election, warning that it would take the United States into dangerous, unlawful and unconstitutional territory, according to CNN. According to the Washington Post, some US legal experts have called Trump's move a "textbook" case of electoral fraud. Chinese experts reached by the Global Times said that the architects of the US system could not have anticipated a presidency like Trump's, which has done great damage to the entire system. However, it cannot be assumed that the American system will simply be restored after Trump leaves the White House, because Trump is a reflection of the American crisis, not just a cause. The way the current system is designed, it is not able to cope with the changing situation; the separation of powers and the parties have failed to fully reflect different interests, and whether the Supreme Court can make a reasonable decision has also become a question. This is how the director of the Institute of International Affairs at Renmin University of China described the plight of the United States. Wang said the benefits of globalization vary from state to state in the United States, from one coast to the other and into the interior, from New England to the Sunbelt. This gap intensifies industry, race and class conflicts, which explains the selection of Biden's cabinet members and the effort to minimize contradictions on the surface. Lu Jiang, a research fellow at the Chinese Academy of Social Sciences in Beijing, told the Global Times on Monday that the "constitutional crisis," which was widely discussed in the US media during the campaign last year, was a harsh term, and there is no sign of a nationwide upheaval yet, but the current situation in the United States suggests that the transition of power will be "invasive." Challenges initiated by Republican senators and lawmakers in the House will extend the process but will not affect the outcome; the biggest loss to the American political system is that all the stains during and after the election deprive other countries of confidence in the consistency and credibility of American policy-making. At least 12 GOP senators, a quarter of Senate Republicans, announced plans Wednesday to challenge the Electoral College results for US President-elect Joe Biden, according to US media outlet Politico.
America's "system benefits" of the past may not always be in line with the new environment; without adjustment, Wang said, it is impossible to move forward, and this could lead to internal divisions. Chinese Foreign Ministry spokeswoman Hua Chunying suggested to the Global Times at a press conference on Monday that a referendum be held on how the Chinese people view the United States. She was responding to a question on Monday about the State Department's latest baseless allegations that the Communist Party of China "violates the rule of international law." In a poll released by the Global Times in December, 65.6 percent of respondents found the Trump administration "offensive," while more than 70 percent believe that China has the advantage over the United States. In the wake of US interference in China's domestic affairs in Taiwan, Xinjiang and Hong Kong, 81.7% of the approximately 2,000 Chinese participants in China's 16 largest cities took a "strongly opposed" stance. Most Chinese view the US presidential election of 2020 as "a low-quality talk show" rather than an opportunity to learn something from the developed West, as more and more people think the US system is unable to create and maintain trust in its elected leaders. Analysts say they have refrained from making dangerous and irrational decisions to the detriment of the country and the world.
1
Automated Control of Problematic Individuals
When Simone Biles stepped aside in the Olympic Games, was she engineering world peace by depriving the competition of a focal point for their jealousy? If you have access to the intellectual work of well-educated or talented people, should you act like Robin Hood and distribute their work to a group of angry, jealous, deprived, uneducated or untalented people so that they can decorate themselves with it and quiet the flames of their rage? When flames of rage occur at all levels of the economic scale and emotional stupidity doesn’t discriminate according to pay grade, such a system might lead to the economically rich stealing from the talented poor — does that serve a social good? What if the poorer people don’t know it is happening.. does that change the calculation? I have a lot of questions and not a lot of answers, but given the acceleration of art creation and distribution enabled by tech today, these questions must be bothering a lot of people, even though I haven’t seen any public discussions about them. My guess is that these disruptive discussions are quarantined on university campuses and not allowed in the public/internet sphere, but I find the resulting shallowness on the internet unnerving. More often than not, the media fans the flames of heated yet shallow discussions, delivering the worst of both worlds without resolving the cognitive dissonance. I often read headlines and think, “Whoever approved that should be fired” and I wonder if the editor was assisted by a software package to make their decision. A few days ago, I jotted down some hate-filled, flame-fanning, clickbait headlines on the front page of The Guardian. “Three Americans Produce Enough Carbon to Kill One Person” climate propaganda that conflated some ridiculous statistics in a meaningless way. “Head of German Cycling Team Sent Home for Racist Slur” the coach had shouted “Catch the camel riders!” as his team approached the Algerian team This is certainly bad style, but would it have been racist if the Algerian coach had shouted, “Catch the sausage eaters!”? I can see that implying that a country is non-technologically advanced is an insult, but when neither camel riding nor sausage eating is particularly shameful, I’m not sure that these statements qualify as racist — an insult to a person’s genome. Rude and unsportsmanlike, sure, but I’d have to know more about the history of the usage of the term ‘camel rider’ in Germany to know if it is racist. If it is something that people said to immigrants on the street to make them feel like outsiders, then it is certainly xenophobic, but given their history, and their primarily Turkish, Polish, and Russian, non-camel-associated immigrant populations I’d be surprised if there has ever been an Arabian immigrant who was insulted by a German angrily calling him a camel rider. I could be wrong. There are a lot of nice Turkish carpet shops, so maybe a German got angry at a price and called a Turkish person a camel rider under his breath without knowing that camels aren’t really associated with that country, but I doubt that. Turkey is a popular vacation destination, so he’d know better. If the coach had yelled, “Catch the Mariachis!” at a group of Mexican bicycle racers, would that have been racist? Being a Mariachi is kind of cool and unique as is camel riding. “Catch the Trump voters!” would certainly be more insulting than camel riding or sausage eating while “Catch those cowboys!” wouldn’t be insulting at all. I think I’m digging myself into a hole here. 
Covert racism and xenophobia are far more insidious in my opinion. For example, there is an article on Quora about French people eating rare songbirds that have been fattened up on millet. Is this for real? Why should I trust that this person didn’t just make this up as a political wedge.. or to pollute an information platform with misinformation or show off how good he is at making stuff up? It does seem that Quora has been flooded with clickbait of late. Either way, the Guardian headlines made me wonder who had decided to use the platform to promote anti-American and anti-German hatred for their British audience and if their appearance had anything to do with the ousting of The Guardian’s chief executive. Her removal was announced in a press release that described a move towards a more ‘reader-centric strategy to grow the distribution of their publications.’ Today ‘reader-centric’ often means tailoring a person’s newsfeed to their individual cookie history and if they think you like or need to see hate-filled clickbait to fulfill their propaganda goals, you’ll get more of that type of material. Rather than being manipulated by such tailoring, I’d like an app that categorizes news articles with tags that help me identify what I want to read, so that when I visit a site like The Guardian and my cursor hovers over an article, a bubble will notify me if I am looking at an article that is
bull shit = lies
dog shit = low quality
chicken shit = cowardly
horse shit = unfair
bat shit = crazy
ape shit = aggressive
bear shit = obvious
jack shit = nothing
cat shit = toxoplasmosis
pig shit = politics
bird shit = female banter
Perhaps I could have a filter that allows me to decide how much mind-warping cat shit, ape shit, and pig shit I want to consume on a given day — just to mess up the media organization’s social engineering models. What do I mean by social engineering models? Let me explain. Once upon a time, people were limited to analyzing financial, social, and physical systems with classical, analytic equations that had a set of initial conditions and boundary conditions. Today, people have access to ‘models’ in which the rules of individual interactions and boundaries change all of the time. Suppose that you have a method of identifying people who have a high likelihood of committing a crime or becoming cult leaders and you have a method of neutralizing the threat they pose to their community by feeding them certain types of news articles or giving them a lot of internet attention — turning them into a type of minor celebrity, at least from their own limited perspective. Should you use this method on a widespread scale? What if the system that creates the illusion of minor-celebrity status also catches non-criminals in its web? Is it ethical? After all, the reason that those people have a high likelihood of committing a crime, becoming a cult leader, or being vulnerable to an attention trap is because they were wronged or ignored in some substantial way. Both Hitler and Manson formed cults because they were angry about their art/music careers not taking off, but should all potential Mansons be neutralized through the illusion of artistic celebrity status or pre-emptive prison? When he was arrested, my 34-year-old cousin was a punk rock performer and not a Charles Manson, but he seems to have been locked up in prison for his commitment to counterculture. I don’t know the details, but the insider, family gossip is that he got a very raw deal.
It just doesn’t seem possible that such a guy could get a 45 year federal sentence for forwarding a single email with a picture of a naked underaged woman he didn’t know, unless you know all of the surrounding details. The short version of the story is that he was selling marijuana and the feds wanted him to snitch on his supplier, so they decided that he was an irredeemable scumbag and entrapped him by having a nude picture of an underage girl sent to him by email. Perhaps the picture was funny and he didn’t know she was underage, perhaps he was high on marijuana at the time, but he stupidly forwarded it to his band-mates and it was on this basis that a swat team was sent to his home. There were also rumors that the girl in the photo was the daughter of an influential man who helped make sure that he got locked up, but with the number of covert operations being used by law enforcement, I wouldn’t believe such a backstory unless I knew this young woman’s family personally. The word on the street was that he got a 45 year sentence because he was too afraid to snitch on the guy who supplied him with marijuana, so what they did was charge him with ten counts of child pornography: one count for opening the email they sent him, one count for not automatically deleting it, and 8 counts for ‘attempt to distribute’, as when he clicked on the forward button to his friends. Why this was an ‘attempt to distribute’ and why the email was blocked is unknown to me. The situation sounds a bit like entrapment and when I was a kid that was considered to be illegal. Maybe they just wanted to make sure that guys like him were locked up when the riots started in 2020. Politically, he was definitely what you’d call ‘antifascist’ and perhaps the fact that he owned a gun and lived near the police station made some people paranoid about him. By the time they showed up at his door with a swat team, he was mentally fragile and he threatened to shoot himself in order to avoid being taken to jail. When I googled his name, the headlines all read: “After a standoff, man was arrested for child pornography from his home in a shipping container near the police station.” They certainly made him sound scary, but I don’t remember my cousin as being a scary or creepy guy. He was certainly not a criminal mastermind or psychopath. He was just looking for an easy way through life that didn’t conflict with his core beliefs. He didn’t like seeing people or animals hurt by *the system* and he and his girlfriend worked in an animal shelter for several years before he started a vegan sandwich shop. His business wasn’t sustainable and his girlfriend left him, so he turned back to his childhood love of punk rock music and fixing motorcycles, lawn mowers, and cars. That’s what I saw when I looked at him. He always cleaned up well for family gatherings. I don’t know him particularly well, but when he was 13, he lived with my family after his family was struck by a terrible tragedy involving antidepressant medication that culminated in his anesthesiologist father’s suicide and the death of two of his childhood friends. Their physician mother set their house on fire and murdered them after attempting to poison her physician husband. His mother was indirectly responsible for the tragedy and she wasn’t able to keep him under control, so she paid a *reform school* to kidnap him from his bed at night and take him away to their facility in the desert. 
He escaped from the reform school, ended up on the street, and my family adopted him for a few years. From my perspective, he was just a kid from a privileged community who liked SHARP (anti-racist) punk rock music and whose mom wanted him to take his ADHD medication so that he could do well in school, but he didn’t like taking the medication because it made him feel numb and unable to feel the music he liked. His younger brother dutifully took his ADHD medication and became an MD/PhD neuroradiologist. Does that mean that kids who don’t take their ADHD medication are destined to live in shipping containers and get caught in federal dragnets? I mean, if he was truly a dangerous pervert, he would’ve worked harder to make himself appealing and not live in a shipping container. That’s hardly a chick magnet, and I know that he wasn’t destitute. His father’s death had left him with a trust fund and my guess is that he was just trying to live in a minimalist fashion and not feed the system that he felt had betrayed too many people. For someone who was exposed to as much trauma as he was, I wouldn’t begrudge him the healing catharsis of punk rock music or marijuana for that matter. I think he was dealing with his broken heart with the best tools available to him.. up until he was entrapped and arrested. It makes me sad that no one was able to save him from this. I find it doubly upsetting to compare this situation to the case of a guy I went to high school with who became a youth pastor. He was seriously predatory, sexting with teen girls he mentored and inviting them over to his nice, suburban home. When he was caught, he was clearly guilty, but he organized a deal that kept him out of prison because his father had a career as an undercover police officer. I don’t know why I buried this sad story at the end of a meandering perseveration about the impact of automation on social engineering. Perhaps I just have the sense that this is connected in some way to what happened to my cousin. My guess is that he was resisting automated media programming and got flagged by automated criminal identification algorithms. He was eaten by the very system he was afraid of for his whole life. The image in the header is from: https://www.peakpx.com/en/hd-wallpaper-desktop-npggj
204
No-code startup Bubble raises $100M
'No-code' startup Bubble raises $100 mln in round led by Insight Partners July 27, 2021 12:30 PM UTC July 27 (Reuters) - Bubble, a New York based startup that allows non-coders to design and create web applications, said on Tuesday it raised $100 million in a funding round led by private equity firm Insight Partners. Technology applications for people with little programming experience -- known as "no-code" or "low-code" in Silicon Valley -- have attracted fresh funding during the pandemic to help overcome a bottleneck created by a surge in e-commerce and the digitization of businesses amid a shortage of coders. Emmanuel Straschnov, Bubble co-founder, said he and his partner started the company in New York in 2012 as the city was seeing a jump in tech startups but entrepreneurs with expertise in different industries with good ideas were struggling to find programmers to help them launch their companies. Bubble's platform allows entrepreneurs to build web applications like Airbnb or Twitter without relying on engineers, he said. Today it has more than 1 million users worldwide and has tripled its revenue in the past year, the company said. Straschnov declined to disclose the company's latest valuation. He said the funding would be used to hire more engineers and launch "boot camps" to teach students and others how to use Bubble. Reporting By Jane Lanhee Lee; editing by Richard Pullin
3
Microsoft criticizes Apple’s new App Store rules for streaming game services
Following the complaints of several developers over the last few months, Apple today announced some changes to the App Store Review Guidelines regarding streaming game platforms. However, it doesn’t seem that other companies approve of these changes; at least, that’s what Microsoft says. Prior to today’s App Store Review Guidelines changes, Apple rejected any streaming game app on the grounds that the company must review and approve each game individually. The new guidelines allow game streaming apps to be released on the App Store, but the rules are the same as before. Streaming games are permitted so long as they adhere to all guidelines — for example, each game update must be submitted for review, developers must provide appropriate metadata for search, games must use in-app purchase to unlock features or functionality, etc. Of course, there is always the open Internet and web browser apps to reach all users outside of the App Store. In response to the new App Store Review Guidelines, Microsoft told The Verge this maintains “a bad experience for customers” as Apple is still trying to enforce strict rules for this category of apps, which makes it impractical to launch them in the App Store. The company was testing its xCloud gaming platform on iOS, but it was discontinued last month for not complying with App Store policies. Apple wants each streaming game to be released as a standalone app rather than a single app that works as an alternative to the App Store. In other words, if Microsoft wants xCloud on iOS, it will have to release all 100+ games on the App Store as individual apps and each one will have to go through Apple’s review process. Streaming games must also be adapted to offer any additional item purchases through Apple’s in-app purchases system. According to Microsoft, the main purpose of xCloud is to make the gaming experience as easy and intuitive as any movie or music streaming service, and Apple’s rules would prevent just that: Gamers want to jump directly into a game from their curated catalog within one app just like they do with movies or songs, and not be forced to download over 100 apps to play individual games from the cloud. We’re committed to putting gamers at the center of everything we do, and providing a great experience is core to that mission. Microsoft will officially launch the xCloud platform on Android devices next week, but the company hasn’t mentioned if it has plans to launch xCloud games on the iOS App Store in line with the App Store guidelines — which seems very unlikely. You can find the full updated App Store Review Guidelines on Apple’s developer website.
3
The Inspiring Feats of Charles Steinmetz, the “Engineer’s Engineer”
January 10, 2022 | Biljana Ognenova Charles Proteus Steinmetz was a mathematical genius, a problem solver, and an “engineer's engineer.” Steinmetz had the rare ability to both teach theory and practice electrical engineering with equal aplomb. Having lived in the golden era of electrical engineering alongside names like Edison, Tesla, Einstein, Thomson, and Westinghouse, Charles Steinmetz was somewhat of a hidden gem among electricity giants—but only for those who haven't investigated the life and the work of this remarkable scientist. Charles Proteus Steinmetz (born Karl August Rudolf Steinmetz on April 9, 1865, in Breslau, Prussia—today's Wroclaw in Poland) had congenital kyphosis—a forward curvature of the spine. Because of this condition, he only stood four feet tall, a trait that made a significant mark on his personal life. Having proven himself an outstanding student of mathematics, chemistry, economics, and medicine at the University of Breslau, the young Charles Steinmetz took his chance as a U.S. immigrant in 1889, where he landed his first job in electrical engineering at the firm of Rudolf Eickemeyer in Yonkers, New York. As he became increasingly interested in practical engineering solutions, Steinmetz created his first small research lab. It was there that Steinmetz used a mathematical equation to identify a phenomenon in power losses—known as the Law of Hysteresis or Steinmetz’s Law—which led to breakthroughs in both AC and DC systems. From that point forward, he started his long string of professional successes that threw him into the heart of the fast-developing electricity industry at the beginning of the 20th century. In the period from 1889 all the way to his death in 1923, Steinmetz remained employed in industry while also taking on prominent social and university roles. Nonetheless, electrical engineering remained his primary passion. Before he finally settled in Schenectady in 1894, he published a paper on magnetic hysteresis, establishing his reputation as a leading scientist at the age of 27. Steinmetz was immediately thrown into the spotlight when both the AIEE (American Institute of Electrical Engineers) and GE (General Electric) showed interest in his work. Until 1893, Steinmetz lived in Lynn, Massachusetts, where he worked with Elihu Thomson from GE. According to the Edison Center, GE initially offered to buy Eickemeyer's business to get Steinmetz's astuteness as a package deal with Eickemeyer's transformer patents. Steinmetz attracted much attention because he was the first to mathematically explain hysteresis loss. In Schenectady, New York, where GE built a major plant, Charles Steinmetz established the first GE research lab, in which most of his groundbreaking discoveries took place. Apart from painstakingly working on AC power systems research, Steinmetz became an academic at Union College, teaching electrical engineering and electrophysics. Later in life, he was president of the Schenectady city council, president of the city's board of education, and AIEE president from 1901 to 1902. Steinmetz was a hardworking, exuberant personality who loved electric cars; he even drove his Detroit Electric car from the back seat. Steinmetz once solved a longstanding problem for GE engineers in two days when he made a chalk mark on a poorly-performing generator to indicate the place that needed repairs. When the famous manufacturer asked for an itemized bill, the bill included two items: $1 for making the mark and $9,999 for knowing where to put the mark.
The Law of Hysteresis explains power losses due to magnetism in electric circuits turning into heat. Before Steinmetz calculated the power loss that occurs due to magnetism, engineers had to build devices to understand the losses. Now, it was possible to know the numbers in advance. By creating a mathematical method to calculate losses in AC circuits, Steinmetz helped electrical engineers efficiently work with alternating current systems and accelerated the adoption of widespread commercial AC devices. The famous Steinmetz equation has been upgraded and improved over the years, but the original coefficients for magnetic materials remain the same. Steinmetz also established the theory of electrical transients, which he formulated while studying lightning bolts. His studies of these traveling waves eventually led to the invention of protective devices for high-transmission power lines. Among the other notable products of Charles Steinmetz are the first version of metal-halide lamps and a high-powered generator that could generate power of more than 1,000,000 hp for 1/100,000 of a second. At the GE research lab, Steinmetz also helped create W. Coolidge's X-ray and Albert Hull's vacuum tube. Steinmetz identified first as a researcher, secondly as an educator, and finally—a businessman. Not many patents stand behind his name, but that is only because he didn't take much interest in the commercial side of his work. Steinmetz was driven by results. He was less interested in accolades than he was in behind-the-scenes research, accounting for his low profile in mainstream publications. Had it not been for this engineer's mathematical genius, research in ferromagnetic and ferroelectric materials for spintronics may not be where it is today. His research has made a long-lasting impact on AC and DC systems and power design at large. All images used courtesy of the Edison Tech Center.
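As a point of reference for the hysteresis law described above: in its commonly cited textbook form (not a formula quoted from this article), Steinmetz's law gives the hysteresis power loss per unit volume as

\(P_h = \eta \, f \, B_{max}^{1.6}\)

where \(\eta\) is a material-dependent coefficient, \(f\) is the frequency, and \(B_{max}\) is the peak magnetic flux density; 1.6 is Steinmetz's original empirical exponent. Later generalizations fit both exponents, \(P_v = k \, f^{\alpha} B_{max}^{\beta}\), the so-called generalized Steinmetz equation.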
4
Relaxo – A transactional document database built on top of Git
ioquatix/relaxo
1
Best Hidden Spy App for Android – Try SPY24 Android Tracker
SPY24 is the world’s most trusted cell phone spy software to monitor any smartphone and tablet remotely.
3
Mozilla VPN can save you money this holiday season
It helps you shop on international sites. Retailers sometimes sell items to local shoppers that they don’t sell globally. Use Mozilla VPN to set your location to another country and see what might be for sale. It prevents surcharging based on where you live. Retailers sometimes adjust pricing for different regions. Use Mozilla VPN to set your location to one of dozens of other cities. It protects your credit card info on public wifi. Shopping on the go? Mozilla VPN encrypts your network activity, making it safer to conduct transactions out and about.
2
Google found to have violated Sonos patents, blocking import of Google devices
In January 2020, Sonos filed two lawsuits against Google, claiming that the latter stole its multiroom speaker technology and infringed on 100 patents. In September, Sonos then sued Google alleging that the company's entire line of Chromecast and Nest products violated five of Sonos’ wireless audio patents. A judge (preliminarily) ruled in favor of Sonos. Now it's gone from bad to worse for Google, as the preliminary findings have been finalized by the U.S. International Trade Commission. As a result, Google is not allowed to import any products that violate patents owned by Sonos, which Sonos argues includes Google Pixel phones and computers, Chromecasts, and Google Home/Nest speakers. These products produced by Google are often made outside of the United States and imported, hence why this is a big deal for Google. In the ruling (via The New York Times ), Google was also served a cease & desist in order to stop violating Sonos' patents. It has been theorized that as a result of the lawsuit, Google had removed Cast volume controls in Android 12, though it was recently added back with the January 2022 security patch. Sonos has previously said that it had proposed a licensing deal to Google for patents the company was making use of, but that neither company was able to reach an agreement. Sonos said that it had shared details of its proprietary technology with Google in 2013 when both companies weren't competitors, though Google later moved into the audio space with the release of devices like the Google Home. There are still two more lawsuits pending against Google filed by Sonos, meaning that it's unlikely this is the last we've heard of this spat. The ruling will now go to U.S. President Joe Biden, who can potentially veto it within the next 60 days before it comes into effect. The patents said to be infringed are the following: Sonos gave the following statement to Bloomberg : While Google may sacrifice consumer experience in an attempt to circumvent this importation ban, its products will still infringe many dozens of Sonos patents, its wrongdoing will persist, and the damages owed Sonos will continue to accrue. Alternatively, Google can -- as other companies have already done -- pay a fair royalty for the technologies it has misappropriated. Google gave the following statement to Bloomberg: While we disagree with today’s decision, we appreciate that the International Trade Commission has approved our modified designs and we do not expect any impact to our ability to import or sell our products. We will seek further review and continue to defend ourselves against Sonos’s frivolous claims about our partnership and intellectual property.
1
Learn Integration Testing with React Hook Form
Max Rozen (@RozenMD) Do you sometimes worry that your tests don't make sense? Struggling to get what people mean by "test from the user's perspective" and the classic piece of advice "test functionality, not implementation details"? You're not alone! I felt the same thing when I started using React Testing Library. I used to think of testing as just checks you had to do on individual components, directly asserting against props and state, and would struggle to think about how to test the integration of several components together. What does "integration test" even mean? It helps to think of integration testing like a bigger unit test, except the unit you're testing is the combination of several smaller components. More concretely, instead of just testing a Button component, or a TextField component in isolation, we're going to test that they work when placed together into a form. Let's get started! We're going to be testing a form almost every public web app you're going to build has: a Login form. It's probably one of the most important parts of your app (in terms of business value), so let's be confident it actually works! Setup We're going to be using create-react-app, because it comes bundled with @testing-library/react. I'm also using react-hook-form to build our form, because it's the fastest way I know to build a form in React apps. Steps Clone the repo Run: yarn start You should see something like this: At this point, if you ran yarn test, you would see the following: PASS src/pages/Login.test.js ✓ integration test (177ms) Test Suites: 1 passed, 1 total Tests: 1 passed, 1 total Snapshots: 0 total Time: 4.134s Ran all test suites. Watch Usage: Press w to show more. So how do we get here? First off, here's the code from our integration test: import React from 'react' ; import { render , fireEvent , screen } from '@testing-library/react' ; import user from '@testing-library/user-event' ; import Login from './Login' ; test ( 'integration test' , async ( ) => { const USER = 'some-username' ; const PASS = 'some-pass' ; render ( < Login /> ) ; const userInput = screen . getByLabelText ( / / i ) ; user . type ( userInput , USER ) ; const passwordInput = screen . getByLabelText ( / / i ) ; user . type ( passwordInput , PASS ) ; const submitButton = screen . getByText ( / / i ) ; fireEvent . click ( submitButton ) ; expect ( await screen . findByText ( / / i ) ) . toBeInTheDocument ( ) ; expect ( await screen . findByText ( / / i ) ) . toBeInTheDocument ( ) ; } ) ; It might look like we're doing a few complicated things above, but essentially, we've taken the steps a user takes when logging in, and turned it into a test: Load the Login screen Click on the username input, and type the username Click on the password input, and type the password Click the Submit button Wait around for some sign that the Login worked Some testing-library specific things worth calling out: We import '@testing-library/user-event' to let us to type into our inputs We import fireEvent from the '@testing-library/react' library to let us to click on our Button component We've marked the test async to enable us to use findByText() We use findByTest() after performing an action that may be asynchronous - like filling out a form. 
(findByText returns a Promise, letting us await until it finds the text it's looking for before continuing) We've built a test that can type into our TextField components, click on our Button component, and trigger the Form component's onSubmit function, without ever referring to implementation details! If you're confused about findByText vs getByText, don't worry - that's normal. In general though, findBy functions are for use after async actions (like clicking the Submit button), and getBy functions are for general use. React Testing Library also has a cheatsheet with tips to help you decide which one to use. Conclusion You've just started to understand integration testing, but you best believe there's a lot more to it than this article! If you want a more advanced perspective of integration testing your forms, I highly recommend reading the testing section of React Hook Form's Advanced Usage guide. (Shameless plug for the useEffect book I wrote below) A few years ago when I worked at Atlassian, a useEffect bug I wrote took down part of Jira for roughly one hour. Knowing thousands of customers can't work because of a bug you wrote is a terrible feeling. To save others from making the same mistakes, I wrote a single resource that answers all of your questions about useEffect, after teaching it here on my blog for the last couple of years. It's packed with examples to get you confident writing and refactoring your useEffect code. In a single afternoon, you'll learn how to fetch data with useEffect, how to use the dependency array, even how to prevent infinite re-renders with useCallback. Master useEffect, in a single afternoon.
2
Show HN: A little multithreaded ray tracer written in a few lines of TypeScript
MathisBullinger/ray
1
Serbia's Robot Tanklet Can Ride Helicopters into Battle
Sep 21, 2020, 11:00am EDT. Miloš robot at its Partner 2017 debut. Srđan Popović / (CC BY-SA 4.0). Rumbling out of helicopters and across grassy fields, the gun-toting robots feel anachronistic. They look like an early dieselpunk fantasy, a rickety background bit of detail in a cartoon about an alternate World War I. In actual reality, Serbia’s Miloš uncrewed ground vehicle is one of the likelier faces of modern war robots. Spotted in concert with infantry and other vehicles in a recent military exercise, the Miloš is a brutally simple design. With a tracked body and a turret, it carries a machine gun, grenade launcher and room for attached rocket weapons. The turret contains cameras, both infrared and electro-optical. It is, like most modern robots, remotely controlled, a platform that is piloted by a human secure over 1.5 miles away in a control vehicle. In scale and effect, the Miloš brings to mind the Renault FT, a small French tank from the Great War. Of all the armored vehicles thrown into the muck of the Western Front, the Renault FT is the one that scans most immediately as a tank, because it set the template for all the most successful designs that would follow. Rather than the land battleships that bristled with cannons and guns, the FT had just a two-person crew, and mounted its one weapon in a turret on top. The key innovation with remotely driven robot weapons, for now, is not so much the imitation of a classic form — it is that they take a classic, useful body, and then remove the human beings inside it from danger. The makers of Miloš boast that the vehicle can be controlled at distances of up to 6 miles away, if the signal is relayed through drones. The machine can move for up to 2.5 hours, or be on in a more passive observe-and-shoot mode for 8 hours. Miloš weighs 1500 lbs, is 5.5 feet long, 2.5 feet wide, and just over 3 feet tall. It is also all-electric, which makes it quieter than one might expect for a tracked and turreted machine. Like other gun turret robots, the machine is less a full-scale replacement for infantry, and more another kind of weapon platform that can fight alongside humans on foot. With the ability to be transported into battle inside a helicopter, it is hard to imagine the terrestrial battlefield in the next decades that will not see fighting accompanied by robots. Remote-controlled robots for now, at least. Watch Miloš in action below:
1
September 20th in Stamps Chulalongkorn, Jacob Grimm, Simon Wiesenthal
Here are some events that happened on September 20th. It could be an event or a person that died or was born on that day 1853 Born: Chulalongkorn, Siamese king (d. 1910) Chulalongkorn, also known as King Rama V, reigning title Phra Chula Chom Klao Chao Yu Hua  (20 September 1853 – 23 October 1910), was the fifth monarch of Siam under the House of Chakri. He was known to the Siamese of his time as Phra Phuttha Chao Luang (พระพุทธเจ้าหลวง, the Royal Buddha). His reign was characterized by the modernisation of Siam, governmental and social reforms, and territorial concessions to the British and French. As Siam was threatened by Western expansionism, Chulalongkorn, through his policies and acts, managed to save Siam from colonization. All his reforms were dedicated to ensuring Siam's survival in the face of Western colonialism, so that Chulalongkorn earned the epithet Phra Piya Maharat (พระปิยมหาราช, the Great Beloved King). Stamps from Thailand/Siam depicting Chulalongkorn 1863 Died: Jacob Grimm, German philologist and mythologist (b. 1785) Jacob Ludwig Karl Grimm (4 January 1785 – 20 September 1863), also known as Ludwig Karl, was a German philologist, jurist, and mythologist. He is known as the discoverer of Grimm's law of linguistics, the co-author of the monumental Deutsches Wörterbuch, the author of Deutsche Mythologie, and the editor of Grimm's Fairy Tales. He was the elder of the Brothers Grimm. A collection of fairy tales was first published in 1812 by the Grimm brothers, known in English as Grimms' Fairy Tales. From 1837–1841, the Grimm brothers joined five of their colleague professors at the University of Göttingen to form a group known as the Göttinger Sieben (The Göttingen Seven). They protested against Ernest Augustus, King of Hanover, whom they accused of violating the constitution. All seven were fired by the king. Stamps from Germany, East Germany and Berlin featuring the Grimm brothers or their fairy tales 2005 Died: Simon Wiesenthal, Austrian human rights activist, Holocaust survivor (b. 1908) Simon Wiesenthal (31 December 1908 – 20 September 2005) was a Jewish Austrian Holocaust survivor, Nazi hunter, and writer. He studied architecture and was living in Lwów at the outbreak of World War II. He survived the Janowska concentration camp (late 1941 to September 1944), the Kraków-Płaszów concentration camp (September to October 1944), the Gross-Rosen concentration camp, a death march to Chemnitz, Buchenwald, and the Mauthausen-Gusen concentration camp (February to 5 May 1945). After the war, Wiesenthal dedicated his life to tracking down and gathering information on fugitive Nazi war criminals so that they could be brought to trial. In 1947, he co-founded the Jewish Historical Documentation Centre in Linz, Austria, where he and others gathered information for future war crime trials and aided refugees in their search for lost relatives. He opened the Documentation Centre of the Association of Jewish Victims of the Nazi Regime in Vienna in 1961 and continued to try to locate missing Nazi war criminals. He played a small role in locating Adolf Eichmann, who was captured in Buenos Aires in 1960, and worked closely with the Austrian justice ministry to prepare a dossier on Franz Stangl, who was sentenced to life imprisonment in 1971. In the 1970s and 1980s, Wiesenthal was involved in two high-profile events involving Austrian politicians. 
Shortly after Bruno Kreisky was inaugurated as Austrian chancellor in April 1970, Wiesenthal pointed out to the press that four of his new cabinet appointees had been members of the Nazi Party. Kreisky, angry, called Wiesenthal a "Jewish fascist", likened his organisation to the Mafia, and accused him of collaborating with the Nazis. Wiesenthal successfully sued for libel, the suit ending in 1989. In 1986, Wiesenthal was involved in the case of Kurt Waldheim, whose service in the Wehrmacht and probable knowledge of the Holocaust were revealed in the lead-up to the 1986 Austrian presidential elections. Wiesenthal, embarrassed that he had previously cleared Waldheim of any wrongdoing, suffered much negative publicity as a result of this event. With a reputation as a storyteller, Wiesenthal was the author of several memoirs containing tales that are only loosely based on actual events. In particular, he exaggerated his role in the capture of Eichmann in 1960. Wiesenthal died in his sleep at age 96 in Vienna on 20 September 2005 and was buried in the city of Herzliya in Israel. The Simon Wiesenthal Center, located in Los Angeles, is named in his honor. Austrian and Israeli joint issue depicting Simon Wiesenthal
272
Nx: Multi-dimensional tensors Elixir lib with multi-staged compilation (CPU/GPU)
1
Solving Martin Gardner's chess problem using simulated annealing
I've found this in the Martin Gardner's "The Colossal Book of Short Puzzles and Problems" book: (It also appears in his earlier "The Unexpected Hanging and Other Mathematical Diversions" book.) The problem to find such a placement of chess pieces, so that maximum of squares are under attack. Another variant is: minimum squares under attack. Also: one variant is when bishops can share the same color. Another variant is when you have opposite colored bishops. This is my simulated annealing solver, with the code been shamelessly stolen from the GNU Scientific Library library. My solutions coincides with Gardner's. Maximum: all 64 squares attacked:|.|.|.|.|.|.|.|R| |.|.|.|.|.|.|.|R| |.|.|.|.|.|.|.|.| |R|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|N| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|K|.|.|.|N|.| |.|K|N|.|.|.|.|.| |.|N|.|.|.|.|.|.||.|.|.|B|Q|B|.|.| |.|.|.|B|.|.|.|.| |.|.|.|B|Q|B|.|.| |.|.|.|.|.|.|N|K||.|K|N|.|.|.|.|.| |.|.|.|.|B|.|.|.| |.|.|.|.|.|.|.|.| |.|.|.|B|Q|B|.|.||.|.|.|.|.|.|.|N| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|N|.|.|.|R|.| |R|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||R|.|.|.|.|.|.|.| |Q|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|R| |.|R|.|.|.|.|.|.| (You can clearly see that some solutions are symmetric to each other.) Maximum: 63 squares attacked, opposite-colored bishops are used:|.|.|.|.|.|.|.|R| |.|.|.|.|.|.|R|.| |.|.|.|.|K|.|.|.| |.|.|Q|.|.|.|.|B||.|.|.|.|.|Q|.|.| |.|.|.|.|.|.|.|Q| |.|.|.|.|N|.|.|.| |.|.|.|.|.|.|.|R||.|.|.|.|.|.|R|.| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|K|.|.|B|.|.|.| |.|.|.|.|.|B|.|.| |.|.|.|.|.|B|.|.||.|.|.|.|.|.|.|.| |.|.|B|.|.|.|.|.| |.|.|.|.|.|B|.|.| |.|.|N|.|.|.|.|.||N|B|.|N|K|.|.|.| |.|.|.|.|.|N|.|.| |.|.|.|N|.|.|.|R| |.|.|N|.|.|.|K|.||.|B|.|.|.|.|.|.| |.|.|N|.|.|.|.|.| |.|Q|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|R|.|.| |R|.|.|.|.|.|.|.| |.|R|.|.|.|.|.|.|Attacked squares: Attacked squares: Attacked squares: Attacked squares:|*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|.| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*||.|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|.|*||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|.|*|*|*|*|*| |*|*|*|*|*|*|*|*||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*| Minimum: 16 squares attacked:|.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|N||.|.|.|.|.|.|K|R||.|.|.|.|.|N|R|Q||.|.|.|.|.|.|B|B|Attacked squares:|.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.||.|.|.|.|.|.|*|.||.|.|.|.|.|*|.|.||.|.|.|.|*|*|*|*||.|.|.|*|.|*|*|*||.|.|.|.|.|*|*|*||.|.|.|*|.|.|*|*| What about bruteforce? You would need to enumerate $\approx 64^8 = 281474976710656$ positions. I always wanted to know, how many bishops you have to use to cover all 64 squares? It's 10. 
Maximum: Minimum:|.|.|.|.|.|.|.|.| |.|.|.|.|B|.|B|.||.|.|.|B|.|.|.|.| |.|.|.|.|.|B|.|B||.|.|.|.|.|B|.|.| |.|.|.|.|.|.|B|.||.|.|B|B|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|B|B|B|B|.| |B|.|.|.|.|.|.|.||.|.|.|B|.|.|.|.| |.|B|.|.|.|.|.|.||.|.|.|B|.|.|.|.| |B|.|B|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|B|.|.|.|.|.|.|Attacked squares: Attacked squares:|*|*|*|*|*|*|*|*| |.|.|.|.|*|.|*|.||*|*|*|*|*|*|*|*| |.|.|.|*|.|*|.|*||*|*|*|*|*|*|*|*| |.|.|*|.|*|.|*|.||*|*|*|*|*|*|*|*| |.|*|.|*|.|*|.|*||*|*|*|*|*|*|*|*| |*|.|*|.|*|.|.|.||*|*|*|*|*|*|*|*| |.|*|.|*|.|.|.|.||*|*|*|*|*|*|*|*| |*|.|*|.|.|.|.|.||*|*|*|*|*|*|*|*| |.|*|.|*|.|.|.|.|attacked_total=64 attacked_total=21 Bruteforce? $\approx 64^{10} = 1152921504606846976$ positions. How many knights you need to cover 64 squares? It's 14: Maximum: Minimum:|.|.|.|.|.|.|.|.| |N|.|N|.|N|.|.|.||.|.|N|.|.|N|.|.| |.|.|.|.|.|.|.|.||.|.|N|N|N|N|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|N|.|N|.|.|.|N||.|.|N|N|N|N|.|.| |N|.|.|.|N|.|.|.||.|N|N|.|.|N|N|.| |.|.|.|.|.|.|.|N||.|.|.|.|.|.|.|.| |N|.|.|.|N|.|.|.||.|.|.|.|.|.|.|.| |.|N|.|N|.|.|.|N|Attacked squares: Attacked squares:|*|*|*|*|*|*|*|*| |.|.|.|.|.|.|.|.||*|*|*|*|*|*|*|*| |*|.|*|.|*|.|*|.||*|*|*|*|*|*|*|*| |.|*|.|*|.|*|.|.||*|*|*|*|*|*|*|*| |.|.|*|.|.|.|*|.||*|*|*|*|*|*|*|*| |.|*|.|*|.|*|.|.||*|*|*|*|*|*|*|*| |*|.|*|.|*|.|*|.||*|*|*|*|*|*|*|*| |.|*|.|*|.|*|.|.||*|*|*|*|*|*|*|*| |.|.|*|.|.|.|*|.|attacked_total=64 attacked_total=21 Bruteforce? $\approx 64^{14} = 19342813113834066795298816$ positions. Maximum: Minimum:|.|.|.|.|.|.|.|.| |Q|.|Q|.|Q|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||Q|.|.|Q|.|.|Q|.| |.|.|Q|.|Q|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|Q|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|Q|.|.|.| |.|.|.|.|.|.|.|.|Attacked squares: Attacked squares:|*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|.|.||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|*|*||*|*|*|*|*|*|*|*| |*|*|*|*|*|*|.|*||*|*|*|*|*|*|*|*| |*|.|*|.|*|.|*|.||*|*|*|*|*|*|*|*| |*|*|*|.|*|*|.|*||*|*|*|*|*|*|*|*| |*|.|*|.|*|.|*|.||*|*|*|*|*|*|*|*| |*|.|*|.|*|.|.|*|attacked_total=64 attacked_total=47 Bruteforce? $\approx 64^5 = 1073741824$ positions, that would be easy to do. 
4 queens' placement demonstrates nice patterns: Maximum: Minimum:|.|.|.|.|.|.|.|.| |Q|.|.|.|.|.|.|Q||.|.|.|.|.|.|Q|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|Q|.|Q|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|Q|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |Q|.|.|.|.|.|.|Q|Attacked squares: Attacked squares:|*|*|.|*|*|*|*|*| |*|*|*|*|*|*|*|*||*|*|*|*|*|*|*|*| |*|*|.|.|.|.|*|*||*|*|*|*|*|*|*|*| |*|.|*|.|.|*|.|*||*|*|*|*|*|*|*|*| |*|.|.|*|*|.|.|*||*|*|*|*|*|*|*|*| |*|.|.|*|*|.|.|*||*|*|*|*|*|*|*|*| |*|.|*|.|.|*|.|*||*|*|.|*|*|*|*|*| |*|*|.|.|.|.|*|*||*|*|.|*|*|*|*|*| |*|*|*|*|*|*|*|*|attacked_total=61 attacked_total=40 And if I only use one knight, one bishop, one rook, one queen, one king: Maximum: Minimum:|.|.|.|.|.|.|.|R| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|N|.|Q|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|N||.|.|B|.|.|K|.|.| |.|.|.|.|.|.|B|R||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|K|Q|Attacked squares: Attacked squares:|*|*|*|*|*|*|*|*| |*|.|.|.|.|.|.|.||*|.|.|*|.|.|*|*| |.|*|.|.|.|.|.|.||*|*|*|*|.|*|*|*| |.|.|*|.|.|.|.|.||.|.|*|*|*|*|.|*| |.|.|.|*|.|.|*|.||*|*|*|.|*|*|*|*| |.|.|.|.|*|*|.|.||.|*|*|*|*|*|*|*| |.|.|.|.|.|*|.|*||*|*|*|*|*|*|*|*| |.|.|.|.|.|*|*|*||*|*|.|*|*|*|*|*| |.|.|.|.|.|*|*|*|attacked_total=53 attacked_total=15 Two knights and two bishops: Maximum: Minimum: Minimum (opposite-colored bishops):|.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|B| |B|.|.|.|.|.|.|B||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|N|.| |.|N|.|.|.|.|N|.||.|.|.|N|N|.|.|.| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|B|B|.|.|.| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |.|N|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|.|.|.|.|.|.| |B|.|.|.|.|.|.|.| |.|.|.|.|.|.|.|.|Attacked squares: Attacked squares: Attacked squares:|*|.|*|*|*|*|.|*| |.|.|.|.|*|.|.|.| |.|.|.|*|*|.|.|.||*|*|*|.|.|*|*|*| |.|.|.|.|.|.|*|.| |.|*|.|.|.|.|*|.||.|*|*|.|.|*|*|.| |.|.|.|.|*|.|.|.| |.|.|.|*|*|.|.|.||.|*|*|*|*|*|*|.| |.|.|.|.|.|*|.|*| |*|.|*|.|.|*|.|*||.|.|*|*|*|*|.|.| |*|.|*|.|.|.|.|.| |.|.|.|.|.|.|.|.||.|.|*|*|*|*|.|.| |.|.|.|*|.|.|.|.| |.|.|.|.|.|.|.|.||.|*|*|.|.|*|*|.| |.|*|.|.|.|.|.|.| |.|.|.|.|.|.|.|.||*|*|.|.|.|.|*|*| |.|.|.|*|.|.|.|.| |.|.|.|.|.|.|.|.|attacked_total=38 attacked_total=10 attacked_total=10 .. in pure C, get it here, with no external dependencies, compile: gcc find.c -O3 -lm Weed out symmetrical solutions and leave only unique ones. UPD: at Reddit: 1, 2. List of my other blog posts. Yes, I know about these lousy Disqus ads. Please use adblocker. I would consider to subscribe to 'pro' version of Disqus if the signal/noise ratio in comments would be good enough.
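The linked find.c is not walked through in the post, so purely as an illustration of the simulated-annealing loop it relies on, here is a minimal, self-contained sketch. The Board layout, the toy energy function, and the cooling schedule below are illustrative assumptions, not the author's code; in the real solver the energy would be something like the negated count of attacked squares (for the maximum variant) computed for the chosen piece set.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-in for the solver's state: 8 pieces, squares 0..63. */
typedef struct { int square[8]; } Board;
typedef int (*energy_fn)(const Board *);

/* Toy energy just to keep the example runnable: count pairs of pieces
   sharing a square. find.c would instead score attacked squares. */
static int toy_energy(const Board *b) {
    int e = 0;
    for (int i = 0; i < 8; i++)
        for (int j = i + 1; j < 8; j++)
            if (b->square[i] == b->square[j]) e++;
    return e;
}

/* Neighbour move: relocate one randomly chosen piece. */
static void random_move(Board *b) {
    b->square[rand() % 8] = rand() % 64;
}

static void anneal(Board *b, energy_fn energy) {
    double t = 10.0, t_min = 1e-3, cooling = 0.97;  /* assumed schedule */
    int e = energy(b);
    while (t > t_min) {
        for (int i = 0; i < 200; i++) {
            Board cand = *b;
            random_move(&cand);
            int delta = energy(&cand) - e;
            /* Always accept improvements; accept worse states with
               probability exp(-delta / t), which shrinks as t cools. */
            if (delta <= 0 || (double)rand() / RAND_MAX < exp(-delta / t)) {
                *b = cand;
                e += delta;
            }
        }
        t *= cooling;
    }
    printf("final energy: %d\n", e);
}

int main(void) {
    Board b;
    for (int i = 0; i < 8; i++) b.square[i] = rand() % 64;
    anneal(&b, toy_energy);
    return 0;
}

As with the post's solver, this needs the math library at link time, e.g. gcc sa_sketch.c -O3 -lm (sa_sketch.c being a hypothetical file name).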
3
Getting Graphical Output from Our Custom RISC-V Operating System in Rust
An operating system is used to make our job easier when using graphics. In our instance, in addition to everything else. In this post, we will be writing a GPU (graphics processing unit) driver using the VirtIO specification. In here, we will allow user applications to have a portion of the screen as RAM–with what is commonly known as a framebuffer. We command the virtual GPU (virtio-gpu) by sending certain commands to the host (the device). The guest (the OS driver) has an allocation of RAM that becomes the framebuffer. The driver then tells the device, “hey, here’s the RAM that we’re going to use to store pixel information.” The RAM is contiguous in our OS, but according to the specification, this isn’t strictly required. We will give the driver a rectangle. Everything that falls within that rectangle will be copied to the host. We don’t want to keep copying the entire buffer over and over again. We will be using the virtio protocol that we used for the block driver here, so I won’t rehash the general virtio protocol. However, the device-specific structures are a bit different, so we’ll cover that part more in depth. A framebuffer must be large enough to store \(\text{width}\times\text{height}\times\text{pixel size}\) number of bytes. There are \(\text{width}\times\text{height}\) number of pixels. Each pixel has a 1-byte red, green, blue, and alpha channels. So, each pixel is exactly 4 bytes with the configuration we’re going to specify. The framebuffer for our junior GPU driver is going to support a fixed resolution of \(640\times 480\). If you’re a child of the 90s, you saw this resolution a lot. In fact, my first computer, a Laser Pal 386, had a 16-color monitor with a resolution of 640 pixels wide with 480 pixels tall. There are red, green, and blue pixels so close together that by varying the intensity of these three channels, we can change the color. The closer we get to our monitors, the easier a pixel is to see. Pixels on a Viewsonic VX2770SMH-LED monitor. You can see these little squares. If you squint enough, you can see that they aren’t pure white. Instead, you can see bits of red, blue, and green. That’s because each one of these little squares is subdivided into three colors: yep, red, green, and blue! To make white, these pixels are turned up to 11 (get the joke?). To make black, we turn off all three channels of that pixel. The resolution refers to how many of these squares are on our monitor. This is a 1920×1080 monitor. That means that there are 1920 of these squares going left to right, and there are 1080 of these squares from top to bottom. All in all, we have \(1920\times 1080=2,073,600\) number of pixels. Each one of these pixels is expressed using 4 bytes in the framebuffer, meaning we need \(2,073,600\times 4=8,294,400\) bytes in RAM to store the pixel information. You can see why I limited our resolution to 640×480, which only requires \(640\times 480\times 4=1,228,800\) bytes–a bit over a megabyte. The GPU device requires us to read a more up-to-date VirtIO specification. I’ll be reading from version 1.1, which you can get a copy here: https://docs.oasis-open.org/virtio/virtio/v1.1/virtio-v1.1.html. Specifically, chapter 5.7 “GPU Device”. This is an unaccelerated 2D device, meaning that we must use the CPU to actually form the framebuffer, then we transfer our CPU formulated memory location to the host GPU, which is then responsible for drawing it to the screen. 
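As a quick arithmetic check of the sizes quoted above (plain C, nothing OS-specific; the 4 KiB page size matches the paging used later in the post):

#include <stdio.h>

int main(void) {
    /* Fixed 640x480 resolution with 4 bytes per pixel, as chosen in the post. */
    const unsigned long width = 640, height = 480, bytes_per_pixel = 4, page_size = 4096;
    unsigned long fb_bytes = width * height * bytes_per_pixel;     /* 1,228,800 bytes */
    printf("framebuffer: %lu bytes = %lu pages of %lu bytes\n",
           fb_bytes, fb_bytes / page_size, page_size);             /* exactly 300 pages */
    return 0;
}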
The device uses a request/response system, where we the driver make a command to request something from the host (the GPU). We add a bit of extra memory into our request so that the host can formulate its response. When the GPU interrupts us, we can take a look at this response memory location to see what the GPU told us. This is much like the status field on the block driver, where the block device tells us the status of our last request. Each request starts with a Command Header, which in Rust looks as follows: #[repr(C)]struct CtrlHeader {ctrl_type: CtrlType,flags: u32,fence_id: u64,ctx_id: u32,padding: u32} The header is common for all requests and all responses. We can differentiate by the CtrlType enumeration, which is: #[repr(u32)]enum CtrlType {/* 2d commands */CmdGetDisplayInfo = 0x0100,CmdResourceCreate2d,CmdResourceUref,CmdSetScanout,CmdResourceFlush,CmdTransferToHost2d,CmdResourceAttachBacking,CmdResourceDetachBacking,CmdGetCapsetInfo,CmdGetCapset,CmdGetEdid,/* cursor commands */CmdUpdateCursor = 0x0300,CmdMoveCursor,/* success responses */RespOkNoData = 0x1100,RespOkDisplayInfo,RespOkCapsetInfo,RespOkCapset,RespOkEdid,/* error responses */RespErrUnspec = 0x1200,RespErrOutOfMemory,RespErrInvalidScanoutId,RespErrInvalidResourceId,RespErrInvalidContextId,RespErrInvalidParameter,} I took this directly from the specification, but Rust-ified the names to avoid getting yelled at by the linter. Recall that the framebuffer is just a bunch of bytes in memory. We need to put a structure behind the framebuffer so the host (the GPU) knows how to interpret your sequence of bytes. There are several formats, but all-in-all, they just re-arrange the red, green, blue, and alpha channels. All are exactly 4 bytes, which makes the stride the same. The stride is the spacing from one pixel to another–4 bytes. #[repr(u32)]enum Formats {B8G8R8A8Unorm = 1,B8G8R8X8Unorm = 2,A8R8G8B8Unorm = 3,X8R8G8B8Unorm = 4,R8G8B8A8Unorm = 67,X8B8G8R8Unorm = 68,A8B8G8R8Unorm = 121,R8G8B8X8Unorm = 134,} The type, unorm, is an 8-bit (1-byte) unsigned value from 0 through 255, where 0 represents no intensity and 255 represents full intensity, and a number in between is a linear-interpolation between no and full intensity. Since there are three color (and one alpha), that gives us \(256\times 256\times 256=16,776,216\) different colors or levels of colors. For this tutorial, I selected R8G8B8A8Unorm = 67, which has red first, green second, blue third, and alpha fourth. This is a common ordering, so I’ll select it to make it easy to follow along. Our selected format makes the pixel structure look as follows: Recall that each individual component R, G, B, and A are each one byte a piece, so each Pixel referred to by (x, y) is 4 bytes. This is why our memory pointer is a Pixel structure instead of a byte. Just like all other virtio devices, we set up the virtqueues first and then we work on device-specific initialization. In my code, I just directly copied-and-pasted from the block driver into the gpu driver. The only thing I added to the Device structure was the framebuffer and dimensions of the framebuffer. pub struct Device {queue: *mut Queue,dev: *mut u32,idx: u16,ack_used_idx: u16,framebuffer: *mut Pixel,width: u32,height: u32,} The specification tells us to do the following in order to initialize the device and get things ready to draw. I Rust-ified some of the content to match our enumerations. Recall that our request and response come packaged together. 
We will put them in separate descriptors, but whenever we get a response back from the device, it is going to be easier if we free just once to free both the request and response. So, in Rust, I created the Request structure to support doing this. struct Request {request: RqT,response: RpT,}impl Request {pub fn new(request: RqT) -> *mut Self {let sz = size_of::() + size_of::();let ptr = kmalloc(sz) as *mut Self;unsafe {(*ptr).request = request;}ptr}} let rq = Request::new(ResourceCreate2d {hdr: CtrlHeader {ctrl_type: CtrlType::CmdResourceCreate2d,flags: 0,fence_id: 0,ctx_id: 0,padding: 0,},resource_id: 1,format: Formats::R8G8B8A8Unorm,width: dev.width,height: dev.height,});let desc_c2d = Descriptor {addr: unsafe { &(*rq).request as *const ResourceCreate2d as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_NEXT,next: (dev.idx + 1) % VIRTIO_RING_SIZE as u16,};let desc_c2d_resp = Descriptor {addr: unsafe { &(*rq).response as *const CtrlHeader as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_WRITE,next: 0,};unsafe {let head = dev.idx;(*dev.queue).desc[dev.idx as usize] = desc_c2d;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).desc[dev.idx as usize] = desc_c2d_resp;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).avail.ring[(*dev.queue).avail.idx as usize % VIRTIO_RING_SIZE] = head;(*dev.queue).avail.idx = (*dev.queue).avail.idx.wrapping_add(1);} All we’re really telling the GPU here is our resolution and the format of the framebuffer. When we create this, the host gets to configure itself, such as allocating an identical buffer to make transfers from our OS. let rq = Request3::new(AttachBacking {hdr: CtrlHeader {ctrl_type: CtrlType::CmdResourceAttachBacking,flags: 0,fence_id: 0,ctx_id: 0,padding: 0,},resource_id: 1,nr_entries: 1,},MemEntry {addr: dev.framebuffer as u64,length: dev.width * dev.height * size_of::() as u32,padding: 0, });let desc_ab = Descriptor {addr: unsafe { &(*rq).request as *const AttachBacking as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_NEXT,next: (dev.idx + 1) % VIRTIO_RING_SIZE as u16,};let desc_ab_mementry = Descriptor {addr: unsafe { &(*rq).mementries as *const MemEntry as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_NEXT,next: (dev.idx + 2) % VIRTIO_RING_SIZE as u16,};let desc_ab_resp = Descriptor {addr: unsafe { &(*rq).response as *const CtrlHeader as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_WRITE,next: 0,};unsafe {let head = dev.idx;(*dev.queue).desc[dev.idx as usize] = desc_ab;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).desc[dev.idx as usize] = desc_ab_mementry;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).desc[dev.idx as usize] = desc_ab_resp;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).avail.ring[(*dev.queue).avail.idx as usize % VIRTIO_RING_SIZE] = head;(*dev.queue).avail.idx = (*dev.queue).avail.idx.wrapping_add(1);} The backing is exposed to the GPU through the MemEntry structure. This essentially is a physical address in guest RAM. The MemEntry, aside from padding, is just a pointer and a length. Notice that I created a new structure called Request3. This is because this step requires three separate descriptors: (1) the header, (2) the mementry, (3) the response, whereas usually we only need two descriptors. Our structure is much like a normal Request, but it includes the mementries. 
struct Request3 { request: RqT, mementries: RmT, response: RpT,}impl Request3 { pub fn new(request: RqT, meminfo: RmT) -> *mut Self { let sz = size_of::() + size_of::() + size_of::(); let ptr = kmalloc(sz) as *mut Self; unsafe { (*ptr).request = request; (*ptr).mementries = meminfo; } ptr }} let rq = Request::new(SetScanout {hdr: CtrlHeader {ctrl_type: CtrlType::CmdSetScanout,flags: 0,fence_id: 0,ctx_id: 0,padding: 0,},r: Rect::new(0, 0, dev.width, dev.height),resource_id: 1,scanout_id: 0,});let desc_sso = Descriptor {addr: unsafe { &(*rq).request as *const SetScanout as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_NEXT,next: (dev.idx + 1) % VIRTIO_RING_SIZE as u16,};let desc_sso_resp = Descriptor {addr: unsafe { &(*rq).response as *const CtrlHeader as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_WRITE,next: 0,};unsafe {let head = dev.idx;(*dev.queue).desc[dev.idx as usize] = desc_sso;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).desc[dev.idx as usize] = desc_sso_resp;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).avail.ring[(*dev.queue).avail.idx as usize % VIRTIO_RING_SIZE] = head;(*dev.queue).avail.idx = (*dev.queue).avail.idx.wrapping_add(1);} When we want to write to a buffer, we will refer to it by its scanout number. If we had two scanouts, we could draw on one while the other is displayed to the screen. This is called double-buffering, but for our purposes, we don’t do this. Instead, we draw on the same framebuffer, then transfer certain portions for the GPU to update the display. After we signal QueueNotify, the virtio register “GO” button, then the GPU will create a new buffer internally, set the backing store, and set the scanout number to this buffer. We now have an initialized framebuffer! We now have memory that contains pixels. However, we have our own memory, and the GPU has its own memory. So, to get ours to the GPU, it needs to be transferred. We set the backing store during initialization, so we now only have to refer to what we want updated by its scanout number. Invalidation is important, since updating the entire screen every time we make a change is very expensive. In fact, if we transfer our entire screen, we need to transfer \(640\times 480\times 4=1,228,800\) bytes. For framerates, such as 20 or 30 frames per second, we need to transfer this number of bytes 20 or 30 times a second! Instead of transferring everything, we invalidate certain portions of the framebuffer, and the GPU will only copy over those Pixels that fall within the invalidated region, whose coordinates are defined by a Rect structure. #[repr(C)]#[derive(Clone, Copy)]pub struct Rect {pub x: u32,pub y: u32,pub width: u32,pub height: u32,}impl Rect {pub const fn new(x: u32, y: u32, width: u32, height: u32) -> Self {Self {x, y, width, height}}} Notice that this Rect is defined by an upper-left coordinate (x, y) and then a width and height. Rectangles can be defined by their coordinates (x1, y1), (x2, y2) or an initial coordinate and width and height. I don’t see anything in the spec about the former, but when I try to invalidate and transfer, it appears that it’s treating the rectangle as the latter. Oh well, more testing I guess… Invalidating is just transferring the data from the guest (driver) to the host (GPU). This just copies the memory, to update the framebuffer, we execute a flush command. 
pub fn transfer(gdev: usize, x: u32, y: u32, width: u32, height: u32) { if let Some(mut dev) = unsafe { GPU_DEVICES[gdev-1].take() } { let rq = Request::new(TransferToHost2d { hdr: CtrlHeader {ctrl_type: CtrlType::CmdTransferToHost2d,flags: 0,fence_id: 0,ctx_id: 0,padding: 0, },r: Rect::new(x, y, width, height),offset: 0,resource_id: 1,padding: 0,});let desc_t2h = Descriptor {addr: unsafe { &(*rq).request as *const TransferToHost2d as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_NEXT,next: (dev.idx + 1) % VIRTIO_RING_SIZE as u16,};let desc_t2h_resp = Descriptor {addr: unsafe { &(*rq).response as *const CtrlHeader as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_WRITE,next: 0,};unsafe {let head = dev.idx;(*dev.queue).desc[dev.idx as usize] = desc_t2h;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).desc[dev.idx as usize] = desc_t2h_resp;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).avail.ring[(*dev.queue).avail.idx as usize % VIRTIO_RING_SIZE] = head;(*dev.queue).avail.idx = (*dev.queue).avail.idx.wrapping_add(1);}// Step 5: Flushlet rq = Request::new(ResourceFlush {hdr: CtrlHeader {ctrl_type: CtrlType::CmdResourceFlush,flags: 0,fence_id: 0,ctx_id: 0,padding: 0,},r: Rect::new(x, y, width, height),resource_id: 1,padding: 0,});let desc_rf = Descriptor {addr: unsafe { &(*rq).request as *const ResourceFlush as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_NEXT,next: (dev.idx + 1) % VIRTIO_RING_SIZE as u16,};let desc_rf_resp = Descriptor {addr: unsafe { &(*rq).response as *const CtrlHeader as u64 },len: size_of::() as u32,flags: VIRTIO_DESC_F_WRITE,next: 0,};unsafe {let head = dev.idx;(*dev.queue).desc[dev.idx as usize] = desc_rf;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).desc[dev.idx as usize] = desc_rf_resp;dev.idx = (dev.idx + 1) % VIRTIO_RING_SIZE as u16;(*dev.queue).avail.ring[(*dev.queue).avail.idx as usize % VIRTIO_RING_SIZE] = head;(*dev.queue).avail.idx = (*dev.queue).avail.idx.wrapping_add(1);}// Run Queueunsafe {dev.dev.add(MmioOffsets::QueueNotify.scale32()).write_volatile(0);GPU_DEVICES[gdev-1].replace(dev);}} So, our transfer first tells the host that we’ve updated a certain portion of the framebuffer, which is specified as x, y, width, and height. Then we do what is called a resource flush to get the GPU to commit all transfers to the screen. This is a fairly easy section. Most of the device responses come in the form of NODATA, which is just an acknowledgment that it made the request. Also, notice that unlike the block driver, we don’t have watchers here. This allows us to asynchronously update the screen. The whole point of this is to get a user space application drawing stuff to the screen. Generally, we wouldn’t give the full framebuffer to any user space application that wants it, but for our purposes, we can live with it for now. Instead, we would have a window manager delegate certain rectangles of the framebuffer to different applications. The window manager would also be responsible for handling events and sending the appropriate events to the GUI application. To allow our userspace applications to use the GPU, we need two system calls. One to get a pointer to the framebuffer. Recall that we first must map the framebuffer to the userspace’s MMU table. This is why we allocated pages instead of using kmalloc. 
let dev = (*frame).regs[Registers::A0 as usize];(*frame).regs[Registers::A0 as usize] = 0;if dev > 0 && dev <= 8 {if let Some(p) = gpu::GPU_DEVICES[dev - 1].take() {let ptr = p.get_framebuffer() as usize;gpu::GPU_DEVICES[dev-1].replace(p);if (*frame).satp >> 60 != 0 {let p = get_by_pid((*frame).pid as u16);let table = ((*p).get_table_address() as *mut Table).as_mut().unwrap(); let num_pages = (p.get_width() * p.get_height() * 4) as usize / PAGE_SIZE;for i in 0..num_pages {let vaddr = 0x3000_0000 + (i << 12);let paddr = ptr + (i << 12);map(table, vaddr, paddr, EntryBits::UserReadWrite as i64, 0);}}(*frame).regs[Registers::A0 as usize] = 0x3000_0000;}} As you can see above, we grab the framebuffer from the GPU device and map it to 0x3000_0000. Currently, I calculate the number of pages for the framebuffer, which is \(\frac{640\times 480\times 4}{4,096}=300\). So, we need exactly 300 pages for this resolution. So, now we have a framebuffer, so the userspace application can write what it wants into this memory location. However, a write doesn’t immediately update the screen. Recall that we must transfer and then flush to get the results written to the screen. This is where our second system call comes into play. let dev = (*frame).regs[Registers::A0 as usize];let x = (*frame).regs[Registers::A1 as usize] as u32;let y = (*frame).regs[Registers::A2 as usize] as u32;let width = (*frame).regs[Registers::A3 as usize] as u32;let height = (*frame).regs[Registers::A4 as usize] as u32;gpu::transfer(dev, x, y, width, height); I showed the transfer function above, which just makes two requests: (1) CmdTransferToHost2d and (2) CmdResourceFlush. When the userspace application makes this system call, the results will be flushed to the screen and hence, it’ll be visible to the user. I don’t error check in the system call itself. The transfer function will error check the device, and the device will error check the x, y, width, and height. So, if this is incorrect, the transfer function will silently fail, and nothing will update to the screen. To see something displayed to the screen, we need to be able to draw the simplest things, rectangles. If we have a width of the rectangle small enough, we can draw straight lines–horizontally or vertically! We are given a contiguous piece of memory in row-major format. That means that we exhaust each column in a row before we move to the next row. So, framebuffer[0] and framebuffer[1] are columns 0 and 1 of row 0. The calculation is fairly straight forward to get to the next row, we must go one past the last column. So, the formula becomes: struct Pixel {unsigned char r;unsigned char g;unsigned char b;unsigned char a;};void set_pixel(Pixel *fb, u32 x, u32 y, Pixel &color) { // x is column, y is row if (x < 640 && y < 480) { fb[y * 640 + x] = color; }} So, the function above writes to a single Pixel. This structure is a 4-byte structure containing red, green, blue, and alpha bytes. However, we want two different types of rectangle drawing: fill and stroke. Fill will fill the area of the rectangle with the given Pixel structure (color) whereas stroke is just the outline of a rectangle. void fill_rect(Pixel *fb, u32 x, u32 y, u32 width, u32 height, Pixel &color) { for (u32 row = y; row < (y+height);row++) { for (u32 col = x; col < (x+width);col++) { set_pixel(fb, col, row, color); } }}void stroke_rect(Pixel *fb, u32 x, u32 y, u32 width, u32 height, Pixel &color, u32 size) { // Essentially fill the four sides. 
// Top fill_rect(fb, x, y, width, size, color); // Bottom fill_rect(fb, x, y + height, width, size, color); // Left fill_rect(fb, x, y, size, height, color); // Right fill_rect(fb, x + width, y, size, height + size, color);} Of course, when I tried to brag about drawing rectangles to a friend of mine, he mentions the following. Oh no…I don’t have cos/sin/tan or anything like that in my OS. I couldn’t say no, and I couldn’t be beaten by a simple cosine, right? Challenge accepted. I ended up writing a cosine function based on an infinite series, but he took it several steps further and wrote several ways and benchmarked them to see which was better in terms of memory footprint, accuracy, and speed (see link below in Conclusions and Further Reading). Here’s mine: f64 cos(f64 angle_degrees) {f64 x = 3.14159265359 * angle_degrees / 180.0;f64 result = 1.0;f64 inter = 1.0;f64 num = x * x;for (int i = 1;i <= 6;i++) {u64 comp = 2 * i;u64 den = comp * (comp - 1);inter *= num / den;if ((i & 1) == 0) {result += inter;}else {result -= inter;}}return result;} This is an infinite series, but we can get more accuracy with more terms. For a compromise, the for loop’s termination, i <= 6, is the number of terms, so 6 terms gives us alright accuracy for graphics, at least from what I can visually tell on a \(640\times 480\) screen. Now, the fun part. Let’s see if this works! Here’s our userspace code. int main() { Pixel *fb = (Pixel *)syscall_get_fb(6); Pixel blue_color = {0, 0, 255, 255}; Pixel red_color = {255, 0, 0, 255}; Pixel green_color = {0, 255, 0, 255}; Pixel white_color = {255, 255, 255, 255}; Pixel orange_color = {255, 150, 0, 255}; fill_rect(fb, 0, 0, 640, 480, white_color); stroke_rect(fb, 10, 10, 20, 20, blue_color, 5); stroke_rect(fb, 50, 50, 40, 40, green_color, 10); stroke_rect(fb, 150, 150, 140, 140, red_color, 15); fill_rect(fb, 10, 300, 500, 100, orange_color); syscall_inv_rect(6, 0, 0, 640, 480); return 0;} Let’s add in our cosine function and see what happens! void draw_cosine(Pixel *fb, u32 x, u32 y, u32 width, u32 height, Pixel &color) { for (u32 i = 0; i < width;i++) { f64 fy = -cos(i % 360); f64 yy = fy / 2.0 * height; u32 nx = x + i; u32 ny = yy + y; fill_rect(fb, nx, ny, 2, 2, color); }} Our operating system is starting to look more and more like a normal operating system. We still need an input system so that we can interact with our operating system, but that’ll be the next thing we tackle. Sometime in the future, we will compile newlib so that we have a standard library in userspace. Right now, we’re forced to write our own functions. For a great read regarding cosine and the challenges with it, head on over to Dr. Austin Henley’s blog on cosine.
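For reference, the infinite series that the cos() routine above truncates after six terms is the standard Maclaurin expansion of cosine (with the angle first converted to radians):

\(\cos x = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k}}{(2k)!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \cdots\)

Each loop iteration multiplies the previous term by \(x^2 / \big(2k(2k-1)\big)\), which is why the code carries a running inter value instead of recomputing factorials.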
2
A Manhattan Project for online identity (2011)
In 1993, Peter Steiner famously wrote in the New Yorker that “on the Internet, nobody knows you’re a dog.” In 2011, trillions of dollars in e-commerce transactions and a growing number of other interactions have made knowing that someone is not only human, but a particular individual, increasingly important. Governments are now faced with complex decisions in how they approach issues of identity, given the stakes for activists in autocracies and the increasing integration of technology into the daily lives of citizens. Governments need ways to empower citizens to identify themselves online to realize both aspirational goals for citizen-to-government interaction and secure basic interactions for commercial purposes. It is in that context that the United States federal government introduced the final version of its National Strategy for Trusted Identities in Cyberspace (NSTIC) this spring. The strategy addresses key trends that are crucial to the growth of the Internet operating system: online identity, privacy, and security. Image Credit: Official White House Photo by Pete Souza The NSTIC proposes the creation of an “identity ecosystem” online, “where individuals and organizations will be able to trust each other because they follow agreed upon standards to obtain and authenticate their digital identities.” The strategy puts government in the role of a convener, verifying and certifying identity providers in a trust framework. First steps toward this model, in the context of citizen-to-government authentication, came in 2010 with the launch of the Open Identity Exchange (OIX) and a pilot at the National Institute of Health of a trust frameworks — but there’s a very long road ahead for this larger initiative. Online identity, as my colleague Andy Oram explored in a series of essays here at Radar, is tremendously complicated, from issues of anonymity to digital privacy and security to more existential notions of insight into the representation of our true selves in the digital medium. The need to improve the current state of online identity has been hailed at the highest levels of government. “By making online transactions more trustworthy and better protecting privacy, we will prevent costly crime, we will give businesses and consumers new confidence, and we will foster growth and untold innovation,” President Obama said in a statement on NSTIC. The final version of NSTIC is a framework that lays out a vision for an identity ecosystem. Video of the launch of the NSTIC at the Commerce Department is embedded below: “This is a strategy document, not an implementation document,” said Ian Glazer, research director on identity management at Gartner, speaking in an interview last week. “It’s about a larger vision: this where we want to get to, these are the principles we need to get there.” Jim Dempsey of the Center for Democracy and Technology (CDT) highlighted a critical tension at a forum on NSTIC in January: government needs a better online identity infrastructure to improve IT security, online privacy, and support e-commerce but can’t create it itself. Andy Ozment, White House Director for Cybersecurity Policy, said in a press briefing prior to the release of NSTIC that the strategy was intended to protect online privacy, reduce identity theft, foster economic growth on the Internet and create a platform for the growth of innovative identity services. “It must be led by the private sector and facilitated by government,” said Ozment. 
“There will be a sort of trust mark — it may not be NSTIC — that certifies that solutions will have gone through an accreditation process.” Instead of creating a single online identity for each citizen, NSTIC envisions an identity ecosystem with many trusted providers. “In the EU [European Union] mentality, identity can only exist if the state provides it,” said Glazer. “That’s inherently an un-American position. This is frankly an adoption of the core values of the nation. There’s a rugged individualism in what we’re incorporating into this strategy.” Glazer and others who have been following the issue closely have repeatedly emphasized that NSTIC is not a mandate to create a national online identity for every American citizen. “NSTIC puts forth a vision where individuals can choose to use a smaller number of secure, privacy-preserving online identities, rather than handing over a new set of personal information each time they sign up for a service,” said Leslie Harris, president for the Center for Democracy and Technology, in a prepared statement.  “There are two key points about this Strategy:  First, this is NOT a government-mandated, national ID program; in fact, it’s not an identity ‘program’ at all,” Harris said.  “Second, this is a call by the Administration to the private sector to step up, take leadership of this effort and provide the innovation to implement a privacy-enhancing, trusted system.” Harris also published a guest post at Commerce.gov that explored how the national identity strategy “envisions a more trustworthy Internet.” The NSTIC was refined in an open government consultation process with multiple stakeholders over the course of the past year, including people from private industry, academics, privacy advocates and regulators. It is not a top-down mandate but instead a set of suggested principles that its architects hope will lead to a health identity ecosystem. “Until a competitive marketplace and proper standards are adopted across industry, we actually continue to have fewer options in terms of how we secure our accounts than more,” said Chris Messina in an interview with WebProNews this year. “And that means that the majority of Americans will continue using the same set of credentials over and over again, increasing their risk and exposure to possible leaks.” For a sense of the constituencies involved, read through what they’re saying about NSTIC at NIST.gov. Many of those parties are involved in an ongoing open dialogue on NSTIC at NSTIC.us. “The commercial sector is making progress every week, every month, with players for whom there’s a lot of money involved,” said Eric Sachs, product manager for the Google Security team and board member of the OpenID Foundation, in an interview this winter. “These players have a strong expectation of a regulated solution. That’s one reason so many companies are involved in the OpenID Foundation. Businesses are finding that if they don’t offer choices for authentication, there’s significant push back that affects business outcomes.” Functionally, there will now be an NSTIC program office in the Department of Commerce and a series of roundtables held across the United States over the next year. There will be funding for more research. Beyond that, “milestones are really hard to see in the strategy,” said Glazer. “We tend to think of NSTIC’s goal as a single, recognizable state. Maybe we should be thinking of this as DARPA for identity. 
It’s us as a nation harnessing really smart people on all sides of transactions.” Improving online identity will require government and industry to work together. “One role government can play is by aggregating citizen demand and asking companies to address it,” said Sachs. “Government is doing well by coming to companies and saying that this is an issue that affects everyone on the Internet.” There are serious risks to getting this wrong, as the Electronic Frontier Foundation highlighted in its analysis of the federal online identity plan last year. The most contentious issue with NSTIC lies in its potential to enable new harms instead of preventing them, including increased identity theft. “While we’re concerned about the unsolved technological hurdles, we are even more concerned about the policy and behavioral vulnerabilities that a widespread identity ecosystem would create,” wrote Aaron Titus of Identity Finder, which has released a 39-page analysis of NSTIC’s effect on privacy: We all have social security cards and it took decades to realize that we shouldn’t carry them around in our wallets. Now we will have a much more powerful identity credential, and we are told to carry it in our wallets, phones, laptops, tablets and other computing devices. Although NSTIC aspires to improve privacy, it stops short of recommending regulations to protect privacy. The stakes are high, and if implemented improperly, an unregulated Identity Ecosystem could have a devastating impact on individual privacy. It would be a mistake, however, to “freak out” over this strategy, as Kaliya Hamlin has illuminated in her piece on the NSTIC in Fast Company: There [are] a wide diversity of use cases and needs to verify identity transactions in cyberspace across the public and private sectors. All those covering this emerging effort would do well to stop just reacting to the words “National,” “Identity,” and “Cyberspace” being in the title of the strategy document but instead to actually talk to the agencies to understand real challenges they are working to address, along with the people in the private sector and civil society that have been consulted over many years and are advising the government on how to do this right. So no, the NSTIC is not a government ID card, although information cards may well be one of the trusted sources for online identity in the future, along with smartphones and other physical tokens. The online privacy issue necessarily extends far beyond whatever NSTIC accomplishes, affecting every one of the billions of people now online. At present, the legal and regulatory framework that governs the online world varies from state to state and from sector to sector. While the healthcare and financial sectors have associated penalties, online privacy hasn’t been specifically addressed by legislation. As online privacy debates heat up again in Washington when Congress returns from its spring break, that may change. Following many of the principles advanced in the FTC privacy report and the Commerce Department’s privacy report last year, Senator John McCain and Senator John Kerry introduced an online privacy bill of rights in March. After last week’s story on the retention of iPhone location data, location privacy is also receiving heightened attention in Washington. The Federal Trade Commission, with action on Twitter privacy in 2010 and Google Buzz in 2011, has moved forward without a national consumer privacy law.
“I think you’ll see some enforcement actions on mobile privacy in the future,” Maneesha Mithal, associate director of the FTC’s Division of Privacy and Identity Protection, told Politico last week. For companies to be held more accountable for online privacy breaches, however, the U.S. Senate would need to move forward on H.R. 2221, the Data Accountability and Trust Act (DATA) that passed the U.S. House of Representatives in 2009. To date, the 112th Congress has not taken up a national data breach law again, although such a measure could be added to a comprehensive cybersecurity bill. “The old password and user-name combination we often use to verify people is no longer good enough. It leaves too many consumers, government agencies and businesses vulnerable to ID and data theft,” said Commerce Secretary Gary Locke during the strategy launch event at the Commerce Department in Washington, D.C. The problem is that “people are doing billions of transactions with LOA1 [Level of Assurance 1] credentials already,” said Glazer. “That’s happening today. It’s costing business to go verify these things before and after the transaction, and the principle of minimization is not being adhered to.” Many of these challenges, however, come not from the technical side but from the user experience and business balance side, said Sachs. “Most businesses shouldn’t be in the business of having passwords for users. The goal is educating website owners that unless you specialize in Internet security, you shouldn’t be handling authentication.” Larger companies “don’t want to tie their existence to a single company, but the worse they’re doing in a given quarter, the more willing they are to take that risk,” Sachs said. “One username and password for everything is actually very bad ‘security hygiene,’ especially as you replay the same credentials across many different applications and contexts (your mobile phone, your computer, that seemingly harmless iMac at the Apple store, etc),” said Messina in the WebProNews interview. “However, nothing in NSTIC advocates for a particular solution to the identity challenge — least of all supporting or advocating for a single username and password per person.” “If you look at the hopes of the NSTIC, it moves beyond passwords,” said Glazer. “My concern is that it’s authenticator-fixated. Let’s make sure it’s not solely smartcards or one-time passwords.” There won’t be a magic bullet here, the same conclusion reached for so many other great challenges facing government and society. Some of the answers to securing online privacy and identity, however, won’t be technical or legislative at all. They will lie in improving the digital literacy of all online citizens. That very human reality was highlighted after the Gawker database breach last year, when the number of weak passwords used online became clear. “We’re going to set the behavior for the next generation of computing,” said Glazer. “We shouldn’t be obsessed with one or two kinds of authenticators. We should have a panoply. NSTIC is aimed at fostering the next generation of behaviors. It will involve designers, psychologists, as well as technologists. We, the identity community, need to step out of the way when it comes to the user experience and behavioral architecture.” NSTIC may be the “wave of the future” but, ultimately, the success or failure of this strategy will rest on several factors, many of them lying outside of government’s hands.
For one, the widespread adoption of Facebook’s social graph and Facebook Connect by developers means that the enormous social network is well on its way to becoming an identity utility for the Internet. For another, the loss of anonymity online would have dire consequences for marginalized populations or activists under autocratic governments. Ultimately, said Glazer, NSTIC may not matter in the ways that we expect it to. “I think what will come of this is a fostering of research at all levels, including business standards, identity protocols and user experience. I hope that this will be a Manhattan Project for identity, but done in the public eye.” There may be enormous risks to getting this wrong, but then that was equally true of the Apollo Project and Manhattan Project, both of which involved considerably more resources. If the United States is to enable its citizens to engage in more trusted interactions with government systems online, something better than the status quo will have to emerge. One answer will be services like Singly and the Locker Project that enable citizens to aggregate Internet data about themselves, empowering people with personal data stores. There are new challenges ahead for the National Institute of Standards and Technology (NIST), too. “NIST must not only support the development of standards and technology, but must also develop the policy governing the use of the technology,” wrote Titus. What might be possible? Aaron Brauer-Rieke, a fellow at the Center for Democracy and Technology, described a best-case scenario to Nancy Scola of techPresident: I can envision a world where a particularly good trust framework says, “Our terms of service [say] that we will take every possible step to resist government subpoenas for your information. Any of the identity providers under our framework or anyone who accepts information from any of our identity providers must have those terms of service, too.” If something like that gains traction, that would be great. That won’t be easy, but the potential payoffs are immense. “For those of us interested in the open government space, trusted identity raises the intriguing possibility of creating threaded online transactions with governments that require the exchange of only the minimum in identifying information,” writes Scola at techPresident. “For example, Brauer-Rieke sketched out the idea of an urban survey that only required a certification that you lived in the relevant area. The city doesn’t need to know who you are or where, exactly, you live. It only needs to know that you fit within the boundaries of the area they’re interested in.” Online identity is, literally, all about us. It’s no longer possible for governments, businesses or citizens to remain content with the status quo. To get this right, the federal government is taking the risk of looking to the nation’s innovators to create better methods for trusted identity and authentication. In other words, it’s time to work on stuff that matters, not making people click on more ads.
1
The Missing Piece in DevSecOps
DevOps’ way of working is creating great business value for companies. It also creates new requirements, challenges, and opportunities for the cybersecurity practice, which is evolving towards what is often called DevSecOps. While the exact definition of DevSecOps varies, the core is the same: DevSecOps is about embedding security into DevOps’ ways of working. DevSecOps offers great potential, but it is also a challenge in practice for many CISOs and DevOps teams. Today, companies typically leverage several automated tools. However, they are to a large extent separate silos that identify separate lists — most often long lists — of risks and vulnerabilities. This creates complexity, inefficiency, cost, and risk, and often slows down DevOps organizations. A key capability has been missing in the toolset: a capability that empowers both DevOps teams and the CISO by continuously answering questions like these. Yes, there are plenty of different risks. But what is the holistic risk exposure of my high-value assets? Are we good, or do we need to take action? Yes, there are plenty of ways that we can reduce risks. But which of all possible actions should I prioritize? And which should I not prioritize? It is simply not feasible to do everything everywhere. Yes, we should do these analyses continuously, but it is simply not possible to do them manually. It would be great to integrate automated holistic analyses into our CI/CD. DevSecOps is a major evolution of the cybersecurity practice. So, before going into practical details, let’s take a starting point from the overall business perspective: the overall requirements, challenges, and opportunities that DevOps and DevSecOps bring to organizations, CISOs, and DevOps teams in practice. Let us explore a typical illustrative example: an organization — DevOpsCo — has a DevOps way of working. It could have two, ten, fifty, or even hundreds of DevOps teams. Each team is developing and operating its part of the overall company system environment. It is not uncommon for one team to push several releases per day, and each release naturally has an impact on the security posture of the environment: not only the team’s own environment but also other teams’ environments and the total infrastructure. This small example in itself illustrates both how important it is to embed security into DevOps workflows to make DevSecOps practically viable and the magnitude of the challenge. But this dynamic is just one part of the challenge. “Sec” in DevSecOps is not a discrete step or phase, but an integrated part of the activities required to deliver software or a service in a secure fashion, as illustrated in the typical DevSecOps loop. Today, different activities and tools of the AppSec program typically attach to different phases of the DevOps loop. Security training of developers, design reviews based on threat modeling, design and code reviews, as well as SAST tools like SonarQube for source code inspection, are all part of the Plan and Code phases. In the Build and Packaging phases, you typically find security scanning for vulnerabilities in supply-chain dependencies through solutions from companies like debricked or Snyk.
Moving into the Test phase, security testing is performed and often automated with DAST tools, and from the Release phase onwards a set of more traditional cyber-operations tools is employed, including vulnerability scanners for the infrastructure, WAFs, and various types of log monitoring and correlation tools, including SIEMs. Interestingly, even a perfect secure development process will still not be a guarantee against breaches of the application after it has transitioned into a live, deployed state. The application context, such as the Identity and Access Management (IAM) configuration, will often be different from a test environment and is bound to change over time. Time will also bring changes to interdependent services and the discovery of new vulnerabilities, both inside the application and in the infrastructure on which the application depends. Time is clearly not on the defender’s side. Furthermore, continuous deployment into public cloud environments such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform creates an even more challenging situation, as both the control plane (asset management operations) and the data plane (the application and related service assets) are available over the Internet and often delegated directly to the DevOps teams. And, maybe most importantly, the different tools are to a large extent separate silos — silos that identify separate lists, most often long lists, of risks and vulnerabilities. This creates complexity, inefficiency, cost, and risk, and often slows down DevOps organizations. As described in the earlier sections, DevSecOps is a natural way forward for DevOps organizations. But it also imposes several quite significant challenges. To make DevSecOps practically viable, automated tooling has a key role. Today, companies typically leverage a number of automated tools for DevSecOps. However, they are to a large extent separate silos that identify separate lists, most often long lists, of risks and vulnerabilities. A key capability has been missing in the toolset: a tool that continuously answers the questions raised above: what is the holistic risk exposure of my high-value assets, which of all possible risk-reducing actions should I prioritize, and how can these analyses run automatically as part of CI/CD rather than manually. These capabilities empower DevOps teams to get continuous insight into key questions such as “Are we secure enough?”, “What are the weakest links?” and “What of all things possible should we do to improve our security posture?”. And they enable the CISO function to get an overview of, and to track, the security risk posture, with pinpointed insights when and where needed. So how can we address the challenges and get the capabilities needed? Historically, the answer has most often been to implement more generic guidelines on patching, authentication, etc — which means that you will typically overspend in lower-risk areas and underspend in high-risk areas — and/or to try to conduct these analyses manually — which then often turns into not doing them at all, or doing them only at too high a level. Now, new technology enables organizations to get these central capabilities through automated tooling.
By leveraging AI-based, automated attack simulations, organizations are able to cut through complexity, gain key insights, and take proactive action where it really matters. One leading company that is leveraging fully automated attack simulations is Klarna. Klarna is a payments company that is one of Europe’s largest banks and one of the world’s highest-valued and fastest-growing fintechs. Klarna leverages automated attack simulations to continuously manage its security risk posture in highly dynamic cloud environments. The approach Klarna uses consists of three steps. In the first step, the tool generates digital twin models of the systems in scope. The second step is to simulate thousands of attacks against the digital twins, capturing all possible ways attackers can potentially reach your high-value assets. The third step is to provide the user with key insights from the simulations: risk levels, key risks, and effective risk-mitigation actions. While simulation is probably one of the few ways to assess risk in a large-scale environment that is in continuous change, the key is to build the model continuously based on the real environment. Automation is an important leap towards moving away from subjective human assessment and the rigidity of strict formal security frameworks, and towards keeping security consistent. Through automated simulations, companies can take on the challenge of looking at the whole while having clear control of all the details and moving pieces. The simulation capability makes it easier both to see how changes in one team’s environment can affect others and to make assessments more consistent. In the end, it increases security where it really counts.
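To make the three-step approach described above concrete, here is a minimal, illustrative sketch of the kind of computation an attack-simulation step performs: a Monte Carlo walk over a toy “digital twin” graph of assets. All asset names, probabilities, and function names below are invented for illustration; this is not Klarna’s setup or any specific vendor’s simulation engine.

```python
import random
from collections import defaultdict

# Toy "digital twin": a directed graph of assets. Each edge is an attack step
# with an assumed per-attempt success probability. Every name and number here
# is illustrative, not taken from any real product or environment.
ATTACK_GRAPH = {
    "internet":     [("web_frontend", 0.9)],
    "web_frontend": [("app_server", 0.4), ("object_store", 0.2)],
    "app_server":   [("customer_db", 0.3), ("iam_role", 0.25)],
    "iam_role":     [("customer_db", 0.6), ("object_store", 0.7)],
    "object_store": [],
    "customer_db":  [],
}
HIGH_VALUE_ASSETS = {"customer_db", "object_store"}

def simulate_once(graph, start="internet"):
    """Walk the graph once, flipping a biased coin for every attack step."""
    compromised, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for target, p_success in graph.get(node, []):
            if target not in compromised and random.random() < p_success:
                compromised.add(target)
                frontier.append(target)
    return compromised

def risk_report(graph, runs=10_000):
    """Estimate how often each high-value asset is reached by an attacker."""
    hits = defaultdict(int)
    for _ in range(runs):
        reached = simulate_once(graph)
        for asset in HIGH_VALUE_ASSETS & reached:
            hits[asset] += 1
    return {asset: hits[asset] / runs for asset in HIGH_VALUE_ASSETS}

def mitigation_value(graph, source, target, runs=10_000):
    """Re-run the simulation with one attack step removed to rank a mitigation."""
    patched = {n: [(t, p) for t, p in edges if (n, t) != (source, target)]
               for n, edges in graph.items()}
    before, after = risk_report(graph, runs), risk_report(patched, runs)
    return {a: before[a] - after[a] for a in HIGH_VALUE_ASSETS}

if __name__ == "__main__":
    print("Baseline exposure:", risk_report(ATTACK_GRAPH))
    print("Value of hardening app_server -> iam_role:",
          mitigation_value(ATTACK_GRAPH, "app_server", "iam_role"))
```

Run after every deployment, a script along these lines could feed a CI/CD gate that fails the pipeline when the estimated exposure of a high-value asset crosses an agreed threshold (again, a sketch of the pattern rather than a prescribed implementation).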
2
My Quest for the Perfect Window Manager: A History in Screenshots (2004)
Welcome, gentle reader. Sit down, relax, and let me spin a tale for you. A tale of joy, of sadness, of passion and betrayal, a tale of....Linux Window Managers. (Yes, "over-dramatic" is going to be my keynote here.) My earliest introduction to the world of *nix Window Managers was Fvwm. The horror. The clunkiness. The ugliness. Oh, how I hated it. It began at university. If a workstation wasn't running its own commercial desktop, it was invariably running some old, pre-installed, unconfigured version of Fvwm. Occasionally it was Fvwm95, a superficial imitation of Windows. But it was on my own freshly installed Red Hat Linux system that I truly learned to loathe Fvwm. I remember Red Hat's Fvwm setup as a byzantine web of config files written in an indecipherable alien language--what I later learned to be M4 scripts. I have vague, nightmarish memories of hot-pink titlebars, fat blocky window borders, black and white load graphs, and garish pixmap buttons. And the puke-green emacs background (at this time, I was too naive to understand what Fvwm controlled and what Xdefaults controlled.) And the page-switch delay, just long enough to be annoying if you were trying to switch, and just short enough to be annoying if you weren't. And the insufferable sloppy focus. My first timid attempts to tame this monstrosity were soundly squelched. Red Hat's Fvwm was controlled, as mentioned, by a web of system files: it paid no attention to any puny .fvwm2rc in the user's home directory. After a few weeks of flailing, I gave up. I retreated to the comfort of the console, returning to X only when a particularly insatiable Netscape-craving arose. Fast-forward a year or so. I've gained a lot of confidence with the command line--I'm even doing a bit of *nix programming and scripting--but I still live in fear of Fvwm. But wait! My shiny new Red Hat 6.2 comes with two new alternatives: KDE and Gnome. I install KDE 1.1.2 and give it a whirl. My god, it actually looks good. It actually makes sense. Who ever thought a Linux window manager could do that? It's pointy and clicky, has a taskbar with launchers, icons with more than 4 bits in the palette, and window borders that aren't 3 inches thick. I click in a window, and it raises like it's supposed to. There are even plain-english config files to play with! KDE made me fall in love with Linux all over again. I could theme it, tweak it, and generally make it do whatever I wanted. I started with the GUI configuration utilities and worked my way into editing the config files by hand. (Alas, I have no extant KDE1 screenshots to offer.) My first impression of KDE2 was dismal: I compiled it with over-aggressive optimizations and ended up with a crashy, unstable desktop. This experience propelled me into a whirlwind tour of alternate window managers, which ended up being loads of fun and also gave me my first experience with tweaking WM source code. If memory serves, I sampled Enlightenment, Window Maker, and Afterstep briefly, spent a couple of weeks in BlackBox, and a couple more in wm2. wm2 was the subject of an ascetic streak, where, disgusted by the seeming excesses of KDE and Enlightenment, I reveled for awhile in a minimalist environment. It was also the first WM I hacked, thanks to its simple and well-written code. In due time, I tired of minimalism, and started to miss my friendly old KDE. I decided to give KDE 2.0 another chance. This time, I followed the advice of various net pundits and compiled it with more conservative settings. 
The result: a much more stable, though still slightly warty desktop. Ironically, it also ran faster than it did with high optimizations! Over time, KDE 2 and I became good friends. I never fell in love quite the way I did with KDE 1--it bugged me that KDE was growing more complex and less friendly to the casual hacker, and the more I grew as a programmer the more it bugged me--but we had fun. I really liked the new look, particularly the "marble" style and System++ window decorations. I used Konqueror heavily--it never seemed quite as zippy as Netscape 4.x, but it was certainly more modern and featureful. This phase lasted quite a while, and included another brief code-hacking expedition. Here's what my desktop looked like most of the time (click to enlarge): The titlebars in this screenshot are the result of the aforementioned hacking expedition. I tweaked the System++ plugin so that, like ModSystem, it uses titlebar-foreground instead of window-background as its main titlebar color. Result: it actually looks right :-) (The wishlist/patch for this was ignored and eventually discarded from the KDE bug database without comment--am I the only person who cares about System++?). For any who are interested, here's the patch as applied to KDE 2.2.2. One other notable feature of this screenshot is the heavy use of Copland Gnome icons, as distributed and tweaked for KDE by the creator of the Photon theme. My second departure from the KDE world came upon installing KDE 2.1. Unlike most of the world, I was underwhelmed and disenchanted. It was even more heavyweight than 2.0--it seemed like I sat at the desk for a full minute just waiting for KDE to start up! Off I went on my second WM tour, with an eye for the sleek, the fast, and the hacker-friendly. After a sampling of Sawfish and a second taste of Enlightenment and BlackBox, I settled on Window Maker. Window Maker had seemed unreasonably austere to me on my first try, and I also had bad memories of the original NextStep which inspired it. This time, I tried a little harder to get to know it, and dove into theming, menu editing, and of course, dockapps. This was the next long leg of my Window Manager trek. Window Maker never pleased me entirely; it always seemed a little too restrictive and too quirky (e.g. the resize mechanism, the lack of a visual pager, the "clip", the silly contortions necessary to get one icon to launch multiple xterms). I never really caught on to the Zen of Window Maker, I guess. But it was Good Enough(tm), and it had style. A well-configured Window Maker with lots of alpha-blended dock tiles and Largo icons looks very snappy indeed. Here's a shot showing one of my favorite Window Maker themes: So what finally led me away from Window Maker? It seemed the more I got to know it, the more antsy I became. The same tight vision that gave Window Maker an aura of style also made it inflexible and controlling. The developer community mirrored this. The typical mailing list reaction to any suggestion of change was not "show us the patch", but "over our dead bodies". The code itself was out of my league, too complex and multi-layered for my budding X-programming skills to work with. Attempts to make even small cosmetic changes failed. I went back to KDE for a little while, which by now was up to version 2.2, but quickly realized that I was heading in the opposite direction from where I wanted to go. 
It had become entirely too heavyweight for my PII 333 (still okay for a patient user, maybe--but I'm not patient :-), and besides, it seemed less casual-hacker-friendly than ever. The code was an enormous wriggling mass that took a full day to compile. Mysterious binary files like "ksyscoca" inhabited my ~/.kde. KDE 2.2 was eminently friendly and usable on a fast system, but it did not encourage dabbling beyond the GUI configuration dialogs. That's fine for many, but I was becoming more and more of a source-code-twiddling, config-file-editing junkie. I decided to go looking for a Window Manager that truly catered to the soul of a hacker. You see where this is headed, don't you? I decided it was time for me to face my old enemy, Fvwm. I had evolved quite a bit since our last meeting, into less of a passive user and more of a hacker, and had heard a lot of interesting rumors as well. I heard that Fvwm was boundlessly flexible, and had hidden delights to offer those who could tame it. I heard that it was scriptable beyond the dreams of mortals. I even heard that, despite the contrary evidence of almost every Fvwm screenshot in existence, it could actually look good! All of this turned out to be true. And so I came full circle. Now, let me play the role of proselytizer, and tell you some things you may not know about this underrated little WM. Fvwm can be just about anything you want it to be. The fat pink title bars and motif window decorations are not obligatory--but that's only the beginning. Want click-to-focus? You've got it. Want cute dockapps? They work just fine in FvwmButtons. Want fancy pixmaps in your titlebars? Go ahead. How about an MP3 playlist at the click of a mouse button, or a menu-based file browser? Write a Perl script and use PipeRead. Fvwm's dynamic menus are more powerful than those of any other Window Manager I've seen (save possibly AfterStep, its offspring). You can make Fvwm look like just about any other WM you want. It can give an almost perfect imitation of Window Maker, 'Doze, probably even BlackBox. But let me warn you: it doesn't want to. Fvwm is Fvwm. It has a soul of its own. It is not trendy. It is not l33t. It does not have a Vision. It doesn't even have a pronounceable name (although the F is pronounced "Feline" :-). It has a user base with lots of technical know-how but decidedly odd ideas about aesthetics. Fortunately, you're allowed to invent your own beauty. The soul of Fvwm is Frankenstein. Write a 20-page .fvwm2rc. Take the best elements from all the other WM's and discard the rest, and Fvwm will be happy, and so will you. Example: I recently surfed over to the website of a new trendy WM called FluxBox. I noticed an item in the feature list, "wheel scroll changes workspace", and thought "ah, neat idea". I pulled up my .fvwm2rc in emacs, and a few minutes later, my mouse wheel changed workspaces. That is the joy of Fvwm. The more I use Fvwm, the less patience I have with the limitations of newer, supposedly better Window Managers. I feel I'm a slave to the preferences, prejudices, and oversights of the author. Why can't I bind the keypad keys in KDE, without having to use a modifier? Why can't I ever have more than two titlebar buttons in Window Maker? What does BlackBox have against pixmaps? Why do so few WM's bother to implement a proper virtual desktop? You get the idea. Well, Fvwm goes the opposite route. You can have ten different titlebar buttons if you really want. Believe it or not, some people do. 
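Speaking of that mouse-wheel trick from a moment ago: for anyone curious what those "few minutes in emacs" actually amount to, here is a sketch of the sort of .fvwm2rc binding I mean -- the exact lines are one plausible way to do it, not a copy of my real config:

```
# Make the mouse wheel flip pages when the pointer is over the root window.
# Buttons 4 and 5 are wheel-up / wheel-down; "R" = root window, "A" = any modifiers.
Mouse 4 R A Scroll -100 0
Mouse 5 R A Scroll +100 0

# Prefer hopping between desks instead of pages? Swap in GotoDesk:
# Mouse 4 R A GotoDesk -1
# Mouse 5 R A GotoDesk 1
```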
Over time, I even grew to like a few of the same Fvwm hallmarks that had repulsed me in the beginning. Sloppy focus, automatic page-switching, and non-click-raises all won me over eventually. I now get claustrophobic in MS Windows without other pages to move to, impatient when I can't change focus with a nudge of the mouse or copy text in a window without raising it. Another point of interest: Fvwm's source code. I had heard ominous mentions of how old, crufty and complex it was, having been based on the original TWM, but when I actually went in and played with it, I found the code quite clear and easy to change. I was able to ameliorate one of the few aesthetic drawbacks of Fvwm that could not be fixed by .fvwm2rc alone: the lack of multiple-pixmap-themable titlebars. What's more, the developers warmly welcomed my patch (after my showing off a screenshot or two :-) and added it to CVS. Another such drawback--Fvwm's lack of PNG support--has also recently been addressed. With alpha-blending, no less! Oh--and Fvwm fully supports EWMH. The developer community is low-key, but active and enthusiastic in their work. This, for me, is a major plus, and stands in sharp contrast to the negative vibes on the Window Maker mailing list. Contrary to popular belief, Fvwm is far from a dead project. It isn't for everyone, but if you're the kind of person who prefers Mutt to Gmail, prefers typing to clicking, gets annoyed when programs write their config files without asking, and generally expects their WM to do what they say when they say without a fuss, then it might just be for you. And if you're not that kind of person, well...wait for it. Linux tends to have this effect on people :-) So, on to the next screenshots. The below two depict a configuration I used for quite some time, before finally discarding it in favor of my current setup. Pager, mini-xterm, and icon manager / tasklist on the upper left, gkrellm the uber-monitor at lower right, mini-snapshot of the current background in the pager. My theming setup was/is built around a glorified Perl script, fvwm-theme. Each theme specified a background, a window decor, a gkrellm skin, and an xterm theme, and changing these settings en masse or individually was a matter of a few mouse clicks. Having fallen head-over-heels for my old nemesis, I never parted ways with Fvwm again--but I did decide to give KDE 2.2.2 a whirl on the side. Background: I came home to Virginia for an extended visit with my parents, while my husband and I sorted out Issues with the Canadian Immigration office. It's a long story. Anyhow, my mother's shiny new 1 gig Athlon was soon running Linux (Slackware, my new distro of choice) alongside Windows XP, and given its horsepower, it seemed like a prime candidate for KDE. And indeed, with a bit of spit and polish (iKons, new splash screen. System++ patch), it ran well. Here's what it looked like: (The redheaded dolphin lover in the floral shirt is my mother.) Back home in Canada, I decided, with some trepidation, to try installing the latest KDE 3.0.1 on our old PII 333 (if only for my husband's sake--he likes doing his thesis work in *nix, but he really doesn't like endlessly fiddling with a window manager). So I did, and oy, it was slower than ever. Pretty, though. Here's what my setup for him looked like: I will always have a soft spot in my heart for KDE, I think. Like Fvwm, its developer community is active and friendly. 
KDE holds the original credit for taking away my fear and loathing of X, and doubtless that of many other Linux newbies. However, even my husband tired of its bloatedness eventually, and the way said bloatedness taxed the resources of our aging computer. This led to the creation of fvwm-desktop, a simple Windoze-esque desktop built on top of Fvwm. And now, what you've all been waiting for: my Fvwm Setup du jour. Despite the awesome compact utility of gkrellm, I ended up discarding it. While gkrellm looks just spiffy on space-age L33T desktops, the only time I run L33T desktops is when I'm making screenshots. Set against the naturey backgrounds and wood-backed xterms I usually favor, gkrellm sticks out like a sore thumb, no matter what skin it's wearing. So I replaced it with a handful of cute dockapps, and my desktop has been more appealing ever since. As you'll see, I also tired of the mini-wallpaper-in-the-pager trick. So, without further ado, three brand new screenshots: Want to learn more about my setup? I've documented the whole thing here. ETA: This page was written over ten years ago, but I still use Fvwm to this day (June 2014). And it still looks a lot like those final screenshots, except that I no longer use the email notification dockapp (I have a "message in a bottle" fall into my bubblemon instead) nor the sticky notes dockapp (I wrote my own reminder program), and my bubblemon looks perpetually shallow now since memory is cheap.
35
Show HN: Responsive NextJS Themes for Dashboards, Landing Pages and Blogs
Nextjs is a React-based framework for building static and dynamic websites and web applications. On the official website, Nextjs is described as a hybrid framework, combining static generation with server-side rendering. Because Nextjs is built on top of React, you can use any React libraries and React components, so you can build rich and interactive applications. Like React, Nextjs suffers from a lack of templates and themes for building landing pages. You can build one yourself from scratch, but you'll lose a lot of time designing and implementing a Nextjs template instead of growing your business. At Creative Designs Guru, we solve this issue by providing a list of Nextjs templates. We are a team of designers and developers who have created React landing page and homepage templates. Our Nextjs Templates include everything you need to build landing pages. We provide you with a set of React components (navigation bar, hero component, pricing card, testimonial component, etc.) that you can use in different use cases, and our Nextjs Themes can be easily customized to meet your needs. You can get your visitors to sign up for your newsletter, buy your product or service, or sign up for a trial version of your product. If you prefer other themes, SaaS Starter Kits are the perfect solution for full-stack templates. And if you are interested in building a Next.js SaaS, they help you quick-start your SaaS project with authentication and subscription payments.
1
Fed explores ‘once in a century’ bid to remake the U.S. dollar
The explosive rise of private cryptocurrencies in recent years motivated the Fed to start considering a digital dollar to be used alongside the traditional paper currency. The biggest driver of concern was a Facebook-led effort, launched in 2019, to build a global payments network using crypto technology. Though that effort is now much narrower, it demonstrated how the private sector could, in theory, create a massive currency system outside government control. Now, central banks around the world have begun exploring the idea of issuing their own digital currencies — a fiat version of a cryptocurrency that would operate more like physical cash — that would have some of the same technological benefits as other cryptocurrencies. That could provide unwelcome competition for banks by giving depositors another safe place to put their money. A person or a business could keep their digital dollars in a virtual “wallet” and then transfer them directly to someone else without needing to use a bank account. Even if the wallet were operated by a bank, the firm wouldn’t be able to lend out the cash. But unlike other crypto assets like Bitcoin or Ether, it would be directly backed and controlled by the central bank, allowing the monetary authorities to use it, like any other form of the dollar, in its policies to guide interest rates. The Federal Reserve Bank of Boston and the Massachusetts Institute of Technology’s Digital Currency Initiative are aiming next month to publish the first stage of their work to determine whether a Fed virtual currency would work on a practical level — an open-source license for the most basic piece of infrastructure around creating and moving digital dollars. But it will likely be up to Congress to ultimately decide whether the central bank should formally pursue such a project, as Fed Chair Jerome Powell has acknowledged. Lawmakers on both sides of the aisle are intrigued, particularly as they eye China’s efforts to build its own central bank digital currency, as well as the global rise of cryptocurrencies, both of which could diminish the dollar’s influence. Democrats have especially been skeptical about crypto assets because there are fewer consumer protections and the currencies can be used for illicit activity. There are also environmental concerns posed by the sheer amount of electricity used to unlock new units of digital currencies like Bitcoin. Warren suggested the Fed project could resolve some of those concerns. “Legitimate digital public money could help drive out bogus digital private money, while improving financial inclusion, efficiency, and the safety of our financial system — if that digital public money is well-designed and efficiently executed,” she said at a hearing on Wednesday, which she convened as chair of the Senate Banking Committee’s economic policy subcommittee. Other senators highlighted the potential for central bank digital wallets to be used to deliver government aid more directly to people who don’t have bank accounts. A digital dollar could also be designed to have more high-tech benefits of some cryptocurrencies, like facilitating “smart contracts” where a transaction is completed once certain conditions are met. Neha Narula, who’s leading the effort at MIT to work with the Boston Fed on a central bank digital currency, called the project “a once-in-a-century opportunity to redesign the dollar” in a way that supports innovation much like the internet did. 
Still, there are a slew of unanswered policy questions around how a digital dollar would be designed, such as how people would get access to the money, or how much information the government would be able to see about individual transactions. The decision is also tied to a far more controversial policy supported by Democrats like Warren and Senate Banking Chair Sherrod Brown to give regular Americans accounts at the Fed. “What problem is a central bank digital currency trying to solve? In other words, do we need one? It’s not clear to me yet that we do,” Sen. Pat Toomey (R-Pa.) said. “In my view, turning the Fed into a retail bank is a terrible idea.” And, “the fact that China is creating a digital currency does not mean it’s inevitable that the yuan would displace the U.S. dollar as the world’s reserve currency,” he said. For their part, banks fear a Fed-issued digital currency could make it easier for customers to pull out large amounts of deposits and convert them to digital dollars during a crisis — the virtual equivalent of a bank run — putting financial stress on their institutions and making less money available to provide credit for people, businesses and markets. It could also potentially deprive them of customers, something the lenders say would interfere with lawmakers’ vision of increased financial inclusion. “While it is true that deposit accounts are often the first step towards inclusion, the benefits of a long-term banking relationship go well beyond a deposit account,” the ABA said in its statement. “The same is not true of a [central bank digital currency] account with the Federal Reserve, which would not grow into a lending or investing relationship.” The Bank Policy Institute, which represents large banks, has also argued that many of the benefits of a digital dollar are “mutually exclusive (because they are predicated on different program designs) or effectively non-existent (because the program design that produces them comes with costs that are for other reasons unbearable).” “The decision on whether to adopt a central bank digital currency in the United States is appropriately a long way off,” BPI President and CEO Greg Baer said. “There are also complex and serious costs that will need to be considered.” But many lawmakers think it’s worth the effort to look into it. “The Federal Reserve should continue to explore a digital [currency]; nearly every other country is doing that,” Sen. Bill Hagerty (R-Tenn.) said at the hearing, citing the risk for the U.S. to lose its ability to deploy economic sanctions as effectively with decreased usage of the dollar.
1
Authors and 'Progressive' Book Publisher Sue Elizabeth Warren over Free Speech
A progressive publishing company and the authors of a book critical of the U.S. government's response to the coronavirus emergency have sued Sen. Elizabeth Warren for allegedly attempting to pressure Amazon.com into yanking their title, The Truth About COVID-19: Exposing the Great Reset, Lockdowns, Vaccine Passports, and the New Normal. Joining Chelsea Green Publishing and authors Dr. Joseph Mercola, an osteopath, and Ronnie Cummins in the suit against Warren is Robert F. Kennedy Jr., a well-known vaccine critic who wrote the foreword to the book. The lawsuit is based on a lengthy letter Warren wrote to Amazon CEO Andy Jassy accusing the company he runs of "peddling misinformation" by labeling the book a "best-seller" and allowing it to be at the top of results when consumers search for information about COVID-19. Chelsea Green Publishing was founded in 1984 to promote "progressive politics" along with "sustainable living…and, most recently, integrative health and wellness," according to its website, and its titles have earned accolades from The New York Times and several other outlets. Not this time, though, and in taking on Warren, the self-described "progressive" company is attacking one of its own, a powerful U.S. senator who's also known for her progressive politics. The lawsuit filed in Seattle by Arnold & Jacobowitz cites Bantam Books v. Sullivan, a 1963 case in which the U.S. Supreme Court found that letters from lawmakers complaining of books constituted "thinly veiled threats" of repercussions, illegal "prior restraint" and the "suppression" of free speech, even if the letter's author lacked "power to apply formal legal sanctions." Attorney Nathan Arnold claimed that the most "egregious" thing in Warren's letter is her complaint that Mercola "asserts that vitamin C, vitamin D, and quercetin…can prevent COVID-19 infection," noting that the FDA disagrees. As of the publication of this story, Amazon still sells both the book and Mercola's supplements at its website. Newsweek could not find information to confirm that the FDA disagrees. "The CDC's own science and data shows that vitamin D deficiency is a major issue when it comes to fighting COVID, so the book agrees with the CDC and Warren disagrees with both of them," Arnold said. Newsweek could find no conclusive scientific evidence to back up this claim. "If unpopular speech can be regulated, then you guys in the media are next, frankly," Arnold told Newsweek. "If the First Amendment doesn't protect political speech, it's basically gutted, and that's not a partisan position. Ironically, my partner, the other guy on the name of the law firm, has an 'Elizabeth Warren' bumper sticker on his Toyota. We're a left-leaning law firm, and I'd be shocked if less than 90 percent of our firm is vaccinated." The lawsuit notes that Warren's letter suggests that selling the book is "unethical, unacceptable, and potentially unlawful," though the attorneys wonder "what laws the sale of the Truth About COVID-19 is potentially breaking."
Beyond monetary damages, the lawsuit is asking the court to declare Warren's conduct to be "unlawful and unconstitutional" and to demand that she issue a "public retraction of her letter." "It was Warren's intention in publishing her public letter to Amazon to cause Amazon and other booksellers to censor The Truth About COVID-19," the lawsuit argues. "The censorship Warren intended to bring about included demoting "The Truth About COVID-19, concealing it from users, and/or ceasing to sell it altogether." The lawsuit also notes that Warren, on a May 13 appearance on The Late Show With Stephen Colbert, threatened certain corporations with billions of dollars in additional taxes, adding, "Amazon, I'm looking at you," which the suit claims is a "thinly veiled threat" meant to amplify her letter, which came on September 7. Three days after Warren's letter, Chelsea Green Publishing received notice that its book would temporarily no longer be sold as an e-book by Barnes & Noble, according to the lawsuit, and on October 1, Amazon told the publisher it would no longer run ads promoting the book. Warren didn't respond to a request for comment. If she does, this article will be updated. While Warren's letter refers primarily to The Truth About COVID-19, she also complains about other titles sold on Amazon, including Reversing the Side Effects of the COVID-19 Vaccine and Ivermectin: What You Need to Know—a COVID-19 Cure. Of the former, she objects to "falsehoods" that the vaccine is "making people sick and killing them," while she says the latter touts a drug that "is used to treate (sic) parasites in livestock" (her letter does not note that a human version of ivermectin has been available for four decades). Warren's letter points out that Amazon has previously removed books "that frame LGBTQ+ identity as a mental illness" and books "linking autism to childhood vaccines," then she asks Amazon to modify its algorithms so that they no longer direct consumers "to books and other products containing COVID-19 misinformation." Lawyers for the plaintiff argue in the lawsuit that the book, which has become a Wall Street Journal and USA Today best-seller, "expresses viewpoints, ideas, opinions, facts and factual hypotheses about the pandemic that Senator Warren and many others in her political party not only disfavor but have systematically sought to suppress." "What Warren did is far too close to a digital version of book burning," Arnold said. Mercola told Newsweek that 250,000 copies of his book were sold and it reached the No. 1 spot at Amazon prior to Warren's letter, though after the letter and some alleged changes to how the online retailer presented the title, it dropped out of the Top 100. "Senator Warren's letter to Amazon is unlawful and an egregious example of the dangerous censorship movement that is building among tyrants that are supposed to be defending Americans," Mercola told Newsweek. Mercola also said his book speculates that the coronavirus emanated from a lab in Wuhan, China, while Warren and others from her party had previously dismissed the notion as a conspiracy theory, and he says evidence points to him being right and them being wrong. Warren's letter also complains that the book predicted that pandemic restrictions (presumably mask-wearing, proof of vaccinations and social distancing) would become permanent, even though the authors' statement included the word probably and the jury is still out as to how accurate or inaccurate their opinion will ultimately be. 
"If Americans lose free speech, we lose everything," Mercola said. "Most Americans clearly understand this and know if you truly defend free speech, you must defend the speech of those you disagree with."