1
Crypto collective raises $40M to buy rare copy of U.S. Constitution
It started as something of a joke. A crypto collective, calling itself ConstitutionDAO, popped up on Nov. 14 to announce it would be bidding in a Sotheby’s auction for a historic first printing of the U.S. Constitution, and that it planned to raise the money to do so via crowdfunding. But in a world where meme stocks and Dogecoin have soared, the crypto community rallied to the cause. By Monday, the group had raised $3 million. By midday Thursday, with less than seven hours left before the auction, the fund had topped $40 million. That makes the collective a likely shoo-in to win the auction: Sotheby’s initially estimated the document would fetch between $15 million and $20 million. This is the first time in 33 years that one of the 13 copies of the Official Edition of the Constitution surviving from a printing of 500 issued for submission to the Continental Congress will be put up for auction. Exactly how much the group has raised is still something of a mystery; the number isn’t being disclosed until after the project concludes, according to the core team that’s running it. ConstitutionDAO is a decentralized autonomous organization (DAO), a type of group that has become increasingly popular in cryptocurrency circles for promising shared management of organizational assets and a democratized voting structure that dictates how it will operate. The group says anyone who contributes to the fund will share ownership of the Constitution and have a vote in the decision about what to do with it. That could range from donating it to the Smithsonian Institution to creating NFT artwork from the document. ConstitutionDAO says that, should it win the auction, it hopes to find a partner organization that will publicly and freely display the document to visitors. If the group loses, contributors will get their money back, minus unspecified fees. The auction is scheduled to take place at 6:30 p.m. ET on Thursday, Nov. 18.
19
Exalting Data, Missing Meaning (2017)
Science developed out of philosophy. Before we had Copernicus’s planetary tables or Newton’s equations of motion, we had Aristotle’s rhetoric. That medieval natural philosophy was wrong, but it still made useful predictions. I’ve spent my career making things in the worlds of consumer products and education, and in both domains, I’ve observed a common failure mode in decision-making: an overriding obsession with data—with appearing scientific—and an associated repudiation of philosophy. That’s a totally appropriate obsession in some fields, like manufacturing, transportation, and aviation. But in consumer products, education, and so many other domains involving the messiness of humanity, the data obsession falls prey to hidden errors and distorts our true goals. Worse, it deprives us of truly meaningful insights that are available via philosophy, intuition, and stories, but not yet fully explicable through quantitative systems. This danger lurks in domains that have not yet been systematized. When it comes to people, we lack Newton’s equations of motion. Actually: we don’t even know what the equivalent equation should be measuring. Even if we did, we probably don’t have instruments that can measure those quantities. Can there ever be equations which can usefully describe these phenomena? I think so, but I don’t know we can even be sure of that. Until we build more powerful explanatory theories of these domains, we must respect the role of philosophy and beware the dangers of playing scientist. Let’s talk about test scores. Have you ever gotten a fine grade in a class—say, calculus—but later felt like you didn’t really understand what was going on? That you can follow the steps you’d learned to solve problems like ones your class tackled, but you can’t explain why they worked or apply them in new contexts? This experience seems totally ubiquitous! So: how much do you trust test scores for making decisions about educational systems? In fields like education and design, we measure only indirect proxies: page clicks, time on site, changes in test scores, survey responses, and so on. Then we try to make decisions with those measurements. This is like wiggling a rod, attached to a complex set of gears, attached to the thing we want to measure, which in turn is attached (in mysterious ways) to the thing we’re actually measuring. When we don’t truly understand the mapping between what we really want to know and those proxies, we easily miss important consequences. Hours of practice worksheets at Kumon might make a child great at arithmetic, but what does it do to their curiosity? To their desire to learn independently as an adult? When Spotify opts you in by default to noisy push notifications (“the Beatles are now on Spotify!”), they might increase some engagement score, but they also annoy their users. That annoyance may not show up in any dashboard: maybe users keep using the service exactly as much, but when some PR fiasco blows up the following year, they’re less inclined to take Spotify’s side. Because we don’t understand this mapping, we have to make many more guesses. And with every guess, there’s some chance that we see some result purely by chance. Statistical hypothesis testing only has meaning when you account for all the hypotheses you’ve tried. Alternately, sometimes people don’t even make guesses, and they just go hunting in the data. If you go looking for patterns in a sufficiently large data set, you’ll certainly find some! 
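To make that concrete, here is a minimal sketch (my own illustration, not from the essay) of how easily pure noise yields “significant” findings when many hypotheses are tested: it correlates twenty unrelated, randomly generated metrics against a random outcome and counts how many clear the conventional p < 0.05 bar.

```python
# Illustrative only: hunting for patterns in pure noise.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_users, n_metrics, alpha = 500, 20, 0.05

outcome = rng.normal(size=n_users)                # the thing we care about: pure noise
metrics = rng.normal(size=(n_metrics, n_users))   # 20 unrelated proxy metrics

p_values = [pearsonr(m, outcome)[1] for m in metrics]
spurious = sum(p < alpha for p in p_values)

print(f"'Significant' correlations found in noise: {spurious} of {n_metrics}")
# Accounting for every hypothesis tried (e.g. a Bonferroni-corrected threshold
# of alpha / n_metrics) is what keeps such chance findings in check.
print("Survive correction:", sum(p < alpha / n_metrics for p in p_values))
```

Run it with a few different seeds and a “winning” metric turns up more often than not — exactly the dynamic the comic referenced below dramatizes.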
(“Significant” from XKCD by Randall Munroe) We know that correlation doesn’t imply causation. Sometimes, you’ll find strong correlations by random chance—just like in the comic above: the data suggest that this shade of blue causes people to engage the most! But it was just random noise, and if you recolor more elements in that shade of blue, you won’t actually make anyone happier, other than perhaps your own community’s navel-gazers. Another danger is that your correlations may be masking more important underlying phenomena. Say you want people to share their big news on your social network first. You can’t measure that directly, but you have a proxy metric: photos included with those posts have EXIF data indicating when they were taken. You decide you want to minimize the time between the photo being taken and the photo being shared. To figure out how to proceed, you hunt for correlations in your logs of users’ behavior. Say you discover a strong correlation between how quickly a photo uploads and how likely it is that users share photos of big news immediately. You tell your engineers to focus on optimizing upload time! You ship your optimized photo uploader… but you don’t see any benefits in the metric you were measuring. Turns out, you didn’t see this correlation by chance: you saw it because people who can afford better cellular connections get faster upload times and are more likely to upload photos while they’re out and about, as opposed to waiting until they get to unmetered WiFi. Photo upload time was itself just a proxy measure of the true root cause. Even if we’re pretty sure we don’t have any hidden causes or consequences lurking, and we carefully account for all our hypotheses, we must remember that these are proxies we’re optimizing. As the situation varies, these proxies’ connection to your true goals may taper—or reverse! Vitamin C can prevent disease, when taken in small quantities, if your diet didn’t already contain much of it. But that doesn’t mean you should take a hundred times as much seeking a hundred times the benefit (as double-Nobel-laureate Linus Pauling did): you’ll see no marginal benefit, and you’ll just excrete it all. In the worst cases, fixation on these proxies can create perverse incentives. Say that you want to prepare students for a life of solving challenging problems. It’s true that minimizing missed days of school may help make that happen—but past a certain point, other factors will dominate. If you optimize too aggressively for zero missed days of school, you may easily reverse the correlation, disrupting students’ family lives or creating an atmosphere which makes students resent their autocratic school. If you make a product, total usage time might seem like a good proxy for customer joy. But if you take that metric too seriously, you’d be punished for making a change which would help a customer accomplish a given task in less time than before. There’s a subtler issue with exalting data in these domains—one my research partner May-Li Khoe has patiently explained to me over and over again. If you try to design something with human meaning by steering toward maximum impact on business outcomes, you’re very likely to end up with little human meaning… which in turn will likely harm whatever business outcomes you’re measuring in the long term. Similarly, “teaching to the test” sucks the fascination and participation out of classrooms in exactly the way you would expect.
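Returning to the photo-upload scenario above: a toy simulation (my own, with made-up numbers) shows how a hidden factor can manufacture a convincing correlation between a proxy and an outcome, and why “optimizing” the proxy then does nothing.

```python
# Illustrative only: a hidden cause drives both the proxy and the outcome.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

connection_quality = rng.uniform(0, 1, n)                           # hidden cause
upload_seconds = 10 - 8 * connection_quality + rng.normal(0, 1, n)  # proxy metric
shares_big_news = rng.uniform(0, 1, n) < connection_quality         # true outcome

# Observed in the logs: faster uploads go hand in hand with immediate sharing.
print("corr(upload_seconds, shares_big_news):",
      round(float(np.corrcoef(upload_seconds, shares_big_news)[0, 1]), 2))

# "Ship the optimized uploader": halve upload times without touching the
# hidden cause. Sharing behavior, which in this toy model depends only on
# connection quality, does not move.
upload_seconds_after = upload_seconds / 2
shares_after = rng.uniform(0, 1, n) < connection_quality
print("share rate before:", round(float(shares_big_news.mean()), 3),
      "| after:", round(float(shares_after.mean()), 3))
```

The broader point—that proxy metrics can quietly detach from the meaning they stand in for—is the one the rest of the essay develops.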
This talk from Frank Lantz covers the issue wonderfully in game design (this quote at 33:30; my thanks to Bret Victor for the pointer): The dilemma of quantitative, data-driven game design…. So here’s an analogy: Imagine you have a friend who has trouble forming relationships… “I don’t know what I’m doing wrong. I go on a date, and I bring a thermometer so I can measure their skin temperature. I bring calipers so I can measure their pupil, to see when it’s expanding and contracting…” The point is, it doesn’t even matter if these are the correct things to measure to predict someone’s sexual arousal. If you bring a thermometer and calipers with you on a date, you’re not going to be having sex… Imagine that two teachers have exactly the same measured impact on their class’s test scores. How likely is it that they have the same impact on creating empowered thinkers? You decide to adjust some variable because in the past, it correlated highly with increased product usage. How likely is it that this change better solves a meaningful problem for the user? We’ve seen that there are plenty of dangers in making decisions based primarily on indirect measures with hazy connections to our true goals. Yet clearly, great teachers and great designers do operate effectively in these unsystematized fields! They have insight; they have intuition. These come from an internalized philosophy about the field, drawn from experience, observation, and stories. Yes, their philosophies are imperfect; and no, they can’t necessarily give you a set of calipers you can use to make your own decisions. But if you ask about one student interaction in particular, or one product detail in particular, they can often explain in hindsight why their philosophy pushed them in one direction or another. Listen enough, and you might build some intuition of your own. This is not just luck or some kind of confirmation bias—there’s an underlying consistency to these experts’ taste. It’s clearly visible even if neither you nor they can quantitatively describe how they’re doing what they’re doing. Great teachers sure do manage to consistently be great teachers, in a way others consistently recognize, even without a dashboard and A/B tests. Of course, we might have to watch for a while to see that an expert delivers insights consistently, as opposed to by chance—that’s why knowledge worker interviews are so hard!—but it’s clear that some experts’ ideas are more consistently successful than others. That consistency is what meaning is. How do you know your house exists? After all, you don’t experience it directly: your contact with it is mediated by all kinds of layers of fuzzy visual processing and your own faulty memory. It exists because it’s reliably where it was last time. It exists because when you’re inside it, you consistently see the same imagery, with shadow angles mediated as you expect by the seasons. It exists because others can talk to you about your house and say things, interpreted through a winding auditory system, that somehow match your own fuzzy perceptions. It exists because your fingers can feel the shape of the house number on your door, which matches the shape on that lease you remember signing long ago. The same logic tells us that when an expert consistently makes decisions broadly regarded as successful, and can explain their philosophy with rhetoric that makes intuitive sense, there is probably a there there. 
Your house is more systematized—we can precisely measure its height, draw blueprints, predict its mass—but society could still effectively talk about houses before we had any of those tools. Until we discover those tools (and the questions we want to ask with them!), all we’ve got is tradition, expertise, rhetoric, philosophy. If we listen with balanced skepticism and curiosity, those can be powerful tools themselves. I don’t need to preach so strongly. In practice, we usually can’t ignore domain philosophy and experts’ intuition anyway. Meaningful philosophy is meaningful—so even if we say we’re throwing it out, our intuition often remains entangled with our decisions. I see this all the time in product decisions. For instance, someone might believe that sign-up walls make for a bad product for a variety of philosophical reasons, but they justify this decision outwardly by pointing to some data from one product’s blog about their A/B test on the subject. That data is not the reason they decided to ditch sign-up walls. It’s just the reason they’re giving to others (and often, themselves) about why they made the decision. This behavior represents a sort of homage to science… while simultaneously violating its core principles. In the education space, people are very excited about growth mindset interventions. The rough idea: if you can persuade a child that intelligence can grow with practice and hard work (just like their muscles), then they’ll actually perform better in school. The recent enthusiasm for interventions in this space follows a series of randomized controlled trials documented in studies by Carol Dweck and her team at Stanford. These interventions probably are effective! But: the field’s quantitative results are actually quite modest in effect size. The studies alone can’t justify the magnitude of the excitement about this topic; that follows the magnitude of people’s pre-existing intuitive beliefs in these interventions. The problem is that when the education community talks about this topic, it primarily justifies growth mindset interventions with these studies. This type of motivated reasoning corrupts the dialog around decision-making. We should use provisional data like this to support—not supplant—our philosophies. When two people disagree philosophically about an issue in an unsystematized domain, but allow only quantitative arguments, they end up fighting a proxy war through data weaker than their own beliefs. Worse: if we ever do invent powerful predictive systems for these fields, we’ll need our scientific wits about us, unsullied by post-hoc spin. I hope it’s clear that I’m not arguing for us to generally abandon data and systematic thought. This scientistic obsession is a reasonable defense mechanism! After all, before precise measurement, physicists used to debate with rhetoric, and we ended up with phlogiston (i.e. things burn because they contain an element called phlogiston; phlogiston is lost to the air when a thing burns; things can’t burn inside a jar because that air can’t absorb any more phlogiston). It’s in fields without reliable systems that we can’t measure our way to understanding. Building systems in those fields is a critical project, and progress can be made. Meta-analysis and multitrait-multimethod tests have certainly helped us lay some foundations. Yet while fields’ systems are under construction, we must be careful not to put too much weight on them. They’re not yet structurally sound. 
Intuition, philosophy, and expertise deliver all kinds of useful tentative explanations. If we monitor their predictions over time, we’ll discover limitations, and our theories will evolve. All the while, we’ll spot patterns, incorporate provisional systematic concepts, and fluidly evolve our beliefs, taking the best evidence however it comes. Joy, belonging, and empowerment may live in a “qualitative black box,” but we can still produce explanations for how they arise. Those explanations may well involve measurable inputs and outputs. But if we insist on explaining joy through, say, engagement time and Net Promoter Scores, we’ll get exactly as much joy as we deserve.
1
Software Testing Is Tedious. AI Can Help
These days, every business is a software business. As companies try to keep up with the rush to create new software, push updates, and test code along the way, many are realizing that they don’t have the manpower to keep pace, and that new developers can be hard to find. But many don’t realize that it’s possible to do more with the staff they have by making use of new advances in AI and automation. AI can be used to address bugs and help write code, but its greatest time-saving opportunity may be in unit testing, in which each unit of code is checked — tedious, time-consuming work. Using automation here can free up developers to do other (more profitable) work, but it can also allow companies to test more expansively and thoroughly than they would have before, addressing millions of lines of code — including legacy systems that have been built on — that may have been overlooked. In software development, speed is king: whoever can roll out bug-free updates the fastest wins the market. While tech companies already know this, the rest of the business community is quickly catching on. Senior leaders are recognizing that their businesses — whether their primary industry is car manufacturing or food service or finance — are also becoming software businesses. Software now controls factories, manages inventory, trades stocks, and increasingly is the most important interface with customers. But if software is the key to staying competitive, companies need to maximize the productivity of their expensive and scarce software developers. Automation of time-wasting tasks is the quickest way to do so. Take a look at the user interface of new car entertainment systems — most look pretty much the same as they did in cars from five years ago. Most drivers prefer Google Maps over their car’s own map system for its superior user interface and up-to-the-second accuracy. Food service companies continue to waste food because they are unable to predict demand. Such examples are everywhere, but not because better solutions don’t exist. Projects are backlogged and subject to triage as developers work to keep up. As companies race to catch up, however, they’re also quickly learning a hard second lesson: there are not enough software developers available to write all the necessary code, and demand is only going up. Writing software requires not just many hours of painstaking work handcrafting millions of lines of code, but also time to test that code. Developers surveyed in 2019 said they spend 35% of their time testing software. As more companies move ahead with digital transformations, workloads for developers are rising and qualified staff is harder to find. Because companies can’t simply conjure more developers, they’re left with one choice: find a way to do more with the staff they have. That may be easier to accomplish than it sounds. Few C-suite executives understand the inefficiencies buried in their software development processes and how addressing those inefficiencies can significantly sharpen their competitive edge. With advances in artificial intelligence and greater automation of the software creation process, it’s increasingly possible to relieve developers of the important but routine and repetitive tasks that currently take up as much as half their time — tasks like writing unit tests, which verify that code behaves as expected. CEOs and CTOs should ask how often their organization deploys software.
If it’s only a few times a year or less, they likely need an automated software pipeline to stay competitive. Competitive businesses understand that, every year, more parts of that pipeline become ripe for automation — and right now, the ripest is testing. Not all of the software development workflow can be automated, but gradual improvements in technology have made it possible to automate increasingly significant tasks: Twenty years ago, a developer at Sun Microsystems created an automated system — eventually named Jenkins — that removed many of the bottlenecks in the continuous integration and continuous delivery software pipeline. Three years ago, Facebook rolled out a tool called Getafix, which learns from engineers’ past code repairs to recommend bug fixes. Ultimately these advances — which save developers significant time — will limit failures and downtime and ensure reliability and resilience, which can directly impact revenue. But as AI speeds up the creation of software, the amount of code that needs to be tested is piling up faster than developers can effectively maintain it. Luckily, automation — and new automated tools — can help with this, too. Historically, key tasks that require developers to manually write code have been harder to automate. For example, unit testing — in which the smallest discrete pieces of code are checked — has become a cornerstone of enterprise software, and it is another common bottleneck that only recently has become possible to address with automation tools. Unit tests are written and run by software developers to ensure that a section of an application behaves as intended. Because unit tests run early and quickly, at the time the code is being written, developers can fix problems as they write code and ship finished software much faster as a result. But writing unit tests is a tedious, error-prone, and time-consuming exercise that takes developers away from their more creative work — work that also makes money for the business — as testers comb back over their colleagues’ work. And testing is, in many ways, more labor intensive than software construction. For every unit of software, tests must be written for performance, for functionality, for security, and so on. It’s a $12 billion industry, yet nearly all of that money is spent on manual effort, much of it outsourced. Here’s where automation comes in. Algorithms — whether developed internally or in readymade tools — can write tests far faster than human developers and automatically maintain the tests as the code evolves. What’s more, the automated tests can be written in a way that humans can easily understand. This represents a remarkable opportunity to save skilled labor when applications these days can involve tens of millions of lines of code. Adopting this kind of automation offers companies a few significant advantages. First, it allows for testing that simply wouldn’t have happened before. More than simply replacing labor, automation can do necessary work that many companies are currently overlooking because it’s too labor intensive. Many of the services and applications that now power the world are massive in scale. No one person has a complete vision of everything that’s going on. Companies have reams of legacy code that has never been properly unit tested. As the code evolves, quality issues develop, but the companies can’t afford to rewrite or start over.
Without good tests that run early, it is very easy to introduce new bugs when iterating and upgrading software, requiring a huge, time-consuming effort to find and fix them later on — which limits how often the code can be released. Consider a case from banking. Hundreds of millions of lines of code run the biggest banks in the world. For banking applications developed entirely in-house, conflicts can arise as the software evolves, particularly when companies are shipping new versions faster. Consumers have come to expect automatic updates and increasing functionality over time, so many banks are adopting continuous integration and continuous delivery, shrinking the turnaround time for developing a new feature or making a change to the code from months to minutes. To address this, banks such as Goldman Sachs have started using AI to automate the writing of unit tests, and other financial institutions will likely follow. Second, it allows companies to push new software and updates more often. Data collected by the authors of Accelerate, the bible for this model of software development, showed that organizations that push code more often also have one-fifth the failure rate and are 170 times faster at recovering from software downtime. Finally, the time saved by developers can be spent solving more challenging problems and thinking up new ways to make users happier. A less obvious benefit is that it gives developers the breathing space to deal with unplanned work, changes to the plan to meet customer needs, or improvement work. This helps employers retain engineering talent, but it also means developers can react more quickly. Automation is coming to all parts of the software development process, some sooner than others — and as AI systems become increasingly powerful, the options for automation will only grow. OpenAI’s massive language model, GPT-3, can already be used to translate natural human language into web page designs and may eventually be used to automate coding tasks. Eventually, large portions of the software construction, delivery, and maintenance supply chain are going to be handled by machines. AI will, in time, automate the writing of application software altogether. For now, though, CEOs and CTOs should look to the areas that can currently be automated, such as the writing of unit tests and other low-level but critical tasks. And they should stay on the lookout for other areas where they can eventually do the same as technology advances. Finally, leaders need to build these expectations into long-term business plans, because companies that don’t are headed for a very tight bottleneck.
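To ground the term for non-developers: a unit test exercises one small piece of code in isolation and states what it should do. The sketch below is my own generic example in Python with pytest — not code from any of the companies mentioned — but automated tools aim to generate and maintain tests of roughly this shape.

```python
# test_discount.py — run with `pytest`. A hand-written example of the kind of
# small, focused check the article describes; the function is hypothetical.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_basic_discount():
    assert apply_discount(100.0, 20) == 80.0


def test_zero_percent_changes_nothing():
    assert apply_discount(59.99, 0) == 59.99


def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```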
1
How to Reignite Skills as a Big Data Analyst in 2021
According to BlueVenn, around 72 percent of marketers say data analysis is crucial to thriving in today’s competitive business landscape. Data analysis has become one of the major tools a business can use in its market research — for example, whenever a company is preparing to launch a product or service. As a result, the need for big data analysts is on the rise. With the challenges the industry is currently facing, you might be wondering what the field looks like for someone trying to get into big data. Will these professionals remain in demand? How has COVID-19 affected market demand? The global big data market will be worth USD 229.4 billion by 2025, says MarketsandMarkets, and demand for such professionals shows no sign of ceasing; every industry is looking to hire from the field. More importantly, the 2018 Future of Jobs Report positioned data scientists and data analysts among the top emerging roles through 2022. As big data continues to shape the business world, tech professionals can also look for opportunities in roles such as database administrator, chief data officer, data architect, BI analyst, data analyst, or business analyst. However, there are certain skills you need to acquire before getting started, and anyone looking to start a data analyst career should weigh the relevant skills and career paths before deciding. Below are the core skills a data analyst needs to start learning today, covering both major areas — technical skills and soft skills — needed to land a job in the big data field. Python is a statistical programming language often used in data analytics; learning it will also give you better insight into areas like web scraping, developing web applications, and data gathering, and having these skills may bring you closer to landing your first job. R is great for ad hoc analysis and for exploring multiple data sets, while Python is ideal for predictive modeling and more advanced analyses. Besides technical skills, candidates are also expected to have soft skills. A career in the big data field can begin with obtaining knowledge and skills through a data analyst certification program. These certifications provide not only the necessary skill set but also the up-to-date knowledge organizations are hiring for. There are multiple areas in which you can grow a big data career. However, if you’re looking to shift careers and change your present job role, you might want to consider newer data analyst paths — for example, big data engineer, currently the most sought-after job role in the data realm, with an average salary of around USD 150,000 per annum. But to earn lucrative compensation, you still need to constantly upgrade your skills. Now might be an ideal time to start a career in big data.
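As a concrete taste of the Python skills mentioned above, here is a minimal, hypothetical sketch of the kind of ad hoc analysis a data analyst does with pandas; the file name and column names are placeholders I made up, not anything from the article.

```python
# Hypothetical example: quick exploratory analysis of a sales extract.
import pandas as pd

df = pd.read_csv("sales.csv", parse_dates=["order_date"])  # placeholder file

# First look: size, column types, and missing values.
print(df.shape)
print(df.dtypes)
print(df.isna().sum())

# Revenue by product category, highest first — the kind of summary that feeds
# a market-research or launch decision.
revenue = (df.assign(revenue=df["units"] * df["unit_price"])
             .groupby("category")["revenue"]
             .sum()
             .sort_values(ascending=False))
print(revenue.head(10))

# Monthly volume trend.
monthly = df.groupby(df["order_date"].dt.to_period("M"))["units"].sum()
print(monthly.tail(12))
```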
12
Tobias Bernard Explains GNOME’s Power Structure
People new to the GNOME community often have a hard time understanding how we set goals, make decisions, assume responsibility, prioritize tasks, and so on. In short: they wonder where the power is. When you don’t know how something works, it’s natural to come up with a plausible story based on the available information. For example, some people intuitively assume that since our product is similar in function and appearance to those made by the Apples and Microsofts of the world, we must also be organized in a similar way. This leads them to think that GNOME is developed by a centralized company with a hierarchical structure, where developers are assigned tasks by their manager, based on a roadmap set by higher management, with a marketing department coordinating public-facing messaging, and so on. Basically, they think we’re a tech company. This in turn leads to all sorts of mistaken expectations. If you’ve been around the community for a while, you know that this view of the project bears no resemblance to how things actually work. However, given how complex the reality is, it’s not surprising that some people have these misconceptions. To understand how things are really done we need to examine the various groups involved in making GNOME, and how they interact. The GNOME Foundation is a US-based non-profit that owns the GNOME trademark, hosts our GitLab and other infrastructure, organizes conferences, and employs one full-time GTK developer. This means that beyond setting priorities for said GTK developer, it has little to no influence on development. Update: As of June 14, the GNOME Foundation no longer employs any GTK developers. The people actually making the product are either volunteers (and thus answer to nobody), or work for one of about a dozen companies employing people to work on various parts of GNOME. All of these companies have different interests and areas of focus depending on how they use GNOME, and they tend to contribute accordingly. In practice the line between “employed” contributor and volunteer can be quite blurry, as many contributors are paid to work on some specific things but also additionally contribute to other parts of GNOME in their free time. Each module (e.g. app, library, or system component) has one or more maintainers. They are responsible for reviewing proposed changes, making releases, and generally managing the project. In theory the individual maintainers of each module have more or less absolute power over those modules. They can merge any changes to the code, add and remove features, change the user interface, etc. However, in practice maintainers rarely make non-trivial changes without consulting and communicating with other stakeholders across the project, for example the design team on things related to the user experience, the maintainers of other modules affected by a change, or the release team if dependencies change. The release team is responsible for coordinating the release of the entire suite of GNOME software as a single coherent product. In addition to getting out two major releases every year (plus various point releases), they also curate what is and isn’t part of the core set of GNOME software, take care of the GNOME Flatpak runtimes, manage dependencies, fix build failures, and handle other related tasks. The release team has a lot of power in the sense that they literally decide what is and isn’t part of GNOME. They can add and remove apps from the core set, and set system-wide default settings.
However, they do not actually develop or maintain most of the modules, so the degree to which they can concretely impact the product is limited. Perhaps somewhat unusually for a free software project, GNOME has a very active and well-respected design team (if I do say so myself :P). Anything related to the user experience is their purview, and in theory they have final say. This includes most major product initiatives, such as introducing new apps or features, redesigning existing ones, the visual design of apps and system, design patterns and guidelines, and more. However, there is nothing forcing developers to follow design team guidance. The design team’s power lies primarily in people trusting them to make the right decisions, and working with them to implement their designs. No one person or group ultimately has much power over the direction of the project by themselves. Any major initiative requires people from multiple groups to work together, and this collaboration requires, above all, mutual trust on a number of levels. This atmosphere of trust across the project allows for surprisingly smooth and efficient collaboration across dozens of modules and hundreds of contributors, despite there being little direct communication between most participants. This concludes the first part of the series. In part 2 we’ll look at the various stages of how a feature is developed, from conception to shipping.
1
Trial Set for Theranos Founder
Elizabeth Holmes, the founder of the now-defunct blood-testing startup Theranos, is set to go on trial today in a federal court, where prosecutors will lay out the evidence behind the fraud charges against the former rising star. Holmes faces charges of fraud for allegedly making false claims about her business — mainly that the company’s devices, which were designed to take a drop of blood from a finger prick, had the capability to run a range of tests more efficiently, effectively, and quickly than other, more conventional laboratory tests. The proceeding is expected to be one of the most closely followed trials of an American corporate executive in a long time. The trial will be presided over by U.S. District Judge Edward Davila, and the jury will hear opening statements beginning with the prosecution. The defendant is reportedly set to take the witness stand at some point during the trial. Holmes has pleaded not guilty to all counts of wire fraud and conspiracy. Another former Theranos executive, Ramesh “Sunny” Balwani, is scheduled to be tried separately and has also pleaded not guilty to all counts. The prosecution has stated that Holmes and Balwani defrauded investors over a five-year span between 2010 and 2015 and lied to customers when the company first started making its tests more widely available commercially, especially during a partnership with the pharmacy giant Walgreens. Court filings that have been released reveal a romantic relationship between Holmes and Balwani, in which Holmes alleges that he abused her mentally and emotionally. Balwani has denied all such allegations. The attorneys for the defendant have stated that Holmes is likely to take the witness stand and testify about the relationship and how it affected her mental health and overall state. This would be a rare move, as defendants often do not testify at their own trials because doing so can open them up to potentially risky cross-examination by the prosecution. Many legal analysts anticipate that her lawyers will try to raise questions about Holmes’ knowledge and overall intentions during the alleged scheme; to convict Holmes of fraud, they believe, the prosecution will have to prove her intent at the time. The defense has already tried, unsuccessfully, to limit the amount of evidence in the trial. Judge Davila has ruled that jurors can hear about complaints from patients about faulty test results and about a critical U.S. government report following an inspection of a Theranos facility in 2016.
3
Everything I ever learned about creating online courses. 1: Product Development
In this article I’ll share some of my best practices for creating online courses. This time I’ll focus on the product development side. Of course, if you want to sell courses, you’ll need to learn marketing as well, but I’ll just cover development here. Tomi Mester · Jul 17, 2021 · 14 min read. Disclaimer: I’m not an “online course guru.” (Who is, anyway?) I can’t tell you everything you wanted to know about all the online courses. My experiences are unique to my field and my niche. I can only share what I learned and I acknowledge that there are many ways to approach this topic. I’ve been doing online courses since 2016, I’m part of online-course-related mastermind groups, and so on. I’ve learned a lot during the years… But I can’t give you a universal recipe. Just so you know. Creating an online course means you can have an online product and break out from selling your time. You’ll sell your knowledge instead. The main advantage of this approach is that you can partly automate and scale your income while doing what you love. (Of course, this applies only if you have a profession that you love and you also like teaching.) This way you can be location independent, you don’t need to deal with deliveries, and your maintenance costs are basically your cost of living. Your students benefit, too: they get more flexible and higher-quality education. They can attend anytime, anywhere. It doesn’t matter where they live or whether they only have time between 10 pm and midnight. If they don’t get it at first, they can replay what you said, and they can ask questions on Skype or email without interfering with the dynamics of the class. And because it’s something you prepared, recorded and edited, the quality can’t be ruined by unexpected things (like you get sick, the sound system breaks, etc.). If you do your job well, it’s guaranteed that your students get the best of you. I truly believe that for a better world we need a better way of distributing knowledge. Digital education is a new opportunity to reach more people more effectively with better-quality education. I don’t believe digital education will replace classrooms. Mentors, good teachers, and inspiring educators will be needed till the end of time. But if classroom and online education can cooperate well, we can bring education to a higher level, globally. Ideally, you meet two criteria: you’re a good teacher and you’re an expert in something. Without these two it’s very difficult, but of course you can develop both if you don’t start out that way. Being a good teacher is more of a soft skill and a personality trait. It’s possible to practice and develop the skill of teaching. However, there are people who are great teachers by default and others who might not ever get there. And that’s fine. Do you like to explain things? Do people say that you are good at it? That’s a good start and probably you’ll be fine. The second thing is being an expert. Being an expert means that you’ve spent years doing something. This part doesn’t depend as much on your personality. You can become an expert in anything over time. You just have to love the topic and you’ll naturally get better at it with time. After a few years, you’ll learn enough that you’ll be able to teach it to others who are in the same place you were a few years ago. Simple as that. The easier way to approach this is to create a course where you collect the things you wish someone had told you before you started to learn your given topic.
When you think of creating online courses, you don’t necessarily need to make courses about your day-to-day job. I know someone who is a coder but loves to learn languages, so he builds language courses rather than coding courses. And that makes a lot of sense! If you are a marketer who loves to do crochet and macrame, maybe make courses on that. The point is that your topic is not necessarily what you do 8–10 hours a day. Many people have lots of suggestions on this. The thing that worked for me is choosing a topic that I’m truly passionate about. If you aren’t passionate about anything, then focus on your inner work and try finding your thing before you start off with anything. This doesn’t need any research; you only have to look inside. If you are not an expert in anything or you haven’t found your passion yet, then it’s not yet the right time for you to start making courses. In this case, start learning something and, in the meantime, spend time developing your teaching skills. It’ll take a few years. I know this sounds devastating, since most online course gurus promise you quick bucks. It’s just not true that you can get rich quick this way. Regardless, if you start now, eventually you’ll get there. In my opinion there are two types of online courses, and it’s a decision that you have to make at the very beginning. It’s not that either version is better or worse as an online business. But they need a completely different mindset to be successful. Hot dogs are liked by many. If you go to the beach, there will be at least 2–3 hot dog stands. A hot dog is cheap to make and doesn’t cost the customer much. A stand sells ~300 hot dogs a day — or more. To be profitable as a hot dog type of online business, you need lots of customers and you’ll have more competition. But it’ll be easier to create a product, too. In big cities, there are thousands of hot dog stands. They look the same, they sell the same, and most of them are doing just fine. On the other side, there is sushi. Sushi is a niche food: quite a few people like it, but not as many as like hot dogs, and it sits in the premium food category. There’s a story about a sushi restaurant in NYC which is only open for two hours a day. From 6–8 pm they can serve a limited number of people, they are wildly expensive, and there are miles-long lines in front before they open. Obviously they have many fewer customers, but they are unique because of their branding and positioning. And the most important thing is this: anyone can make a hot dog in 30 seconds, but to be able to make great sushi, you have to study for years before even selling your first uramaki. Of course the hot dog vs. sushi question is a spectrum. Not every online course can be sushi. (Take the example of cooking courses: the problem is that there are many cooking books and courses on the market for a low price or even for free already. The game is already set, so it’s very difficult to create something so unique that you can sell it at a higher price point, as a premium cooking course. It’s not impossible, but it’s difficult.) Before creating online courses, I had taught offline for at least 3 years, so I can truly compare them. Even though I see the pros and the cons of classroom teaching, I have to admit that by now I’m fully convinced of the advantages of teaching online. Still, there are three things that are difficult to achieve online (not impossible, though).
So there are things that classroom courses are still better for. Note: you can mix online and offline elements to use the advantages of both! One of my friends, for instance, teaches presentation theory online, but they practice in a classroom. I won’t tell you which model is better, but I’ll tell you the differences. The launch model means that your online course is not always open; students can’t join whenever they want. Registration is only open for a certain period of time (e.g. for 1 week). This way participants form a group and they’ll learn together. And you as the teacher can work with them more easily. Evergreen courses are always open; everybody can enroll whenever they want and go forward at their own pace. Which one do you choose? It really depends on you and the topic of your course. I personally prefer the launch model for a few reasons. Note: there is another technique, called a “deadline funnel” — a hybrid of the launch and evergreen models. I won’t go into detail here; it’s a more advanced technique. If you want to learn more about it, check out deadlinefunnel.com. (Affiliate link.) Once you have picked the main topic of your online school (e.g. cooking, gardening, programming, data science, etc.) there are still way too many options for choosing the specific topic of your first course. Will you go with a beginner or an advanced course? Which part will you talk about? E.g. in my field, data science, I had to decide whether I wanted to create my first course about SQL, Python, statistics, business or something else… If you’ve read this far, I’m assuming that you are already an expert in your field and you have good teaching skills. And you are 100% sure that you want to create an online course. Great! But how do you decide what exactly in your expertise is interesting enough to create a good online course about? What will people find valuable enough to pay for? Here’s my recipe to figure it out! It’s basically a research process that ensures that you won’t spend weeks or months on creating something that nobody wants. It also helps you monetize your teaching efforts from the very beginning — while learning what parts of your knowledge are worth being put into a course and taught at scale. This research process has four steps. The first thing I would do if I started now is not offline courses or online courses, but consulting. If you already have 3–4 clients with whom you work as a consultant right now, your best practice is to take notes on the questions that come up often, and also watch out for their wow and aha moments! Note: if you want to create a course about your hobby, these “clients” could be friends, too, whom you are not consulting with but simply helping. For example, if you are an online marketing person and your friends always call you for WordPress advice, it might be smart to take notes about the most frequent questions you get and start to collect them for your first WordPress course. People asking you shows that there is demand for the information, and that they think you are an expert they can trust. In these 1-on-1 situations, you can find out what people are curious about. It’s even better if you can already charge for this — since it also shows that people are happy to pay for your knowledge. The second step is sort of scaling Step 1: whatever you learned there, you’ll show it to more people.
At Step 2, whether you go with an online webinar or a classroom presentation, try to find something that doesn’t require too much effort from you (no video recording, no editing, no copywriting) — just a draft on slides that you can talk about. Again, you want feedback and don’t want to spend too much time on things that won’t pay off. The beauty of a live presentation is that you can interact with your audience, get their facial expressions and their questions live, but there’s still not too much time invested. You can test how your course works. If you haven’t charged anything for consulting, at this point it’s crucial to test whether people want to pay for what you offer. If people don’t pay for it, there is no point in putting the effort into developing an online course. (Except if you are okay creating things for free — which also has its beauty!) Note: there could be many reasons behind people not paying for your live presentation. Maybe everything’s on YouTube for free already in that specific topic. Maybe there is no need for an expert at all, and people can learn by themselves. Or maybe you failed in the messaging/marketing. Or you haven’t found the right audience. Whatever is wrong, it’s better to know at this step. Maybe you can fix it. Maybe you can’t. But being aware of it is definitely better than wasting your time for nothing. Once the live version works and people are paying for it, it’s time to start thinking about turning it into an online course. Again, online courses require much, much more time than an offline course because of scripting, recording and editing. Thus a great (and safe) strategy that I use all the time when creating a new course is that I actually sell the course before recording and publishing it. It’s important that I’m always very transparent about it. I call it a “beta version” or “early access version” and I usually offer a 50% discount to those who enroll. The relationship will be more personal, because I’m going to record it while teaching them and I can react to the students’ needs much better. They know that it’s not the final, 100% fine-tuned version. But that’s fine — it can be a win-win situation. This is again a potential exit point: if no one buys, it’s possible that it was a great presentation live, but it’s not working on video. It happens. It happened to me, too, several times. It can be a bit hard to accept, but in the end I was happy that I didn’t kill a bunch of time on recording, editing and fine-tuning a course that nobody would buy anyway. Usually, I launch the course online 2–3 more times before publishing the final version. Even if it becomes an evergreen course after all, throughout the first year I still go with the launch model and open the registration for a limited period of time only, because this way I can have a group which gives me feedback after each lesson and course. I use the feedback to optimize the content, but also landing pages, marketing copy and everything. Sometimes I just need to re-record some parts or add some new elements, or edit a bit, but there is always something to optimize. For instance, after around 18 months of testing online, I completely recreated my flagship course and I can tell you, it got much better. It was a lot of work, but it paid off, for sure. From Step 1 (consulting) to Step 4 (a ready-made online course), this process — for me at least — usually takes 1–2 years. So it’s definitely not a quick process.
Now that you are sure your course is awesome, all the fine-tuning in marketing is done, and you know who it is for and what’s the right wording to sell it, you can lie back and watch how it scales — how it brings in more happy students, and how it makes money for you. Whether you follow the evergreen or the launch model, from this point there is much less effort needed from your side. Note: I don’t believe that it can ever be fully passive income. But more about that in another post. One last question: how do you price your online course? Time to get back to the hot dog vs. sushi metaphor. Let’s say you want to make $15,000 within a year (or even with one launch). It’s basic mathematics: if your product costs $5, you’ll have to make 3,000 sales. If it’s $50, you have to sell 300, and if it’s $500 you only need 30 people to buy it. I know this sounds basic, but so many people fail to do the math. Remember, a sushi kind of online course — one that’s unique and can be positioned as a premium course — can always be priced higher than a hot dog kind of course. For instance, I have a $20 course on my blog that still hasn’t sold 500 copies since I published it in 2018. Compared to that, I also have a $500 product — and every time I launch it, more than 40 students enroll. Just do the math. For smaller creators, it’s easier to make a decent income with a higher-priced product and just a few participants than by finding thousands of people to enroll. Note: and to be honest, I enjoy working closely with 40–50 participants much more than trying to work closely with 400–500, which is close to impossible. This was my take on how to develop an online course. There are many more approaches out there, but as the title says, this is what I’ve learned about the topic. I hope you found interesting bits and pieces in there.
1
80/20 is half-ass when value is logistic
80/20 is a great heuristic for power laws and asymptotic progress, but be careful when value accrues logistically. My friend swyx wrote an article, “80/20 is the new half-ass”, that pushed my buttons. But on a second read, I realized that nothing he said is wrong. It bothered me because it dunked on one of my favorite tools without advice for using it well. To me, the 80/20 heuristic is a mental model about diminishing marginal returns. 80/20 thinking works best in two cases: when something has a power-law distribution of value, or when progress to completion is asymptotic. In both of these situations, 80/20 thinking can help with making decisions. However, applying 80/20 thinking when effort is asymptotic but value is logistic is a huge trap. The original inspiration of the 80/20 heuristic comes from the Pareto Principle, named after the 19th-century Italian economist Vilfredo Pareto. Pareto observed that 80% of the land in Italy was owned by 20% of the population. The 80/20 heuristic is a reminder to look for power law distributions of characteristics. For example, if I want to free up space on my hard drive, I should start searching the biggest files for ones I can delete. Starting with the smallest files, or starting with a random selection of files, will take longer to free up the same amount of space. This applies to more qualitative issues, too. I was once in a job that was eating me up. I took an afternoon to list the 20% of my activities responsible for 80% of the happiness or satisfaction in the job. And I listed the 20% of activities that gave me 80% of the headaches and misery. Focusing on doing more of one list and less of the other had huge leverage on the quality of my work life. In power law situations, there is disproportionate value in acting on the most impactful subset of the whole. If I’m baking a cake and stop at 20%, I’ll have a soggy mess. But many situations are more like Zeno’s Paradox: for a given amount of effort, I can only cover half the remaining distance. In many endeavors, an “80/20” ratio captures the feeling of relative progress. The first 20 units of work get you 80% there. Following Zeno’s Paradox, the next 80 units of work get you 99.97% there. It doesn’t matter if the ratio is really 80/20. The insight is recognizing there are diminishing returns to incremental effort. The decision to continue or stop requires deciding when the value gained exceeds the opportunity cost of spending that effort elsewhere. For example, if I’m planning a two-week vacation in Italy, spending several days cramming an Italian phrase book might be a good return on time spent. I’ll have learned only a small fraction of the Italian language, but it will have a big benefit for my short trip. On the other hand, if I were planning to move permanently to Italy, it would probably be worth spending months or years taking Italian classes for greater fluency. “Perfect is the enemy of good” is another way of thinking about this concept. In many asymptotic situations, there is disproportionate value in partial completion. 80/20 thinking goes wrong if you confuse completion with value. Just because completion is asymptotic with effort doesn’t mean that value accrues asymptotically as well. In many endeavors, value follows a logistic curve, where value barely accumulates until a threshold is reached, then value leaps up and only then does the remaining value accrue asymptotically. It’s a mistake to aim for 20% of the effort without knowing where the value comes from.
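A small numerical sketch (my own parameter choices, not from either article) makes the contrast concrete: under an asymptotic curve, 20% of the effort really does capture most of the value, while under a logistic curve the same 20% captures almost none of it.

```python
# Illustrative curves only; the shapes, not the exact numbers, are the point.
import math

def asymptotic_value(effort, rate=8.0):
    """Diminishing returns: early effort captures most of the value."""
    return 1 - math.exp(-rate * effort)

def logistic_value(effort, steepness=15.0, threshold=0.7):
    """Threshold returns: little value until most of the work is done."""
    return 1 / (1 + math.exp(-steepness * (effort - threshold)))

for effort in (0.2, 0.5, 0.8, 1.0):
    print(f"effort {effort:>4.0%}:  "
          f"asymptotic {asymptotic_value(effort):>4.0%}   "
          f"logistic {logistic_value(effort):>4.0%}")
```

The 80/20 heuristic is safe to lean on for the first curve; on the second, stopping at 20% leaves nearly all of the value on the table — the soggy-cake case.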
It’s also a mistake to focus on absolute achievement when relative achievement controls the value. For example, learning 80% of a programming language may seem great in absolute terms for what you can do with it, but in a job hunt there’s little differential value in the 80% that’s easy to learn. The value of your knowledge follows a logistic function centered on the average of everyone else you’re competing with. In this sense, I agree with swyx: 80/20 thinking is half-ass when it’s an excuse to be lazy. But used thoughtfully, 80/20 is a valuable decision making heuristic for situations with power laws and diminishing marginal returns.
2
DOS games in the browser using Web Assembly
We are working hard to bring you the best oldschool classic games that you can play online. If you like what we have done here and if you want to help us to add more games and functionality, you can support our work with any type of donation. Thank you and keep playing!
1
How ASML is planning to continue the shrink
ASML senior vice president of technology Jos Benschop recently revealed how the EUV ecosystem centered around his company’s lithography scanners will continue to shrink chip structures for the decade to come. Going from i-line bulbs in the mid-80s to EUV light sources today, the resolution required to pattern the world’s leading-edge chips has gone down by two orders of magnitude. For a while now, ASML has been saying that this historic trend can continue for at least another decade. But other than putting high-NA EUV lithography on the roadmap, it hasn’t gone into much detail on how this feat will be accomplished. Speaking at the SPIE Advanced Lithography online conference recently, Benschop lifted the curtain. The issue that his audience probably was most eager to hear about is stochastics. Indeed, Benschop himself referred to it as “the elephant in the room.” Stochastic errors, which are random variations in patterning, have been plaguing EUV lithography right from the start. It has been a hotly debated subject, particularly at specialist forums such as the SPIE AL conference. And it still is. Even though stochastics clearly hasn’t been a showstopper for commercial implementation – the EUV counter reached 26 million exposed wafers by the end of last year – things get progressively more challenging as chip structures keep shrinking. The introduction of high-NA EUV lithography is part of the solution, but it will not make the problem disappear. Stochastic effects manifest themselves as random, non-repeating and isolated defects in the printed pattern. The result can be locally broken lines, missing or ‘kissing’ (merging) contacts, or microbridges that link structures that shouldn’t be connected, among other things. Any one of these defects could ruin an entire chip. Without countermeasures, yields would suffer dramatically as the margins of error shrink along with chip features. These printing failures arise partly from not having enough photons. EUV photons are relatively high in energy, so the same amount of energy yields fewer 13.5-nanometer photons than, say, 193-nanometer photons, even if they were generated with the same efficiency. Because every spot on a wafer gets hit by photons according to a random distribution, the lower the number of photons, the larger the random fluctuations. Increasing exposure time will increase the number of photons that land on a spot, but it will also decrease throughput and hence increase patterning costs. Clearly, more powerful EUV light sources, producing more photons, will help. Benschop showed a source power roadmap that goes up to 800 watts. It will take years, however, before that number is reached. For comparison, chipmakers are currently using 250-watt sources. In research, ASML recently demonstrated 420-watt power sustained for several hours and 500 watts in peak bursts. An equally important factor in stochastics is the resist. Ideally, a resist would be a continuum. In reality, it’s ‘broken up’ into molecules, which are by definition discrete entities – ones that will never spread completely evenly across a surface no matter how hard you try. Additionally, the absorption of energy-rich EUV photons can set off all sorts of secondary chemical reactions in the resist. ASML and chipmakers are counting on resist makers to come up with resist formulations that cram more molecules into a ‘pixel’ and that absorb more photons while suppressing undesirable processes.
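To give a feel for why fewer photons means more stochastic variation, here is a small Python sketch of photon shot noise using NumPy. It is my illustration, not ASML data; the per-pixel photon counts are made-up round numbers.

import numpy as np

# Photon arrival per resist "pixel" is well modeled as a Poisson process,
# so the relative dose fluctuation scales roughly as 1/sqrt(mean photons).
rng = np.random.default_rng(0)

for mean_photons in (10, 100, 1000):
    doses = rng.poisson(mean_photons, size=100_000)   # photons landing per pixel
    rel_sigma = doses.std() / doses.mean()            # relative dose fluctuation
    print(f"mean photons/pixel = {mean_photons:4d} -> relative sigma ~ {rel_sigma:.1%}")

Going from 1000 photons per pixel down to 10 raises the relative fluctuation from roughly 3% to over 30%, which is the basic reason low doses, small features and stochastic defects go hand in hand.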
So, there are roadmaps for source power and resist characteristics. Will that be enough to keep stochastics in check without escalating light doses to painful levels? No, Benschop showed. Additional measures will be required. In his presentation, Benschop zoomed in on an issue with a large stochastic component called edge-placement error (EPE), which is basically the margin of error for positioning IC features relative to each other. It’s “perhaps the most decisive factor in the future of shrinking,” he said. Currently, at the 7nm node, about 40 percent of the EPE budget is down to stochastics. The other two major contributing factors are the overlay (also 40 percent) and the optical proximity correction settings (OPC, 20 percent). There’s room for improvement in the OPC, Benschop showed: deep learning techniques enable an OPC accuracy improvement of up to 77 percent. This gives a little more breathing room for stochastic errors. Then there’s another factor contributing to stochastics, which hasn’t been mentioned yet: contrast. Contrast is improved by migrating to high-NA and reducing the k1 factor, a collection of everything that improves resolution other than increasing NA and reducing wavelength (remember: critical dimension = k1 * λ/NA, so a smaller k1 means smaller printable features). An important k1-reducing measure will be switching to a more advanced mask design called an attenuated phase-shift mask. Another will be extending ASML’s holistic lithography suite. By having metrology, computational modeling and advanced scanner controls working together, errors in the patterning process can be detected and corrected, often in real time. As an example, Benschop highlighted carefully controlling how light hits the reticle, i.e. shaping the light beam into the optimal ‘pupil shape’ (which can range from simple spots to complex patterns) for a given pattern. Even with all these improvements in place – higher source power, moving to high NA, better resists and k1 reductions including OPC improvement – it’s not enough to get stochastics under control while maintaining cost-effective throughput. That’s why ASML has little choice but to allow a larger percentage of the EPE budget to come from stochastics: up to 60 percent in the future. The difference will have to be made up by improving OPC and overlay. “We’ll be even more aggressive on the overlay roadmap,” Benschop said. “We plan to improve overlay faster than the resolution. This way, we manage to maintain almost constant productivity moving into the future. Assuming the aggressive source roadmap, high-NA and low k1 as shared. This holistic view of scanner, mask, resist, computation, as well as metrology coming together, will bring us increased resolution at acceptable productivity moving forward into the future.” This article was written with contributions from René Raaijmakers
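As a back-of-the-envelope illustration of that scaling relation (mine, not Benschop’s), the Python sketch below plugs the publicly known EUV wavelength and numerical apertures into CD = k1 * λ / NA; the k1 values are purely illustrative.

# Rayleigh-style scaling: critical dimension (half-pitch) = k1 * wavelength / NA.
WAVELENGTH_NM = 13.5  # EUV wavelength

def half_pitch(k1, na, wavelength=WAVELENGTH_NM):
    return k1 * wavelength / na

for label, na, k1 in [
    ("0.33 NA EUV, k1 = 0.40", 0.33, 0.40),
    ("0.55 NA (high-NA) EUV, k1 = 0.40", 0.55, 0.40),
    ("0.55 NA (high-NA) EUV, k1 = 0.30", 0.55, 0.30),
]:
    print(f"{label:34s} -> ~{half_pitch(k1, na):4.1f} nm half-pitch")

Raising NA from 0.33 to 0.55 at the same k1 shrinks the printable half-pitch from roughly 16 nm to 10 nm, and pushing k1 down further (phase-shift masks, pupil shaping, better OPC) buys a few more nanometers, which is why the roadmap leans on both knobs at once.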
1
Adding BPF target support to the Rust compiler
When I created this blog back in September my goal was to post at least once a month. It's December now and you're reading my second post, so I'm not exactly off to a great start. 🤔 Things have been busy on the Rust BPF front though! At the end of October I began working on a blog about the current state of things, exactly one year after I started getting involved. While doing that I finally started feeling inspired enough to try and add a BPF target to rustc, something that had been on my todo list for a very long time but never managed to find the time to work on. (Aah... if only someone wanted to sponsor all this work... wink wink!) A couple of weeks ago I finally sent a pull request to get the new target(s) merged. The changes were pretty straightforward, with the only unexpected thing being that I ended up having to write https://github.com/alessandrod/bpf-linker - a partial linker needed to enable rustc to output BPF bytecode. I'm going to tell you why I had to write the linker in a moment, but first, let's start with looking at how clang - the de facto standard BPF compiler - compiles C code to BPF. BPF doesn't have things like shared libraries and executables. Programs are compiled as object files, then at run-time they're relocated and loaded in the kernel where they get JIT-ted and executed. Because of that, and because for a long time function calls were not allowed so everything had to be inlined, BPF programs written in C are typically compiled as a single compilation unit, with library code written in header files and included with #include directives. This compilation model is simple and effective: one compilation unit goes in, one object file comes out. Because BPF programs tend to be small, recompiling the whole source code on every change is generally not an issue. Since everything gets compiled together, there's no need for linking separate compilation artifacts. (You see where this is going?) Rust uses a different compilation model. Code is split into crates. Crates can't be lumped together with #include directives; they are always compiled independently as one or more compilation units.

alessandro@ubvm:~/src/app$ cargo tree
app v0.1.0 (/home/alessandro/src/app)
└── dep v0.1.0 (/home/alessandro/src/dep)
alessandro@ubvm:~/src/app$ cargo build
   Compiling dep v0.1.0 (/home/alessandro/src/dep)
   Compiling app v0.1.0 (/home/alessandro/src/app)
    Finished dev [unoptimized + debuginfo] target(s) in 0.19s

app is an application crate that depends on a library dep. When building, the following happens: the Rust compiler is invoked twice, first to compile the dep crate as a rust library, then to compile the app crate as an executable. When the app crate is compiled, the pre-compiled dep crate is provided as input to the compiler via the --extern option. This compilation model always produces multiple compilation artifacts, which then must be linked together to produce the final output. The Rust compiler uses an internal linker abstraction, whose implementations spawn external linkers like ld, lld, link.exe and others. Therefore, to add a new target with this model we need a linker for the target. Since clang never links anything when targeting BPF though, it turns out that lld - the LLVM linker - can't link BPF at all. So I wrote a new linker. bpf-linker takes LLVM bitcode as input, optionally applies target-specific optimizations, and outputs a single BPF object file.
The inputs can be bitcode files (.bc), object files with embedded bitcode (e.g. .o files produced by compiling with -C embed-bitcode=yes), or archive files (.a or .rlib). The linker works with anything that can output LLVM bitcode, including clang. There are a couple of reasons for taking bitcode as input instead of object files. Only a subset of Rust (just like only a subset of C) can be compiled to BPF bytecode. Therefore bpf-linker tries to push code generation as late as possible in the compilation process, after link-time optimizations have been applied and dead code has been eliminated. This avoids hitting potential failures generating bytecode for unsupported Rust code that is actually unused (e.g. parts of the core crate that are never used in a BPF context). Another reason is that the linker might need to apply extra optimizations like --unroll-loops and --ignore-inline-never when targeting older kernel versions that don't support loops and calls. The rustc fork at https://github.com/alessandrod/rust/tree/bpf includes two new targets, bpfel-unknown-none and bpfeb-unknown-none, which generate little endian and big endian BPF respectively. The targets automatically invoke bpf-linker, so with that fork, compiling a BPF project with Rust is finally as easy as:

alessandro@ubvm:~/src/app$ cargo build --target=bpfel-unknown-none
   Compiling dep v0.1.0 (/home/alessandro/src/dep)
   Compiling app v0.1.0 (/home/alessandro/src/app)
    Finished dev [unoptimized + debuginfo] target(s) in 1.98s
alessandro@ubvm:~/src/app$ file target/bpfel-unknown-none/debug/app
target/bpfel-unknown-none/debug/app: ELF 64-bit LSB relocatable, eBPF, version 1 (SYSV), not stripped

Getting the targets merged will probably take a while, but worry not! With a little trick, you can use the linker to compile BPF code with stable Rust already today! I made bpf-linker implement a wasm-ld compatible command line. Since rustc already knows how to invoke wasm-ld when targeting webassembly, it can be made to use bpf-linker with the following options:

alessandro@ubvm:~/src/app$ cargo rustc -- \
    -C linker-flavor=wasm-ld \
    -C linker=bpf-linker \
    -C linker-plugin-lto
   Compiling dep v0.1.0 (/home/alessandro/src/dep)
   Compiling app v0.1.0 (/home/alessandro/src/app)
    Finished dev [unoptimized + debuginfo] target(s) in 0.68s
alessandro@ubvm:~/src/app$ file target/debug/app
target/debug/app: ELF 64-bit LSB relocatable, eBPF, version 1 (SYSV), not stripped

Let's see what those options do: -C linker-flavor=wasm-ld makes rustc drive the linker with wasm-ld style arguments, which bpf-linker understands; -C linker=bpf-linker points rustc at the bpf-linker binary instead of the default system linker; and -C linker-plugin-lto makes rustc emit LLVM bitcode into the compiled crates, which is exactly the input bpf-linker wants. And voilà! Go compile some BPF with stable rust now 🎉 bpf-linker is obviously new and needs more testing. Over the next few weeks I'm going to add more unit tests and try it on more Rust code. I'm also thinking of trying to link the whole Cilium BPF code with it just to test with a large, complex code base. While working on the rustc target, at some point I went off on a bit of a tangent and ended up making some changes to LLVM and the kernel, so I'm going to try and finish those off. They are needed to implement the llvm.trap intrinsic so panic!() can be implemented in a generic way, instead of having to resort to program-specific hacks like jumping to an empty program with bpf_tail_call(). I'll probably do a whole separate post about that. Finally, after my last post I received some truly great feedback! I was especially pleased to hear from a couple of companies that are using BPF and that are considering using it with Rust.
I was equally pleased to see that there's a group of people developing BPF in C who feel strongly that I'm wasting my time and that Rust brings nothing over C, since BPF is statically verified, doesn't need the borrow checker, etc. They gave me inspiration for a post I'm hoping to publish soon, which will cover why I think that Rust has the potential to become as central to the BPF ecosystem as it is to WebAssembly development today. Until next time!
1
The Future of Visual Studio Extensions
October 28th, 2020 With new improvements and additions such as GitHub Codespaces, Git Integrations, and IntelliCode Team Completions, Visual Studio has expanded to make development easier, more customizable, and accessible from any machine. As we continue evolving Visual Studio, what about extensions?! While still early in the design phase, we are creating a new extensibility model. This will make extensions more reliable, easier to write, and supported locally and in the cloud. Tired of seeing a feature or Visual Studio crash because of an extension? Today’s in-proc extensions have minimal restrictions on how they can influence the IDE and other extensions. Thus, they are free to corrupt Visual Studio if the extension experiences an error or crash. One of our biggest changes to the Visual Studio extension model is that we are making extensions out-of-proc. This will help ensure increased isolation between internal and external extension APIs, where a buggy extension can crash without causing other extensions or the entire IDE to crash, hang, or slow down along with it. Extensions are cool to use but can be difficult to write. Inconsistent APIs, overwhelming architecture, and having to ask your teammates how to implement what should be a basic command are common feedback items from extension writers. Discovering all these APIs can be challenging, and once you do find them, it can be hard to know where or when to use them. Luckily, designing the new out-of-proc extension model gives us the opportunity to completely redesign the Visual Studio extension APIs. So, our goal is making writing extensions easier with more uniform, discoverable APIs and continually updated documentation. Most importantly, the new model will preserve the power and extensive UI customizability options that today’s model provides. Figure 1 shows the code required to initialize a command for Mads Kristensen’s Image Optimizer extension. Figure 2 shows an example of what initializing the same command could look like using the new model. Here, lines 31 and 32 in Figure 2 condense and represent Figure 1’s AddCommand method calls. Figure 1’s DesignTimeEnvironment code (DTE) representing the IDE and its controlling methods is passed in via the VisualStudioExtensibility class in Figure 2. Ideally, this simplifies the code and knowledge required to properly initialize the class. Part of GitHub Codespaces’ appeal is the ability to have a customized dev environment accessible via the cloud across machines. For many developers, a customized environment is incomplete without extensions. The current model’s unrestricted access to the IDE and lack of asynchronous APIs don’t make it ideal for a seamless, crash-less, and responsive client/server experience for Codespaces. To round out our new extensibility model goals, we will make your essential extensions available both locally and remotely. The road to completing the new extension model is a long one and today’s model will not be replaced overnight. We are still in the conceptual phases of this new model, so sharing your experiences as either an extension user or an extension writer is very important. To help us improve the extension experience, please share in the comments what extensions you can’t live without or complete this survey. Stay tuned for future extension developments!
2
A guide to using your career to help solve the world’s most pressing problems
You have 80,000 hours in your career. This makes it your best opportunity to have a positive impact on the world. If you’re fortunate enough to be able to use your career for good, but aren’t sure how, our career guide can help you: Get new ideas for fulfilling careers that do good Compare your options Make a plan you feel confident in It’s based on 10 years of research alongside academics at Oxford. We’re a nonprofit, and everything we provide is free. Career guide Our career guide covers everything you need to know about how to find a fulfilling career that does good, from why you shouldn’t “follow your passion”, to why medicine and charity work aren’t always the best way to help others. It’s full of practical tips and exercises, and at the end, you’ll have a draft of your new career plan. Read our career guide Or read the two-minute summary, or get the guide as a free book. Research List of the world's most pressing problems The issue you work on is probably the most important factor determining your impact. It’s important to focus on issues that are not only big, but also neglected and tractable. We have advice on how to compare different issues, and a list of especially pressing problems you could help tackle in your career. Read problem profiles List of the most impactful careers we've identified so far The highest impact paths are those that put you in the best position to tackle the most pressing problems. To help you get ideas for ways to contribute, we review some common options and list some unusual but especially high-impact paths. Read career reviews Advanced series The series covers our most important and novel research findings about how to increase the impact of your career, including: what “doing good” even means, why reducing existential risk might be humanity’s biggest and most neglected priority, and how to avoid accidentally making things worse. Read our advanced series Podcast In-depth conversations about the world's most pressing problems and how you can use your career to solve them. Subscribe by searching for “80,000 Hours” wherever you get podcasts. Recommended episodes: Cass Sunstein on how social change happens, and why it’s so often abrupt & unpredictable Ajeya Cotra on worldview diversification and how big the future could be Toby Ord on the precipice and humanity’s potential futures Hilary Greaves on moral cluelessness, population ethics, & harnessing the brainpower of academia to tackle the most important research questions Having a successful career with depression, anxiety and imposter syndrome Nova DasSarma on why information security may be critical to the safe development of AI systems Chris Olah on what the hell is going on inside neural networks David Chalmers on the nature and ethics of consciousness Job board Our job board provides a curated list of publicly advertised vacancies that we think are particularly promising. We post roles that we believe are opportunities to either (and often both): Contribute to solving key global problems. Develop the career capital — skills, experience, knowledge, connections and credentials — to solve these problems in the future. View the job board Get 1-1 advice If you’re interested in working on one of the global problems we highlight, apply to speak with our team one-on-one for free. We can discuss which problem to focus on, look over your plan, introduce you to mentors, and suggest roles that suit your skills.
Our community Back in 2011, we helped found the effective altruism community. It’s a group of people devoted to using evidence and reason to figure out the best ways to help others — whether through donations, political advocacy, or their careers. Learn more about the community and how it might be able to help you to have a more impactful career. Our community Who are we? We started 80,000 Hours when we were about to graduate from Oxford in 2011. Our aim was to provide the advice we wish we’d had back then: transparently explained, based on the best research available, and willing to ask the big questions. By doing this, we hope to get the next generation of leaders tackling the world’s most pressing problems. About us How come this is all free? We’re an independent nonprofit funded by individual donors and philanthropic foundations. They donate to us so that we can help people have a greater positive impact on the world. We don’t accept any corporate sponsorship or advertising fees. Our donors Who is this for? Our aim is to help people tackle the world’s biggest and most neglected problems, and our advice is aimed at people who have the good fortune to be able to make that their focus, as well as the security to change paths. Due to our limited capacity, some of our advice focuses on a narrow range of paths, and is especially aimed at talented college students and graduates aged 18–30, though many of the ideas we cover are relevant to everyone. Our audience What research is your advice based on? Our advice is based on hundreds of expert interviews; what we’ve learned advising 4,000+ people one-on-one over 10 years; and where possible, the academic literature on global problems and career success. We’re affiliated with the Global Priorities Institute at the University of Oxford. What is our advice based on? Reader stories Learn more about our impact Have a greater positive impact with your career Join our newsletter to receive a free copy of our career guide, and weekly high-impact job opportunities and updates on our research. You’ll be joining our community of over 300,000 people, and can unsubscribe in one click.
3
Dump Google Chrome and keep (almost) all the benefits
I've been a Google Chrome user for, oh, a very long time. I switched to it because the competition had become stagnant and bloated. Now I've switched away from Google Chrome because, well, it's become stagnant and bloated. The RAM usage and the way Chrome burns through battery life on laptops have gotten to the point where it's unacceptable. I've switched to Brave. Brave is fast, secure, packed with privacy features, has a built-in ad-blocker, supports most of the Google Chrome extensions available, and there's even an optional (paid-for premium) VPN. It's a fully functional browser with everything you'd expect from a modern browser. Now, there are some downsides to switching to Brave, and I've detailed some of them here. These are less related to web browsing itself and more to do with the interface between Brave and the cryptocurrency community. The more I use Brave, the less this bothers me. One thing that I'm happy with about this shift is that I don't feel like I'm losing much -- especially when it comes to browser extensions. Basically, they just work. You go to the Google Chrome web store, find the extension, and download it. I've heard from people in the past who have had problems with certain extensions, but I've not come across that. I imagine there are outliers, and if you know of any, let me know. It's weird how browsing with Brave feels very much like browsing with Google Chrome, except I get far better performance (the speed with which pages load up has to be seen to be believed), better battery life (a good hour more on my laptop), and far better privacy protection. Also, switching from Chrome to Brave was a snap. Everything worked, and because the two browsers share the Chromium heritage, everything felt familiar and easy to use. After a day or so, I'd totally forgotten that I wasn't using Google Chrome. If you're looking for a change from Google Chrome -- or any of the other incumbent browsers -- then take a look at Brave. I came to it with low expectations, and now I'm a total convert to the browser. Brave is available for Windows 64-bit, Windows 32-bit, macOS Intel, macOS ARM64 and Linux, and can be downloaded for iOS and Android from the relevant app stores. I highly recommend it.
29
Cache-Control Recommendations
Monday, September 13, 2021, in Security

Cache-Control is one of the most frequently misunderstood HTTP headers, due to its overlapping and perplexingly-named directives. Confusion around it has led to numerous security incidents, and many configurations across the web contain unsafe or impossible combinations of directives. Furthermore, the interactions between various directives can have surprisingly different behavior depending on your browser. The objective of this document is to provide a small set of recommendations for developers and system administrators that serve documents over HTTP to follow. Although these recommendations are not necessarily optimal in all cases, they are designed to minimize the risk of setting invalid or dangerous Cache-Control directives.

Recommendation: Don't cache (default)
  Safe for PII: Yes
  Use cases: API calls, direct messages, pages with personal data, anything you're unsure about
  Header value: max-age=0, must-revalidate, no-cache, no-store, private

Recommendation: Static, versioned resources
  Safe for PII: No
  Use cases: Versioned files (such as JavaScript bundles, CSS bundles, and images), commonly with names such as loader.0a168275.js
  Header value: max-age=n, immutable

Recommendation: Infrequently changing public resources, or low-risk authenticated resources
  Safe for PII: No
  Use cases: Images, avatars, background images, and fonts
  Header value: max-age=n

Don't cache (default): max-age=0, must-revalidate, no-cache, no-store, private
When you're unsure, the above is the safest possible directive for Cache-Control. It instructs browsers, proxies, and other caching systems to not cache the contents of the request. Although it can have significant performance impacts if used on frequently-accessed public resources, it is a safe state that prevents the caching of any information. It may seem that using no-store alone should stop all caching, but it only prevents the caching of data to permanent storage. Many browsers will still allow the caching of these resources to memory, even if it doesn't write them to disk. This can cause issues where shared systems may contain sensitive information, such as browsers maintaining cached documents for logged out users. Although no-store may seem sufficient to instruct content delivery networks (CDNs) to not cache private data, many CDNs ignore these directives to varying degrees. Adding private in combination with the above directives is sufficient to disable caching both for CDNs and other middleboxes.

Static, versioned resources: max-age=n, immutable
If you have versioned resources such as JavaScript and CSS bundles, this instructs browsers (and CDNs) to cache the resources for n seconds, while not purging their caches even when intentionally refreshing. This maximizes performance, while minimizing the amount of complexity that needs to get pushed further downstream (e.g. service workers). Care should be taken such that this combination of directives isn't used on private or mutable resources, as the only way to "bust" the cache is to use an updated source document that refers to new URLs. The value to use for n depends upon the application, and is ideally set to a bit longer than the expected document lifetime. One year (31536000) is a reasonable value if you're unsure, but you might want to use as low as a week (604800) for resources that you want the browser to purge faster.

Infrequently changing public resources or low-risk authenticated resources: max-age=n
If you have public resources that are likely to change, simply set a max-age equal to a number (n) of seconds that makes sense for your application.
Simply using max-age will allow user agents to still use stale resources in some circumstances, such as when there is poor connectivity. There is no need to add must-revalidate outside of the unlikely circumstance where the resource contains information that must be reloaded if the resource is stale. For brevity, this only covers the most common directives used inside Cache-Control. If you are looking for additional information, the MDN article on Cache-Control is pretty exhaustive. Note that its recommendations differ from the recommendations in this document.

Anti-patterns
Surveys of Cache-Control across the internet have identified numerous anti-patterns in broad usage. This list is not meant to be extensive, but simply to demonstrate how complex and sometimes misleading the Cache-Control directive can be.

Combining max-age with must-revalidate: While there are times that max-age and must-revalidate are useful in combination, for the most part this says that you can cache a file but then must immediately distrust it afterwards, even if the hosting server is down. Instead use max-age=604800, which says to cache it for a week while still allowing the use of a stale version if the resource is unavailable.

Combining no-cache with max-age and must-revalidate: no-cache tells user agents that they must check to see if a resource is unmodified via ETag and/or Last-Modified with each request, and so neither max-age=604800 nor must-revalidate do anything.

Internet Explorer lore: You still see certain directives appearing in Cache-Control responses as part of some long-treasured lore for controlling how Internet Explorer caches. But these directives have never worked, and you're wasting precious bytes by continuing to send them.

Expires: While the HTTP Expires header works the same way as max-age in theory, the complexity of its date format means that it is extremely easy to make a minor error that looks valid, but where browsers treat it as max-age=0. As a result, it should be avoided in preference of the far simpler max-age directive.

Pragma: Not only is the behavior of Pragma: no-cache largely undefined, but HTTP/1.0 client compatibility hasn't been necessary for about 20 years.
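As a concrete illustration of the three recommendations, here is a small Python sketch using only the standard library; it is not from the original post, and the URL patterns and max-age numbers are illustrative assumptions.

from http.server import BaseHTTPRequestHandler, HTTPServer

def cache_control_for(path: str) -> str:
    # Versioned static assets (e.g. /static/loader.0a168275.js): cache hard.
    if path.startswith("/static/"):
        return "max-age=31536000, immutable"
    # Infrequently changing public resources: plain max-age.
    if path.startswith("/img/"):
        return "max-age=604800"
    # Everything else, including anything with personal data: don't cache.
    return "max-age=0, must-revalidate, no-cache, no-store, private"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Cache-Control", cache_control_for(self.path))
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()

The only decision the handler makes is which of the three header values applies; the safe default is returned whenever a path does not clearly belong to one of the other two buckets.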
1
Mirror neurons: Enigma of the metaphysical modular brain
J Nat Sci Biol Med. 2012 Jul-Dec; 3(2): 118–124. doi: 10.4103/0976-9668.101878. PMCID: PMC3510904. PMID: 23225972.

Abstract
Mirror neurons are one of the most important discoveries in the last decade of neuroscience. These are a variety of visuospatial neurons which are fundamentally involved in human social interaction. Essentially, mirror neurons respond to actions that we observe in others. The interesting part is that mirror neurons fire in the same way when we actually recreate that action ourselves. Apart from imitation, they are responsible for a myriad of other sophisticated human behaviors and thought processes. Defects in the mirror neuron system are being linked to disorders like autism. This review is a brief introduction to the neurons that shaped our civilization.

Keywords: Autism, neurons, visuospatial

INTRODUCTION
Mirror neurons represent a distinctive class of neurons that discharge both when an individual executes a motor act and when he observes another individual performing the same or a similar motor act. These neurons were first discovered in the monkey brain. In humans, brain activity consistent with that of mirror neurons has been found in the premotor cortex, the supplementary motor area, the primary somatosensory cortex, and the inferior parietal cortex. Originally discovered in a subdivision of the monkey's premotor cortex, area F5, mirror neurons have later also been found in the inferior parietal lobule (IPL).[1] The IPL receives a strong input from the cortex of the superior temporal sulcus (STS), a region known to code biological motion, and sends output to the ventral premotor cortex including area F5.[2] Neurophysiological (EEG, MEG, and TMS) and brain-imaging (PET and fMRI) experiments provided strong evidence that a fronto-parietal circuit with properties similar to the monkey's mirror neuron system is also present in humans.[3] As in the monkey, the mirror neuron system is constituted of the IPL and a frontal lobe sector formed by the ventral premotor cortex plus the posterior part of the inferior frontal gyrus (IFG).

DEVELOPMENT
Human infant data using eye-tracking measures suggest that the mirror neuron system develops before 12 months of age, and that this system may help human infants understand other people's actions. Two closely related models postulate that mirror neurons are trained through Hebbian or associative learning.[4,5]

THE HEBBIAN THEORY
Donald Hebb in 1949 postulated a basic mechanism for synaptic plasticity wherein an increase in synaptic efficacy arises from the presynaptic cell's repeated and persistent stimulation of the postsynaptic cell. When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. The theory is often summarized as “Cells that fire together, wire together.” This Hebbian theory attempts to explain “associative learning”, in which simultaneous activation of cells leads to pronounced increases in synaptic strength between those cells. Such learning is known as Hebbian learning.
DISCOVERY
In the 1990s, a group of neurophysiologists placed electrodes in the ventral premotor cortex of the macaque monkey to study neurons specialized for the control of hand and mouth actions.[6] They recorded electrical signals from a group of neurons in the monkey's brain while the monkey was allowed to reach for pieces of food, so the researchers could measure their response to certain movements. They found that some of the neurons they recorded from would respond when the monkey saw a person pick up a piece of food as well as when the monkey picked up the food. In another experiment, they showed the role of the mirror neuron system in action recognition, and proposed that the human Broca's region was the homologue of the monkey ventral premotor cortex. Subsequently, a study by Pier Francesco Ferrari and colleagues described the presence of mirror neurons responding to mouth actions and facial gestures.[7] A recent experiment by Christian Keysers and colleagues has shown that, in both humans and monkeys, the mirror system also responds to the sound of actions.[8] Functional magnetic resonance imaging (fMRI) can examine the entire brain at once and suggests that a much wider network of brain areas shows mirror properties in humans than previously thought. These additional areas include the somatosensory cortex and are thought to make the observer feel what it feels like to move in the observed way.[9] Neuropsychological studies looking at lesion areas that cause action knowledge, pantomime interpretation, and biological motion perception deficits have pointed to a causal link between the integrity of the IFG and these behaviors.[10,11] Transcranial magnetic stimulation studies have confirmed this as well.[12] Mukamel et al. recorded activity from 1177 brain neurons of 21 patients suffering from intractable epilepsy. The patients had been implanted with intracranial depth electrodes to identify seizure foci for potential surgical treatment. Electrode location was based solely on clinical criteria; the researchers used the same electrodes to “piggyback” their research. The experiment included three phases: making the patients observe facial expressions (observation phase), grasping (activity phase), and a control experiment (control phase). In the observation phase, the patients observed various actions presented on a laptop computer. In the activity phase, the subjects were asked to perform an action based on a visually presented word. In the control task, the same words were presented, and the patients were instructed not to execute the action. The researchers found a small number of neurons that fired or showed their greatest activity both when the individual performed a task and when they observed a task. Other neurons had anti-mirror properties, that is, they responded when the participant saw an action but were inhibited when the participant performed that action. The mirror neurons found were located in the supplementary motor area and medial temporal cortex.[13]

POSTULATED FUNCTIONS OF MIRROR NEURONS IN HUMANS
Intention understanding
Mirror neurons are associated with one of the most intriguing aspects of our complex thought process, that is, “intention understanding”. There are two distinct pieces of information that one can get from observing an action done by another individual. The first component is WHAT action is being done. The second, more complex component is WHAT FOR, or WHY (the intention), the action is being done.
The complex beauty of the discussed subject is the second component, where our mirror neurons anticipate the future action which is yet to occur. Two neuroscientists[14] first hypothesized that mirror neurons are involved in intention understanding, which was later supported by an fMRI study. In this experiment, volunteers were presented with hand actions without a context and hand actions executed in contexts that allowed them to understand the intention of the action agent. The study demonstrated that actions embedded in contexts yielded selective activation of the mirror neuron system. This indicates that mirror areas, in addition to action understanding, also mediate the understanding of others' intention.[15] These data indicate that the mirror neuron system is involved in intention understanding, though it fails to explain the specific mechanisms underlying it. In order to test this hypothesis, a study[16] was carried out on two rhesus macaque monkeys. The monkeys were trained to perform two actions with different goals. In the first, the monkey had to grasp an object in order to place it in a container. In the second, it had to grasp a piece of food to eat it. The initial motor acts, reaching and grasping, were identical in the two situations, but the final goal-oriented action was different. The activity of neurons was recorded from the IPL, which has long been recognized as an association cortex that integrates sensory information. The results showed that 41 mirror neurons fired selectively when the monkey executed a given motor act (e.g. grasping). Interestingly, however, only specific sets (15 neurons) within the IPL fired during the goal-constrained acts. Some of these “action-constrained” motor neurons had mirror properties and selectively discharged during the observation of motor acts when these were embedded in a given action (e.g., grasping-for-eating, but not grasping-for-placing). Thus, the activation of IPL action-constrained mirror neurons gives information not only about what is being done, but also about why grasping is done (grasping-for-eating or grasping-for-placing). This specificity allowed the observer not only to recognize the observed motor act, but also to code what will be the next motor act of the not-yet-observed action, in other words, to understand the intentions of the action's agent.

Autism and intention understanding
It has been postulated by neuroscientists, and supported by several studies, that the inability of autistic children to relate to people and life situations in the ordinary way depends on a lack of a normally functioning mirror neuron system.[17–19] In EEG recordings, mu waves from motor areas are suppressed when someone watches another person move, a signal that may relate to the mirror neuron system. This suppression was less in children with autism. Basically, autism is characterized by two neuropsychiatric abnormalities. First is the defect in the social-cognitive domain, which presents as mental aloneness, a lack of contact with the external world and lack of empathy. The second is sensorimotor defects like temper tantrums, head banging, and some forms of repetitive rituals. All these are now suggested to be because of some anomaly of mirror neuron development. One interesting phenomenon in autism is the inability to comprehend abstract reasoning and metaphors, which in normal humans is subserved by the left supramarginal gyrus, rich in mirror neurons.
Mirror neuron abnormalities have also been blamed for a number of other autistic problems like language difficulties, self-identification, lack of imitation, and finally intention understanding. However, the autistic enigma continues as to whether the primary deficit in intention understanding found in autistic children is due to damage of the mirror neuron system, as it is responsible for understanding the actions of others, or whether there exist more basic defects in the organization of the motor chains. In other words, the fundamental deficit in autistic children would reside in the incapacity to organize their own intentional motor behavior.

Emotions and empathy
Many studies have independently argued that the mirror neuron system is involved in emotions and empathy.[20–23] Studies have shown that people who are more empathic according to self-report questionnaires have stronger activations both in the mirror system for hand actions and the mirror system for emotions, providing more direct support for the idea that the mirror system is linked to empathy. Functions mediated by mirror neurons depend on the anatomy and physiological properties of the circuit in which these neurons are located. Emotional and empathetic activations were found in parieto-premotor circuits responsible for motor action control. In an fMRI experiment,[24] one group of participants was exposed to disgusting odorants and the other group to short movie clips showing individuals displaying a facial expression of disgust. It was found that exposure to disgusting odorants specifically activates the anterior insula and the anterior cingulate. Most interestingly, the observation of the facial expression of disgust activated the same sector of the anterior insula.[25] In agreement with these findings are the data obtained in another fMRI experiment, which showed activation of the anterior insula during the observation and imitation of facial expressions of basic emotions. Similar results[26,27] have been obtained for felt pain and during the observation of a painful situation which involved another person loved by the observer. Taken together, these experiments suggest that feeling emotions is due to the activation of circuits that mediate the corresponding emotional responses.

Evolution of language and mirror neurons
The discovery of mirror neurons provided strong support for the gestural theory of the origin of speech. Mirror neurons create a direct link between the sender of a message and its receiver. Thanks to the mirror mechanism, actions done by one individual become messages that are understood by an observer without any cognitive mediation. The observation of an individual grasping an apple is immediately understood because it evokes the same motor representation in the parieto-frontal mirror system of the observer. On the basis of this fundamental property of mirror neurons, and the fact that the observation of actions like hand grasping activates the caudal part of the IFG (Broca's area), neuroscientists proposed that the mirror mechanism is the basic mechanism from which language evolved.[28] Humans mostly communicate by sounds. Sound-based languages, however, do not represent the only natural way of communication. Languages based on gestures (signed languages) represent another form of complex, fully-structured communication system.
The traditional hypothesis, by contrast, argues that speech is the only natural human communication system and that its evolutionary precursor is animal calls. The argument goes as follows: humans emit sounds to communicate, animals emit sounds to communicate, therefore human speech evolved from animal calls. The contradictions of the above syllogism are: (1) The anatomical structures underlying primate calls and human speech are different. Primate calls are mostly mediated by the cingulate cortex and by deep, diencephalic, and brain stem structures. In contrast, the circuits underlying human speech are formed by areas located around the Sylvian fissure, including the posterior part of the IFG. (2) Animal calls are always linked to emotional behavior, contrary to human speech. (3) Speech is mostly a dyadic, person-to-person communication system. In contrast, animal calls are typically emitted without a well-identified receiver. (4) Human speech is endowed with combinatorial properties that are absent in animal communication. Moreover, humans do possess a “call” communication system like that of non-human primates, and its anatomical location is similar. This system mediates the utterances that humans emit when in particular emotional states (cries, yelling, etc.). These utterances are preserved in patients with global aphasia.

THEORIES OF LANGUAGE EVOLUTION AND ROLE OF MIRROR NEURON SYSTEM
The alternate hypothesis
According to this theory, the initial communicative system in primate precursors of modern humans was based on simple, elementary gesturing.[29] Sounds were then associated with the gestures and progressively became the dominant way of communication. In fact, the mirror mechanism solved, at an initial stage of language evolution, two fundamental communication problems: parity and direct comprehension. Thanks to the mirror neurons, what counted for the sender of the message also counted for the receiver. No arbitrary symbols were required. The comprehension was inherent in the neural organization of the two individuals. It is obvious that the mirror mechanism does not explain by itself the enormous complexity of speech, but it solves one of the fundamental difficulties for understanding language evolution, that is, how what is valid for the sender of a message becomes valid also for the receiver. Hypotheses and speculations on the various steps that have led from the monkey mirror system to language have recently been advanced.[30] In humans, functional MRI studies have reported finding areas homologous to the monkey mirror neuron system in the inferior frontal cortex, close to Broca's area, one of the hypothesized language regions of the brain. This has led to suggestions that human language evolved from a gesture performance/understanding system implemented in mirror neurons. Mirror neurons have been said to have the potential to provide a mechanism for action understanding, imitation learning, and the simulation of other people's behavior. It must be noted, however, that the mirror neuron system seems inherently inadequate to play any role in syntax: this defining property of human languages, implemented in hierarchical recursive structures, is flattened into linear sequences of phonemes, making the recursive structure inaccessible to sensory detection.
Theory of cross-modal abstraction
The ability to make consistent connections across different senses may have initially evolved in lower primates, but it went on developing in a more sophisticated manner in humans through remapping of mirror neurons, which then became co-opted for other kinds of abstraction that humans excel in, like reasoning with metaphors. The development of sophisticated modules inside the brain makes us unique as far as language is concerned. For example, the connections between the inferior temporal gyrus (fusiform gyrus/visual processing area) and the auditory area guide sound-mediated visual abstraction/synesthesia. V. S. Ramachandran, a cognitive neuroscientist, demonstrates this cross-modal abstraction through his famous bouba-kiki effect. In this experiment, when asked to name two shapes with the two given options (bouba and kiki), we predominantly name the rounded shape “bouba” and the spiky shape “kiki”.[31] The analysis of bouba is abstracted in the visual center as somewhat gross, voluptuous, rounded, etc., and kiki is abstracted as somewhat sharp or more chiseled. Another example: we make “pincer-like” hand gestures while pronouncing terms like “tiny”, “little”, and “diminutive”, and pout the lips outwards while pronouncing words like “you”, as if pointing towards someone. These features signify cross-modal connections of neurons between the face and hand areas in the motor cortex (motor-to-motor synkinesia).

Onomatopoeic theory
This theory also revolves around mirror neurons. Onomatopoeia shows how we perceive sound. Sounds are defined as disturbances of mechanical energy that propagate through matter as a wave. What makes a particular sound distinct from others are its properties like frequency, wavelength, period, amplitude, and speed. Onomatopoeia is an attempt to reproduce the sound we hear by converting it into symbols. For instance, we would say the sound a gun makes when it is fired is “BANG”. Although the actual sound is different, we have come to associate “BANG” with a gun. This symbolic association of a sound with a specific word, perceived through vision yet correctly interpreted, is hypothesized to be possible because of mirror neurons.

Theory of recursive embedding
Michael Corballis, an eminent cognitive neuroscientist, argues that what distinguishes us in the animal kingdom is our capacity for recursion, which is the ability to embed our thoughts within other thoughts. “I think, therefore I am” is an example of recursive thought, because the thinker has inserted himself into his thought. Recursion enables us to conceive of our own minds and the minds of others. It also gives us the power of mental “time travel”, that is, the ability to insert past experiences, or imagined future ones, into present consciousness. Corballis demonstrates how these recursive structures led to the emergence of language and speech, which ultimately enabled us to share our thoughts, plan with others, and reshape our environment to better reflect our creative imaginations. Mirror neurons shape the power of recursive embedding.

Theory of mind
This theory suggests that humans can construct a model in their brains of the thoughts and intentions of others, and can therefore predict the thoughts and actions of others. The theory holds that humans anticipate and make sense of the behavior of others by activating mental processes that, if carried into action, would produce similar behavior.
This includes intentional behavior as well as the expression of emotions. The theory states that children use their own emotions to predict what others will do. Therefore, we project our own mental states onto others. Mirror neurons are activated both when actions are executed and when they are observed. This unique function of mirror neurons may explain how people recognize and understand the states of others, mirroring the observed action in the brain as if they had performed it themselves.[32] (A figure in the original article shows a schematic diagram of the various areas in the brain that may have accelerated the evolution of protolanguage.[33])

Human self-awareness
It has been speculated that mirror neurons may provide the neurological basis of human self-awareness. Mirror neurons can not only help simulate other people's behavior, but can be turned “inward” to create second-order representations or meta-representations of one's own earlier brain processes. This could be the neural basis of introspection, and of the reciprocity of self-awareness and other-awareness.[34]

CONCLUSION
Although the enigma of the human brain remains unfathomable, the indefatigable attempts made by ever-aspiring cognitive neuroscientists have opened up a realm of metaphysical secrets in the mirror neuron modular brain that has shaped our civilization.

Footnotes
Source of Support: Nil. Conflict of Interest: None declared.

REFERENCES
1. Rizzolatti G, Fogassi L, Gallese V. Neurophysiological mechanisms underlying the understanding and imitation of action.
2. Jellema T, Baker CI, Oram MW, Perrett DI. Cell populations in the banks of the superior temporal sulcus of the macaque monkey and imitation. In: Melzoff AN, Prinz W, editors. The imitative mind: Development, evolution and brain bases. Cambridge: Cambridge University Press; 2002.
3. Rizzolatti G, Craighero L. The mirror-neuron system.
4. Falck-Ytter T, Gredebäck G, von Hofsten C. Infants predict other people's action goals.
5. Heyes CM. Where do mirror neurons come from?
6. Gallese V, Fadiga L, Fogassi L, Rizzolatti G. Action recognition in the premotor cortex.
7. Ferrari PF, Gallese V, Rizzolatti G, Fogassi L. Mirror neurons responding to the observation of ingestive and communicative mouth actions in the ventral premotor cortex.
8. Keysers C, Kaas JH, Gazzola V. Somatosensation in social perception.
9. Gazzola V, Keysers C. The observation and execution of actions share motor and somatosensory voxels in all tested subjects: Single-subject analyses of unsmoothed fMRI data.
10. Saygin AP, Wilson SM, Dronkers NF, Bates E. Action comprehension in aphasia: Linguistic and non-linguistic deficits and their lesion correlates.
11. Tranel D, Kemmerer D, Adolphs R, Damasio H, Damasio AR. Neural correlates of conceptual knowledge for actions.
12. Candidi M, Urgesi C, Ionta S, Aglioti SM. Virtual lesion of ventral premotor cortex impairs visual perception of biomechanically possible but not impossible actions.
13. Mukamel R, Ekstrom AD, Kaplan J, Iacoboni M, Fried I. Single-neuron responses in humans during execution and observation of actions.
14. Gallese V, Goldman A. Mirror neurons and the simulation theory of mind-reading.
15. Iacoboni M, Molnar-Szakacs I, Gallese V, Buccino G, Mazziotta JC, Rizzolatti G. Grasping the intentions of others with one's own mirror neuron system.
16. Fogassi L, Ferrari PF, Gesierich B, Rozzi S, Chersi F, Rizzolatti G. Parietal lobe: From action organization to intention understanding.
17. Ramachandran VS, Oberman LM. Broken mirrors: A theory of autism.
18. Oberman LM, Hubbard EM, McCleery JP, Altschuler EL, Ramachandran VS, Pineda JA. EEG evidence for mirror neuron dysfunction in autism spectrum disorders.
19. Dapretto M, Davies MS, Pfeifer JH, Scott AA, Sigman M, Bookheimer SY, et al. Understanding emotions in others: Mirror neuron dysfunction in children with autism spectrum disorders.
20. Preston SD, de Waal FB. Empathy: Its ultimate and proximate bases.
21. Decety J, Jackson PL. The functional architecture of human empathy.
22. Gallese V, Goldman AI. Mirror neurons and the simulation theory.
23. Gallese V. The “shared manifold” hypothesis: From mirror neurons to empathy. 2001:33–50.
24. Carr L, Iacoboni M, Dubeau MC, Mazziotta JC, Lenzi GL. Neural mechanisms of empathy in humans: A relay from neural systems for imitation to limbic areas.
25. Wicker B, Keysers C, Plailly J, Royet JP, Gallese V, Rizzolatti G. Both of us disgusted in my insula: The common neural basis of seeing and feeling disgust.
26. Saarela MV, Hlushchuk Y, Williams AC, Schurmann M, Kalso E, Hari R. The compassionate brain: Humans detect intensity of pain from another's face.
27. Singer T. The neuronal basis and ontogeny of empathy and mind reading: Review of literature and implications for future research.
28. Rizzolatti G, Arbib MA. Language within our grasp.
29. Corballis MC. Much ado about mirrors.
30. Arbib MA. From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics.
31. Ramachandran VS. The Tell-Tale Brain: Unlocking the Mystery of Human Nature. Noida, UP, India: Random House; 2010. p. 109.
32. Gallese V, Keysers C, Rizzolatti G. A unifying view of the basis of social cognition.
33. Ramachandran VS. The Tell-Tale Brain: Unlocking the Mystery of Human Nature. Noida, UP, India: Random House; 2010. p. 177.
34. Ramachandran VS. Self Awareness: The Last Frontier, Edge Foundation web essay. [Last accessed on 2011 July 26]. Available from: http://www.edge.org/3rd_culture/rama08/rama08_index.html.
247
Fast-Paced Multiplayer (Part I): Client-Server Game Architecture
Gabriel Gambetta

This is the first in a series of articles exploring the techniques and algorithms that make fast-paced multiplayer games possible. If you’re familiar with the concepts behind multiplayer games, you can safely skip to the next article – what follows is an introductory discussion.

Developing any kind of game is challenging in itself; multiplayer games, however, add a completely new set of problems to be dealt with. Interestingly enough, the core problems are human nature and physics!

It all starts with cheating. As a game developer, you usually don’t care whether a player cheats in your single-player game – their actions don’t affect anyone but themselves. A cheating player may not experience the game exactly as you planned, but since it’s their game, they have the right to play it in any way they please.

Multiplayer games are different, though. In any competitive game, a cheating player isn’t just making the experience better for themselves; they’re also making it worse for the other players. As the developer, you probably want to avoid that, since it tends to drive players away from your game.

There are many things that can be done to prevent cheating, but the most important one (and probably the only really meaningful one) is simple: don’t trust the player. Always assume the worst – that players will try to cheat.

This leads to a seemingly simple solution – you make everything in your game happen on a central server under your control, and make the clients just privileged spectators of the game. In other words, your game client sends inputs (key presses, commands) to the server, the server runs the game, and you send the results back to the clients. This is usually called using an authoritative server, because the one and only authority regarding everything that happens in the world is the server.

Of course, your server may itself have exploitable vulnerabilities, but that’s outside the scope of this series of articles. Using an authoritative server does prevent a wide range of hacks, though. For example, you don’t trust the client with the health of the player; a hacked client can modify its local copy of that value and tell the player they have 10000% health, but the server knows the player only has 10% – when the player is attacked, they will die, regardless of what a hacked client may think.

You also don’t trust the player with their position in the world. If you did, a hacked client would tell the server “I’m at (10,10)” and a second later “I’m at (20,10)”, possibly going through a wall or moving faster than the other players. Instead, the server knows the player is at (10,10), the client tells the server “I want to move one square to the right”, the server updates its internal state with the new player position at (11,10), and then replies to the player “You’re at (11,10)”.

In summary: the game state is managed by the server alone. Clients send their actions to the server. The server updates the game state periodically, and then sends the new game state back to the clients, who just render it on the screen.

This dumb-client scheme works fine for slow turn-based games, for example strategy games or poker. It would also work in a LAN setting, where communications are, for all practical purposes, instantaneous. But it breaks down when used for a fast-paced game over a network such as the internet.
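Before looking at why it breaks down, here is a minimal sketch of the dumb-client / authoritative-server flow described above. It is not code from this article; the names (ServerWorld, MoveCommand, and so on) are purely illustrative, and a real server would of course receive commands over the network rather than via a direct call.

```typescript
// Illustrative sketch of an authoritative server: clients send *intents*,
// the server validates them and owns the only real copy of the game state.

type MoveCommand = { playerId: string; dx: number; dy: number };
type Position = { x: number; y: number };

class ServerWorld {
  private positions = new Map<string, Position>();

  join(playerId: string): void {
    this.positions.set(playerId, { x: 10, y: 10 });
  }

  // The server never trusts a position sent by a client; it only accepts
  // requests like "move one square right" and checks them itself.
  applyCommand(cmd: MoveCommand): Position {
    const pos = this.positions.get(cmd.playerId);
    if (!pos) throw new Error(`unknown player ${cmd.playerId}`);
    // Reject impossible moves (e.g. teleporting across the map in one step).
    if (Math.abs(cmd.dx) > 1 || Math.abs(cmd.dy) > 1) return pos;
    pos.x += cmd.dx;
    pos.y += cmd.dy;
    return pos; // This authoritative state is what gets sent back to clients.
  }
}

// Usage: the client only ever says what it *wants* to do.
const world = new ServerWorld();
world.join("alice");
console.log(world.applyCommand({ playerId: "alice", dx: 1, dy: 0 })); // { x: 11, y: 10 }
```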
Let’s talk physics. Suppose you’re in San Francisco, connected to a server in New York. That’s approximately 4,000 km, or 2,500 miles (roughly the distance between Lisbon and Moscow). Nothing can travel faster than light, not even bytes on the internet (which at the lowest level are pulses of light, electrons in a cable, or electromagnetic waves). Light travels at approximately 300,000 km/s, so it takes about 13 ms to travel 4,000 km.

This may sound quite fast, but it’s actually a very optimistic setup – it assumes data travels at the speed of light in a straight path, which is most likely not the case. In real life, data goes through a series of jumps (called hops in networking terminology) from router to router, most of which don’t happen at lightspeed; the routers themselves introduce a bit of delay, since packets must be copied, inspected, and rerouted.

For the sake of the argument, let’s assume data takes 50 ms from client to server. This is close to a best-case scenario – what happens if you’re in New York connected to a server in Tokyo? What if there’s network congestion for some reason? Delays of 100, 200, even 500 ms are not unheard of.

Back to our example: your client sends some input to the server (“I pressed the right arrow”). The server gets it 50 ms later. Let’s say the server processes the request and sends back the updated state immediately. Your client gets the new game state (“You’re now at (1, 0)”) another 50 ms later. From your point of view, you pressed the right arrow, nothing happened for a tenth of a second, and then your character finally moved one square to the right. This perceived lag between your inputs and their consequences may not sound like much, but it’s noticeable – and a lag of half a second isn’t just noticeable, it actually makes the game unplayable.

Networked multiplayer games are incredibly fun, but they introduce a whole new class of challenges. The authoritative server architecture is pretty good at stopping most cheats, but a straightforward implementation may make games quite unresponsive to the player.

In the following articles, we’ll explore how we can build a system based on an authoritative server while minimizing the delay experienced by the players, to the point of making it almost indistinguishable from local or single-player games.
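To make the source of that tenth-of-a-second lag concrete, here is a small sketch – again not from the article, with made-up names and a simulated network – of a naive client that does nothing with an input until the server’s authoritative reply arrives. The perceived input lag is one full round trip.

```typescript
// Naive (non-predicting) client: press key -> wait for server -> only then
// update the screen. With 50 ms each way, the player waits ~100 ms per input.

const ONE_WAY_DELAY_MS = 50; // the client-to-server delay assumed above

function sendToServer(cmd: { dx: number }): Promise<{ x: number }> {
  // Simulate network travel + immediate server processing: the authoritative
  // position comes back after a full round trip (50 ms there + 50 ms back).
  return new Promise((resolve) =>
    setTimeout(() => resolve({ x: 0 + cmd.dx }), ONE_WAY_DELAY_MS * 2)
  );
}

async function naiveClient(): Promise<void> {
  const pressedAt = Date.now();
  const state = await sendToServer({ dx: 1 }); // "I pressed the right arrow"
  // Only now does the character move on screen.
  console.log(`moved to x=${state.x} after ${Date.now() - pressedAt} ms`); // ~100 ms
}

naiveClient();
```

Client-side prediction, covered in Part II, is what lets the character move immediately while the authoritative reply is still in flight.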