labels | Title | Text
---|---|---
8 | Welcome to the Next Generation of Notability, 11.0 | Notability Blog
4 min read · Nov 1, 2021
Millions of people from around the world use Notability to create incredible notes and organize their lives. We take SO MUCH pride in our vibrant users — students, creators, mathematicians, writers, designers, doodlers and beyond!
As our community continues to grow, so does our platform. We’re excited to announce the next generation of Notability is here! Today’s announcement brings a new pricing model and new features.
We’re going free! Notability is officially a free app, available to anyone with an iPad, iPhone or Mac! The free version provides the same Notability experience you know and love with limits on editing and some features. We’re so excited to make Notability available to more creators, learners and teachers across the globe!
Want more? Try our annual subscription option For those who use Notability regularly and would like to enjoy an unlimited note taking experience with premium content, we’re offering an annual subscription. In addition to unlimited note taking, subscribers will have access to advanced technology and fresh creative content such as planners, stickers, and more. Our subscription service will enable us to provide the highest quality experience for our users going forward and provide Notability for free to K-12 institutions. Our previous one-time paid upfront model made it difficult to continue to advance and innovate our app. Learn more about our price update here.
Current customers can continue to enjoy Notability If you’ve purchased Notability prior to November 1st, 2021, you will have lifetime access to all existing features and any content previously purchased in the app. This includes the core Notability experience that users know and love, including unlimited editing, iCloud sync, and any features or content that was previously purchased through the Notability Shop.
…Now it’s time to cue the drum roll, please. In all 11 years of updates, 11.0 marks our biggest release yet! It introduces a brand new platform for note sharing called the Notability Gallery, and top requested features you’ll love.
Say hello to Notability 11.0 The Notability Gallery — For the first time ever, you can publish your Notability work to the world! The Gallery is a new platform accessible within Notability and on the web, where you can share your creativity, browse, and enjoy all types of content created by other Notability users. Whether you’re studying for an exam, beginning a new project, or looking for inspiration — you can search for notes on any topic and learn from a wide range of notetakers. And it’s easy to share and collaborate with the community — simply publish your notes from the share menu inside the app. Visit the Gallery to get inspired now.
Browse other notes, search topics/tags, save your favorites for later, and download notes from the Gallery! Flexible Organization— Organizing your beautiful library of notes just got easier! You can now create dividers inside dividers and also create subjects with the same name.
Templates — A fresh new template menu allows you to instantly create notes from an array of hand crafted templates. Whether you’re managing your daily to-do’s, monthly finances, or creating new music — there is a template made for you!
Choose from an all new selection of templates, each customizable! Page Manager — Our page manager got a big upgrade too, giving you more effortless control over your notes! Now you can apply bulk actions across multiple pages with a sleek, new expandable view. You can even copy and move pages from one note to another!
We look forward to continuing to grow Notability’s capabilities in ways that make it even more useful and inspiring to you. Our community of creators drives our feature development, so please reach out to us at support@gingerlabs.com with your requests as you take 11.0 for a spin, and check out our FAQs with any questions.
|
4 | The Map of Mathematics |
2 | Pattern-matching changes for C# 9.0 | We are considering a small handful of enhancements to pattern-matching for C# 9.0 that have natural synergy and work well to address a number of common programming problems:
Parenthesized patterns permit the programmer to put parentheses around any pattern. This is not so useful with the existing patterns in C# 8.0, however the new pattern combinators introduce a precedence that the programmer may want to override.
We permit a type as a pattern:
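For illustration, a sketch of what this permits (the identifiers here are placeholders, not taken from the proposal text):

    void M(object o1, object o2)
    {
        var t = (o1, o2);
        if (t is (int, string)) { }   // nested type patterns inside a positional pattern
        switch (o1)
        {
            case int: break;          // a bare type used directly as a case pattern
            case double: break;
        }
    }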
This retcons the existing is-type-expression to be an is-pattern-expression in which the pattern is a type-pattern, though we would not change the syntax tree produced by the compiler.
One subtle implementation issue is that this grammar is ambiguous. A string such as a.b can be parsed either as a qualified name (in a type context) or a dotted expression (in an expression context). The compiler is already capable of treating a qualified name the same as a dotted expression in order to handle something like e is Color.Red. The compiler's semantic analysis would be further extended to be capable of binding a (syntactic) constant pattern (e.g. a dotted expression) as a type in order to treat it as a bound type pattern in order to support this construct.
After this change, you would be able to write
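For example (a sketch; the method name is assumed), a type name, including a qualified name such as System.String, can appear directly wherever a pattern is expected:

    static string Classify(object o) => o switch
    {
        System.String => "text",   // a qualified name bound as a type pattern
        int           => "number",
        _             => "other",
    };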
Relational patterns permit the programmer to express that an input value must satisfy a relational constraint when compared to a constant value:
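For example, a minimal sketch using relational patterns in a switch expression:

    static string Bucket(int n) => n switch
    {
        < 0  => "negative",
        0    => "zero",
        < 10 => "small",   // reached only for 1 through 9; earlier arms took the rest
        _    => "large",
    };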
Relational patterns support the relational operators <, <=, >, and >= on all of the built-in types that support such binary relational operators with two operands of the same type in an expression. Specifically, we support all of these relational patterns for sbyte, byte, short, ushort, int, uint, long, ulong, char, float, double, decimal, nint, and nuint.
The expression is required to evaluate to a constant value. It is an error if that constant value is double.NaN or float.NaN. It is an error if the expression is a null constant.
When the input is a type for which a suitable built-in binary relational operator is defined that is applicable with the input as its left operand and the given constant as its right operand, the evaluation of that operator is taken as the meaning of the relational pattern. Otherwise we convert the input to the type of the expression using an explicit nullable or unboxing conversion. It is a compile-time error if no such conversion exists. The pattern is considered not to match if the conversion fails. If the conversion succeeds then the result of the pattern-matching operation is the result of evaluating the expression e OP v where e is the converted input, OP is the relational operator, and v is the constant expression.
Pattern combinators permit matching both of two different patterns using and (this can be extended to any number of patterns by the repeated use of and), either of two different patterns using or (ditto), or the negation of a pattern using not.
A common use of a combinator will be the idiom
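That idiom is the null test written with not, sketched here for some expression e:

    if (e is not null)
    {
        // e is known to be non-null here
    }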
More readable than the current idiom e is object, this pattern clearly expresses that one is checking for a non-null value.
The and and or combinators will be useful for testing ranges of values
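For example, a sketch testing whether a char falls in either of two ranges:

    static bool IsAsciiLetter(char c) =>
        c is >= 'a' and <= 'z' or >= 'A' and <= 'Z';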
This example illustrates that and will have a higher parsing priority (i.e. will bind more closely) than or. The programmer can use the parenthesized pattern to make the precedence explicit:
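For instance, the same check with the grouping spelled out (sketch):

    static bool IsAsciiLetter(char c) =>
        c is (>= 'a' and <= 'z') or (>= 'A' and <= 'Z');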
Like all patterns, these combinators can be used in any context in which a pattern is expected, including nested patterns, the is-pattern-expression, the switch-expression, and the pattern of a switch statement's case label.
Due to the introduction of the type pattern, it is possible for a generic type to appear before the token =>. We therefore add => to the set of tokens listed in 7.5.4.2 Grammar Ambiguities to permit disambiguation of the < that begins the type argument list. See also https://github.com/dotnet/roslyn/issues/47614.
Are and, or, and not some kind of contextual keyword? If so, is there a breaking change (e.g. compared to their use as a designator in a declaration-pattern)?
We expect to support all of the primitive types that can be compared in an expression using a relational operator. The meaning in simple cases is clear
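For example (sketch), with an int input the comparison is simply the built-in int comparison:

    static bool IsNegative(int x) => x is < 0;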
But when the input is not such a primitive type, what type do we attempt to convert it to?
We have proposed that when the input type is already a comparable primitive, that is the type of the comparison. However, when the input is not a comparable primitive, we treat the relational as including an implicit type test to the type of the constant on the right-hand-side of the relational. If the programmer intends to support more than one input type, that must be done explicitly:
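A sketch of what that explicit form could look like (the particular types chosen here are an assumption):

    static bool IsSmall(object o) =>
        o is (byte and < 100) or (short and < 100) or (int and < 100);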
It has been suggested that when you write an and combinator, type information learned on the left about the top-level type could flow to the right. For example
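One such sketch:

    static bool IsSmallByte(object o) => o is byte and < 100;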
Here, the input type to the second pattern is narrowed by the type narrowing requirements of the left side of the and. We would define type narrowing semantics for every pattern form; the narrowed type of a pattern P is defined as follows:
The addition of or and not patterns creates some interesting new problems around pattern variables and definite assignment. Since variables can normally be declared at most once, it would seem any pattern variable declared on one side of an or pattern would not be definitely assigned when the pattern matches. Similarly, a variable declared inside a not pattern would not be expected to be definitely assigned when the pattern matches. The simplest way to address this is to forbid declaring pattern variables in these contexts. However, this may be too restrictive. There are other approaches to consider.
One scenario that is worth considering is this
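A sketch of the scenario (the names e, i, and M are placeholders):

    if (e is not int i) return;
    M(i); // would i be definitely assigned here?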
This does not work today because, for an is-pattern-expression, the pattern variables are considered definitely assigned only where the is-pattern-expression is true ("definitely assigned when true").
Supporting this would be simpler (from the programmer's perspective) than also adding support for a negated-condition if statement. Even if we add such support, programmers would wonder why the above snippet does not work. On the other hand, the same scenario in a switch makes less sense, as there is no corresponding point in the program where definitely assigned when false would be meaningful. Would we permit this in an is-pattern-expression but not in other contexts where patterns are permitted? That seems irregular.
Related to this is the problem of definite assignment in a disjunctive-pattern.
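For instance, a sketch of the problematic shape (not necessarily legal under the final rules):

    if (e is 0 or int i)
    {
        M(i); // i is not definitely assigned here
    }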
We would only expect i to be definitely assigned when the input is not zero. But since we don't know whether the input is zero or not inside the block, i is not definitely assigned. However, what if we permit i to be declared in different mutually exclusive patterns?
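A sketch of that suggestion (hypothetical syntax, since today a variable cannot be declared twice):

    if ((e1, e2) is (0, int i) or (int i, 0))
    {
        M(i); // under the suggestion, i would be definitely assigned here
    }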
Here, the variable i is definitely assigned inside the block, and takes its value from the other element of the tuple when a zero element is found.
It has also been suggested to permit variables to be (multiply) defined in every case of a case block:
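For example (again hypothetical, illustrating the suggestion rather than current rules):

    switch ((e1, e2))
    {
        case (0, int i):
        case (int i, 0):
            Console.WriteLine(i);
            break;
    }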
To make any of this work, we would have to carefully define where such multiple definitions are permitted and under what conditions such a variable is considered definitely assigned.
Should we elect to defer such work until later (which I advise), we could say in C# 9 that a pattern variable may not be declared within a not pattern or an or pattern.
Then, we would have time to develop some experience that would provide insight into the possible value of relaxing that later.
These new pattern forms introduce many new opportunities for diagnosable programmer error. We will need to decide what kinds of errors we will diagnose, and how to do so. Here are some examples:
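First, a conjunction of two incompatible types (an illustrative sketch):

    static void Handle(object o)
    {
        switch (o)
        {
            case int and double: // can never match: o cannot be both an int and a double
                break;
        }
    }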
This case can never match (because the input cannot be both an int and a double). We already have an error when we detect a case that can never match, but its wording ("The switch case has already been handled by a previous case" and "The pattern has already been handled by a previous arm of the switch expression") may be misleading in new scenarios. We may have to modify the wording to just say that the pattern will never match the input.
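Next, a conjunction of two different constants (sketch):

    static void Handle(int n)
    {
        switch (n)
        {
            case 1 and 2: // can never match: n cannot be both 1 and 2
                break;
        }
    }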
Similarly, this would be an error because a value cannot be both 1 and 2.
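Then a disjunct that is merely redundant rather than impossible (sketch):

    static void Handle(int n)
    {
        switch (n)
        {
            case 1 or 2 or 1: // the trailing "or 1" adds nothing
                break;
        }
    }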
This case is possible to match, but the or 1 at the end adds no meaning to the pattern. I suggest we should aim to produce an error whenever some conjunct or disjunct of a compound pattern does not either define a pattern variable or affect the set of matched values.
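Redundancy can also span cases (sketch):

    static void Handle(int n)
    {
        switch (n)
        {
            case < 2:
                break;
            case 0 or 1 or 2: // "0 or 1 or" is already covered by the previous case
                break;
        }
    }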
Here, 0 or 1 or adds nothing to the second case, as those values would have been handled by the first case. This too deserves an error.
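Finally, exhaustiveness with relational patterns; a sketch over a byte input:

    static int Rank(byte b) => b switch
    {
        < 100 => 0,
        100   => 1,
        > 100 => 2,
    };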
A switch expression such as this should be considered exhaustive (it handles all possible input values).
In C# 8.0, a switch expression with an input of type byte is only considered exhaustive if it contains a final arm whose pattern matches everything (a discard-pattern or var-pattern). Even a switch expression that has an arm for every distinct byte value is not considered exhaustive in C# 8. In order to properly handle exhaustiveness of relational patterns, we will have to handle this case too. This will technically be a breaking change, but no user is likely to notice. |
1 | Spanish fashion brand Desigual approves 4 day working week | Barcelona, 7 October 2021. Desigual’s employees voted yes. A month after the company proposed to its employees at the headquarters – excluding sales and operations teams – the possibility of reducing their work week to 4 days (Monday to Thursday) with the option to work from home on one of these days, the initiative has been approved securing more than 86% of the vote. The company had set an ambitious goal of gaining the support of at least 66% of the staff involved in the measure and the result has exceeded all expectations.
This is a bold step forward in making the company the best place to work, and having secured the support of the employees, Desigual is moving forward with the implementation of disruptive working and work-life balance models and is becoming the first fashion brand in Spain to offer these conditions. This initiative is a clear reflection of the brand’s philosophy, which is expressed through its “Life is Awesome” claim, and goes hand in hand with the creativity and innovation that has always characterised Desigual as a company which chooses to think and do in a different and disruptive way.
As Desigual’s CEO Alberto Ojinaga explained, “We are very happy that this initiative has been backed by such a large majority of our employees. We knew it was a risky proposal and that it could make some hesitant, but we are confident that it will contribute to improving the work-life balance for everyone at Desigual. We are excited about the new stage that we are embarking on today, which embodies the innovative and daring mindset that has always set us apart and is shared by the whole team.”
“The new work week will require an adaptation process and a unified effort, but the pandemic has shown us that we can organise work and teams in a different way and continue to be efficient by prioritising what really matters. Moreover, this initiative makes us more appealing as an organisation, which will allow us to retain and attract the best talent. We are a different, disruptive, young and optimistic company that is constantly transforming itself while not being afraid to propose new initiatives, and projects like the one that has been approved today only confirm this. We would love for the decision made today by Desigual’s employees to set a precedent and inspire other companies,” concluded Ojinaga.
The vote, which involved around 500 employees based in the company’s Barcelona headquarters, had a participation rate of xx% and was conducted in person, under notarial supervision and using ballot boxes. In total, 86% of employees voted in favour of adopting the 4-day work week. A team of spokespeople made up of 10 employees chosen by their colleagues were in charge of organising and counting the votes.
The voting process took place today at Desigual’s headquarters and votes were counted on the spot to guarantee the utmost transparency of the process.
Awesome Culture by Awesome People
Almost 500 employees working at Desigual’s headquarters will benefit from this measure, and starting tomorrow they will work Monday to Thursday with the option to work from home one day a week. The new hours will bring some changes to the terms of the contracts of the employees who will benefit from this measure, who will be working 34 hours a week instead of the current 39.5.
This new format also involves a salary decrease associated with the adjustment of hours (13%). The company is proposing to share this reduction by assuming 50% of the difference, which means that employees will only see a 6.5% decrease to their salaries.
According to Coral Alcaraz, Desigual’s People Director, “It has been very satisfying to see how employees got involved in the information and voting process, and that they appreciated such a disruptive and innovative measure in favour of flexibility and work-life balance. At Desigual we put people at the centre of our decisions and strongly believe in collaboration between all our teams to enhance work-life balance and improve the health and wellbeing of all our employees.” This strategy is part of the global vision and policies of Human Resources at Desigual, which, under the Awesome Culture by Awesome People concept (an iteration of our long-standing motto La vida es chula – Life is Awesome) is showing its commitment to health and wellbeing, sustainability, equality, work-life balance and flexibility as its cornerstones.
This initiative is part of a larger plan to offer disruptive working and work-life balance models, which also involves implementing improvements for our other collectives that can’t be part of this new work week – due to the specific requirements of their positions, as is the case for store staff and the sales and operations teams – and also aims to strengthen the service provided to the stores and logistics centres.
Desigual is an international fashion brand that was established in Barcelona in 1984. It is famous for the individuality and unique character of its creations, which aim to bring positivity and authenticity to thousands of people who want to express the best version of themselves.
The company currently has a workforce of over 2,700 employees and is present in 89 countries through 10 sales channels, 438 monobrand stores and six product categories: Woman, Man, Kids, Accessories, Shoes and Sport. |
35 | A Library Demand List | The table below takes the current New York Times Best Sellers list for combined print and e-book fiction and adds a bit of information for each title reflecting the demand for its e-book edition at a collection of U.S. public libraries, selected for their size and geographic diversity.
Here's how it works. I take the fifteen current NYT Best Sellers and “re-rank” them according to:
the number of holds, to get a sense of the relative number of patrons waiting for each e-book.
the number of copies owned, to get a sense of which e-books libraries have purchased/licensed in great quantity. These tend to be books that have lingered on the list and/or were well-promoted ahead of time.
the ratio of holds to copies owned, to get a sense of not just which books are popular, but which are “more popular than expected”; think acceleration instead of velocity. These tend, conversely, to be newer books and/or surprise hits. (This is my favorite ranking.)
In the table, for each book, I give you a sense of how widely those new library ranks diverge from its NYT rank. A book at the top of the NYT list but with (relatively) low demand at these public libraries will be coded with red arrows; a book low on the list that is hotly demanded will be solid green.
A dash (-) indicates no divergence in rank. A typographical dagger (†) indicates that no library holds any copies of the e-book.
Read more details.
I don’t display the raw number of e-book holds because this isn’t a full accounting of all U.S. public libraries (I wish!) so the numbers have meaning only in comparison to each other, not as free-floating measurements.
And, I'll repeat this, because it's important: the library ranks are calculated within the current NYT list, not among like, all library e-books. I do not currently have a way to survey all library e-books 😉
One more wrinkle! Sometimes, when a book is very popular, libraries will purchase a “cost-per-circulation” license, which means they can pay for loans to patrons on demand and, as a result, those books at those libraries will have zero holds; you ask for the e-book, you get it. This muddles my rankings a bit! Unfortunately, I have no way to determine how e-books are being licensed at different libraries, and this murkiness is one of the reasons I wanted to keep these re-rankings very “high level”—directional indications, not exact accountings.
I think these views of the NYT list are interesting because library e-book lending has exploded in the past few years, and now constitutes a very important channel for reading in the United States. It feels worthwhile to try to understand how its patterns both mirror and diverge from book buying.
I am being cryptic about where this data comes from, for Secret Reasons, but/and I think this is compatible with my desire to show the broad gist. The NYT list is gist-y, after all — not a raw tally of books sold, but a deeper divination of commercial momentum.
If you’re not familiar with the supply side of the library e-book equation, it’s worth reading Dan Cohen’s post outlining the myriad acquisition models for these weird entities. It’s … a lot!
Project scope: This is intended as a sketch, and I consider it finished. I’ll keep this page in sync with the NYT list for at least one year, until February 2022.
Update: It's February 2022, so I will no longer be updating this sketch. Thanks for checking in!
–Robin |
1 | OpenGL 2D Facade (25): Get the Z of a pixel – Design Patterns and Video Games |
In this post, I show how to get the z of a pixel using the OpenGL Z-Buffer. I use it to identify the tile below the mouse cursor. This approach is faster than ray casting, as it lets the GPU do the job!
This post is part of the OpenGL 2D Facade series
To check that it works fine, the player clicks on items in the world, and the character says what each one is:
The usual approach is to cast a ray from the pixel and find the closest intersecting face. In 2D, we look for all the faces that contain the pixel. Since our faces are rectangles, computing the intersection is simple. On layers with regularity, like grids, it can be even easier. Once we have found the faces that contain the pixel, we read the tile texture to see if the pixel is transparent, in which case we ignore the face. In the end, we select the face with the lowest depth value.
As you can imagine, ray casting requires many computations. With the approach based on the Z-Buffer, we can reduce that to almost nothing and save CPU time for other tasks.
We can ask OpenGL for any value of the Z-Buffer. For instance, we can get the Z-Buffer of a pixel (x,y):
data = glReadPixels(x, screenHeight - 1 - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT)
zbuffer = float(data[0])
Remember that the Y-axis of OpenGL is bottom-up, which is why we invert y.
This zbuffer value is in [0,1], so we need to convert it to NDC (Normalized Device Coordinates):
z = 2 * zbuffer - 1
Finally, we "linearize" this z value to get the depth of the pixel, as shown in the previous post:
zNear = 0.001
zFar = 1.0
maxDepth = 65536
a = maxDepth * zFar / (zFar - zNear)
b = maxDepth * zFar * zNear / (zNear - zFar)
depth = a + b / z
With these settings, the depth value is between 0 (front) and 65535 (background).
We extend the ZBuffer class with these formulae:
class ZBuffer:
    zNear = 0.001
    zFar = 1.0
    maxDepth = 65536
    a = maxDepth * zFar / (zFar - zNear)
    b = maxDepth * zFar * zNear / (zNear - zFar)

    @staticmethod
    def depth2z(depth: float) -> float:
        return ZBuffer.b / (depth - ZBuffer.a)

    @staticmethod
    def z2depth(z: float) -> float:
        return ZBuffer.a + ZBuffer.b / z

    @staticmethod
    def zbuffer2z(zbuffer: float) -> float:
        return 2 * zbuffer - 1

    @staticmethod
    def zbuffer2depth(zbuffer: float) -> float:
        return ZBuffer.z2depth(2 * zbuffer - 1)
We also add a new method in the OpenGL facade that returns the depth of a pixel (x,y):
def getPixelDepth(self, x: int, y: int) -> float:
    data = glReadPixels(x, self.screenHeight - 1 - y, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT)
    zbuffer = float(data[0])
    depth = ZBuffer.zbuffer2depth(zbuffer)
    return depth
Since we assign a range of depth values to each layer, we can find the layer of a pixel. It is implemented in the getPixelLayer() method of the facade:
def getPixelLayer(self, x: int, y: int) -> Tuple[Union[None, LayerGroup], int, Union[None, Layer], int]:
    depth = int(round(self.getPixelDepth(x, y)))
    for layerGroupIndex, layerGroup in enumerate(self.__layerGroups):
        if layerGroup is None:
            continue
        for layerIndex, layer in enumerate(layerGroup):
            if layer is None:
                continue
            if layer.hasDepth(depth):
                return layerGroup, layerGroupIndex, layer, layerIndex
    return None, -1, None, -1
Note the hasDepth() method of facade layers: it returns True if the layer uses the depth value, False otherwise. The implementation of these methods depends on each case and is straightforward.
Finding the face of a pixel depends on the type of the layer. In the case of a grid, we want the cell coordinates of the face. We add a new method getPixelCell() in the GridLayer class:
def getPixelCell(self, x: int, y: int) -> (int, int):
    depth = int(round(self._gui.getPixelDepth(x, y)))
    viewX, viewY = self._layerGroup.getTranslation()
    cellX = (x + viewX) // self.tileWidth
    for cellY, rowDepth in enumerate(self.__depths):
        if rowDepth == depth:
            return cellX, cellY
    return -1, -1
Line 2 gets the depth of the pixel. We need it to find the right cell.
Line 3 gets the current shift of the layer. The coordinates of the pixel are relative to the screen or window; we need to translate them to world coordinates.
Line 4 translates the x screen/window coordinate to cell world coordinate. Note that we can't do the same with y coordinates because there are items larger than a row. For instance, big trees are two tiles tall.
Lines 5-7 parse all depths used by the layer and return the cell y coordinate corresponding to the pixel's depth.
In the case of a characters layer, we want all the characters at some pixel location. We add a new method getPixelCharacterIndices() in the CharactersLayer class:
def getPixelCharacterIndices(self, x: int, y: int) -> List[int]:
    depth = int(round(self._gui.getPixelDepth(x, y)))
    if not self.hasDepth(depth):
        return []
    viewX, viewY = self._layerGroup.getTranslation()
    return self.findFaces(x + viewX, y + viewY)
Lines 2-4 check that there is a character at screen/window coordinates (x, y). It can't be faster!
Line 5 gets the current shift of the layer to convert screen/window coordinates to world coordinates.
Line 6 uses a new method findFaces() of the OpenGLLayer class. It uses NumPy to find the faces intersecting a given point (faster than pure Python code):
def findFaces(self, x: float, y: float) -> List[int]:
    spriteScreenX = -1 + x * self.__mesh.screenPixelWidth
    spriteScreenY = 1 - y * self.__mesh.screenPixelHeight
    x1 = self.__vertices[:, 1, 0]
    y1 = self.__vertices[:, 1, 1]
    x2 = self.__vertices[:, 3, 0]
    y2 = self.__vertices[:, 3, 1]
    # Element-wise comparisons combined with & (NumPy arrays do not support chained comparisons)
    mask = (x1 <= spriteScreenX) & (spriteScreenX <= x2) & (y2 <= spriteScreenY) & (spriteScreenY <= y1)
    return mask.nonzero()[0].tolist()
We assume that we won't get a lot of characters simultaneously (e.g., less than a thousand), so this procedure should always run fast.
I improved the text layers so they can display several texts. I also updated characters so they can have text on top of their head. I based these implementations on dynamic meshes, using a design I am not happy with. I'll present a better solution in the next post.
Download code & assets
In the next post, I'll show how to create dynamic meshes. |
3 | Amazon's Profits, AWS and Advertising | People argue about Amazon a lot, and one of the most common and long-running arguments is about profits. The sales keep going up, and it takes a larger and larger share of US retail every year (7-8% in 2019), but it never seems to make any money. What’s going on?
Well, to begin with, this idea itself is a little out of date: if we zoom in on that net income line, we can see that Amazon’s profitability appears to have shot up in the last couple of years. But what else is going on?
An obvious response here is that all of the profit is coming from AWS: it’s easy to assume that AWS’s profits subsidise losses in the rest of the company. By extension, if anti-trust intervention split AWS apart from the rest of the company, those cross-subsidies would go away and Amazon would have to put up prices, or grow more slowly, or at any rate be a less formidable and aggressive competitor.
That doesn’t really stand up to examination.
AWS has been around in some form since the early 2000s, but Amazon didn’t disclose financials, and the products looked so cheap that many people presumed that it must be making a loss. Then, in 2015, reporting regulations meant that Amazon had to start giving numbers (with historic figures back to 2013), and we discovered that in fact it was hugely profitable. Hence, in 2015 and 2016 AWS was the great majority of Amazon’s reported operating income. However, that’s not true anymore - the US business is also now generating substantial operating profit, and, on this basis, it’s only the international business that’s losing money.
There is an easy way to get this wrong if you’re not careful. If you only look at AWS and total operating profit, or aggregate the numbers that Amazon reports into AWS and ‘Other’, the resulting numbers will be misleading: the losses for RoW balance out the profit in the USA and make it look as though AWS is the only profitable business, especially from 2015 to 2017. The chart below shows the result: close to $3bn of operating profit in the US business in 2017 has vanished, and so has a $3bn operating loss in the RoW business. Don’t do this.
This is a great illustration of a broader challenge: Amazon is lots of businesses, but you only see the profitability of the aggregate.
Hence, Amazon reports revenue and operating income for three segments: AWS, USA and Rest of World. The chart below shows the revenue (and also lets you see that AWS is a much higher-margin business).
However, this is not the only kind of disclosure that Amazon gives. If you scroll a little further through the 10-K, you will find that since 2014 the company has also disclosed revenue (though not profitability) on a quite different and much more informative basis.
‘First party ecommerce’, where Amazon sells you things on its own behalf on the Amazon website, is now only about half of Amazon’s revenue. Another third comes from providing platforms for other people to do business: AWS is one part of this and Marketplace, or ‘third party services’ is the other.
Amazon lets other companies list products on its website and ship them through its warehouses as the ‘Marketplace’ business. It charges them a fee for this, and it reports the fee as revenue (‘third party services’), and it makes a profit on that. Amazon doesn’t treat the value of the actual purchases as its own revenue, which is in line with US accounting rules, since technically Amazon is only acting as an agent. So, if you buy a $1,000 TV on Amazon from a third party supplier, Amazon will charge the supplier (say) $150 in fees for shipping and handling and commission, and only report $150 as revenue. However, it has started stating, in rounded numbers, what percentage third party sales make up of total sales on Amazon - so-called ‘gross marketplace value’ or GMV. Last year, it was about 60%.
As an intellectual exercise, it’s interesting to think about what this would look like if the accounting rules were different and everything sold and processed through Amazon’s website was reported as Amazon revenue. On an operational level, this is pretty much what happens today: Amazon’s own ecommerce product teams are charged an internal fee by the logistics platform and by the digital platform in much the same way that external marketplace vendors are charged a fee. Hence, if these were reported on a like-for-like basis, Amazon’s revenue in 2019 would have been close to $450bn.
Marketplace gets quite a lot of attention these days, but it’s also worth a quick look at one of the small and insignificant-looking series on that chart - ‘ads and other’. The vast majority of this is Amazon’s business selling placement on the home page and in product search results, which it has built up from almost nothing in the last five years. This is what that business looks like in isolation - it did close to $15bn of revenue in 2019. A billion here and a billion there can add up to real money.
Amazon doesn’t disclose profitability for this segment, but we can make some informed (wild) guesses. So: it mostly leverages existing technical infrastructure and engineering resource. It must have meaningful numbers of sales and operations people, but the system itself is mostly automated. It will have knock-on consequences to other parts of the business - for example, it may steer sales to product with higher or lower profitability. And it seems reasonable to assume that it has pretty high margins.
So, for comparison, Alphabet had a 2019 operating margin before R&D and TAC (it’s at least arguable that neither apply here) of 57%, and AWS reported 2019 operating margin of 26%. On that basis it’s reasonable to suggest that the ad business is contributing as much operating income as everything else apart from AWS, and it’s not absurd to suggest it might be close to matching AWS.
Close to six years ago I wrote a pretty popular essay about Amazon’s business at the time - ‘Why Amazon has no profits’. That made two points.
First, Amazon is not one business - it’s many different businesses, at different stages of maturity and profitability. Some of those businesses are established and highly profitable and others are new and in a start-up loss making phase, but you can’t really see from the outside, because all of the money gets both aggregated and reinvested. You can see exactly the same thing in these charts. Amazon is not a loss-making business that will eventually have to raise prices to make money; rather, it’s many businesses leveraging a common platform and a common balance sheet.
Second, Amazon is run for cash, not net income. Jeff Bezos always says that he runs it for ‘trailing 12 months’ absolute free cashflow’, not net income, and it’s had positive cashflow since 2002, which is before some startup founders were born. It’s a drawback of these charts that they’re based on operating income, not cashflow, but that’s what we’ve got. There’s also a bunch of other interesting things one could dig into - stock compensation, say, or the cash conversion cycle. But the important thing is that if you want to understand a company, it’s worth reading the accounts. |
3 | Wikipedia has some beautiful alternative themes | Wikipedia, as a website powered by MediaWiki (a wiki software), is a skinnable website, which means the presentation (look and feel) of the pages can be changed. As of February 2022 there are five available skins: Vector 2022 (default on desktop), Vector 2010, Minerva Neue (mobile), MonoBook, and Timeless. The following screenshots show the current skins along with preview links that allow anyone to load this page using them:
Vector 2022 ()
Vector 2010 ()
MonoBook ()
Timeless ()
Minerva Neue ()
Once you have an account and logged in, go to Special:Preferences and the "Skin" section of its Appearance tab. With the default skin, the Preferences page can be accessed at any time from the links placed in the top right corner. Choose your skin and then click Publish changes. Then, all pages will be loaded with the new selected skin.
If you are not logged in, you can normally only use the default skin (Vector 2022); however, any user may change the skin of one page at a time by adding ?useskin=skinname to the end of the URL, or &useskin=skinname for page URLs which already include a ? (e.g., ?useskin=vector or ?useskin=vector-2022).
Skin selection bookmarklet
You may want to create a bookmarklet in your local web browser in order to easily switch skin of any page (including articles and pages without {{select skin}} templates) with the click of a button.
For example, to create a bookmark to change the current page to MonoBook:
Create a bookmark in your browser
Edit the bookmark (or change its so-called properties) so that the address field reads this exactly:javascript:var url = new URL(location.href); url.searchParams.set('useskin', 'monobook'); location.href = url;
Use the apostrophe found on your keyboard (not typographer's quotes)
This works whether you are logged in or logged out; it works even if you don't have a user account at all. If you have any questions or suggestions related to this, the correct venue is WP:VPT. Be sure to link to [[Wikipedia:Skin#Skin selection bookmarklet]] in your request.
You must click it again each time you navigate to another page (if you would like to view the page to which you navigated in that special skin). Nevertheless, various browser add-ons allow you to circumvent this issue (a list of these add-ons and how to use them is however out of scope of this page).
Default Wikipedia skins are defined in several CSS and JavaScript files. Some of them are only editable by people having write access to the Wikimedia Foundation servers, and some are simple wiki pages belonging to the MediaWiki namespace. These wiki pages can be seen directly on Wikipedia, but to prevent vandals from breaking the whole website appearance, they are fully protected, hence making them only editable by administrators (however, any changes can be suggested on their talk page). See Wikipedia:Catalogue of CSS classes.
Users can customise the way default skins appear using specific subpages of their user page. These subpages are viewable by anybody but can only be edited by the user that the subpage belongs to and by administrators. Modifying these wiki pages only affects their owner.
Customisation may involve one or both of
There are two ways to apply customisation:
You can use both the common and the skin-specific files; if you do this, the common file is loaded before the skin-specific one.
Note: these links are redirects to your customisation subpages; Special:MyPage is an alias for your userpage (try it and see). For example, when your username is Example, Special:MyPage/common.css will direct you to User:Example/common.css.
After you have edited your personal skin files, if the changes do not appear right away, you may need to wait 30 seconds or more for the servers to update, then bypass your browser cache to see the change.
See also How to import MonoBook settings into Vector. For the list of all CSS and JavaScript files involved in the rendering process, see Wikipedia:Catalogue of CSS classes.
The CSS files can be used for all manner of customisation for those fluent in Cascading Style Sheets (CSS). A common use which is relatively straightforward is hiding a system message or template you don't wish to see; see Wikipedia:Customisation#Hiding specific messages.
On Wikipedia, JavaScript can be used to add new features such as add find/replace textboxes or give advanced rollback options. There are scripts to customise everything, from default font style to custom buttons.
Many script pages can be imported and used. Different scripts can also be used in conjunction with each other to accomplish several tasks at once. Some scripts are available as "Gadgets", which means they can be installed by simply ticking a box in the "Gadgets" tab of Special:Preferences.
In order to add pre-existing scripts to your JavaScript page, add {{subst:js|name of script}} to the file. More detailed instructions can be found at the Wikipedia:User scripts/Guide.
"Global" CSS and JavaScript
In addition to the above, you can also create files at meta:Special:MyPage/global.css and meta:Special:MyPage/global.js. These will then be loaded on all Wikimedia wikis.
Modern ()
Cologne Blue ()
The Modern and Cologne Blue skins are deprecated. They can no longer be selected as default skins, but users who had previously selected them can still use them, and they can still be viewed temporarily using the ?useskin URL parameter. They are no longer maintained, so some features may not work on them and any bugs in them will probably not be fixed.
Cologne Blue was created in 2002 and deprecated in 2019; see Tech News, discussion on Wikipedia, and Phabricator. Modern was created in 2008 and deprecated in 2021; see Phabricator, Wikipedia discussion #1, Wikipedia discussion #2, Wikipedia discussion #3.
If you absolutely must use either of these skins, you can enable them by viewing Special:Preferences in the relevant skin:
The Nostalgia skin, which was the original Wikipedia skin c. 2001, is only available on the Nostalgia Wikipedia.
The following skins were removed in 2013 due to low usage numbers and lack of support: |
2 | Miss Shilling’s Orifice Helped Win the War |
The Supermarine Spitfire is nearly synonymous with Britain in World War II. It was a superb fighter plane, beloved of its pilots for its speed and agility, and by the British public as a bona fide national icon and war winner. That status was in no small part thanks to its elegant elliptical wings and the evocative roar of its Merlin engine.
But it was also fundamentally broken. Its engine would often cut out just when the dogfights of the Battle of Britain got interesting: in diving, a staple defensive and attacking maneuver. Unfortunately for the British, the Luftwaffe’s Messerschmitt 109s suffered no such shortcoming, giving the 109s a decisive edge in combat against British fighters.
Britain won the Battle of Britain by the narrowest of margins, and so it was imperative that a solution was found if British fighters were to contribute to the liberation of Europe. Found one was, and from the most unlikely of sources: a female engineer named Beatrice Shilling. That a solution came from a woman circa 1940 was improbable enough, but perhaps as improbable was the simplicity of her solution.
Miss Shilling’s orifice, as it came to be known, arrived just in time for the bitterly fought European offensive that paved the way for the D-Day invasion of Normandy. It without question saved the lives of British pilots. Arguably, it helped turn the tide of the war.
The problem with the Spitfire’s Rolls-Royce Merlin engine was first spotted in 1938 when the first production models of Spitfire rolled off the factory lines at Woolston, Southampton. At the time, it wasn’t seen as much of a problem: diving simply wasn’t something pilots did much before World War II. But by the height of the Battle of Britain in 1940, it was a matter of basic survival.
Also typically glossed over in post-war recollection is the Spitfire’s troubled beginnings. It arrived years later than planned, and at great expense, due to administrative bungling that originated at the top levels of the British government, cascaded down through incompetent captains of industry, before crashing onto factory floors and their chaotic organization of labor. If and when there was any organization of labor, that is.
Despite a prototype Spitfire having first flown in March 1936—the same year that work began on Castle Bromwich for the sole purpose of Spitfire production—things only improved in May 1940 when, upon becoming Prime Minister, Winston Churchill appointed Lord Beaverbrook as Minister for Air Production. Beaverbrook immediately transferred ultimate responsibility of Spitfire production to the engineering firm Vickers. Under Vickers, production would eventually ramp up to 300 planes a month.
In July 1940, Alex Dunbar, the newly appointed manager at the Castle Bromwich Spitfire factory, wrote, “Incidentally, we are sacking 60 jig and tool draughtsmen next week. We have tried to find out what they are doing but the answer’s not a lemon. In the meantime, we build the odd Spitfire or two.”
But the problems with the Spitfire were only beginning. The Battle of Britain was a success despite a serious flaw in the Rolls-Royce Merlin engine used in both the Spitfire and the Hurricane. The Hurricane was still the workhorse fighter of the Royal Air Force (RAF), thanks in part to production issues with the Spitfire. It was a more than capable machine except for the engine issues it shared with its glamorous sibling.
That flaw was a tendency for the engine to falter under so-called negative gravity, when powering into a dive. Negative gravity is a confusing term because the problem wasn’t one of gravity, or even force, but rather of acceleration and inertia. When an aircraft accelerates downward faster than it would in free fall, anything not bolted down, like the blood surging through the pilot’s vessels, will accelerate more slowly due to its own inertia. In relative terms, so far as life aboard the aircraft is concerned, those things are moving upward rather than downward.
Inertia’s a tricky thing, particularly when it comes to liquid fuel. Imagine the skin of a pilot’s face going into a dive, engines at full power. Now imagine the same thing happening to the liquid fuel in the engine, and specifically the chamber of the carburetor designed to regulate the flow of fuel: fuel effectively flows upward. Unfortunately, the fuel outlet holes in Merlin engines were at the bottom of the chamber while, when accelerating into a dive, the fuel was forced to the top, unable to reach the engine. The engines were being starved of fuel, causing what was known as a “weak cut” and resulting in a tendency to stutter, or even to cut out completely, under negative gravity. Either way, the result was loss in power. It was just a question of how long that loss would last.
The trouble could also strike when hitting turbulence. “I opened fire at about three hundred yards, and seemed to hit the one I was aiming at because he pulled up sharply,” Wing Commander Bob Doe later recalled. “At that moment (I must have been down to about a hundred yards), I hit his slipstream and my engine cut—stone dead.” Dogfights are presumably made all the more memorable with the added frisson of engine trouble.
This was problematic for RAF pilots for a number of reasons. First among them was that, by 1940, it was becoming increasingly evident that diving at full tilt was actually rather a good idea, tactically speaking, not to mention an intuitive reaction to being chased down by the fearsome Messerschmitt Bf 109E and Bf 110C fighters of the German Luftwaffe. Diving had the distinct advantage of making it harder to be shot down. Second was that Luftwaffe pilots being tailed by a British fighter did much the same thing to evade their pursuers, and there was really only one way to stay in hot pursuit of a diving fighter plane: dive straight after it. The third was that the Daimler-Benz engines of the Messerschmitts did not share the Merlin’s fuel-system flaw.
Luckily, the British pilots had home-court advantage. The proximity of their fuel supply meant that they could remain in combat for longer than the Luftwaffe, who had to travel some distance to theaters of war across the English Channel, fight, then fly home again.
Overall, then, the sixteen-week-long Battle of Britain was characterized by its numerous narrow margins. Though the battle was over by November 1940, the war in Europe was not. There was still the task of liberating large swathes of mainland Europe, and both the Spitfire and the Hurricane would play vital roles. Quite simply, an answer to the stalling problem needed to be found, and quickly.
Naturally, the war effort put its finest minds to the task, including Cyril Lovesey of Rolls-Royce and A.D. Fisher of the Royal Aircraft Establishment. Both devised ingenious technical solutions, both of which failed utterly in solving the problem. Their valves and diaphragms may have solved the weak cut, but this was only the start of the trouble. It was swiftly followed by a longer and more dangerous “rich cut”—when the chamber floods with excess fuel, giving a ratio of fuel to air that is too high, or too rich, to actually burn.
It would take an intimate knowledge of engines to identify the more serious problem. So it was remarkable for any number of reasons that, in 1940s Britain, the ultimate solution would come from a woman.
It was around the time of the first production Spitfire test flights of 1938 that Beatrice “Tilly” Shilling rebuilt and tested her own beloved machine: a 490cc Norton motorcycle. The same machine saw her become the fastest woman ever to ride a motorcycle at Brooklands: the first purpose-built race track in the world, which had opened near Weybridge in Surrey in 1907. Shilling was an exceptional racer, but an even better engineer.
To say that women tended not to be engineers in the early part of the 20th century would be stretching the definition of understatement to its very limits. The same stretch could be made for politicians, doctors, land owners, or indeed any role of influence.
But, even in childhood, there were signs that Beatrice Shilling would not conform to societal norms. When Shilling was still small enough to be known as Baby among the family, she was reported to have had this short exchange with her mother:
Mother: “Baby, you mustn’t bite your sister.”
Beatrice: “Have.”
Mother: “Well, say that you’re sorry.”
Beatrice: “Shan’t.”
Around 1919, the 10-year-old Beatrice grew tired of being left behind by her sisters on cycling trips and began to save for a motorcycle. Aged 12, she built a working Meccano model spinning wheel and promptly won a national competition set by Meccano Magazine.
By the age of 14, she’d achieved her goal, her chosen bike a two-stroke Royal Enfield. When she wasn’t giving her sister Anne pillion rides, she was taking it to bits and putting it back together again, engine and all. Physically speaking, Beatrice was small, which would later prove advantageous.
At 15, she decided engineering was the career for her. The problem was that it was 1924. “The average woman does not possess the same engineering instinct as the average man,” was one opinion recorded in the Daily News at around that time. It belonged to the manager of the Education Research Department at British Westinghouse. “For a woman in the 1920s a career in lion-taming would have been more realistic,” observes Shilling’s biographer, Matthew Freudenberg.
But there was a glimmer of hope in the form of the Women’s Engineering Society. It was borne, in 1919, out of the need to protect the right to work of women who had been allowed to work in arms manufacture and other engineering trades during World War I.
In May 1926, the Society distributed a letter to girls schools throughout the country. Shilling’s mother convinced her to reply. By age 17, she became an apprentice electrical engineer learning the ropes, or more accurately cables, in the new electrical power plant at Bungay, Suffolk.
With support from the Women’s Engineering Society, Shilling joined the Department of Electrical Engineering at the Victoria University of Manchester in October 1929. She was one of only two women accepted that year, up from none before, so her student record card didn’t even allow for the possibility of female titles or honorifics.
Fortunately, her education afforded her the freedom to attend classes in thermodynamics and mechanical engineering, closer to her true interest: engines. Shilling received her Master of Science in 1933, and promptly joined a lecturer, Dr G.F. Mucklow, in researching fuel consumption, heat loss, and supercharger performance in Rolls-Royce and Napier single cylinder engines.
When Beatrice Shilling turned up at Brooklands Race Track on 24 August 1934, no one thought much of her new 490cc Norton motorcycle. Like the regular riders, Shilling had come for the Hutchinson 100 event and its attractive prize pot.
It takes two things to race at Brooklands: a motor vehicle, and the intestinal fortitude to use it at tremendous velocity. Even in 1906, it was a track built for sheer speed. More or less egg-shaped in plan, it consists of little more than long straights and even longer, steeply banked bends. And by 1934, the track had lost its smooth concrete complexion, having had potholes gouged out of it by decades of hulking racing cars.
Brooklands’ resulting holes were patched, but they still made for a bumpy surface, which was disconcerting at those 100-mph speeds. Worse, a section of the track crosses the river Wey, creating undulations capable of throwing fast racers clear of the track. To complicate matters further, this was on one of the banked turns. Those chasing the fastest lap times would race near the steepest, highest edge of the bank—circumstances under which staying in contact with the ground is, according to conventional wisdom, a good idea.
Having given the Norton the once over, course handicapper “Ebby” Ebblewhite sent Shilling off as one of the first riders in the three-lap race. Because the event saw great power discrepancies between the competing machines, it was Ebby’s job to let the slowest bikes out first, with the most powerful 1,000cc “scratch bikes” starting last. An extract from the contemporary publication Motor Cycling records what happened next:
“A feature of the first handicap was the brilliant riding of Miss B. Shilling on a very standard-looking 490cc Norton. After a slowish first lap, she made up for lost time with a second circuit of 101.02 mph, thus joining the select ranks of Gold Star holders, being the second woman motorcycle racer to do so.”
Evidently impressed, Ebby put her on scratch for the next race, making her the first woman to do so when racing against men. She promptly lapped at 101.85 mph. Shilling was not the first woman to claim a Brooklands Gold Star—that honor was accorded to Fanny Blenkiron in April 1934—but Shilling would go on to set the record for the fastest woman to lap the circuit at 106 mph. Her record stands today.
Shilling was fast for several reasons. Her size was one advantage. At five feet tall, she was short enough to lie more or less flat, reducing drag. And though history doesn’t record all the specifics of the tweaks and modifications Shilling made to her machine, they were numerous. Freudenberg indicates that, in later letters to friends, she experimented with the length of inlet tract. This is the part of the engine that supplies the potent mixture of air and fuel to the cylinders, its length affecting power, torque, and fuel efficiency. Though she wouldn’t have known it at the time, Shilling was gaining the unique understanding that would prove so pivotal in saving the Spitfire and Hurricane in the depths of World War II: the physics of providing fuel to engines.
The following years saw more tweaks, greater speeds, and more impressive wins, including against professional racers, and track record holders like Ben Bickell and Noel Pope. It was 1935 when she set her enduring speed record.
In 1938 and 1939, the Norton motorbike was rebuilt from scratch with a homemade supercharger, and a new fuel tank where the saddle once was. Unfortunately, Shilling was too short to ride it. By then she was married, and her husband George Naylor, who stood a whole foot taller, took to racing instead. He became an able rider with the help of Shilling’s coaching, which was certainly unique: when they married in 1938, it had been on the condition that Naylor first earn himself a Brooklands Gold Star. It came in the nick of time. Racing at Brooklands ended forever in 1939. World War II had begun.
By the outbreak of war, Shilling had been working at the Royal Aircraft Establishment for more than three years. She initially wrote technical documentation, work that the hands-on Shilling found tedious. But, Freudenberg writes:
“By then Beatrice’s professional gifts had been recognized. She could identify the physical laws involved in a new problem or requirement, quantify them and design a mechanical solution. She could also design the tests that ensured that the instrument or engine performed as intended, and if not, why not.”
By November 1939, after a series of promotions, she had reached the position of Technical Officer in charge of carburettor research and development work. In other words, she was perfectly positioned to tackle the skittish Merlin engines of the Hurricane and the Spitfire.
In lieu of a fix, pilots had devised their own workarounds: dramatic maneuvers deployed if and when the engine cut out. Peter Brothers, a flight lieutenant with No. 32 Squadron, later recalled his own novel solution:
“When an enemy fighter dived from behind, fired and carried on diving past, one could not immediately dive in pursuit without the engine temporarily cutting and causing one to be left far behind. This could be avoided by rolling upside down, pulling back on the stick into a dive (positive g) then rolling level in the dive. […] Similarly, on sighting a target below, one suffered momentarily if one pushed the nose down to attack, a grave disadvantage.”
Tests ordered and overseen by Shilling identified the true problem: the rich cut that followed the weak cut addressed by Lovesey and Fisher.
Shilling worked out the precise volume and pressure of fuel being pumped into the chamber by the Merlin engine and designed a brass restrictor with a hole precisely the diameter needed to allow maximum flow of fuel, and therefore maximum power, without flooding the engine. Crucially, Shilling’s solution could be fitted without removing the carburettor, so the fix could be made in situ at operational airfields. Though it didn’t eliminate engine cut-outs altogether, it did minimize them to an acceptable degree.
This left only the logistical problem of how to get the restrictors to Fighter Command airfields in good time. Shilling organized a small band of engineers to assist, though inevitably she travelled up and down the country solo, and by her preferred mode of travel: her trusty Norton motorcycle. By then she’d made the small concession, perhaps under duress, to detune the Norton somewhat to make it more suitable for public roads.
“Her appearance at airfields with a bag of tools and a brisk manner became something of a legend,” Freudenberg writes. “Thanks to Sir Stanley Hooker, the engineer who led supercharger development at Rolls-Royce at the time, the restrictor became known as Miss Shilling’s orifice to pilots and fitters of the RAF.”
Early versions of the restrictor have been described as cone-shaped, though in her 2018 book Women of Invention: Life-Changing Ideas by Remarkable Women, Charlotte Montague describes the original restrictors as brass thimbles. They were later refined to a flat washer design, though inevitably Shilling and team would go on to eliminate them, and the entire negative gravity problem, by redesigning the carburettor outright.
Her efforts were of inestimable value to Fighter Command. Though the Battle of Britain, which prevented a Nazi invasion of Britain, was doubtless its crowning achievement, the ever-improving Spitfire would go on to perform operations for the rest of the war.
But fighting over mainland Europe, it was the British fighters that suffered from their limited range, and in some operations, losses were catastrophically high. It was due to these ongoing narrow margins in battle that we can be sure Shilling’s innovation saved lives.
Years later, Keith Maddock, chief engineer at Hangar 42, an RAF base during the war, went so far as to describe the restrictor as a war-winning modification. “Beatrice Shilling helped us to win World War II—of that there is no doubt,” he told the BBC in 2017. Her war efforts weren’t limited to improvements to the Merlin engine. She also contributed to a range of engines to improve starting in freezing conditions, and operation at higher altitudes.
Though she undoubtedly deserved to rise to the top of the Royal Aircraft Establishment, this was prevented by the organization’s leadership, which was entirely male. She disliked formality and bureaucracy, joking after the war that Britain had been on the winning side due to a shortage of paper. Yet Freudenberg records that she was as capable of dealing with senior members of government and industry as successfully as anyone. Her technical knowledge was very possibly unsurpassed. If nothing else, she was eventually awarded the Order of the British Empire for her wartime efforts.
Biographer Montague likewise argues that a case can be made that Shilling helped to win the war. Unfortunately, it’s impossible to quantify her contribution. But at least as significant is the trail Shilling blazed as a pioneering woman in British engineering. She later served on the committee of the Women’s Engineering Society, and actively encouraged young women into engineering careers. She is often described as “a flaming pathfinder of Women’s Lib,” including by her alma mater, now Manchester University. (Though in Spitfire: The Biography, writer Jonathan Glancey attributes the descriptor to a colleague of Shilling’s.)
After the war, Shilling was compelled to work in the burgeoning fields of rocketry, ramjets, and guided weaponry. The war may have been won, but with NATO powers vying with Russia, military and aeronautical supremacy remained high on Britain’s agenda. In 1955, she was promoted to be the Royal Aircraft Establishment’s Senior Principal Scientific Officer. By 1957, she was in charge of heat tests on scale models of the liquid oxygen tanks of what would become the Blue Streak missile: initially developed as part of Britain’s nuclear deterrent and later repurposed as the European Space Agency’s Europa satellite launch system.
Shilling died on 04 November 1990 from cancer of the spine. Her husband George Naylor, who himself had gone on to fly Lancaster bombers during World War II, died six years later. He had always been proud of Shilling, and especially her solution to the problem of negative gravity.
The question remains as to what Shilling herself thought of the dubious term “Miss Shilling’s orifice.” Freudenberg reckons that it “probably amused her, recognising the language of familiarity rather than disrespect.” By the time the words were being bandied about Britain’s airfields she was certainly held in high esteem, and Freudenberg records her dry sense of humor and “unrepentant brevity.”
He has a point. After the war, Shilling had taken to racing cars rather than bikes, but that came to a sudden end in a bad crash on 23 June 1962. Naylor noted at the time that Shilling “was virtually pushed off course by a clot who could not drive and who was determined not to be beaten by a woman.”
Shilling’s take? “He was an ex-RAF pilot, so he was too busy checking the instruments to look where he was going.”
© 2020 All Rights Reserved. Do not distribute or repurpose this work without written permission from the copyright holder(s). |
1 | Ozy Media CEO claims his company will reopen, but doesn’t explain how | Ozy Media isn't shutting down, co-founder and CEO Carlos Watson told CNBC on Monday.
Watson had informed employees Friday that the board had voted to close down the company.
The CEO of scandal-ridden Ozy Media, Carlos Watson, on Monday tried to explain away the various transgressions that led to his company's demise last week while pledging to revive the media company despite the flight of investors and advertisers.
It's another dramatic twist in Ozy Media's sudden downfall. Watson had informed employees Friday that the board had voted to shut down the company, CNBC reported.
"We were premature," Watson said of the decision in an interview on CNBC's "Squawk Box." He added that over the weekend, the company had "good conversations" with investors and advertisers.
"We have lots of things we have to do to improve, but I very genuinely feel like we have a meaningful, transformational voice," he added. "At our best, this will be our Lazarus moment."
It's unclear if the company's staff will return or how Watson, who co-founded Ozy Media, plans to continue operations, and when. A spokesperson didn't immediately respond to a request for comment. The company had 75 full-time employees, according to Axios.
The New York Times first reported last week that the company's chief operating officer, Samir Rao, impersonated a YouTube executive on a call with Goldman Sachs while seeking out a $40 million investment. The company also allegedly inflated viewer metrics.
That report set off a week of probing and departures from the company.
Billionaire investor Marc Lasry resigned as chair of Ozy Media last Thursday, saying the company required experience in crisis management and investigations. Former BBC anchor Katty Kay also resigned from the company.
CNBC reported Thursday that Watson lied when he claimed Sharon and Ozzy Osbourne invested in his company. The Osbournes filed a trademark lawsuit in 2017 over Ozy Fest, the name of the firm's annual concert and festival.
"The final resolution was that they would get stock in our company," Watson said. "In my mind, people who own shares in our company are investors."
Ozy also promised former producers that it was filming a show for A&E, which later turned out to be a lie, according to the Times. The program eventually appeared on Ozy.com and YouTube.
Watson told CNBC on Monday the company "originally conceived" the show with A&E. "We realized that they were on a different timetable than we were, so we shifted to YouTube," he said.
"There's no doubt about it that last summer, as the show started, we originally hoped we were going to do it with A&E," he added. Watson then name-checked a handful of celebrity guests who appeared on the show.
The Times also reported Ozy touted "The Carlos Watson Show" as "Amazon Prime's First Talk Show." However, Ozy had been uploading the show to the platform through a commonly used service that receives no promotion from Amazon. The company later complained and Ozy apologized, according to the report.
"We definitively made some mistakes ... I know we want to have larger conversations about whether mistakes are ingrained in who we are or whether, like a lot of young companies, we made mistakes but that was the 20%, not the 80% of who we are," Watson told CNBC.
In the interview, Watson repeatedly defended his media company, while also peppering in some admissions of wrongdoing.
"It was an incredibly salacious week," he later said, criticizing the media reports about his company. Watson then added the company should have been better on data, marketing, and leadership and culture.
-- CNBC's Alex Sherman contributed to this report. |
1 | What does On Deck's success mean for the future of business school? | January 23rd, 2021: Greetings from Taipei. We are in day two of a mandatory house arrest / quarantine but in 12 days we will be able to emerge into a nearly covid-free country. We are planning on staying here indefinitely until we get a better sense of the vaccine and travel situation.
👋 Greetings to the nearly 200 new subscribers from last week. I love hearing from people so hit reply and let me know what’s on your mind. If you missed my newsletter last week, I wrote about why I was writing a book. (I shipped the book: it’s called The Pathless Path and you can learn more here.)
If you stumbled upon this from the web, sign up here to get the best free newsletter on navigating the pathless path and making sense of the modern weirdness of work.
I want to explore the changing dynamics of how people earn prestige, the new dynamics of credentials and status, and what this might mean for how people navigate work and their careers in the future.
I am going to look at this through the lens of the full-time MBA, something I did from 2010 to 2012, but one that is increasingly irrelevant and overpriced in today’s world.
I’m a fan of the “jobs to be done” framework. It looks at the job that a service or product is performing. It’s not a useful frame to argue whether or not a business school is worth it. Instead it’s more useful to see what jobs the service performs and if there are better ways to pay for those jobs.
There have been a lot of arguments over why people get MBAs and what people are paying for. In my own experience and upon reflection, I think the MBA gives people three core things:
Confidence: Being in a positive, ambitious, and friendly environment that will help you overcome the everyday friction and self-doubt that most people face when trying to achieve anything, leading to an increase in confidence.
Hidden Codes: Learning some of the hard to understand behaviors, norms, calculations, frameworks, and styles of communication that enable one to aim towards large goals without getting laughed at by the established leaders in these spaces
High-Paying Jobs: Access to a number of high-paying jobs across a wide range of industries and functions that hire a new crop of MBA graduates every year and know the unique demands of this group of graduates
This system works extremely well, I think, because of the incredibly tight link between MBA programs and the companies that hire from those schools. The schools need a steady number of openings of jobs that pay six-figures and offer interesting problems and potential career paths. The companies need a steady crop of middle to senior managers that all behave in predictable ways and might produce future executives.
The secret force keeping this together is a large group of alumni at these companies. They want to keep hiring the students from their schools because it helps them stay connected to a University (often a source of nostalgia and memories of youth) and gives them a source of prestige within their company, especially if they discover a “rising star.”
This state of affairs has been operating quite smoothly and without much competition for 50+ years. As the industrial economy has grown and globalization and financialization have created more attractive post-MBA career paths with 6-figure jobs, the MBA programs have been able to expand and raise tuition.
It’s also notable that the MBA has steadily infiltrated the tech industry despite the popular meme that MBAs don’t know how to run businesses. Currently the four biggest tech companies all have MBA CEOs: Cook (Fuqua), Pichai (Wharton), Nadella (Booth), and soon, by the end of the year, Jassy (HBS).
I graduated in 2012 and over the last eight years, I’ve seen a steady flow of my classmates from more traditional industrial-era companies to BigTech. If there is one other thing you learn at business school, it’s where you can make the most money working a full-time job.
Despite full-time MBAs costing more than $75,000 per year in tuition alone, one can still make a pretty good argument that these programs are worth it, especially if you are confident you want to work in the types of jobs these schools have access to. The median salary of HBS grads is almost $150,000, and if you know anything about BigTech or Finance salaries, it’s quite easy once you’re in these worlds to find a job making almost a quarter million dollars a year within five years after grad school.
Despite many people thinking the technology industry would make the MBA irrelevant, it strengthened it. However, the technologies and social networks it produced are enabling a new way for workers to engage with and own their own careers in ways we haven’t really seen at scale. I predict that 2020 will be seen as a major inflection point for many full-time MBA programs.
Consider that when I graduated college in 2007 almost no one used LinkedIn and my only source of potential connections was through my school’s alumni database or the connections of people I knew.
Thirteen years later the ambitious young person is part of many different networks. When I lived in New York I was part of many different communities:
An informal group of people interested in organization change I met through my writing online
Virtual and in-person events led by companies like Culture Amp, Live in The Grey and other startups on wide ranges of topics
A group called IVY which held events for young professionals but kind of operated as a way to meet potential significant others
A volunteer group with 100 other young professionals as part of a two-year program I was part of
Alumni networks of two companies I worked for
Two alumni groups for my schools
Now, as a citizen of gig world, I am part of many more networks. In a world where connecting with others based on interests, skills, and passion is easier than ever, connecting based on a common degree feels stale. Hence why I rarely open emails from my schools. They are stuck in a world based on credential and endowment size rather than having an understanding that in an opt-in world, where you devote your energy is a signal to others about what matters to you.
Schools are going to have a hard time adjusting to a world in which anyone in the world can now earn prestige from their desk. Some examples:
Launching a startup and joining incubators around the world
Taking an online course with an engaging community
Attending conferences and meetups in your field
Writing or sharing content publicly in your area of expertise
Hosting or participating in online meetups
Engaging with people in your industry on Twitter and sharing interesting ideas
Attending online schools like Lambda Academy
Schools are certainly still in the mix but for many people, it’s becoming a much smarter bet to at least test out one of these alternative pathways before spending months studying for exams and dropping 5 or 6 figures for a credential.
That’s not to say this is easy to understand. This new dynamic of network-driven prestige is messy and often illegible to people who have not engaged with it.
Older people will often say things to me like “you can do what you do because you went to MIT.” They are still operating in the hierarchical prestige view of the world. Success is a result of having earned the right credentials.
They are often surprised when I share that I landed a writing gig on Twitter after sharing an essay I wrote on a similar topic, or that a client hired me as a presentation coach after watching my YouTube video.
My essay and video were both proof-of-skill. They let people see exactly the person they needed. For good or bad, this type of dynamic is coming to every part of the economy, which is going to put tremendous pressure on high-priced credential-awarding institutions, but it also offers a tremendous advantage for the individuals and employers that learn to operate in a new way.
One of the reasons the MBA has remained relevant is that better options have not emerged.
Consider my friend who has felt stuck in his career for four or five years. He’s been successful but doesn’t really like what he is doing and has wanted to experiment with working on startups. He’s not a natural at reinventing himself and has been talking about doing a full-time MBA as the way to make that shift.
Top MBA programs are mostly full of driven people who aren’t sure what they want to do but are sure that they want to be successful.
I’ve coached many people like him over the years and 100% of them end up going to business school. Yet last week he texted me that he had decided to do the On Deck Fellowship. For $2,500 he’ll join a 10-week program focused on helping people transition to launching a startup, or at least to working in that space.
He’s saving $147,000, two years of salary loss, and is likely going to get many of the benefits that the MBA would have given him, especially the confidence and hidden codes mentioned above.
As these better options emerge people are starting to realize that paying an insane amount of money for a grad degree is quite a dramatic way to make a career change.
“Unbundling the university: Education - YouTube, Masterclass; Accountability - Lambda; Network/Alumni - Village Global, On Deck; Credential - Github for X, P2P; Flirting w/ Marxism - Read the NYT; Dating - Tinder; Moving away from parents - Airbnb, Common; Friends - Clubhouse, Twitter” https://t.co/1QY3zKca0a

On Deck is creating the space for strong connections to form and making the bet that people don’t need to hang out at a University Campus for another 94 weeks to continue to develop those bonds, find their own opportunities and learn the skills they need to follow their path.
The driven young person is used to already accessing all sorts of apps, communities, networks, and services and as higher education becomes unbundled there will be countless new paths that emerge.
On Deck is betting that “finding the others” is a bigger problem to solve than getting access to high-prestige jobs. If people can find the others and start working on what they want to be working on, especially alongside the kind of people already doing that, the jobs and opportunities will come.
They even share their playbook:
Not that schools are going to do anything about it.
A few people forwarded this essay “ What’s Wrong With The Way We Work ?” from the New Yorker this week. It raises the question “why do we still all work so much and why do some people need to work multiple jobs to pay their bills?”
It offers a brief snapshot of the history of work but its answer to the question, CEO pay, doesn’t seem to me to be all that big of a driver. A convenient scapegoat maybe but not the reason our relationship with work is so fraught.
I think a far more compelling answer to the previous question has to do with religion. A reader pointed me to the religious-oriented writings of Andy Crouch and David Zahl, who argue that we have repurposed our religious impulses towards a secular world. Here is Zahl:
The point here–and I can feel myself yawning inside as I write this–is not that work is bad, or careerism is necessarily dehumanizing. The point is simply that religious observance hasn’t faded apace “secularization” so much as migrated—and we’ve got the anxiety to prove it. We are seldom not in church, often many different ones at the same time (St Elon’s of Perpetual Productivity, Apostle Maguire’s Soulmate Chapel, First Station of the CrossFit, etc.). Of course, never-ending church attendance is only a problem to the extent that the ‘gospel’ on offer at all of these rings such a similar note of, well, law. Blessed Are Those Who Perform (So Perform, Dammit!).
Here is Crouch sharing a similar sentiment about Steve Jobs in 2011, arguing that Jobs was the epitome of the “gospel of the secular age,” which he defines:
It has the great virtue of being based only on what we can all perceive—it requires neither revelation nor dogma. And it promises nothing it cannot deliver—since all that is promised is the opportunity to live your own unique life, a hope that is manifestly realizable since it is offered by one who has so spectacularly succeeded by following his own "inner voice, heart and intuition."
To Zahl, “what we’re actually worshiping when we obsess over food or work or politics is not the thing itself but how that thing makes us feel.”
So in our relationship with work we are combining a religious impulse with the modern idea that we should be happy and be true to ourselves. This is a ton of pressure, and it’s no surprise that so many people seem to be burned out and frustrated.
This interview in Playboy (SFW) with Marshall McLuhan was jaw-dropping in how many predictions about the future he seemed to get right. Consider this passage; 95% of it might still apply today:
Yes, and to the booming business psychiatrists are doing. All our alienation and atomization are reflected in the crumbling of such time-honored social values as the right of privacy and the sanctity of the individual; as they yield to the intensities of the new technology's electric circus, it seems to the average citizen that the sky is falling in. As man is tribally metamorphosed by the electric media, we all become Chicken Littles, scurrying around frantically in search of our former identities, and in the process unleash tremendous violence. As the preliterate confronts the literate in the postliterate arena, as new information patterns inundate and uproot the old, mental breakdowns of varying degrees – including the collective nervous breakdowns of whole societies unable to resolve their crises of identity – will become very common.
It is not an easy period in which to live, especially for the television-conditioned young who, unlike their literate elders, cannot take refuge in the zombie trance of Narcissus narcosis that numbs the state of psychic shock induced by the impact of the new media. From Tokyo to Paris to Columbia, youth mindlessly acts out its identity quest in the theater of the streets, searching not for goals but for roles, striving for an identity that eludes them.
Do read the whole thing if you are interested in culture, media, and technology.
Thanks for reading this week’s issue. If you’d like to support my journey and help keep nudging me to write this newsletter you can consider becoming an ongoing patron by becoming a paid subscriber or if you’d like just share a nice tweet on Twitter. |
1 | Magnetic accelerators made with neodymium magnets | |
7 | PyTorch 1.9 | We are excited to announce the release of PyTorch 1.9. The release is composed of more than 3,400 commits since 1.8, made by 398 contributors. The release notes are available here. Highlights include:
Along with 1.9, we are also releasing major updates to the PyTorch libraries, which you can read about in this blog post.
We’d like to thank the community for their support and work on this latest release. We’d especially like to thank Quansight and Microsoft for their contributions.
Features in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in this blog post.
In 1.9, the torch.linalg module is moving to a stable release. Linear algebra is essential to deep learning and scientific computing, and the torch.linalg module extends PyTorch’s support for it with implementations of every function from NumPy’s linear algebra module (now with support for accelerators and autograd) and more, like torch.linalg.matrix_norm and torch.linalg.householder_product . This makes the module immediately familiar to users who have worked with NumPy. Refer to the documentation here.
We plan to publish another blog post with more details on the torch.linalg module next week!
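As a quick, minimal sketch (not taken from the release notes; the shapes and values below are arbitrary), the module can be used much like its NumPy counterpart:

import torch

A = torch.randn(3, 3)
b = torch.randn(3)

norm = torch.linalg.norm(A)        # matrix/vector norms
Q, R = torch.linalg.qr(A)          # QR decomposition
eigvals = torch.linalg.eigvals(A)  # eigenvalues (possibly complex)
x = torch.linalg.solve(A, b)       # solve the linear system Ax = b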
The Complex Autograd feature, released as a beta in PyTorch 1.8, is now stable. Since the beta release, we have extended support for Complex Autograd to over 98% of operators in PyTorch 1.9, improved testing for complex operators by adding more OpInfos, and added greater validation through the TorchAudio migration to native complex tensors (refer to this issue).
This feature provides users the functionality to calculate complex gradients and optimize real valued loss functions with complex variables. This is a required feature for multiple current and downstream prospective users of complex numbers in PyTorch like TorchAudio, ESPNet, Asteroid, and FastMRI. Refer to the documentation for more details.
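As a minimal illustration (a sketch, not from the release notes), computing the gradient of a real-valued loss with respect to a complex tensor now works out of the box:

import torch

z = torch.randn(3, dtype=torch.cfloat, requires_grad=True)
loss = (z.abs() ** 2).sum()   # real-valued loss of a complex variable
loss.backward()
print(z.grad)                 # complex gradient, same shape and dtype as z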
To help with debugging and writing reproducible programs, PyTorch 1.9 includes a torch.use_deterministic_algorithms option. When this setting is enabled, operations will behave deterministically, if possible, or throw a runtime error if they might behave nondeterministically. Here are a couple of examples:
>>> a = torch.randn(100, 100, 100, device='cuda').to_sparse()
>>> b = torch.randn(100, 100, 100, device='cuda')

# Sparse-dense CUDA bmm is usually nondeterministic
>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()
False

>>> torch.use_deterministic_algorithms(True)

# Now torch.bmm gives the same result each time, but with reduced performance
>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()
True

# CUDA kthvalue has no deterministic algorithm, so it throws a runtime error
>>> torch.zeros(10000, device='cuda').kthvalue(1)
RuntimeError: kthvalue CUDA does not have a deterministic implementation ...
PyTorch 1.9 adds deterministic implementations for a number of indexing operations, too, including index_add, index_copy, and index_put with accum=False. For more details, refer to the documentation and reproducibility note.
A torch.special module, analogous to SciPy’s special module, is now available in beta. This module contains many functions useful for scientific computing and working with distributions such as iv, ive, erfcx, logerfc, and logerfcx. Refer to the documentation for more details.
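For example (a minimal sketch; it assumes the functions shown are available in your build of the beta module):

import torch

x = torch.linspace(-2, 2, steps=5)

print(torch.special.erf(x))    # error function
print(torch.special.erfcx(x))  # scaled complementary error function
print(torch.special.expit(x))  # logistic sigmoid
print(torch.special.i0(x))     # modified Bessel function of the first kind, order 0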
nn.Module parameterization allows users to parametrize any parameter or buffer of an nn.Module without modifying the nn.Module itself. It allows you to constrain the space in which your parameters live without the need for special optimization methods.
This also contains a new implementation of the spectral_norm parametrization for PyTorch 1.9. More parametrization will be added to this feature (weight_norm, matrix constraints and part of pruning) for the feature to become stable in 1.10. For more details, refer to the documentation and tutorial.
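As a concrete sketch of the idea (not from the release notes), the snippet below constrains a linear layer's weight to be symmetric without modifying the layer's own code, using torch.nn.utils.parametrize:

import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class Symmetric(nn.Module):
    # Maps the unconstrained underlying weight to a symmetric matrix
    def forward(self, W):
        return W.triu() + W.triu(1).transpose(-1, -2)

layer = nn.Linear(4, 4)
parametrize.register_parametrization(layer, "weight", Symmetric())

W = layer.weight                # recomputed from the underlying parameter on access
print(torch.allclose(W, W.T))   # True: the constraint holds by construction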
We are releasing Mobile Interpreter, a streamlined version of the PyTorch runtime, in beta. The Interpreter will execute PyTorch programs in edge devices, with reduced binary size footprint.
Mobile Interpreter is one of the top requested features for PyTorch Mobile. This new release will significantly reduce binary size compared with the current on-device runtime. In order for you to get the binary size improvements with our interpreter (which can reduce the binary size up to ~75% for a typical application) follow these instructions. As an example, using Mobile Interpreter, we can reach 2.6 MB compressed with MobileNetV2 in arm64-v7a Android. With this latest release we are making it much simpler to integrate the interpreter by providing pre-built libraries for iOS and Android.
Starting from 1.9, users can use the TorchVision library in their iOS/Android apps. The TorchVision library contains the C++ TorchVision ops and needs to be linked together with the main PyTorch library for iOS; for Android, it can be added as a Gradle dependency. This allows using TorchVision's prebuilt MaskRCNN operators for object detection and segmentation. To learn more about the library, please refer to our tutorials and demo apps.
We are releasing a new video app based on the PyTorch Video library and an updated speech recognition app based on the latest torchaudio wav2vec model. Both are available on iOS and Android. In addition, we have updated the seven Computer Vision and three Natural Language Processing demo apps, including the HuggingFace DistilBERT and the DeiT vision transformer models, with PyTorch Mobile v1.9. With the addition of these two apps, we now offer a full suite of demo apps covering image, text, audio, and video. To get started check out our iOS demo apps and Android demo apps.
TorchElastic, which was open sourced over a year ago in the pytorch/elastic github repository, is a runner and coordinator for PyTorch worker processes. Since then, it has been adopted by various distributed torch use-cases: 1) deepspeech.pytorch 2) pytorch-lightning 3) Kubernetes CRD. Now, it is part of PyTorch core.
As its name suggests, the core function of TorchElastic is to gracefully handle scaling events. A notable corollary of elasticity is that peer discovery and rank assignment are built into TorchElastic, enabling users to run distributed training on preemptible instances without requiring a gang scheduler. As a side note, etcd used to be a hard dependency of TorchElastic. With the move upstream, this is no longer the case, since we have added a “standalone” rendezvous based on c10d::Store. For more details, refer to the documentation.
In addition to TorchElastic, there are a number of beta features available in the distributed package:
(Beta) CUDA support is available in RPC: Compared to CPU RPC and general-purpose RPC frameworks, CUDA RPC is a much more efficient way for P2P Tensor communication. It is built on top of TensorPipe which can automatically choose a communication channel for each Tensor based on Tensor device type and channel availability on both the caller and the callee. Existing TensorPipe channels cover NVLink, InfiniBand, SHM, CMA, TCP, etc. See this recipe for how CUDA RPC helps to attain 34x speedup compared to CPU RPC.
(Beta) ZeroRedundancyOptimizer: ZeroRedundancyOptimizer can be used in conjunction with DistributedDataParallel to reduce the size of per-process optimizer states. The idea of ZeroRedundancyOptimizer comes from the DeepSpeed/ZeRO project and Marian, where the optimizer in each process owns a shard of model parameters and their corresponding optimizer states. When running step(), each optimizer only updates its own parameters, and then uses collective communication to synchronize updated parameters across all processes. Refer to this documentation and this tutorial to learn more. A minimal usage sketch follows this list.
(Beta) Support for profiling distributed collectives: PyTorch’s profiler tools, torch.profiler and torch.autograd.profiler, are able to profile distributed collectives and point to point communication primitives including allreduce, alltoall, allgather, send/recv, etc. This is enabled for all backends supported natively by PyTorch: gloo, mpi, and nccl. This can be used to debug performance issues, analyze traces that contain distributed communication, and gain insight into performance of applications that use distributed training. To learn more, refer to this documentation.
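As promised above, here is a minimal ZeroRedundancyOptimizer sketch. It assumes it runs under a launcher such as torchrun/torch.distributed.run (rank, world size, and rendezvous settings are read from the environment), and the tiny model and random batch are placeholders rather than a real training setup:

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

def train_step():
    dist.init_process_group("gloo")   # default env:// init reads rank/world size from the environment

    model = DDP(nn.Linear(16, 4))
    # Each rank keeps only a shard of the Adam state instead of a full replica
    optimizer = ZeroRedundancyOptimizer(
        model.parameters(),
        optimizer_class=torch.optim.Adam,
        lr=1e-3,
    )

    loss = model(torch.randn(8, 16)).sum()
    loss.backward()
    optimizer.step()   # updates the local shard, then syncs parameters across ranks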
Module Freezing is the process of inlining module parameters and attributes values as constants into the TorchScript internal representation. This allows further optimization and specialization of your program, both for TorchScript optimizations and lowering to other backends. It is used by optimize_for_mobile API, ONNX, and others.
Freezing is recommended for model deployment. It helps TorchScript JIT optimizations optimize away overhead and bookkeeping that is necessary for training, tuning, or debugging PyTorch models. It enables graph fusions that are not semantically valid on non-frozen graphs - such as fusing Conv-BN. For more details, refer to the documentation.
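To make this concrete, here is a minimal sketch (the toy module is just for illustration): scripting a small Conv-BN model in eval mode and freezing it, so parameters become constants and fusions such as Conv-BN become possible:

import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))

scripted = torch.jit.script(SmallNet().eval())   # freezing requires eval mode
frozen = torch.jit.freeze(scripted)              # parameters/attributes inlined as constants
out = frozen(torch.randn(1, 3, 32, 32))
frozen.save("smallnet_frozen.pt")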
The new PyTorch Profiler graduates to beta and leverages Kineto for GPU profiling, TensorBoard for visualization and is now the standard across our tutorials and documentation.
PyTorch 1.9 extends support for the new torch.profiler API to more builds, including Windows and Mac and is recommended in most cases instead of the previous torch.autograd.profiler API. The new API supports existing profiler features, integrates with CUPTI library (Linux-only) to trace on-device CUDA kernels and provides support for long-running jobs, e.g.:
def trace_handler(p):
    output = p.key_averages().table(sort_by="self_cuda_time_total", row_limit=10)
    print(output)
    p.export_chrome_trace("/tmp/trace_" + str(p.step_num) + ".json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    # schedule argument specifies the iterations on which the profiler is active
    schedule=torch.profiler.schedule(
        wait=1,
        warmup=1,
        active=2),
    # on_trace_ready argument specifies the handler for the traces
    on_trace_ready=trace_handler
) as p:
    for idx in range(8):
        model(inputs)
        # profiler will trace iterations 2 and 3, and then 6 and 7 (counting from zero)
        p.step()
More usage examples can be found on the profiler recipe page.
The PyTorch Profiler Tensorboard plugin has new features for:
Inference Mode API allows significant speed-up for inference workloads while remaining safe and ensuring no incorrect gradients can ever be computed. It offers the best possible performance when no autograd is required. For more details, refer to the documentation for inference mode itself and the documentation explaining when to use it and the difference with no_grad mode.
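A minimal sketch of the usage (the toy model and input are placeholders):

import torch

model = torch.nn.Linear(10, 2)
x = torch.randn(4, 10)

# Tensors created inside inference_mode are never tracked by autograd,
# which is cheaper than no_grad but means they can't be used in autograd later.
with torch.inference_mode():
    y = model(x)

print(y.requires_grad)   # False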
torch.package is a new way to package PyTorch models in a self-contained, stable format. A package will include both the model’s data (e.g. parameters, buffers) and its code (model architecture). Packaging a model with its full set of Python dependencies, combined with a description of a conda environment with pinned versions, can be used to easily reproduce training. Representing a model in a self-contained artifact will also allow it to be published and transferred throughout a production ML pipeline while retaining the flexibility of a pure-Python representation. For more details, refer to the documentation.
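A minimal sketch of the beta API (the archive, package, and resource names here are arbitrary; real projects would also intern their own source modules):

import torch
from torch import package

model = torch.nn.Linear(10, 2)

# Export: bundle the pickled model into a single self-contained archive
with package.PackageExporter("linear_pkg.pt") as exporter:
    exporter.extern(["torch", "torch.**"])   # rely on the installed torch at load time
    exporter.save_pickle("my_models", "model.pkl", model)

# Import: load the model back from the archive, independent of the original files
importer = package.PackageImporter("linear_pkg.pt")
loaded = importer.load_pickle("my_models", "model.pkl")
print(loaded(torch.randn(1, 10)).shape)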
prepare_for_inference is a new prototype feature that takes in a module and performs graph-level optimizations to improve inference performance, depending on the device. It is meant to be a PyTorch-native option that requires minimal changes to users' workflows. For more details, see the documentation for the TorchScript version here or the FX version here.
TorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by torch.jit.script one by one), which was inefficient and time consuming. Now, we have enabled profile directed typing for torch.jit.script by leveraging existing tools like MonkeyType, which makes the process much easier, faster, and more efficient. For more details, refer to the documentation.
Thanks for reading. If you’re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Facebook, Twitter, Medium, YouTube, or LinkedIn. |
365 | Analog TV Station on ESP8266 | cnlohr/channel3 |
1 | Removing Duplicates in SQL | There are a couple of ways to remove duplicate rows from a table in SQL e.g. you can use temp tables or a window function like row_number() to generate artificial ranking and remove the duplicates. By using a temp table, you can first copy all unique records into a temp table and then delete all data from the original table and then copy unique records again to the original table. This way, b, but with large tables, this solution will require additional space of the same magnitude as the original table. The second approach doesn't require extra space as it removes duplicate rows directly from the table. It uses a ranking function like p to assign a row number to each row.
By using partition by clause you can reset the row numbers on a particular column. In this approach, all unique rows will have row number = 1 and duplicate rows will have row_number > 1, which gives you an easy option to remove those duplicate rows. You can do that by using a common table expression (see T-SQL Fundamentals) or without it on Microsoft SQL Server.
No doubt that SQL queries are an integral part of any programming job interview which requires database and SQL knowledge. The queries are also very interesting to check the candidate's logical reasoning ability.
Earlier, I have shared a list of frequently asked SQL queries from interviews and this article is an extension of that. I have shared a lot of good SQL-based problems on that article and users have also shared some excellent problems in the comments, which you should look at.
Btw, this is the follow-up question of another popular SQL interview question, how do you find duplicate records in a table, which we have discussed earlier. This is an interesting question because many candidates confuse themselves easily.
Some candidate says that they will find duplicate by using group by and printing name which has counted more than 1, but when it comes to deleting this approach doesn't work, because if you delete using this logic both duplicate and unique row will get deleted.
This little bit of extra detail like row_number makes this problem challenging for many programmers who don't use SQL on a daily basis. Now, let's see our solution to delete duplicate rows from a table in SQL Server.
By the way, if you are new to Microsoft SQL Server and T-SQL then I also suggest you join a comprehensive course to learn SQL Server fundamentals and how to work with T-SQL. If you need a recommendation then I suggest you go through the Microsoft SQL for Beginners online course by Brewster Knowlton on Udemy. It's a great course to start with T-SQL and SQL queries in SQL Server.
Before exploring a solution, let's first create the table and populate it with test data to understand both the problem and the solutions better. I am using a temp table to avoid leaving test data in the database once we are done. Since temp tables are cleaned up once you close the connection to the database, they are best suited for testing.
In our table, I have just one column for simplicity. If you have multiple columns, then the definition of a duplicate depends on whether all columns should be equal or only some key columns, e.g. name and city can be the same for two unique persons. In such cases, you need to extend the solution by using those columns in the key places, e.g. in the distinct clause in the first solution and in the partition by clause in the second solution.
Anyway, here is our temp table with test data, it is carefully constructed to have duplicates, you can see that C++ is repeated thrice while Java is repeated twice in the table.
-- create a temp table for testing
create table #programming (name varchar(10));
-- insert data with duplicate, C++ is repeated 3 times, while Java 2 times
insert into #programming values ('Java');
insert into #programming values ('C++');
insert into #programming values ('JavaScript');
insert into #programming values ('Python');
insert into #programming values ('C++');
insert into #programming values ('Java');
insert into #programming values ('C++');
-- cleanup (run this at the end, once you are done testing)
drop table #programming
1. How to remove duplicates in SQL using a temp table - Example
Yes, this is the most simple but logical way to remove duplicate elements from a table and it will work across databases like MySQL, Oracle, or SQL Server. The idea is to copy unique rows into a temp table. You can find unique rows by using a distinct clause. Once unique rows are copied, delete everything from the original table and then copy unique rows again. This way, all the duplicate rows have been removed as shown below.
select distinct name into #unique from #programming
delete from #programming;
insert into #programming select * from #unique
-- check after
select * from #programming
name
Java
C++
JavaScript
Python
You can see the duplicate occurrences of Java and C++ have been removed from the #programming temp table.
2. Delete Duplicates using row_number() and derived table - Example
The row_number() is one of several ranking functions provided by SQL Server; it also exists in the Oracle database. You can use this function to assign a ranking to rows. You can further use the partition by clause to tell SQL Server what the window should be. This way, the row number will restart as soon as a different name comes up, but for the same name, all rows will get sequential numbers, e.g. 1, 2, 3, etc. Now, it's easy to spot the duplicates in the derived table as shown in the following example:
select * from (select *, row_number()
over (partition by name order by name) as rn
from #programming) dups
name rn
C++ 1
C++ 2
C++ 3
Java 1
Java 2
JavaScript 1
Python 1
Now, you can remove all the duplicates which are nothing but rows with rn > 1, as done by following SQL query:
delete dups
from (select *, row_number()
over ( partition by name order by name) as rn
from #programming)
dups
WHERE rn > 1
(3 row(s) affected)
Now, if you check the #programming table again there won't be any duplicates.
select * from #programming
name
Java
C++
JavaScript
Python
This is by far the simplest solution and also quite easy to understand, but it doesn't come to mind without practice. I suggest solving some SQL puzzles from Joe Celko's classic book, SQL Puzzles and Answers, Second Edition, to develop your SQL sense. It's a great practice book to learn and master SQL logic.
3. How to remove duplicates using CTE (Common Table Expression) - Example
The CTE stands for common table expression, which is similar to a derived table and is used to define a temporary result set within the execution scope of a single SELECT, INSERT, UPDATE, DELETE, or CREATE VIEW statement. Similar to a derived table, a CTE is not stored as an object and lasts only for the duration of the query. You can rewrite the previous solution using a CTE as shown below:
;with cte
as (select row_number()
over (partition by name order by (select 0)) rn
from #programming)
delete from cte where rn > 1
The logic is exactly similar to the previous example, and I am using select 0 because it's arbitrary which row to preserve in the event of a tie, as the duplicate rows all contain the same data. If you are new to CTEs then I suggest reading T-SQL Fundamentals, one of the best books to learn SQL Server fundamentals. Here is a nice summary of all three ways to remove duplicates from a table using SQL:
That's all about how to remove duplicate rows from a table in SQL. As I said, this is one of the frequently asked SQL queries, so be prepared for it when you go for your programming job interview. I have tested these queries in SQL Server 2008 and they work fine, though you might need to tweak them a little bit depending upon the database you are going to use, like MySQL, Oracle, or PostgreSQL. Feel free to post if you face any issue while removing duplicates in Oracle, MySQL, or any other database.
Other Frequently asked SQL queries from Interviews
How to find the 2nd highest salary of an employee in SQL? (answer)
How to join three tables in one SQL query? (solution)
How do find all table names in a database? (query)
What is the difference between View and Materialized View in Database? (answer)
How do you create a backup of the table or copy of the table using SQL? (answer)
How do you find all customers who have never ordered? (solution)
Can you write a pagination query for Oracle using row_number? (query)
How do you find Nth highest salary of an employee using the correlated query? (solution)
10 Frequently asked SQL Query interview questions (solution)
Write a SQL query to find all table names on a database in MySQL (solution)
Top 5 Websites to learn SQL online for FREE? (resource)
5 Courses to learn Oracle and Microsoft SQL Server database (courses)
4 ways to find the Nth highest salary in SQL (solution)
Difference between Self and Equi Join in SQL? (answer)
5 Free Courses to learn Oracle and SQL Server? (courses)
Top 5 Courses to learn MySQL Database for Beginners (Courses)
Difference between clustered and non-clustered indexes in SQL? (answer)
Write a SQL query to copy or backup a table in MySQL (solution)
Difference between Primary and Candidate key in table? (answer)
5 Free Courses to learn T-SQL and SQL Server for Beginners (Courses)
Difference between Unique and Primary key in table? (answer)
What is the difference between UNION and UNION ALL in SQL? (answer)
Thanks for reading this article so far. If you like this SQL tutorial to remove duplicates then please share it with your friends and colleagues If you have any questions or feedback then please drop a note. P.S. - If you are new to the SQL world and looking for free SQL and database courses to learn SQL fundamentals then you can also check out my list of free Udemy courses to learn SQL . These are really great SQL courses that are available for free on Udemy and Coursera and you can use them to build your SQL skills. |
1 | What Is Inactivity Monitoring and Why Do You Need It for Personal Safety? | Modern technology obliges us to be more attentive to our health, get information faster about significant changes around us, and take better care of personal safety. Inactivity monitoring capabilities have proven their worth over years of use in medicine. Today, inactivity monitoring reaches a new level of prevalence and usefulness. You can use it in modern devices and applications as part of the personal emergency response system (PERS). In this article, we will explain inactivity monitoring in detail and share some tips about its use so that you can choose the most convenient and advantageous personal safety device or app with inactivity monitoring for your family members and loved ones.
Inactivity monitoring is the detection and recording of the absence of a person’s activity according to a given parameter.
From a technical point of view, the absence of a given activity becomes a trigger to start a chain of predetermined reactions that the device is programmed for. The most straightforward scheme is the notification of inactivity by a signal.
In the healthcare industry, inactivity monitoring is a part of patient monitoring systems. In hospitals, the systems enable the nursing staff to be constantly aware of the patients' condition. Remote patient control systems allow early detection of dangerous conditions, become a valuable part of diagnostics, and let health professionals take better outpatient care.
Broadly speaking, all devices and software with inactivity monitoring can be grouped into medical alert devices and general-purpose items.
Medical inactivity monitoring devices are a part of vital sign monitoring systems. The devices are usually attached to the patient's body and monitor their pulse, blood oxygen saturation, body temperature, breathing movements, and other indicators. A built-in system transmits all the information received in the form of signals to the monitoring station. There, the data from the devices, including inactivity monitoring devices, are visualized on dashboards. Clinicians can generate reports for medical histories and adjust treatment plans flexibly based on the dynamical change tracking.
Studies show that this system of monitoring patients in intensive care units or general care units and specialized nursing homes allows for optimal distribution of the workload of the nursing staff.
Both timeliness of response and effectiveness of treatment are improved. Since inactivity monitoring and other inpatient control systems are not always sufficient, specialists are conducting surveys and research for improvements.
To summarize, in hospitals, inactivity monitoring is the part of patient control systems that helps to learn in time about the medical emergency without the patient's active signal.
Remote patient monitoring is another important application of inactivity monitoring in medicine, especially inactivity alert systems for chronic disease management. For example, when monitoring patients with cognitive conditions like Alzheimer's disease, it is essential not to cause unnecessary anxiety. Inactivity monitoring copes with this very well, and if you set the parameters correctly, it prompts the doctor in time to initiate contact with the patient.
Inactivity monitoring is an essential feature of personal emergency response systems and personal safety apps, including the senior and women safety systems. However, it’s still not present in many medical or emergency alert systems (compare the pricing and functionality of popular solutions here). Those having the inactivity monitoring functionality can be roughly divided into inactivity monitoring devices and inactivity monitoring apps.
Inactivity monitoring devices include pendants and wearables like bracelets, necklaces, or keychains.
Their operating principle is focused on movement. The device's program sets the maximum allowable time for the ‘movement not registered’ period. If the device owner does not move for longer, the inactivity monitoring device sends a signal. Depending on the specific company providing the service, the alert can go to:
local medical emergency services,
emergency contacts recorded in the personal emergency response system.
Thus, the speed of response and the result will depend on the services involved and the ability of emergency contacts to respond. Another critical point to mention is that lack of movement does not always indicate an urgent condition. In case of a sensor malfunctioning, lost or forgotten device, or even incorrect settings for the inactivity period, you might end up having to pay a hefty price for the emergency services responding to a false alert.
Inactivity monitoring apps take a more modern approach to supervision. Their set parameters are related to the use of a smartphone. People usually don't leave their smartphones for long, so there are many activities related to them, from turning off the alarm clock in the morning to watching the news and meditating before going to bed. Lack of interaction with a smartphone is more indicative of a non-standard situation than immobility is. This is especially relevant for older adults, people with physical limitations, or those with sedentary work. And inactivity monitoring is a crucial feature for a safety app.
AllsWell Alert is a personal safety and emergency alert application available both for Android and iOS. Its distinctive feature is combining the panic button functionality with inactivity monitoring. As it’s a downloadable mobile app, it's easily accessible, handy, and doesn't require any professional assistance for installation and use.
To activate the inactivity monitoring system in the AllsWell Alert App:
download the AllsWell Alert app for Android or iOS,
enter your emergency contacts.
That's it! The system is now tracking your inactivity periods and will send an alert to your emergency contact if you remain inactive for long. You can always cancel the alert notification or give a call to your loved ones to assure them you’re fine. To facilitate installation, we’ve preconfigured the BED and WAKE times, as well as a typical inactivity period based on the average for the majority of users. However, you can customize these at any time in your Dashboard.
The kind of interaction with the smartphone doesn't matter: you can call or take photos, watch videos or scroll social networks, text, or just handle your smartphone in any way. We also won’t be recording or monitoring your actual actions. With smart settings, the possibility of false alerts is minimal. And when it comes to your loved ones, it is much better to double-check than to miss an incident.
Download the AllsWell Alert App for iOS or Android, try the inactivity monitoring system for a free month, and get peace of mind about your relatives' safety for only $9.99/month! |
2 | Arctic Lightning Strikes Tripled in a Decade | |
2 | The Social Sector Is Contributing to the Broader OSS Ecosystem |
Mala Kumar
January 26, 2021
As the world becomes more interconnected and complicated, so too does the expanse of open source ecosystems. While the majority of open source software (OSS) lies with corporate technology companies, in the last year, the Tech for Social Good program at GitHub has highlighted many unique opportunities, barriers and challenges that face open source in the social sector.
In GitHub’s last State of the Octoverse, we highlighted one difference. Most OSS tends to be what we call “infrastructure” tech: scripts to make servers run faster, code that contributes to systems architecture, APIs, and the like. In the social sector, however, OSS still leans heavily toward applications with graphical user interfaces (GUIs). Many of these applications fill use cases seen in the social sector that private tech companies tend not to address. One such example is the social sector organization Dimagi, which builds, maintains, and helps governments and organizations deploy its CommCare app. In a panel we hosted last May for GitHub Satellite, Dimagi’s CTO, Clayton Sims, described how more than 7% of the world’s population is registered in and receives services through a CommCare application, including COVID-19 contact tracing deployments that use CommCare.
One interesting trend that has emerged from mature social sector tech companies, like Dimagi, is that they are now contributing back to broader “infrastructure” open source globally. Simon Kelly, a software engineer at Dimagi, described a few of the contributions their team has made recently, including an Ansible ‘monit’ module rewrite. In fact, Simon is now the official maintainer of this module. Dimagi has also contributed to an Ansible ‘postgresql_ext’ bug fix and an improvement to CitusDB’s upgrade procedure.
Dimagi notes that one reason these contributions have come about is because of the unique challenges they face in the industry. The social sector most often prioritizes their services and products around the greatest social good need, not necessarily the best business opportunity. Because of the regulatory requirements of some of Dimagi’s clients, such as government policies that require data to be stored “in-country”, Dimagi has made its platform available in both on-prem as well as “cloud”-like environments. Some of these environments are managed by Dimagi and others are managed by third party organizations using tooling that Dimagi has built. These challenges have required Dimagi to adopt a large range of open source solutions, and specifically for the cases above, has put them in a unique position to contribute back to OSS.
While we don't expect the social sector to pivot away entirely from applications that have a GUI, we're excited at the prospect of social sector solutions finding their way into more mainstream tech. Stay tuned for more insights and trends from this unique space! |
2 | Chipmaker Xilinx says car supply crunch goes beyond semiconductors | Semiconductors: Chipmaker Xilinx says car supply crunch goes beyond semiconductors. Shortage of other inputs also poses uncertainty for rosy 2021 growth outlook
Victor Peng, president and CEO of Xilinx, says the U.S. chipmaker is doing its best to meet customers' needs amid a severe semiconductor shortage. (Photo courtesy of Xilinx) TAIPEI -- U.S. chipmaker Xilinx has warned that the supply crunch affecting the auto industry will not be quickly resolved, saying problems extend beyond semiconductor manufacturing to supplies of other materials and components.
Victor Peng, Xilinx president and CEO, told Nikkei Asia in an interview that he hoped a serious shortage would not be prolonged but pointed to constraints in other parts of the supply chain. |
1 | Target investors with the most effective media | Rebbelith believes growing companies should have the opportunity to communicate their value directly to target investors through the most effective media, and that they shouldn't have to use their finite budget for the scattergun, imprecise PR agency work currently available.
So we ran a research project, and used our systems to identify exactly which media and journalists are valued by some of the busiest venture capital firms in the UK at the moment.
Access the full research here Discover which media are most valued by 19 of the most active UK investment funds.
Or read on for an executive summary.
Why is targeting essential? Every team at a young company knows exactly how valuable their solution, product or service is. But for investors it may be less certain and this can make it hard to raise capital.
This means companies need to target the right media precisely and eliminate guesswork to get the results they want.
Active UK investment firms Rebbelith examined 19 of the most active private investment firms in the UK according to their most recent deal activity. With the majority focused on technology, together they invest right across the development spectrum, from seed and early stage, series B and onwards, right through to scale-ups.
Click on an investor for their background.
IQ Capital
Index Ventures
MMC Ventures
Octopus Ventures
Backed VC
Maven Capital Partners
Accel
Forward Partners
AlbionVC
Force Over Mass Capital
Localglobe
Fair By Design
Beringea
Eight Roads Ventures
Insight Partners
Seedcamp
Ascension Ventures
Startup Funding Club SEIS Fund
Fuel Ventures
Valued media Rebbelith's systems then revealed 34 separate newspapers, magazines, podcasts, blogs and online news sites that are valued by these investors.
Growing companies can target these publications to get the most impact from their investor communications campaigns.
Of these, 9 publications had the most value.
Valued journalists Rebbelith also identified 27 individual journalists valued by the investor firms.
Companies can now target the people whose coverage actually reaches investors.
There were 6 that had the most value across the sample of investor firms.
What can you do with this insight? With this latest research from Rebbelith, growing companies can select for themselves those media that they may wish to target. This could be for current, upcoming or future publicity campaigns for reaching investors.
Importantly, they can now make those selections based on real-world audience data, rather than on the expensive approximations currently available from PR agencies.
The full research with interactive maps is available through the access table below.
If there’s a specific audience, group of key customers, investors, or if you already know which audience you want to target with your company’s media work, a bespoke research project can be built for you.
Rebbelith is a corporate & brand intelligence consultancy using interactive visual insight. We give organisations a truly competitive edge and help them make better decisions.
Our systems analyse media and networks at speed and scale anywhere in the world. This puts our clients in control, and helps them navigate complexity and uncertainty with confidence. |
1 | Fast Markdown parser and HTML renderer implemented in WebAssembly | Fast Markdown parser and HTML renderer implemented in WebAssembly.
Get it from GitHub |
3 | Creating a Zip File from Scratch | Let's try creating a ZIP file from scratch. Since I'm mostly interested in the layout and meta-data we'll be using the option to add
the files uncompressed. That means I don't have to learn the [deflate] compression algorithm at the same time.
The resulting knowledge is still useful in that it will allow us to bundle a set of files into a single, ubiquitous format.
unix [cpio] (or even [tar]) would be easier but it's not ubiquitous.... I'm looking at you DOS.
I'll be using C for this, since it's ubiquitous... since I like it. Also, C is good for the kind of low-level bit manipulation
we'll be doing here.
Even if you don't like C there is still some useful information here about the zip file format.
(I was going to use Common Lisp, but it's not ubiquitous.)
Zip files are read from the end. This allows the zip file to be built in a single pass.
(It also allows some other tricks like prefixing data at the start of a zip file e.g. an executable to unpack the remainder.)
A zip reader finds the final record by searching backwards for the record's magic number e.g. 0x06054b50
(a record is just a block of bytes in the file).
Every zip file must contain an End of Central Directory Record and the
simplest zip file consists of only an End of Central Directory Record. That would be an empty zip file, but a zip file nevertheless.
note the mention of disk numbers, this file format was invented back in the days of archives spanning multiple floppy disks.
The EOCD format is given here.
https://en.wikipedia.org/wiki/ZIP_(file_format)#End_of_central_directory_record_(EOCD)
Remember that all integers (including the magic) must be written little-endian (i.e. byte reversed).
I will assume your platform does that automatically since most people will be using an Intel/AMD processor.
(Check the C functions ntohs() and ntohl() if your platform is big endian.)
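The writer functions below call a few small output helpers, putu16(), putu32() and putbytes(), and keep running byte counters curoffset and cdoffset; those live in the full code linked at the bottom rather than in the article text. As a rough idea of what they might look like, here is a minimal sketch of my own (not necessarily the author's exact version); it writes each integer byte by byte, which sidesteps the endianness question entirely, whereas the real helpers may simply write host integers directly as assumed above.

#include <stdint.h>
#include <stdio.h>

static size_t curoffset;  /* bytes written to the archive so far */
static size_t cdoffset;   /* where the central directory starts  */

/* write n raw bytes and advance the running offset */
static void putbytes(uint8_t const *p, size_t n, FILE *f)
{
    fwrite(p, 1, n, f);
    curoffset += n;
}

/* write a 16-bit integer little-endian */
static void putu16(uint16_t v, FILE *f)
{
    uint8_t b[2] = { (uint8_t)(v & 0xff), (uint8_t)(v >> 8) };
    putbytes(b, 2, f);
}

/* write a 32-bit integer little-endian */
static void putu32(uint32_t v, FILE *f)
{
    uint8_t b[4] = { (uint8_t)(v & 0xff), (uint8_t)((v >> 8) & 0xff),
                     (uint8_t)((v >> 16) & 0xff), (uint8_t)((v >> 24) & 0xff) };
    putbytes(b, 4, f);
}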
This is the C function which will write the EOCD record.
the full code will be linked at the bottom of the article
/* EOCD record [End of Central Directory] (all integers little-endian)
   https://en.wikipedia.org/wiki/ZIP_(file_format)#End_of_central_directory_record_(EOCD)
   ==
   offset len description
   0      4   End of central directory signature = 0x06054b50
   4      2   Number of this disk
   6      2   Disk where central directory starts
   8      2   Number of central directory records on this disk
   10     2   Total number of central directory records
   12     4   Size of central directory (bytes)
   16     4   Offset of start of central directory, relative to start of archive
   20     2   Comment length (n)
   22     n   Comment
*/
static void eocd(unsigned nentries, FILE *f)
{
    enum{magic=0x06054b50, disknum=0, no_comment=0, };
    size_t debug_size = curoffset;
    uint32_t const cdsz = curoffset - cdoffset;
    putu32(magic, f);
    putu16(disknum, f);
    putu16(disknum, f);
    putu16(nentries, f);
    putu16(nentries, f);
    putu32(cdsz, f);
    putu32(cdoffset, f);
    putu16(no_comment, f); // no comment to put
    debug_size = curoffset-debug_size;
    assert(22==debug_size);
}
Build the code and call the program to produce our zip file (on stdout):
$ make mkzip
$ ./mkzip >ex.zip
$ file ex.zip
ex.zip: Zip archive data (empty)
Congratulations! We have created a zip file from scratch. That will be all for today. Thank you for watching...
$ unzip -l ex.zip
Archive:  ex.zip
warning [ex.zip]:  zipfile is empty
Ok, so we'd actually like to add some files to our archive. Otherwise what's the point?
Every file we add must be prefixed with a local file header and, after all the files are added, we add a Central Directory record which
contains the locations (offsets) of all those files.
Firstly let's add the local file header. It needs the length of the file and the length of the file name. No problem.
More difficult is that the header needs the CRC32 checksum of the file (to allow the zip reader to check the integrity of the archive).
https://en.wikipedia.org/wiki/ZIP_(file_format)#Local_file_header
There is also a trailing data descriptor record, which can be used to add the CRC32 checksum and the file size/compressed size after the file data has been written. This can simplify the zip writer's job, but we will not use it here.
I have a feeling that CRC32 checksums will be difficult. So let's just write a zero crc32 and see what happens...
Call local_file_header for each file we want to add to the zip. I've defined a struct zfile_t to hold the file name and some
other information like its size and crc32 (set to zero for now). The full code is linked at the end.
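The zfile_t struct itself isn't shown in the article excerpts; a minimal sketch that matches how the functions below use it might look like this (the field names follow the calls below, and the real definition in the full code may differ):

typedef struct {
    char const *fname;   /* file name as it will appear in the archive      */
    uint32_t    sz;      /* uncompressed size (== stored size, no deflate)  */
    uint32_t    crc32;   /* CRC-32 of the file contents (zero for now)      */
    uint32_t    offset;  /* offset of this file's local header in the zip   */
} zfile_t;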
/* Local File Record
   https://en.wikipedia.org/wiki/ZIP_(file_format)
   ==
   offset len description
   0      4   Local file header signature = 0x04034b50 (read as a little-endian number)
   4      2   Version needed to extract (minimum)
   6      2   General purpose bit flag
   8      2   Compression method
   10     2   File last modification time
   12     2   File last modification date
   14     4   CRC-32 of uncompressed data
   18     4   Compressed size
   22     4   Uncompressed size
   26     2   File name length (n)
   28     2   Extra field length (m)
   30     n   File name
   30+n   m   Extra field
*/
static void local_file_header(zfile_t *zf, FILE *f)
{
    enum{magic=0x04034b50, bit_flags=0, no_compression_method=0, mod_time=0, mod_date=0, no_extra=0, };
    size_t debug_size = curoffset;
    unsigned const fnamelen = strlen(zf->fname);
    putu32(magic, f);
    putu16(EXTRACTOR_MIN_VERSION, f);
    putu16(bit_flags, f);
    putu16(no_compression_method, f);
    putu16(mod_time, f);
    putu16(mod_date, f);
    putu32(zf->crc32, f);
    putu32(zf->sz, f); // compressed size == uncompressed size cos not compressing
    putu32(zf->sz, f);
    putu16(fnamelen, f);
    putu16(no_extra, f);
    putbytes((uint8_t const *)zf->fname, fnamelen, f);
    debug_size = curoffset-debug_size;
    assert((30+fnamelen)==debug_size);
}
If we try to unzip the file now we get this error. We don't have the Central Directory record yet.
$ unzip -l ex.zip
Archive:  ex.zip
warning [ex.zip]:  25784 extra bytes at beginning or within zipfile
(attempting to process anyway)
warning [ex.zip]:  zipfile is empty
This record/block contains a list of all the files in the archive (it need not contain all the files we added, nor list them in the same
order; this allows deleting files without rewriting the entire archive, important if you have 10 floppy disks...).
The central directory doesn't have a separate header. It's not really a record, more a sequence of records, one per file of interest.
These records are similar to the local file header records but have a few extra fields.
Finally we must remember to update our, now non-empty, EOCD with a link back to our central directory record. Until now we were just
writing zero into the EOCD fields [size of central directory] and [offset of central directory].
We can easily capture the file offset before we start writing the Central Directory and get the size by subtracting it from the file
offset after we have written the Central Directory.
/* Central Directory Entry
   https://en.wikipedia.org/wiki/ZIP_(file_format)
   ==
   offset len description
   0      4   Central directory file header signature = 0x02014b50
   4      2   Version made by
   6      2   Version needed to extract (minimum)
   8      2   General purpose bit flag
   10     2   Compression method
   12     2   File last modification time
   14     2   File last modification date
   16     4   CRC-32 of uncompressed data
   20     4   Compressed size
   24     4   Uncompressed size
   28     2   File name length (n)
   30     2   Extra field length (m)
   32     2   File comment length (k)
   34     2   Disk number where file starts
   36     2   Internal file attributes
   38     4   External file attributes
   42     4   Relative offset of local file header. This is the number of bytes between the
              start of the first disk on which the file occurs, and the start of the local
              file header. This allows software reading the central directory to locate the
              position of the file inside the ZIP file.
   46     n   File name
   46+n   m   Extra field
   46+n+m k   File comment
*/
static unsigned cdir(zfile_t files[], FILE *f)
{
    unsigned nfiles=0;
    enum{magic=0x02014b50, bit_flags=0, no_compression_method=0, mod_time=0, mod_date=0,
         no_extra=0, no_comment=0, disknum=0, fileattr_internal=0, fileattr_external=0, };
    cdoffset = curoffset;
    for(zfile_t *zf=files; zf->fname; zf++,nfiles++){
        size_t debug_size = curoffset;
        unsigned const fnamelen = strlen(zf->fname);
        putu32(magic, f);
        putu16(CREATOR_VERSION, f);
        putu16(EXTRACTOR_MIN_VERSION, f);
        putu16(bit_flags, f);
        putu16(no_compression_method, f);
        putu16(mod_time, f);
        putu16(mod_date, f);
        putu32(zf->crc32, f);
        putu32(zf->sz, f); // compressed size == uncompressed size cos not compressing
        putu32(zf->sz, f);
        putu16(fnamelen, f);
        putu16(no_extra, f);
        putu16(no_comment, f);
        putu16(disknum, f);
        putu16(fileattr_internal, f);
        putu32(fileattr_external, f);
        putu32(zf->offset, f);
        putbytes((uint8_t const *)zf->fname, fnamelen, f);
        // no extra field
        // no comment
        debug_size = curoffset-debug_size;
        assert((46+fnamelen)==debug_size);
    }
    return nfiles;
}
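Putting the pieces together, the overall write order is: for each file, record its offset, write the local file header, then copy the file bytes; once every file is written, write the central directory (which records cdoffset) and finally the EOCD. Here is a rough sketch of that driver, assuming a hypothetical copyfile() helper that streams the file bytes to the output and advances curoffset; the real main() in the full code will differ.

static void write_zip(zfile_t files[], FILE *out)
{
    for(zfile_t *zf = files; zf->fname; zf++){
        zf->offset = curoffset;      /* remembered for the central directory entry   */
        local_file_header(zf, out);  /* assumes zf->sz and zf->crc32 are already set  */
        copyfile(zf, out);           /* hypothetical: copy file bytes, bump curoffset */
    }
    unsigned const nentries = cdir(files, out);  /* writes entries, sets cdoffset       */
    eocd(nentries, out);                         /* links back to the central directory */
}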
Let's try unpacking our putative archive again. We learned that zip uses date formats based on 1980-00-00 (which is similar to the
unix epoch 1970-01-01, except with bigger shoulder pads and more hair gel). At the moment I'm not too interested in the timestamps.
$ unzip -l ex.zip
Archive:  ex.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
     4596  1980-00-00 00:00   readme.zipformat
     6694  1980-00-00 00:00   mkzip.c
    18040  1980-00-00 00:00   mkzip
error: expected central file header signature not found (file #4).
  (please check that you have transferred or created the zipfile in the
  appropriate BINARY mode and that you have compiled UnZip properly)
We do see the correct filenames and the sizes match (the current) sizes of the files. Good news!
$ wc -c readme.zipformat mkzip.c mkzip
 4596 readme.zipformat
 6694 mkzip.c
18040 mkzip
However, something is wrong with my central directory. Maybe the contents are wrong, or maybe the offset to it in the EOCD is wrong?
The problem is in the EOCD. There are 2 fields, [number of central directory records] and [total number of central directory records]. I had initially set both of these equal to 1 because of a misunderstanding on my part: I thought there was a single Central Directory record, rather than a sequence of them. After fixing that (already corrected in the code above) I finally reach my crc32 checksum problem (i.e. that I've just written a zero for the checksum).
$ unzip -t ex.zip
Archive:  ex.zip
    testing: readme.zipformat         bad CRC 7261ec57  (should be 00000000)
    testing: mkzip.c                  bad CRC e7d510f6  (should be 00000000)
    testing: mkzip                    bad CRC 7858c036  (should be 00000000)
I could leave things like this since the zip archive will actually unpack in this state (with GNU/linux [unzip] at least).
Finding the correct variant of the crc32 function for zip proved tricky. There is not a lot of information out there
and what there is, is contradictory.
The best description, and the one which finally worked, was this one at OSDev.
https://wiki.osdev.org/CRC32
Most of the code online uses a pre-generated look-up table with 256 entries, but this page actually showed the code to generate
the table.
The algorithm given assumes a single byte string to checksum, but I load files in blocks. So I must be careful to do the init and complete parts separately and run only the kernel on each block. (I wasn't careful: I repeatedly 1's-complemented the crc and got the wrong answer...)
Building the lookup-table. Normally this table would be generated to a C file once and included in the final zip tool. I just build it each time.
static uint32_t poly8_lookup[256];

static void crc32_mktab(void)
{
    uint32_t const crc_magic = 0xedb88320;
    uint32_t *table = poly8_lookup;
    uint8_t index=0,z;
    do{
        table[index]=index;
        for(z=8;z;z--)
            table[index]=(table[index]&1)?(table[index]>>1)^crc_magic:table[index]>>1;
    }while(++index);
}
The kernel of the crc32 calculation, called on blocks of file bytes as I read them in. Trivial, no?
static uint32_t crc32update(uint32_t crc, uint8_t const *p, size_t n)
{
    while (n-- !=0)
        crc = poly8_lookup[((uint8_t) crc ^ *(p++))] ^ (crc >> 8);
    return crc;
}
The main file crc32 function. It reads the entire file and rewinds it. It would be trivial to incorporate this into the file copying
function and avoid reading files twice. That's why I was too lazy to do it...
static uint32_t filecrc32(FILE *f)
{
    uint32_t crc=0xffffffff;
    char buf[8192];
    size_t rsz;
    while((rsz=fread(buf, 1, sizeof(buf), f))){
        crc = crc32update(crc, (uint8_t const *)buf, rsz);
    }
    assert(0==fseek(f, 0, SEEK_SET));
    return ~crc;
}
After setting the correct crc32 in both the local file header and the central directory entry my zip archive now tests as good:
$ make mkzip && ./mkzip >ex.zip
$ unzip -t ex.zip
Archive:  ex.zip
    testing: readme                   OK
    testing: mkzip.c                  OK
    testing: mkzip                    OK
No errors detected in compressed data of ex.zip.
Of course I've ignored the dates, the internal/external file attributes and the proper version numbers for creator and extractor, but
the zip archive appears healthy enough as it is.
I'd be interested to hear how well it does on other unzip tools. I've tried GNU/linux unzip and 7z and both worked without complaint.
One might wonder how our zip file differs from a 'real' zip file. Conveniently the 'real' zip tool has a [-Z store] option
which allows us to produce a zip without compression, just like ours.
$ zip -Z store out.zip readme.zipformat mkzip.c mkzip
  adding: readme.zipformat (stored 0%)
  adding: mkzip.c (stored 0%)
  adding: mkzip (stored 0%)
Gasp, they are not identical! (No real surprise actually, considering all the liberties we've been taking with the format).
$ cmp out.zip ex.zip
out.zip ex.zip differ: byte 5, line 1
Looking in more detail we see lots of similarities. The 'real' zip file has a 'real' crc32 checksum, we just have zero
(written before I fixed crc32).
The 'real' zip file has an extra field after the filename, we have none.
$ hd out.zip | head
00000000  50 4b 03 04 0a 00 00 00  00 00 47 4f 7a 51 a2 bb  |PK........GOzQ..|
00000010  b0 9b 04 16 00 00 04 16  00 00 10 00 1c 00 72 65  |..............re|
00000020  61 64 6d 65 2e 7a 69 70  66 6f 72 6d 61 74 55 54  |adme.zipformatUT|   <--- the extra field we don't have
00000030  09 00 03 36 7c bf 5f 36  7c bf 5f 75 78 0b 00 01  |...6|._6|._ux...|
00000040  04 e8 03 00 00 04 e8 03  00 00 23 20 5a 49 50 20  |..........# ZIP |
$ hd ex.zip | head
00000000  50 4b 03 04 00 00 00 00  00 00 00 00 00 00 00 00  |PK..............|
00000010  00 00 04 16 00 00 04 16  00 00 10 00 00 00 72 65  |..............re|
00000020  61 64 6d 65 2e 7a 69 70  66 6f 72 6d 61 74 23 20  |adme.zipformat# |   <--- file contents immediately after name
00000030  5a 49 50 20 66 69 6c 65  20 66 72 6f 6d 20 73 63  |ZIP file from sc|
1. Misunderstanding the CD 'record' in the spec: I should have put nentries in the EOCD.
2. crc32: I expected this to be hard. The information on the web is sparse and contradictory, and when you're unsure which of your 3 possible solutions is correct it's difficult to focus on one. I made a small mistake (complementing many times, not just once at the end). This cost me the most time.
These reverse-engineering concepts are covered in this article.
The code for producing an uncompressed zip file. It takes a list of file names on the command line and writes the zip file to stdout.
Use it like this:
$ ./mkzip file1 file2 .... fileN >ex.zip
(home) |
2 | Retail, rent and things that don't scale | Morioka Shoten
I generally think about retail as sitting on a spectrum from logistics to experience. At the logistics end, you know exactly what you want and retail’s job is to provide the most efficient way to get it. At the experience end, you don’t know, and retail’s job is to help you, with ideas, suggestion, curation and service.
Ecommerce began at the logistics end, as a new and (sometimes) more efficient way to get something. It’s not always more efficient - you don’t have your lunch mailed to your desk. Rather, the right logistics in retail is a matter of algebra. How much inventory is needed, how many SKUs, how big are the products, how fast can they be shipped, how often do you buy them, how far would you be willing to drive or walk, does the product need to be kept cold, or warm, what’s the cost per square foot - there are all sorts of possible inputs to the equation, and if you visualised them all you’d have a many-dimensional scatter diagram, that would tell you why Ikea has giant stores on the edge of town, why Walgreens has small local stores, and why you can buy milk on every block in Manhattan but not a fridge.
The internet adds a new set of possibilities to that algebra. Amazon sells anything that can be stored on a shelf in a vast warehouse and shipped in a cardboard box. It doesn’t so much have ‘infinite shelf-space’ as one shelf that’s infinitely long: it only sells things that can fit into that model. Grocery delivery is an entirely different model, that needs quite different storage and quite different logistics; so in turn is restaurant delivery (or take-away, which is at least a third of US restaurant spending). Meanwhile, the online mattress business was founded on the flash of insight that if you vacuum-pack a foam mattress then you can ship it like any other parcel, and bypass the existing mattress industry’s distribution model, but that also means you can’t do returns. Mattresses, books and sushi can all be bought ‘online’, but those are as different from each other as Walgreens and Ikea. They’re all algebra - all different points in that scatter diagram - and a binary split between ‘online’ and ‘physical’ retail is less and less useful.
Part of that algebra is that all sorts of things that were previously separate budgets become part of the same question. Should you spend your acquisition budget on search ads or free shipping? Or a better returns policy? If you open a store in that city, can you spend less on Instagram, and do your returns go down or up? How do you reach your customers, and who are they? There’s long been a joke in D2C that ‘rent is the new customer acquisition cost’, but what is an Amazon truck?
The further any retail category gets from these logistics questions, the less well the internet has tended to work. It’s much easier to connect a database to a web page than to put human experience on a web page. The Internet lets you live in Wisconsin and buy anything you could buy in New York, but it doesn’t let you shop the way you could shop in New York. And so a lot of the story of ecommerce in the last 25 years has been on one hand converting things that looked like they needed experience into logistics but on the other hand trying to work out how to build experience online. People will happily buy books without seeing them, and they will buy shoes if you add free returns (turning experience into logistics), and they will also buy high fashion or $100,000 watches but, now, you will need to solve experience, because that’s really what matters, not just the logistics.
Part of the promise of the internet is that you can take things that only worked in big cities and scale them everywhere. In the off-line world, you could never take that unique store in London or Milan and scale it nationally or globally - you couldn’t get the staff, and there wasn’t the density of the right kind of customer (and that’s setting aside the problem that scaling it might make people less interested anyway). But as the saying goes, ‘the internet is the densest city on earth’, so theoretically, any kind of ‘unscalable’ market should be able to find a place on the internet. Everyone can find their tribe.
This sits behind some of the explosive growth of Shopify, which handled over $100bn of GMV in 2020, and behind Instagram, and the influencer thing, and subscriptions, and now live video and AR. How do you take that experience from 1,000 square feet in Soho or Ginza to a screen? Of course, some of what Instagram and influencers are doing is unbundling magazines, and magazines are also a way to take the big city experience to everyone. But I wonder now how much more you can pick up those spikes of weird, quirky, interesting things in a few cities and take them everywhere.
I described retail as logistics and experience, but it’s also culture. What will happen as the generations that grew up with ecommerce no longer see it as new and exciting but instead internalise it, and take ownership? Retail is pop culture, and that’s live streaming but it’s also the shop that only you know about. Maybe the internet is due for a wave of things that don’t scale at all. In that light, I’ve been fascinated by ‘Morioka Shoten’ in Tokyo - a bookshop that sells only one book at a time. This is retail as anti-logistics - as a reaction against the firehose, and the infinite replication of Amazon. Before the internet that would only work in a very dense city, but, again, the internet is the densest city on earth, so how far do we scale the unscalable? |
1 | Is software now everybody's job? The implications of low/no code for developers | Business
Home
Business
Enterprise Software
Is software now everybody's job? The implications of low-code and no-code for developers
Some industry experts argue that the time has come for business users to be able to steer their own destinies when it comes to application development.
That's the message conveyed at a recent conference focused on this very topic, sponsored and hosted by Ninox. (I was a participant and moderator at the event.) The Covid-19 crisis illustrated the advantages low-code and no-code are bringing to the world. "Some IT organizations are faring better if they already have low-code platforms in their tool belts," according to John Bratincevic, analyst with Forrester. "They have more agile ways of development, and the scale of having many businesspeople on a platform."
If there is one silver lining that came out of the crisis, it is an acceleration toward user-driven application development and deployment, agreed John Rymer, also a Forrester analyst. In the process, attitudes that have existed over the past two or three decades are starting to wash away. "What we're in the midst of is a gigantic mindset shift about software," he related. "The biggest issues have been about the risks of software, the costs of software, and who does the work. Software is viewed as very arcane, and we've had a bit of a priesthood in central IT, using words we don't understand when they talk to us. The idea that software is everybody's job is really radical."
In a recent post, Mina Pêcheux took on some of the concerns professional developers and IT leaders may have with low-code and no-code, suggesting that this trend may be a boon to software development at all levels. "Thanks to these tools, coding could become a mainstream hobby," she writes. "People could get a better understanding of how the products and software they use everyday work. Moreover, it would give plenty of enthusiasts from the broad audience a chance to build, maintain and master the programs they continuously live with."
Pêcheux outlines some of the benefits to both developers and businesspeople:
Make innovation easier: "In startups or small companies, it could also empower users or 'citizen developers' to be part of the implementation by helping out with tweakable and scalable contributions," Pêcheux says. "It could light the creative spark in many 'makers' and 'pioneers' that so far were blocked by tech constraints."
Make prototyping easier: "No-code tools are like LEGO bricks: they're a fun way to learn how to build something complex from small pieces," she points out. "The ability to quickly prototype something is not just a nice gift for non-developers: as a programmer, you'd often like to have some quick-sketch sandbox to test out stuff, so this could be a sweet alternative to Github templates or tutorial bundles."
Make development easier: "There also are some types of software a developer is not at-ease with; we all have specialties and we cannot branch out to learn about everything -- computer science advances so blazingly fast it would be an inhuman task," Pêcheux says. "This is how the no-code movement could even attract some professional coders: we regularly find ourselves in situations where the gains of a simplified easy-to-setup framework could far outweigh the loss in tech choices and visual customization."
Take the drudgery out of programming: "As a professional, it's also a great opportunity to automate the most boring tasks in your work pipeline: auto-machine learning solutions are sort of a case study for this because they offer to take dumb data processing, basic feature engineering and even model deploying off of your hands."
At the same time, Pêcheux cautions that low-code and no-code tools may cost developers some of the deep insight they need to effectively design applications and systems. "You might miss some common concepts just because it's not presented the same in two different tools, or you might overly emphasize a step of the process because it seemed particularly in the spotlight on this particular platform," she says. "I'm not sure how much of a bird's-eye view these tools give you -- and therefore how much of this holistic-deep understanding they give you on the matter at hand."
There is also a risk of homogenization, she adds. "Having one way to do things ends up creating very close-looking apps or websites; it may require a lot of effort to come up with a somewhat original design and it's usually more about the underlying idea than actual personal UI customization."
Still, low-code and no-code presents new kinds of opportunities for IT managers to move the business forward. "Why are we still hearing low-code is for simple things, for simpletons?" Rymer asks. "The experience says it's not true anymore. Sophisticated, scale, secure applications are nontrivial to build, but are way easier to build on low-code. IT people can be the mentors of that; they can be the lifelines."
|
2 | The rings of Saturn are 'ringing' like a bell | This Cassini image reveals the northern hemisphere of Saturn as it nears its summer solstice.
(Image credit: NASA/JPL-Caltech/Space Science Institute)
Saturn's rings are ringing like a bell, which is making it possible for researchers to explore deep inside the heart of the planet.
Gravitational forces push seismic waves from Saturn's interior into its ring system, where NASA's Cassini mission was able to detect the minute tremors. According to a new study, a large part of the planet's interior is more layered than previously expected.
Planets hide their interior processes behind hard-to-penetrate layers. While rocky bodies like Earth and the moon can have their layers probed through the study of seismic waves created by quakes, gas giants have no solid surface to measure such waves. Instead, researchers have to employ other methods, such as studying a planet's magnetic field.
Related: Amazing Saturn Photos from NASA's Cassini Orbiter
A dynamical interplay between Saturn's largest moon, Titan, and its rings is captured in this view from NASA's Cassini spacecraft taken on Sept. 20, 2009 and released Dec. 23, 2013. At every location within Saturn's rings, particles orbit with a particular period, or rhythm. This image is focused on two separate and nearby locations in the rings where those rhythms are in synchrony with different aspects of Titan's 16-day orbit, creating signature effects that point from a distance back towards Titan. (Image credit: NASA/JPL-Caltech/Space Science Institute ) Not long after NASA's Cassini mission arrived at Saturn in 2004, researchers realized that the planet's rings were oscillating strangely. Instead of single waves, which are predicted by existing theory, the spacecraft revealed clusters of small waves that could be explained by the presence of gravity waves in the deepest part of the planet's interior.
"What's really special about [gravity waves] is that their mere existence requires that at least part of that deep interior to be relatively calm and stable rather than convective," Christopher Mankovich, a researcher at the California Institute of Technology, told Space.com in an email.
Before this new research emerged, the basic understanding of our solar system's giant planets was that their hot fluid interiors push heat outward, much like a lava lamp. But the presence of heavy ingredients like rock and water ice beneath the lighter hydrogen and helium can inhibit the movement of the fluid and generate gravity waves. In the case of Saturn, those waves can cause the planet to ring like a bell.
"The detection of internal gravity waves in Saturn through ring seismology is now one of the scant pieces of hard evidence that a significant fraction of Saturn's interior is stably stratified rather than convective," Mankovich said.
On rocky planets like Earth, disturbances beneath the planet's surface can move as a wave, traveling through the planet's interior and through its surface. As a result, major tremors can be felt hundreds of miles from the epicenter of a strong earthquake. Eventually, interference from other traveling waves can create a standing wave pattern spanning the entire planet.
"This translates into the whole planet ringing like a bell," Mankovich said.
Like a bell, the characteristics of this wave are dictated by the planet's size, shape and composition. They reveal insights into large-scale structures beneath the surface, including those that cannot be otherwise accessed.
The same process occurs on Saturn, where a beautiful ring system made up of tiny bits of rock and ice surrounds the planet. Most of the time, their orbit is calm and orderly, with occasional collisions. Scientists have known for decades that the ring particles can be affected by the gravitational pull of the planet's 82 moons. While ring seismology was proposed in the early 1990s, it wasn't until a spacecraft spent time orbiting the planet that the idea could be put into play.
Cassini revealed that Saturn's rings were also subjected to the tremors of the planet's oscillating gravitational field. The spacecraft characterized more than 20 waves in Saturn's ring system caused by the heart of the planet. The interactions occur only in special places in the rings, but the results can be "dramatic," according to Mankovich. The effect is small, with the waves only about a single kilometer from peak to peak, while the rings span nearly 180,000 miles (300,000 km).
"These waves are really only obvious at a very fine scale," Mankovich said. "Cassini made it possible to study these waves in exquisite detail by making the journey to Saturn to study the system up close and personal."
Understanding what's going on deep in Saturn's heart is very much an ongoing process. According to Mankovich, ring seismology favors a thick stable region that makes up roughly a quarter of the planet's radius. That's somewhat at odds with understanding gleaned from the planet's magnetic field, which favors a narrow, stable region only 5 to 10% of the planet's interior.
Mankovich says that it's too soon to say what the results imply about the planet's interior, but one possibility is that the process that generates the planet's magnetic field is even more different from that of its fellow gas giant Jupiter than previously anticipated.
"It will be a fascinating next few years as the full implications of the multi-instrument Cassini data are worked out," Mankovich said. But far from being dismayed by the clash, Mankovich appeared excited.
"It's a testament to the power of a spacecraft mission like Cassini that we have such diverse data that different parts of it seem to say different things—it reflects a gap in our understanding and presents an opportunity for discovery. Scientific synergy at its best," he added.
|
1 | C64 Limbo Preview by Excess (2018) | Excellent, it plays so well, but it strikes me: how did you guys get this? I can't find any announcement from the developer that started it a year ago. Is this really from Søren Trautner Madsen or is it some other parallel development? |
1 | US Army's Next Generation Combat Vehicles and Robots | In this episode of MWI’s Urban Warfare Project Podcast, John Spencer is joined by Major General Ross Coffman. He is the director of Army Futures Command’s Next Generation Combat Vehicle Cross-Functional Team—one of eight cross-functional teams working toward the Army’s modernization priorities.
In the conversation, Maj. Gen. Coffman describes the cross-functional team’s goals as well as the projects it has underway as it works to replace the M2 Bradley Infantry Fighting Vehicle that was originally fielded forty years ago with a family of manned and unmanned robotics that will shape the future battlefield. He also discusses the different environments in which future combat vehicle capabilities are tested—from open areas to the most complex urban terrain.
You can listen to the discussion below or find the episode on Apple Podcasts, Stitcher, Spotify, TuneIn, or your favorite podcast app. Be sure to subscribe, and if you’re enjoying the Urban Warfare Project Podcast, please take a minute and leave the podcast a review or give it a rating!
Special thanks to Cadet Ben Phocas for post-production editing.
Image credit: Mr. Luke J. Allen, US Army |
6 | 3D printing poses a “grave and growing threat” to people’s privacy, experts warn | Legally governing 3D printing is not straightforward as the underlying technologies are so precise
3D printing poses a “grave and growing threat” to people’s privacy, experts warn
3D printing technology poses a “grave and growing threat” to individual privacy because of the potential for products to reveal private information about individuals, experts have warned.
People could use cameras, laptops or mobile phones to track and trace the origins of 3D printed objects and how they have been used if they have watermarks.
A new study warns about a lack of awareness among governments and companies about privacy issues associated with 3D printers, and calls for changes to treaties on copyright law and international human rights law.
The research, by Dr Annika Jones from Durham University and Dr James Griffin from the University of Exeter, recommends a new voluntary code of conduct to protect people’s privacy, and a regulatory body to provide guidance and oversight.
The experts carried out 30 in-depth interviews with representatives from Chinese 3D printing companies.
The research warns the rise of the Internet of Things, the increasing complexity of watermarking technologies that can survive transfer between different file formats, and the ability for big data to track 3D printed content could allow greater state surveillance of individuals.
Dr Griffin said: “3D printing will have a profound impact upon our notions of social privacy. This has the potential to be considerably more invasive than the Internet of Things. Every physical product that is 3D printed has the potential to be tracked in a way that has never occurred before. In the future, as 3D printing becomes more common place, there will be the potential for strangers to trace, track and observe objects, which can reveal an incredible amount of information about the users of such content.”
Legally governing 3D printing is not straightforward because the underlying technologies are so precise. With 4D printing, objects print themselves, and the use of augmented and virtual reality allows for enhanced tracking. There is potential for all 3D-printed biotech materials, such as blood vessels or replicas of body parts, to be traced.
People interviewed as part of the study said they were not saving data or files from customers, but recognised the sensitivities of the personal data which could be collected in the production and use of 3D printed materials. The process of tracking the use of 3D printed products was described by one interviewee as an “infringement” of the privacy of the individual.
Most participants in the research thought tracking technology would be used to tackle piracy or copyright issues. The interviews suggest watermarks are not yet being extensively used in 3D printing.
Several of the interview participants mentioned the absence, or inadequacy, of current regulation of privacy issues in the context of 3D printing. In the absence of clear guidance some said they were self-regulating in order to ensure that privacy was protected.
Dr Jones said: “Privacy issues are already being raised” and that “the risk of further incursions into individual privacy are on the horizon with the development of new technology and growing awareness of the commercial value of the personal data that can be collected through the production and use of 3D printed products”.
“At the same time, it is clear that there is a demand within the industry for further guidance as to how to ensure that personal data, and individual privacy, is protected as the industry evolves.”
The academics suggest the current international human rights law framework should be interpreted to deal with and acknowledge the specific issues relating to watermarking in 3D printed objects.
The voluntary code of conduct would encourage self-regulation of 3D printing and watermarking. The code would require watermarks to be clearly identified on 3D files and goods, and measures to be taken to ensure the protection of individual privacy where identifying marks or modes of identification are used within an object or code. There should also be a specific software component that can isolate and protect private information collected from a watermark.
The experts do not believe self-regulation would be sufficient without oversight. The new regulatory body could be organised by existing licensing organisations such as the UK Copyright Hub, National Copyright Administration of China, the UK Intellectual Property Office, the Copyright Tribunal, or Information Commissioners Office.
Dr Griffin said: “Digital watermarking and 3D printed products present a future where objects can be searched for with nothing more than the equivalent of a Google search word. 3D printing and digital watermarking specifically has not been considered by any government or regulatory body, nor has there been any regulatory research carried out on the matter. Our proposals help to ensure the protection of individual privacy in an increasingly digitised world.” |
2 | The Hunt for Vulcan, the Planet That Wasn’t There |
The Hunt for Vulcan, the Planet That Wasn’t There Everyone thought the gravitational pull of an undiscovered planet made Mercury wobble. They were wrong. Albert Einstein explained why.
By National Geographic. Published November 4, 2015. One hundred years ago today at the Prussian Academy of Sciences, Albert Einstein gave the first in a series of lectures that rewrote Newton's laws of gravity and changed the world. In The Hunt For Vulcan: … And How Albert Einstein Destroyed A Planet, Discovered Relativity, And Deciphered The Universe, Tom Levenson reveals how this revolution could not have happened without disproving an obscure astronomical calculation.
According to Newton, Mercury’s wobble was caused by the gravitational pull of some other planet. Enter Vulcan—the so-called “other” planet—first observed in 1859; confirmed by the greatest astronomer of the day, Urbain Le Verrier; and hailed by The New York Times as one of the great discoveries of the century. Trouble was,
|
36 | Lenia – Mathematical Life Forms |
Chakazul/Lenia |
1 | BBC – UBI to be tested in Wales | Universal basic income to be tested in Wales
Getty Images: Every adult, regardless of their means, would receive a regular sum of money under the scheme. By James Williams, BBC Wales political correspondent. A universal basic income scheme is to be trialled in Wales, meaning adults, regardless of their means, will receive a regular sum of money.
The idea is that this would cover the basic cost of living.
First Minister Mark Drakeford said the pilot would "see whether the promises that basic income holds out are genuinely delivered" in people's lives.
But the Conservatives said Wales should not become "a petri dish for failed left-wing policies".
Mr Drakeford said a pilot would "need to be carefully designed to make sure that it is genuinely adding income for the group of people we are able to work with".
He added: "It'll have to be a pilot because we don't have all the powers in our own hands to do it on our own.
"It'll have to be carefully crafted to make sure that it is affordable and that it does it within the powers available to the Senedd.
"We need to make an early start on designing the pilot to make sure that we have the best chance of operating a pilot that allows us to draw the conclusions from it that we would all want to see."
Reuters: Elon Musk tweeted his support of universal basic income last year. What is universal basic income? It means every adult in a specific area would receive a standard, unconditional payment at regular intervals.
Supporters of the idea have said it provides a safety net for people who are unemployed or have irregular work, allowing them time to find a new job or learn new skills.
Some high profile celebrities, including billionaire businessman Elon Musk, have backed the idea, while the UK Labour Party said it would explore a pilot of UBI in its 2019 general election manifesto.
Various versions of the scheme have been trialled around the world, including in Finland, where 2,000 unemployed people were paid €560 (£480) per month for two years.
Researchers found the scheme left those happier and less stressed, but did not aid them in finding work.
Meanwhile, in western Kenya, a 12-year trial is taking place, where every adult is being paid $22 (£16) per month to see if it can help lift people out of poverty.
Future Generations Commissioner for Wales Sophie Howe has called for politicians to be "brave and radical". Wales' future generations commissioner, who has previously called for a pilot, said she was "delighted" with the plan.
Sophie Howe said: "Signalling basic income as a priority for the new government is an incredibly significant commitment by the first minister to tackling Wales' poverty and health inequalities - which cause lasting damage to the health and prospects of individuals, families and communities.
"It's a huge moment for the campaign, which I've been proud to be a part of, and the growing support for a fairer way of allowing people to meet their basic needs.
"The current system isn't working - Wales' commitment to exploring a basic income once again proves it's often the small countries that can be world leading and make the biggest changes."
In its manifesto, Plaid Cymru supported a Welsh pilot for a universal basic Income in order "to prepare for a future where work may have a different role in the economy as a result of automation and the application of AI and related technologies".
The Welsh Liberal Democrats also made an election commitment to support a trial because the party believes "UBI not only reduces inequalities and increases wellbeing, but that it strengthens local economies".
But the Welsh Conservatives said: "The Joseph Rowntree Foundation is clear that UBI is not the answer to solving poverty, in fact they claim it can actually increase poverty.
"The first minister needs to get on with kickstarting the Welsh economy, creating long-term, well-paid jobs for people rather than using Wales as a petri dish for failed left-wing policies."
In a 2018 blog post, the Joseph Rowntree Foundation's deputy director of evidence, Chris Goulden, said: "It is not affordable, unpalatable to most of the public because of its 'money for nothing' tag and perhaps most importantly - it increases poverty unless modified beyond recognition."
|
1 | Libertarians for and Against Copyrights 1995 | Outline
A Dispute Among Libertarians
The Historical Argument
The Ethical Argument
The Economic Argument
The Information-Based Argument
First Tolkien Story
Alternatives to Intellectual Property Rights: Some
Formulations
Second Tolkien Story
-- Disclaimer
It would be interesting to discover
how far a seriously critical view of the benefits to society of the law
of copyright ... would have a chance of being publicly stated in a society
in which the channels of expression are so largely controlled by people
who have a vested interest in the existing situation.
— Friedrich A. Hayek, "The Intellectuals and Socialism"
A Dispute Among Libertarians
The status of intellectual property rights (copyrights, patents, and
the like) is an issue that has long divided libertarians. Such libertarian
luminaries as Herbert Spencer, Lysander Spooner, and Ayn Rand have been
strong supporters of intellectual property rights. Thomas Jefferson, on
the other hand, was ambivalent on the issue, while radical libertarians
like Benjamin Tucker in the last century and Tom Palmer in the present
one have rejected intellectual property rights altogether.
When libertarians of the first sort come across a purported intellectual
property right, they see one more instance of an individual's rightful
claim to the product of his labor. When libertarians of the second sort
come across a purported intellectual property right, they see one more
instance of undeserved monopoly privilege granted by government.
I used to be in the first group. Now I am in the second. I'd like to
explain why I think intellectual property rights are unjustified, and how
the legitimate ends currently sought through the expedient of intellectual
property rights might be secured by other, voluntary means.
Intellectual property rights have a tainted past. Originally, both patents
and copyrights were grants of monopoly privilege pure and simple. A printing
house might be assigned a "copyright" by royal mandate, meaning that only
it was allowed to print books or newspapers in a certain district; there
was no presumption that copyright originated with the author. Likewise,
those with political pull might be assigned a "patent," i.e., an
exclusive monopoly, over some commodity, regardless of whether they had
had anything to do with inventing it. Intellectual property rights had
their origin in governmental privilege and governmental protectionism,
not in any zeal to protect the rights of creators to the fruits of their
efforts. And the abolition of patents was one of the rallying cries of
the 17th-century Levellers (arguably the first libertarians).
Now this by itself does not prove that there is anything wrong with
intellectual property rights as we know them today. An unsavory past is
not a decisive argument against any phenomenon; many worthwhile and valuable
things arose from suspect beginnings. (Nietzsche once remarked that there
is nothing so marvelous that its past will bear much looking into.) But
the fact that intellectual property rights originated in state oppression
should at least make us pause and be very cautious before embracing them.
Ethically, property rights of any kind have to be justified as extensions
of the right of individuals to control their own lives. Thus any alleged
property rights that conflict with this moral basis — like the "right"
to own slaves — are invalidated. In my judgment, intellectual property
rights also fail to pass this test. To enforce copyright laws and the like
is to prevent people from making peaceful use of the information they possess.
If you have acquired the information legitimately (say, by buying a book),
then on what grounds can you be prevented from using it, reproducing it,
trading it? Is this not a violation of the freedom of speech and press?
It may be objected that the person who originated the information deserves
ownership rights over it. But information is not a concrete thing an individual
can control; it is a universal, existing in other people's minds
and other people's property, and over these the originator has no legitimate
sovereignty. You cannot own information without owning other people.
Suppose I write a poem, and you read it and memorize it. By memorizing
it, you have in effect created a "software" duplicate of the poem to be
stored in your brain. But clearly I can claim no rights over that copy
so long as you remain a free and autonomous individual. That copy in your
head is yours and no one else's.
But now suppose you proceed to transcribe my poem, to make a "hard copy"
of the information stored in your brain. The materials you use — pen and
ink — are your own property. The information template which you used —
that is, the stored memory of the poem — is also your own property. So
how can the hard copy you produce from these materials be anything but
yours to publish, sell, adapt, or otherwise treat as you please?
An item of intellectual property is a universal. Unless we are to believe
in Platonic Forms, universals as such do not exist, except insofar as they
are realized in their many particular instances. Accordingly, I do not
see how anyone can claim to own, say, the text of Atlas Shrugged
unless that amounts to a claim to own every single physical copy of Atlas
Shrugged. But the copy of Atlas Shrugged on my bookshelf does
not belong to Ayn Rand or to her estate. It belongs to me. I bought it.
I paid for it. (Rand presumably got royalties from the sale, and I'm sure
it wasn't sold without her permission!)
The moral case against patents is even clearer. A patent is, in effect,
a claim of ownership over a law of nature. What if Newton had claimed to
own calculus, or the law of gravity? Would we have to pay a fee to his
estate every time we used one of the principles he discovered?
"... the patent monopoly ... consists in protecting inventors ...
against competition for a period long enough to extort from the people
a reward enormously in excess of the labor measure of their services, —
in other words, in giving certain people a right of property for a term
of years in laws and facts of Nature, and the power to exact tribute from
others for the use of this natural wealth, which should be open to all."
(Benjamin Tucker, Instead of a Book, By a Man Too Busy to Write One:
A Fragmentary Exposition of Philosophical Anarchism (New York: Tucker,
1893), p. 13.)
Defenders of patents claim that patent laws protect ownership only of
inventions, not of discoveries. (Likewise, defenders of copyright claim
that copyright laws protect only the expression of ideas, not the ideas
themselves.) But this distinction is an artificial one. Laws of nature
come in varying degrees of generality and specificity; if it is a law of
nature that copper conducts electricity, it is no less a law of nature
that this much copper, arranged in this configuration, with these other
materials arranged so, makes a workable battery. And so on.
Suppose you are trapped at the bottom of a ravine. Sabre-tooth tigers
are approaching hungrily. Your only hope is to quickly construct a levitation
device I've recently invented. You know how it works, because you attended
a public lecture I gave on the topic. And it's easy to construct, quite
rapidly, out of materials you see lying around in the ravine.
But there's a problem. I've patented my levitation device. I own it
— not just the individual model I built, but the universal. Thus, you can't
construct your means of escape without using my property. And I, mean old
skinflint that I am, refuse to give my permission. And so the tigers dine
well.
This highlights the moral problem with the notion of intellectual property.
By claiming a patent on my levitation device, I'm saying that you are not
permitted to use your own knowledge to further your ends. By what right?
Another problem with patents is that, when it comes to laws of nature,
even fairly specific ones, the odds are quite good that two people, working
independently but drawing on the same background of research, may come
up with the same invention (discovery) independently. Yet patent law will
arbitrarily grant exclusive rights to the inventor who reaches the patent
office first; the second inventor, despite having developed the idea on
his own, will be forbidden to market his invention.
Ayn Rand attempts to rebut this objection:
But this reply will not do. Rand is suggesting that the competition to
get to the patent office first is like any other kind of commercial competition.
For example, suppose you and I are competing for the same job, and you
happen to get hired simply because you got to the employer before I did.
In that case, the fact that I might have gotten there first does
not give me any rightful claim to the job. But that is because I have no
right to the job in the first place. And once you get the job, your
rightful claim to that job depends solely on the fact that your employer
chose to hire you.
In the case of patents, however, the story is supposed to be different.
The basis of an inventor's claim to a patent on X is supposedly the fact
that he has invented X. (Otherwise, why not offer patent rights over X
to anyone who stumbles into the patent office, regardless of whether they've
ever even heard of X?) Registering one's invention with the patent office
is supposed to record one's right, not to create it. Hence
it follows that the person who arrives at the patent office second has
just as much right as the one who arrives first — and this is surely a
reductio ad absurdum of the whole notion of patents.
The economic case for ordinary property rights depends on scarcity.
But information is not, technically speaking, a scarce resource in the
requisite sense. If A uses some material resource, that makes less of the
resource for B, so we need some legal mechanism for determining who gets
to use what when. But information is not like that; when A acquires information,
that does not decrease B's share, so property rights are not needed.
Some will say that such rights are needed in order to give artists and
inventors the financial incentive to create. But most of the great innovators
in history operated without benefit of copyright laws. Indeed, sufficiently
stringent copyright laws would have made their achievements impossible:
Great playwrights like Euripides and Shakespeare never wrote an original
plot in their lives; their masterpieces are all adaptations and improvements
of stories written by others. Many of our greatest composers, like Bach,
Tchaikovsky, and Ives, incorporated into their work the compositions of
others. Such appropriation has long been an integral part of legitimate
artistic freedom.
Is it credible that authors will not be motivated to write unless they
are given copyright protection? Not very. Consider the hundreds of thousands
of articles uploaded onto the Internet by their authors every day, available
to anyone in the world for free.
Is it credible that publishers will not bother to publish uncopyrighted
works, for fear that a rival publisher will break in and ruin their monopoly?
Not very. Nearly all works written before 1900 are in the public domain,
yet pre-1900 works are still published, and still sell.
Is it credible that authors, in a world without copyrights, will be
deprived of remuneration for their work? Again, not likely. In the 19th
century, British authors had no copyright protection under American law,
yet they received royalties from American publishers nonetheless.
In his autobiography, Herbert Spencer tells a story that is supposed
to illustrate the need for intellectual property rights. Spencer had invented
a new kind of hospital bed. Out of philanthropic motives, he decided to
make his invention a gift to mankind rather than claiming a patent on it.
To his dismay, this generous plan backfired: no company was willing to
manufacture the bed, because in the absence of a guaranteed monopoly they
found it too risky to invest money in any product that might be undercut
by competition. Doesn't this show the need for patent laws?
I don't think so. To begin with, Spencer's case seems overstated. After
all, companies are constantly producing items (beds, chairs, etc.)
to which no one holds any exclusive patent. But never mind; let's grant
Spencer's story without quibbling. What does it prove?
Recall that the companies who rejected Spencer's bed in favor of other
uses for their capital were choosing between producing a commodity in which
they would have a monopoly and producing a commodity in which they would
not have a monopoly. Faced with that choice, they went for the patented
commodity as the less risky option (especially in light of the fact that
they had to compete with other companies likewise holding monopolies).
So the existence of patent laws, like any other form of protectionist legislation,
gave the patented commodity an unfair competitive advantage against its
unpatented rival. The situation Spencer describes, then, is simply an artifact
of the patent laws themselves! In a society without patent laws, Spencer's
philanthropic bed would have been at no disadvantage in comparison with
other products.
Though never justified, copyright laws have probably not done too much
damage to society so far. But in the Computer Age, they are now becoming
increasingly costly shackles on human progress.
Consider, for instance, Project Gutenberg, a marvelous non-profit volunteer
effort to transfer as many books as possible to electronic format and make
them available over the Internet for free. (For information about Project
Gutenberg, contact the project director, Michael S. Hart, at hart@vmd.cso.uiuc.edu.)
Unfortunately, most of the works done to date have been pre-20th-century
— to avoid the hassles of copyright law. Thus, copyright laws today are
working to restrict the availability of information, not to promote it.
(And Congress, at the behest of the publishing and recording industries,
is currently acting to extend copyright protection to last nearly a century
after the creator's death, thus ensuring that only a tiny fraction of the
information in existence will be publicly available.)
More importantly, modern electronic communications are simply beginning
to make copyright laws unenforceable; or at least, unenforceable by any
means short of a government takeover of the Internet — and such a chilling
threat to the future of humankind would clearly be a cure far worse than
the disease. Copyright laws, in a world where any individual can instantaneously
make thousands of copies of a document and send them out all over the planet,
are as obsolete as laws against voyeurs and peeping toms would be in a
world where everyone had x-ray vision.
Here's a story that illustrates some of the needless irritation that
intellectual property laws can cause.
Several years ago the avant-garde film animator Ralph Bakshi decided
to make a movie of J. R. R. Tolkien's classic fantasy trilogy The Lord
of the Rings. Or rather, he decided to split the trilogy into two movies,
since the work is really too long to fit easily into a single film.
So Bakshi started off with Lord of the Rings (Part One). This
movie covered the first volume of the trilogy, and part of the second volume.
The second movie was to have covered the rest of the second volume, and
then the whole of the third volume. To make the first movie, then, Bakshi
needed to buy the rights to the first two volumes, and this is what he
(or, presumably, his studio) did.
But Bakshi never got around to making the second movie (probably because
the first movie turned out to be less successful financially than had been
anticipated). Enter Rankin-Bass, another studio. Rankin-Bass had made an
animated TV-movie of Tolkien's earlier novel The Hobbit, and they
were interested in doing the same for the second part of Lord of the
Rings, left unfilmed by Bakshi.
But there was a problem. Bakshi's studio had the rights to the first
two volumes of the trilogy. Only the rights to the third volume were available.
So Rankin-Bass' sequel (released as The Return of the King) ended
up, of necessity, covering only the third volume. Those events from the
second volume that Bakshi had left unfilmed were simply lost. (Not even
flashbacks to events in the first two volumes were permitted — although
flashbacks to The Hobbit were okay, because Rankin-Bass had the
rights to that.)
Video catalogues now sell The Hobbit, The Lord of the Rings,
and The Return of the King as a unified package. But viewers unfamiliar
with the books will be a bit puzzled. In the Bakshi film, the evil wizard
Saruman is a looming force to be reckoned with; in the Rankin-Bass sequel,
he is not even mentioned. Likewise, at the end of the Bakshi film, Frodo,
Sam, and Gollum are traveling together; at the beginning of the Rankin-Bass
sequel we find them split up, without explanation. The answers lie in the
unfilmed portion of the second volume, which deals with Saruman's defeat,
Gollum's betrayal of Frodo, Sam's battle with Shelob, and Frodo's capture
by the Orcs. Not unimportant events, these. But thanks to intellectual
property laws, the viewer is not allowed to know about them.
Is this a catastrophe? I suppose not. The æsthetic unity and continuity
of a work of art was mangled, pursuant to the requirements of law. But
it was just an animated TV-movie. So what?
So what, perhaps. But my story does serve to cast doubt on the idea
that copyright is a bulwark of artistic expression. When a work of art
involves reworking material created by others (as most art historically
has), copyright laws can place it in a straitjacket.
Alternatives to Intellectual Property Rights: Some Formulations
I may have given the impression, thus far, that intellectual property
rights serve no useful function whatever. That is not my position. I think
some of the ends to which copyrights and patents have been offered as the
means are perfectly legitimate. I believe, however, that those ends would
be better served by other means.
Suppose I pirate your work, put my name on it, and market it as mine.
Or suppose I revise your work without your permission, and market it as
yours. Have I done nothing wrong?
On the contrary, I have definitely committed a rights-violation. The
rights I have violated, however, are not yours, but those of my customers.
By selling one person's work as though it were the work of another, I
am defrauding those who purchase the work, as surely as I would
be if I sold soy steaks as beef steaks or vice versa. All you need to do
is buy a copy (so you can claim to be a customer) and then bring a class-action
suit against me.
There are other legal options available to the creators of intellectual
products. For example, many software manufacturers can and do place copy-protection
safeguards on their programs, or require purchasers to sign contracts agreeing
not to resell the software. Likewise, pay-TV satellite broadcasters scramble
their signal, and then sell descramblers.
None of these techniques is foolproof, of course. A sufficiently ingenious
pirater can usually figure out how to get around copy protections or descramble
a signal. And conditional-sale contracts place no restriction on third-party
users who come by the software in some other way. Still, by making it more
difficult to pirate their intellectual products, such companies do manage
to decrease the total amount of piracy, and they do stay in business and
make profits.
But what if I do go ahead and market your work without your permission,
and without offering you any share of the profits? Is there nothing wrong
with this? Can nothing be done about this?
In the case described, I don't think what I've done is unjust.
That is, it's not a violation of anyone's rights. But it's tacky.
Violating someone's rights is not the only way one can do something wrong;
justice is not the only virtue.
But justice is the only virtue that can be legitimately enforced.
If I profit from pirating your work, you have a legitimate moral claim
against me, but that claim is not a right. Thus, it cannot legitimately
be enforced through coercion. But that doesn't mean it can't
be enforced through other, voluntary methods.
A good deal of protection for the creators of intellectual products
may be achieved through voluntary compliance alone. Consider the phenomenon
of shareware, in which creators of software provide their products free
to all comers, but with the request that those who find the program useful
send along a nominal fee to the author. Presumably, only a small percentage
of shareware users ever pay up; still, that percentage must be large enough
to keep the shareware phenomenon going.
There are more organized and effective ways of securing voluntary compliance,
however. I have in mind the strategy of boycotting those who fail to respect
the legitimate claims of the producers. Research conducted by libertarian
scholar Tom Palmer has turned up numerous successful instances of such
organized boycotts. In the 1930's, for example, the Guild of Fashion Originators
managed to protect dress styles and the like from piracy by other designers,
without any help from the coercive power of government.
A voluntary boycott is actually a much safer tool than government for
protecting the claims of intellectual producers, because, in the course
of trying to strike a pragmatic balance between the economic power of producers
and the economic power of consumers, a private effort is more likely than
a government monopoly freed from market incentives to strike an analogous
balance between the legitimate moral claims of the two groups — the producers'
moral claim to remuneration, and the consumers' moral claim to easily accessible
information.
Something more formal can easily be imagined. In the late Middle Ages
a voluntary court system was created by merchants frustrated with the inadequacies
of governmentally-provided commercial law. This system, known as the Law
Merchant ("law" being the noun and "merchant" the adjective), enforced
its decisions solely by means of boycott, and yet it was enormously effective.
Suppose producers of intellectual products — authors, artists, inventors,
software designers, etc. — were to set up an analogous court system for
protecting copyrights and patent rights — or rather, copyclaims and patent
claims (since the moral claims in question, though often legitimate, are
not rights in the libertarian sense). Individuals and organizations accused
of piracy would have a chance to plead their case at a voluntary court,
but if found guilty they would be required to cease and desist, and to
compensate the victims of their piracy, on pain of boycott.
What if this system went too far, and began restricting the free flow
of information in the same undesirable ways that, I've argued, intellectual
property laws do?
This is certainly a possibility. But I think the danger is much greater
with coercive enforcement than with voluntary enforcement. As Rich Hammer
likes to point out: ostracism gets its power from reality, and its power
is limited by reality. As a boycotting effort increases in scope, the number
and intensity of frustrated desires on the part of those who are being
deprived by the boycott of something they want will become greater. As
this happens, there will also be a corresponding increase in the number
of people who judge that the benefits of meeting those desires (and charging
a hefty fee to do so) outweigh the costs of violating the boycott. Too
strenuous and restrictive a defense of copyclaims will founder on the rock
of consumer preferences; too lax a defense will founder on the rock of
producer preferences.
Let me close with a second story about Tolkien and his famous trilogy.
The first edition of The Lord of the Rings to be published in the
United States was a pirated edition from Ace Books. For reasons which I
now forget, Tolkien could not take legal action against Ace. But when Ballantine
came out with its own official author-approved American edition of The
Lord of the Rings, Tolkien started a campaign against the Ace edition.
The Ballantine edition was released with a notice from Tolkien in a green
box on the back cover stating that this was the only authorized edition,
and urging any reader with respect for living authors to purchase no other.
Moreover, every time he answered a fan letter from an American reader,
Tolkien appended a footnote explaining the situation and requesting that
the recipient spread the word among Tolkien fans that the Ace edition should
be boycotted.
Although the Ace edition was cheaper than the Ballantine, it quickly
lost readers and went out of print. The boycott was successful.
It might be objected that Tolkien devotees tend to be more fanatical
than the average readers, and so such a strategy of boycott could not be
expected to succeed in ensuring such loyalty generally. True enough. But
on the other hand, Tolkien's boycott was entirely unorganized; it
simply consisted of a then-obscure British professor of mediæval
language and literature scribbling hand-written responses to fan letters.
Think how effective an organized boycott might have been!
Disclaimer: The views expressed
in this article are my own personal views, not my official policy as editor
of Formulations. For FNF's policy on copyright for material printed
in Formulations, see the section labeled Information for Authors
on page 2.
|
1 | Does Bitcoin use too much energy? | Your browser isn’t supported anymore. Update it to get the best YouTube experience and our latest features. Learn more |
3 | App Design on Big Sur | Table of Contents
The last time we covered design changes in macOS on this blog was back in 2015. At that point, OS X Yosemite had just been released, featuring a major overhaul of the user interface. Following in the footsteps of iOS, Yosemite moved away from a skeuomorphic interface towards a more flat, minimalistic design. At the same time, the Tower Git client was updated to adopt the new look and feel of OS X 10.10.
Well, the time is here again! The release of macOS Big Sur is just around the corner, and with it come comprehensive visual changes to the primary elements of the operating system, including windows, menus, icons and the dock. We’ve been hard at work making sure Tower feels right at home in this new environment. In this article, we’ll take a look at some of the design changes introduced in macOS Big Sur, and how we’ve applied these to Tower for Mac.
What motivates the changes to the user interface, and what is the long-term vision that drives and shapes these changes from version to version? Only Apple knows for sure. However, as spectators, we can certainly discern some overall trends. The Big Sur updates reflect some movements that have been going on for a long time.
Like OS X Yosemite, Big Sur seems to move in the direction of iOS visually. While it’s true that there’s more to macOS Big Sur than copying iOS, and while it may be more appropriate to say that the design of both iOS and macOS reflect the direction of the technology industry and of Apple as a whole, this statement does have a point. In an interview with MKBHD, Craig Federighi (Apple's senior vice president of Software Engineering) mentions the mental overhead required when constantly switching between the different interfaces of iOS and macOS. Making macOS feel more familiar for iOS-users reduces this overhead.
Apple has also been moving away from cluttered interfaces, towards sparseness and placing as much focus on content as possible. Whether you’re viewing photos, reading, or browsing your files, the content is featured prominently while available actions aren’t always immediately visible. With its increased spacing, reduced contrast and lighter icons, macOS Big Sur takes another step in this direction.
Changes in Big Sur
Having covered the general trends, let’s take a closer look at some of the visual updates in Big Sur! The look of windows is a good place to start. One of the essential elements of the user interface, windows receive some prominent changes:
Corners are more rounded than before.
Colors are lighter and contrast is decreased.
Increased spacing and more discreet icons contribute to the lightness of the interface.
Sidebars can now take up the whole height of the window.
The window title is integrated into the toolbar.
A Finder window in macOS Big Sur...
... and in Catalina.
An app can now specify an app-specific accent color, which is used for buttons, selections, and in sidebars. Selections have a new, rounded look, and many controls have been redesigned. Sheets have also been redesigned, now looking like modal popups, instead of folding down from the bottom edge of the toolbar.
Big Sur accent color, rounded selections...
... and a Big Sur sheet.
SF Symbols, a library of icons made by Apple previously available in iOS, is now included and used in macOS. These icons can be used in toolbars, sidebars and elsewhere.
App icons get a new style as well, with all icons now using a rounded rectangle shape. While the icons have been updated to correspond more closely to their iOS counterparts, they do retain some macOS touches. Traditionally, app icons in macOS have featured a more realistic style than in iOS, and many Big Sur icons sport quite a bit of detail.
Big Sur app icon design moving closer to iOS.
There are many more changes in macOS Big Sur than covered here. The dock is redesigned, as is the notification center, and there is a new control center (another change brought over from iOS). Menus look different, there are new system sounds, and more. In fact, along with the news of the impending switch to Apple Silicon — Apple-made processors — for Mac hardware, these changes were enough for Apple to decide to turn up the version number for macOS Big Sur to 11! Catalina, the current version of macOS, has the version number 10.15, while macOS Big Sur is known as version 11.0.
Changes in Tower
If you ask us, part of what makes Tower great is the fact that it’s a native Mac app. Not only does this provide the best performance, but also the best integration with the operating system — the environment in which the app is used. As macOS users generally update quickly, it’s important that Tower adopts the design language of the latest macOS version. Of course, this can’t mean that Tower suddenly looks out-of-place on previous versions — while some changes can apply both to old and new macOS versions, some user interface changes should be restricted to users running the latest update. We’ve been hard at work updating Tower to take advantage of what macOS has to offer, and we think the result is a Git client that looks and feels better than ever!
At first glance, you’ll notice that Tower adopts the full-height sidebar look of macOS Big Sur. This change doesn’t only affect the sidebar: it has consequences for the toolbar, the window title and the navigation bar. The buttons in the toolbar, pushed to the right, now share space with the window title and with elements from the navigation bar, like the clone button. The navigation bar itself has been removed, for a cleaner look. The services and bookmarks icons, previously in the navigation bar, now show up in the main toolbar, at the top of the new full-height sidebar.
Tower 6 running on Big Sur...
... and on Catalina.
Big Sur comes with a new look for selections, with rounded corners and different spacing. Naturally, Tower adopts these, except in some places where rounded selections wouldn’t make visual sense.
Icons have been updated: we now use SF symbols along with some custom ones in the sidebar, the toolbar and in the quick actions popup. These icons have a lighter look than the ones previously used. In addition, the sidebar icons and selections now use the system accent color, so if you go into your macOS preferences and change the accent color, you’ll see your choice reflected in Tower’s sidebar.
A change which shows up on older macOS versions as well concerns the remote activity area. This area, showing ongoing pushes and fetches, for example, used to be visible all the time. Now, it only shows up when there are actual remote activities going on, otherwise, it stays out of the way.
Finally, our app icon now adopts the rounded-rectangle shape of other Big Sur apps.
Of course, this is just an overview. There are a lot of changes beyond the main ones described here. Positioning and spacing have been adjusted throughout the app, as have the colors of graph lines, badges and file statuses. The preferences window has been rewritten in order to facilitate easier updates and changes in the future. Tower has been recompiled and optimized for running on Apple Silicon. All in all, we are very happy with how Tower on Big Sur turned out — we think this is the best-looking, most usable version of Tower yet!
If you already have a Tower account, simply update Tower to the latest version to enjoy its brand-new Big Sur design! The new version is rolled out incrementally to our users using the updater in Tower — if you're in a hurry, you can download the latest version through our website.
Not a Tower user yet? Download our 30-day free trial and experience a better way to work with Git! |
139 | Price increase on .io on January 21, 2021 | The .io registry operator, Top Level Domain Registry – Internet Computer Bureau Ltd, has notified Gandi that starting January 1, 2021 at 00:00 UTC (December 31 at 4:00 PM Pacific), the registration price for .io domain names will be increasing.
We’re aware this price increase may have a negative impact on you. However, due to the cost increase, Gandi has no choice but to pass on the price increase as of January 21, 2021.
For more information, please read the open letter from Stephan Ramoin, Gandi’s CEO on the price increases:
Please see the new prices at Gandi in Grid A (in USD) for .io domain creation, renewal, transfer, and restore below:
Creation and renewal: $42.18* (current price $38.00*)
Transfer: $38.85* (current price $35.00*)
Restore: $134.16* (current price $129.00*)
* Prices listed in USD
|
2 | Conflict of Pinterest: Women-friendly on the outside, toxic on the inside | In 2015, Pinterest—allegedly the kinder, nicer tech company—partnered with the consultancy Paradigm to increase diversity in the company, whose workforce remains only 4 percent Black. Yet even as the usual platitudes about diversity and inclusion and bringing change to the boys’ club that is the tech world were being mouthed on the outside, inside the company, a different and racist reality prevailed. According to a report in the Washington Post, a Black female employee, the only one on her team, was told by a white supervisor not to speak during meetings, after which the supervisor took credit for her work. Another executive “joked” that she should play the “servant” and “serve” the other members of the team.
These are only a few examples of Pinterest’s dirty innards. This Monday, another of Pinterest’s victims, Francoise Brougher, settled her own gender discrimination lawsuit against the company for $22.5 million. Brougher, the former chief operating officer of the company, was fired in April after she accused Pinterest of marginalizing and silencing women and excluding them from decision-making during a video chat. In a blog post published this August, Brougher describes an exclusive environment that centered on the site’s billionaire CEO Ben Silbermann. During Pinterest’s IPO process, Brougher discovered that she was being paid less than her male counterparts. When she complained, it was the beginning of her end at the company. Shortly after the IPO in 2019, she stopped being invited to board meetings, and one year later, she was fired under the pretext that she was not “collaborative.”
Two months before Brougher spoke out about her termination, two other women hired to improve Pinterest’s public image also went public about the treatment that they had faced while working there. Ifeoma Ozoma and Aerica Shimizu Banks, two out of the three members of Pinterest’s public policy team, quit in May. In June, Ozoma, prompted by a “Black Employees Matter” tweet on the Pinterest account, tweeted about what she had gone through in the past year just to be treated fairly by the company. The horrors included being doxxed by a fellow employee who shared all of her private information with racist and misogynistic parts of the internet. Even as she was made the face of Pinterest’s public policy wins, her boss, a white woman, gave her a bad performance review for not both-sidesing her push to eliminate pictures promoting weddings at plantation venues from the platform. Ozoma and Banks also learned that they were put at lower levels on Pinterest’s hierarchy than their white manager despite doing the same work, which deprived them of stock options they believe to be potentially worth hundreds of thousands of dollars. Prompted by Ozoma and Banks as well as the pending Brougher issue, hundreds of Pinterest employees staged a virtual walkout in August.
In the aftermath of all of this, Pinterest has committed to all sorts of policies that are geared toward improving their culture and their treatment of racial and gender issues. The hefty settlement with Brougher, one of the highest ever paid in such a lawsuit, will likely ensure some deeper soul searching, at least as it pertains to gender discrimination. A tech company geared toward women planning weddings, or searching for lovely table settings and a palette for their room décor (among many, many other things), may now learn that it has to be nice to the women who actually work for them.
Craftiness is great, but feminism is better, and organized feminism is the best.
The issue of racial discrimination is much thornier. Sites like Pinterest not only reflect aesthetic culture; they create it. This means that when Black women enter the fray and point out the entrenched racism of, say, pushing pictures of beautiful weddings at plantation homes that were the sites of racist abuse, they also point to the complicity of a culture that has unyieldingly declared these pictures permissible, even worthy of emulation. In America’s cutthroat corporate culture, which prioritizes self-advancement above all else, the very idea that Blackness confers a clearer perspective where these judgments are concerned is outright anathema. Simply put, a Black woman may be hired to point out just such things as plantation weddings, yet when she does, her white competitors—often also women—see her as unfairly benefiting from Blackness. That whiteness has conferred much greater advantages to white women and men does not hold back the qualms of those eager to point out the unfairness of Blackness or brownness being utilized as a career plus.
Then there is the issue of being a woman and continuing to use Pinterest. Knowledge of the toxic culture within this tech company should impute responsibilities on the women who use it. Simply put, it cannot only be the employees of the company who stage virtual walkouts. Craftiness is great, but feminism is better, and organized feminism is the best. If the millions of women who use Pinterest are made to realize the moral complicity of supporting a business that does not support women, the possibility of dramatic changes in its corporate culture skyrockets. Boycotts may or may not work where other businesses are concerned, but in this case, where the errant business caters to women first and foremost, they are almost guaranteed to galvanize change.
When Francoise Brougher found out that she was being remunerated far less than her male counterparts, she spoke up. Similarly, when Ifeoma Ozoma and Aerica Shimizu Banks saw the hypocrisy of an organization that put on a public face devoted to racial and gender justice while ignoring it within, they took to Twitter. These actions reveal the transformed landscape in which discrimination claims may now be litigated; where once a woman had to have the money and wherewithal to file a discrimination claim, often compiling evidence in secret, she can now use social media to her advantage. Where once there was no support for the lone woman daring to speak out, a blog post can galvanize supporters from far and wide. The law has long forbidden gender and racial discrimination, but we may well be at a moment when the pressure of public shaming can transform companies and culture itself. The truth may not be as pretty as a Pinterest board, but it is empowering, should we choose solidarity and act on it.
1 | Show HN: I created a Kotlin plugin to work with annotation use-site targets | {{ message }}
|
2 | Calculator the Game (PWA) | Get to the desired number by using the buttons a limited number of times.
Features: HTTPS, Manifest, Offline, Android installable, iOS installable
Lighthouse audit score: 96 |
1 | Miro grew 3x, Twilio's PLG motion, PLG metrics | Is there an enterprise sales gene? If you don't find it here, you won't find it anywhere Mar 30 • Rahul Krishnan and Ruchin Kulkarni |
2 | EDE - small and fast desktop environment | EDE
small and fast desktop environment
EDE is a small desktop environment built to be responsive, light in resource usage and to have
a familiar look and feel. It runs on Linux, *BSD, Solaris, Minix, Zaurus and even on XBox.
Download version 2.1
or you can browse for older releases.
News: Bugzilla issues resolved
Blog: Autumn cleanup |
2 | Things I Wish They Told Me About Multiprocessing in Python | “Some people, when confronted with a problem, think ‘I know, I’ll use multithreading’. Nothhw tpe yawrve o oblems.” (Eiríkr Åsheim, 2012)
If multithreading is so problematic, though, how do we take advantage of systems with 8, 16, 32, and even thousands of separate CPUs? When presented with large Data Science and HPC data sets, how do you use all of that lovely CPU power without getting in your own way? How do you tightly coordinate the use of resources and processing power needed by servers, monitors, and Internet of Things applications - where there can be a lot of waiting for I/O, many distinct but interrelated operations, and non-sharable resources - and you still have to crank through the preprocessing of data before you send it somewhere?
An excellent solution is to use multiprocessing, rather than multithreading, where work is split across separate processes, allowing the operating system to manage access to shared resources. This also gets around one of the notorious Achilles Heels in Python: the Global Interpreter Lock (aka the GIL). This lock constrains all Python code to run on only one processor at a time so that Python multi-threaded applications are largely only useful when there is a lot of waiting for IO. If your application is I/O bound and doesn't require large blocks of CPU time, then, as of Python version 3.4, the asyncio system is the preferred approach.
Python ships with the multiprocessing module, which provides a number of useful functions and classes to manage subprocesses and the communications between them. One interface the module provides is the Pool and map() workflow, allowing one to take a large set of data that can be broken into chunks that are then mapped to a single function. This is extremely simple and efficient for doing conversion, analysis, or other situations where the same operation will be applied to each piece of data. But Pool and map() aren't suited to situations that need to maintain state over time or, especially, situations where there are two or more different operations that need to run and interact with each other in some way. For these kinds of problems, one needs to make use of somewhat more complex features of the multiprocessing module, such as Process, Queue and Event. Using these features introduces a number of complicated issues that now need to be managed, especially with regard to cleanly starting and stopping subprocesses, as well as coordinating between them.
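For the simple chunked-work case, the whole Pool-and-map pattern fits in a few lines. In this sketch the transform() function and the input range are placeholders standing in for real per-item work:

import multiprocessing

def transform(record):
    # Stand-in for the real per-item work (conversion, analysis, etc.)
    return record * record

if __name__ == "__main__":
    # map() splits the iterable into chunks and applies transform() to each item
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(transform, range(100))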
Since Python multiprocessing is best for complex problems, we’ll discuss these tips using a sketched out example that emulates an IoT monitoring device. This example is based on an implementation of an HVAC system that I worked on in 2018.
The application consists of a “Main Process” - which manages initialization, shutdown and event loop handling - and four subprocesses.
Note on code examples: These examples use the mptools Python library that I developed while writing this blog post. This is the only example where the library interface is directly referenced.
# The identifiers below are illustrative; they assume the mptools helpers
# (MainContext, Proc, MPQueue, init_signals, default_signal_handler) plus the
# four worker classes described above.
import logging

def main():
    with MainContext() as main_ctx:
        init_signals(main_ctx.shutdown_event, default_signal_handler, default_signal_handler)

        send_q = main_ctx.MPQueue()
        reply_q = main_ctx.MPQueue()

        main_ctx.Proc("SEND", SendProcWorker, send_q, reply_q)
        main_ctx.Proc("LISTEN", ListenProcWorker, reply_q)
        main_ctx.Proc("STATUS", StatusProcWorker, send_q)
        main_ctx.Proc("OBSERVATION", ObservationProcWorker, send_q)

        while not main_ctx.shutdown_event.is_set():
            event = main_ctx.event_queue.safe_get()
            if not event:
                continue
            elif event.msg_type == "STATUS":
                send_q.put(event)
            elif event.msg_type == "OBSERVATION":
                send_q.put(event)
            elif event.msg_type == "REQUEST":
                request_handler(event, reply_q, main_ctx)
            elif event.msg_type == "ERROR":
                logging.error(f"Error Event received: {event.msg}")
            elif event.msg_type == "END":
                logging.info(f"Shutdown Event received: {event.msg}")
                break
            else:
                logging.error(f"Unknown Event: {event}")
At first thought, it might seem like a good idea to have some sort of shared data structures that would be protected by locks. When there is only one shared structure, you can easily run into issues with blocking and contention. As such structures proliferate, however, the complexity and unexpected interactions multiply, potentially leading to deadlocks, and very likely leading to code that is difficult to maintain and test. The better option is to pass messages using multiprocessing.Queue objects. Queues should be used to pass all data between subprocesses. This leads to designs that “chunkify” the data into messages to be passed and handled, so that subprocesses can be more isolated and functional/task oriented. The Python Queue class is implemented on unix-like systems as a PIPE - where data that gets sent to the queue is serialized using the Python standard library pickle module. Queues are usually initialized by the main process and passed to the subprocess as part of their initialization.
import multiprocessing

# In the main process (queue and item names are illustrative):
send_q = multiprocessing.Queue()
reply_q = multiprocessing.Queue()
send_q.put(work_item)

# … in another subprocess ...
work_item = send_q.get(block=True, timeout=0.05)
Subprocesses can hang or fail to shut down cleanly, potentially leaving some system resources unavailable, and, potentially worse, leaving some messages un-processed. For this reason, a significant percentage of one's code needs to be devoted to cleanly stopping subprocesses.
The first part of this problem is telling subprocesses to stop. Since we're using Queues and messages, the first, and most common, case is to use "END" messages. When it's time to stop the application, one queues up "END" messages to each queue in the system, equal to the number of subprocesses reading from that Queue. Each of the subprocesses should be looping on messages from the queue, and once it receives an "END" message, it will break out of the loop and cleanly shut itself down.
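As a rough sketch (the work_q queue and handle_item() function are placeholders for whatever the subprocess actually does), such a consumer loop might look like:

while True:
    item = work_q.get(block=True)
    if item == "END":
        break
    handle_item(item)  # placeholder for the subprocess's real work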
“END” messages are very useful, but they aren’t the only method available, and they don’t help if a subprocess isn’t reading from a queue. I recommend using a multiprocessing.Event object. An Event object is a True/False flag, initialized as False, that can be safely set to True in a multiprocess environment while other processes can check it with is-set() and wait on for it to change to True. Best practice is to have only one “shutdown-requested” Event object in an application which is passed to all subprocesses. Subprocesses should then loop using that Event as their boolean check - so that if the shutdown-requested Event is set to True, the loop terminated.
import queue  # needed for the queue.Empty exception

while not shutdown_event.is_set():
    try:
        item = work_q.get(block=True, timeout=0.05)
    except queue.Empty:
        continue
    # ... handle the item ...
Once a subprocess needs to end - be it via “END” message, shutdown_event Event flag, or an exception of some sort - it is the subprocess’s duty to clean up after itself by releasing any resources it owns. Python does a pretty great job of cleaning things up during garbage collection, but some resources need to be closed cleanly (pipes, files), and some will hang for some unknown timeout period, thus preventing a clean exit of the process. Network resources can not only tie up local resources, they can also tie up resources on the remote server systems while they wait for timeouts. So final cleanup is vital.
# Example startup/shutdown hooks for a socket-owning worker
# (attribute and constant names are illustrative; assumes `import socket`)
def startup(self):
    # -- Called during worker process start up sequence
    self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    self.socket.bind((self.host, self.port))
    self.socket.settimeout(self.SOCKET_TIMEOUT_SECS)
    self.socket.listen(1)

def shutdown(self):
    # -- Called when worker process is shutting down
    self.socket.close()
Not only do subprocesses need to clean up after themselves, the main process also needs to clean up the subprocesses, Queue objects, and other resources that it might have control over. Cleaning up the subprocesses involves making sure that each subprocess gets whatever termination messaging that it might need, and that the subprocesses are actually terminated. Otherwise, stopping the main process could result in either a hang, or an orphaned zombie subprocess. The normal procedure involves setting the shutdown flag, waiting for all the processes to stop normally within some reasonable amount of time, and then terminating any that haven't stopped yet.
def stop_procs(self):
    # self.procs holds multiprocessing.Process objects (names are illustrative;
    # assumes `import time`)
    self.shutdown_event.set()
    end_time = time.time() + self.STOP_WAIT_SECS
    num_terminated = 0
    num_failed = 0

    # -- Wait up to STOP_WAIT_SECS for all processes to complete
    for proc in self.procs:
        join_secs = max(0.0, min(end_time - time.time(), self.STOP_WAIT_SECS))
        proc.join(join_secs)

    # -- Clear the procs list and _terminate_ any procs that have not yet exited
    while self.procs:
        proc = self.procs.pop()
        if proc.is_alive():
            proc.terminate()
            num_terminated += 1
        else:
            if proc.exitcode:
                num_failed += 1

    return num_failed, num_terminated
Python’s Queue objects also need a bit of special handling to be fully cleaned up: they need to be drained of any items that have been left there (and those items may or may not need to be dealt with somehow - perhaps by saving them to disk), closed, and, importantly, have Queue.join_thread() called so that the associated monitoring thread gets cleaned up and doesn’t generate a misleading exception message during final Python garbage collection.
# Drain any leftover items, then close the queue and join its feeder thread
# (assumes `import queue` for queue.Empty)
try:
    while True:
        q.get(block=False)
except queue.Empty:
    pass
q.close()
q.join_thread()
We discussed how to be sure to end subprocesses, but how does one determine when to end them? Applications will often have a way to determine that they have nothing left to process, but server processes usually receive a TERM signal to inform them that it's time to stop. Also, especially during testing, one often finds oneself using the INT signal (aka KeyboardInterrupt) to stop a runaway test. More often than not, one desires the same behavior from TERM and INT signals, though INT might also want to generate a stack trace so the user can see more precisely what they interrupted.
The approach I recommend is to have signal handlers set the shutdown_event Event flag the first two times they get called, and then raise an exception each time they’re called thereafter. This allows one to hit control-C twice and be sure to stop code that’s been hung up waiting or in a loop, while allowing “normal” shutdown processing to properly clean up. The example below uses a common signal handler function, using functools.partial to create the two functions, differing only in which exception they will raise, that get passed as the signal handlers. An important detail is that signals need to be set up separately for each subprocess. Linux/Unix systems automatically propagate signals to all child processes, so those subprocesses also need to capture and handle the signals as well. An advantage to this is that subprocess signal handlers can potentially operate on resources specific to that subprocess. For example, I have written a signal handler that changed a ZeroMQ blocking socket into a non-blocking one. This allowed me to write code for a get() call that didn’t have a timeout, and didn’t really need one.
# Signal plumbing (class and function names are illustrative)
import functools
import signal

class TerminateInterrupt(BaseException):
    pass

class SignalObject:
    MAX_TERMINATE_CALLED = 3

    def __init__(self, shutdown_event):
        self.terminate_called = 0
        self.shutdown_event = shutdown_event

def default_signal_handler(signal_object, exception_class, signal_num, current_stack_frame):
    signal_object.terminate_called += 1
    signal_object.shutdown_event.set()
    if signal_object.terminate_called >= signal_object.MAX_TERMINATE_CALLED:
        raise exception_class()

def init_signal(signal_num, signal_object, exception_class, handler):
    handler = functools.partial(handler, signal_object, exception_class)
    signal.signal(signal_num, handler)
    signal.siginterrupt(signal_num, False)

def init_signals(shutdown_event, int_handler, term_handler):
    signal_object = SignalObject(shutdown_event)
    init_signal(signal.SIGINT, signal_object, KeyboardInterrupt, int_handler)
    init_signal(signal.SIGTERM, signal_object, TerminateInterrupt, term_handler)
    return signal_object
Proper shutdown behavior requires that every process in the system be resilient against getting "stuck". This means that not only will loops have terminating conditions, but that other system calls that could block and wait will need to use timeouts if at all possible: Queue objects allow for timeouts when doing both put() and get() calls, sockets can be configured to time out, etc. This ends up being a form of polling where there are 2 events being checked: the system call, and the termination event (i.e. shutdown_event is True). You'll need to determine how long you can afford to wait on the system call before checking if the loop needs to terminate. The goal is to check for termination frequently enough that the system will respond promptly to a termination/shutdown request, while spending most of the process's time waiting on the resource (queue, event, socket, etc.). It's important to not wait very long because for server processes started with systemd, systemd will eventually (90 seconds by default) decide that your application isn't stopping and send a SIGKILL signal, and you no longer have a chance to clean up.
Here are some examples of waiting:
Polling against queues: get() from the Queue with block set to True and a short timeout. If queue.Empty is raised, go back to the top of the loop and try again, otherwise, process the returned item.
while not shutdown_event.is_set():
    try:
        item = work_q.get(block=True, timeout=0.05)
    except queue.Empty:
        continue
    # ... process the returned item ...
Poll against sockets: Call settimeout() on the socket with a short timeout. If the socket.timeout is raised, go back to the top of the loop, check for shutdown_event, and try again, otherwise, handle the accepted client connection (which will also need to have settimeout() called on it, so its operations don't hang).
self.socket.settimeout(self.SOCKET_TIMEOUT_SECS)
self.socket.listen(1)
while not self.shutdown_event.is_set():
    try:
        (clientsocket, address) = self.socket.accept()
    except socket.timeout:
        continue
    # ... handle the accepted client connection (call settimeout() on it too) ...
Poll while waiting for a timer: Loop as long as shutdown_event is not set, firing a timer every INTERVAL_SECS seconds. Each pass through the loop sleeps for the time remaining until next_time, up to the max of MAX_SLEEP_SECS (0.02) seconds (which, of course, means that it usually sleeps 0.02 seconds). If the code comes out of the sleep() before next_time, go back to the top of the loop and try again, otherwise do something (in this case, put “TIMER EVENT” on the event_queue), and re-calculate the time for the next timer event.
class StatusTimerWorker:   # illustrative timer-style worker
    INTERVAL_SECS = 10     # illustrative value
    MAX_SLEEP_SECS = 0.02

    def main_loop(self):
        next_time = time.time() + self.INTERVAL_SECS
        while not self.shutdown_event.is_set():
            sleep_secs = max(0.0, min(next_time - time.time(), self.MAX_SLEEP_SECS))
            time.sleep(sleep_secs)
            if time.time() >= next_time:
                self.event_queue.put("TIMER EVENT")
                next_time = time.time() + self.INTERVAL_SECS
Logging in an application is vitally important, and even more so in a multiprocessing app, where a combined log shines at reporting events in time-based order. Happily, Python provides good logging facilities. Sadly, Python doesn’t really provide a great way to sync subprocess log messages. Because there are so many moving parts, each log message needs 2 key pieces of data: which process is generating the log message, and how long it’s been since the application started. I generally name my processes. If there are multiple copies of the same process, then they’ll take the name “WORKER-1”, “WORKER-2”, etc.
Besides logging, each subprocess can send Error and Shutdown messages to the main event queue, allowing the event handler to recognize and deal with unexpected events, such as retrying failed sends, or starting a new subprocess after one has failed.
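One way to get both pieces of data onto every line, not necessarily how the original code does it, is to install a log record factory that stamps each record with the process name and the time elapsed since startup; here PROC_NAME and START_TIME are assumed to be set during each process's initialization:

import logging
import time

START_TIME = time.monotonic()   # assumed: captured when the process starts
PROC_NAME = "WORKER-1"          # assumed: each process knows its own name

_old_factory = logging.getLogRecordFactory()

def _record_factory(*args, **kwargs):
    record = _old_factory(*args, **kwargs)
    elapsed = time.monotonic() - START_TIME
    record.elapsed = f"{int(elapsed // 60)}:{elapsed % 60:06.3f}"
    record.proc_name = PROC_NAME
    return record

logging.setLogRecordFactory(_record_factory)
logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s:%(name)s: %(elapsed)s %(proc_name)s %(message)s")

Each worker would install something like this early in its startup, so that a plain logging.debug() call produces lines like the ones below.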
Below is the log output from a sample run. Note how the timing being based on application start time provides a clearer picture of what's going on during the all-important startup process.
$ python multiproc_example.py
DEBUG:root: 0:00.008 SEND Worker Proc.__init__ starting : SEND
DEBUG:root: 0:00.013 SEND Worker Entering QueueProcWorker.init_args : (<multiprocessing.queues.Queue object at 0x105a7cd30>,)
DEBUG:root: 0:00.013 SEND Worker Entering init_signals
DEBUG:root: 0:00.014 SEND Worker Entering QueueProcWorker.main_loop
DEBUG:root: 0:00.014 SEND Worker Proc.__init__ starting : SEND got True
DEBUG:root: 0:00.015 LISTEN Worker Proc.__init__ starting : LISTEN
DEBUG:root: 0:00.019 LISTEN Worker Entering init_signals
DEBUG:root: 0:00.022 LISTEN Worker Entering main_loop
DEBUG:root: 0:00.022 LISTEN Worker Proc.__init__ starting : LISTEN got True
DEBUG:root: 0:00.024 STATUS Worker Proc.__init__ starting : STATUS
DEBUG:root: 0:00.029 STATUS Worker Entering init_signals
DEBUG:root: 0:00.033 STATUS Worker Entering TimerProcWorker.main_loop
DEBUG:root: 0:00.033 STATUS Worker Proc.__init__ starting : STATUS got True
DEBUG:root: 0:00.035 OBSERVATION Worker Proc.__init__ starting : OBSERVATION
DEBUG:root: 0:00.039 OBSERVATION Worker Entering init_signals
DEBUG:root: 0:00.040 OBSERVATION Worker Entering startup
DEBUG:root: 0:00.042 OBSERVATION Worker Entering TimerProcWorker.main_loop
DEBUG:root: 0:00.043 OBSERVATION Worker Proc.__init__ starting : OBSERVATION got True
^C (ed: user hit Control-C)
INFO:root: 0:03.128 OBSERVATION Worker Normal Shutdown
INFO:root: 0:03.128 STATUS Worker Normal Shutdown
DEBUG:root: 0:03.130 OBSERVATION Worker Entering shutdown
DEBUG:root: 0:03.130 STATUS Worker Entering shutdown
ERROR:root: 0:03.131 MAIN Unknown Event: OBSERVATION - SHUTDOWN : Normal
DEBUG:root: 0:03.132 SEND Worker QueueProcWorker.main_loop received 'END' message
INFO:root: 0:03.133 SEND Worker Normal Shutdown
INFO:root: 0:04.025 LISTEN Worker Normal Shutdown
DEBUG:root: 0:04.028 MAIN Process OBSERVATION ended with exitcode 0
DEBUG:root: 0:04.028 MAIN Process STATUS ended with exitcode 0
DEBUG:root: 0:04.028 MAIN Process LISTEN ended with exitcode 0
DEBUG:root: 0:04.028 MAIN Process SEND ended with exitcode 0
Python's multiprocessing module allows you to take advantage of the CPU power available on modern systems, but writing and maintaining robust multiprocessing apps requires avoiding certain patterns that can lead to unexpected difficulties, while also spending a fair amount of time and energy focusing on details that aren't the primary focus of the application. For these reasons, deciding to use multiprocessing in an application is not something to take on lightly, but when you do, these tips will make your work go more smoothly, and will allow you to focus on your core problems.
Photo by Patrick Tomasso on Unsplash. |
1 | How to Use Fleta Connect | MEVerse
Published in MEVerse, 4 min read, Jul 8, 2021
* Fleta Connect Website: https://fletaconnect.io/
* Fleta Connect Docs: https://fletaconnect.gitbook.io/fletaconnect/
* Fleta Wallet: https://wallet.fletamain.net/login
* Fleta Wallet Guide-line: https://medium.com/fleta-first-chain/fleta-wallet-guide-line-9ba2af6681d8
* How to Connect Fleta Wallet to BSC: https://medium.com/fleta-first-chain/fleta-wallet-how-to-connect-fleta-wallet-to-binance-smart-chain-bsc-f9b30ae034d2
* Fleta Token Smart Contract Address: 0x40897C872214303b6F479a37E549eE1516B264A2
* BSC-based bFleta Token Address: 0xff8152f09e0fddd1ce1577ef6eba72f3a7c2e7db
* Cherry Token Smart Contract Address: 0x487770734490ac571cda3bc06067048ecc5caa4e
1) Fleta Connect — Home
The image is the main page of “Fleta Connect.”
Click “Connect.”
Connect your Metamask Wallet to your Fleta Connect account.
2) Fleta Connect — Trade
The “Trade” tab moves you to Pancake Swap. You need LP tokens of specific pairs to use CherryPick. You can get LP tokens via Pancake Swap.
You can swap a token to another by clicking “Exchange”.
If you already have two tokens, click “Liquidity” to provide liquidity.
3) Fleta Connect — Exchange (on Pancake Swap)
You will see the page like the image shown when you move to Pancake Swap. To add liquidity, you should have both tokens of the pair. For example, if you want to stake on BNB/bFleta pair, you should have both BNB and bFleta.
If you have only one type of tokens, half of them will be swapped to the other one to join the pool.
* A few BNB tokens are used as a fee when swapping.
4) Fleta Connect — Liquidity (on Pancake Swap)
You can add liquidity to the pool if you have two different cryptocurrencies. The estimated value of those two should be equal. Please refer to
https://academy.binance.com/ko/articles/impermanent-loss-explained for more details.
Click “Add Liquidity” to supply your tokens to the pool.
Put in the number of tokens you want to add and click “Supply.” You will soon get LP tokens in your crypto wallet.
* A few BNB tokens are used as a fee when clicking the buttons.
5) Fleta Connect — CherryPick
You can get Cherry tokens as rewards by staking LPs on “CherryPick”
“Unlock Wallet” means linking a crypto wallet to your account. Click “Unlock wallet” to link Metamask wallet to your Fleta Connect account. (You will automatically move to step 2 if you already have connected your wallet on the main page.)
“Approve Contract” indicates that you approve the withdrawal of specific LPs from your wallet and processing the contract automatically. You can stake your LPs only after you click “Approve Contract.”
You can stake your LPs when you click “Stake.” Put in the number of LPs you want to stake.
“Harvest” means receiving Cherry tokens you got as rewards. You can deposit your Cherry tokens to your wallet when you click “Harvest.”
* A few BNB tokens are used as a fee when clicking the buttons.
** The numbers in the images are just provisional.
6) Fleta Connect — Basket
You can stake a single asset on “Basket.”
“Unlock Wallet” means linking a crypto wallet to your account. Click “Unlock wallet” to link Metamask wallet to your Fleta Connect account. (You will automatically move to step 2 if you already have connected your wallet on the main page.)
“Approve Contract” indicates that you approve the withdrawal of specific tokens from your wallet and processing the contract automatically. You can stake your tokens only after you click “Approve Contract.”
You can stake your tokens when you click “Stake.”
You can see the image after you click “Stake.” Put in the number of tokens you want to stake.
* A few BNB tokens are used as a fee when clicking the buttons.
** The numbers in the images are just provisional.
— — —
Feel free to join and connect with us through our official channels below:
Fleta Website: https://fleta.io
Fleta Connect Website: https://fletaconnect.io/
Twitter: https://twitter.com/fletachain
Telegram: https://t.me/FLETAofficialGroup
Fleta Github: https://github.com/fletaio
Fleta Connect Github: https://github.com/fletaio/fletaconnect |
1 | Y Combinator: Bookmarklet | Thanks to Phil Kast for writing this bookmarklet for submitting
links to Hacker News.
When you click on the bookmarklet, it will submit the page you're on.
To install, drag this link to your browser toolbar:
post to HN |
3 | HTMHell – Markup from Hell | A collection of bad practices in HTML, copied from real websites. |
1 | Cheating Entropy with Native Web Technologies | 90% of my computer usage is updating computers. – Paul Ford, Postlight Podcast
I have a number of old side projects which I occasionally have to revisit. The structure of these projects broadly falls under two categories: 1) projects authored in native HTML, CSS, and JavaScript with little or no tooling, and 2) projects built on frameworks, libraries, and build tooling.
When I open an old project like number two (described above), I find entropy staring me back in the face: library updates, breaking API changes, refactored mental models, and possible downright obsolescence. An incredible amount of effort will be required to make a simple change, test it, and get it live.
Conversely when I open an old project like number one (described above), I find myself relieved. A project authored in native web technologies, enhanced with an eye towards the future, with little or no tooling, leaves me facing few obstacles. Remove a couple shims that are no longer needed and that’s about it. Imagine that: you can remove code to update a project?
The contrast between these two approaches has been on my mind as of late and spurred me to write down my thoughts.
Any time you’re doing a side project, the first few days is really just fighting build tools, like “Okay, I wanted Sass, but now I’m stuck.” – Dave Rupert, Shop Talk Show #432:
HTML 4 and HTML 5, CSS 2 and CSS 3, those numbers aren’t about semver and communicating breaking change. Going from HTML 4 to HTML 5 is very different from going from Angular 1 to Angular 2. The promise of the web is that there are no breaking changes. HTML, CSS, and JS are, in a semver sense, still at version 1.x. Whatever you wrote in 2018 or 2008 will still work. On the other hand, a project built on abstractions from native web technologies—frameworks, tooling, language sub/supersets—will contain innumerable dependencies with countless major version changes over time. Updating a single dependency often requires updating everything. Building on top of base web technologies, where possible, is a way to cheat the entropy and churn of modern web technology abstractions.
This is why, over years of building for the web, I have learned that I can significantly cut down on the entropy my future self will have to face by authoring web projects in vanilla HTML, CSS, and JS. I like to ask myself questions like:
The more I author code as it will be run by the browser, the easier it will be to maintain that code over time, despite its perceived inferior developer ergonomics (remember, developer experience encompasses both the present and the future, i.e. “how simple are the ergonomics to build this now and maintain it into the future?”). I don’t mind typing some extra characters now if it means I don’t have to learn/relearn, setup, configure, integrate, update, maintain, and inevitably troubleshoot a build tool or framework later.
In my experience, authoring vanilla CSS using selectors that largely repeat is easier than more tersely authoring nested selectors but having to maintain Sass over time.
Similarly, authoring vanilla JS without language transpilation, bundling, etc., is easier than building and maintaining something like Babel + Webpack over time.
Take a moment and think about this super power: if you write vanilla HTML, CSS, and JS, all you have to do is put that code in a web browser and it runs. Edit a file, refresh the page, you’ve got a feedback cycle. As soon as you introduce tooling, as soon as you introduce an abstraction not native to the browser, you may have to invent the universe for a feedback cycle. No longer writing CSS and instead writing Sass? Now you need a development server with a build process to watch your files and compile your changes just to develop and test your project. You’ve just added a giant, blocking dependency for your project to work. And if you can’t get that dependency working, your project is dead in the water until you can—both now and in the future. |
1 | Tilt Five – Tabletop AR | Bring holograms home
Strategize, battle, and play your favorite games in mind-blowing 3D! Tilt Five® SYSTEM. Gather your party for a game night they will never forget.
Strategize, battle, and play your favorite video games, board games, or tabletop RPGs — now with mind-blowing 3D graphics that mix with your surroundings, right on your table. Tilt Five™ is the first and only augmented reality (AR) system that uses an innovative projection technology to connect you and your friends to the games you love – in-person or online!
"It was instantly magical, and I personally ordered a unit the same day!"
"If you love the idea of AR, then Tilt Five could provide a breakthrough gaming experience."
"Tilt Five is exactly the AR product that I could see myself using regularly"
"... both their team and their hardware are absolutely amazing… very little overhead and generally was one of the fastest ‘ports’ we ever did"
READY TO EXPERIENCE GAMES IN A
NEW DIMENSION?
What is Tilt Five™? TILT FIVE SYSTEM FEATURES:
Tilt Five® AR glasses are designed with a patented and unprecedented 110-degree field-of-view optical system, bringing immersive 3D worlds to life right on the table with vivid colors and natural depth of field.
The glasses weigh just under 100 g (about 3.5 ounces), one-third the weight of typical mixed-reality headsets, and provide a comfortable experience for all-day game sessions.
A six-degrees-of-freedom wand controller that enables players to reach into and physically control and interact with the environments and characters.
An embedded camera system that tracks physical objects like game pieces and hands and seamlessly blends them with virtual ones.
Stereo speakers and microphone so players can immerse themselves in solo experiences or converse with their friends while playing together remotely.
"One of the most magical AR demos I’ve ever had a chance to see was Tilt Five"
"I'm confident that this will be the next big thing in AR and gaming"
Catan: Tilt Five AR coming spring 2023
We have 100s of games to play and are continually adding new titles and fan favorites! ARE YOU READY to experience
tabletop Holograms? Tilt Five® XE Kit ($359): Everything you need to start playing holographic games on your table! Tilt Five® requires a USB 3.0 or faster – aka “SuperSpeed USB” – port. On your Windows 11 or Windows 10 PC, these ports are usually indicated by the SuperSpeed USB Trident icons.
WE HAVE PARTNERS ALL AROUND THE WORLD Tilt Five's Developer Program!
SELECTED ARTICLES: Magical AR Tabletop Gaming with Tilt Five: Jeri Ellsworth’s Epic Journey from Valve to castAR to Tilt Five, By Kent Bye, Voices of VR
Tilt Five Was Magical by Karl Guttag, KGOnTech
Tilt Five's Jeri Ellsworth: To Get Spatial Computing, Look at the Mouse, By Sean Higgins, Spatial Reality |
7 | Pre-installed Ubuntu and Arch desktop images for LXD | Your browser isn’t supported anymore. Update it to get the best YouTube experience and our latest features. Learn more |
55 | Why NFTs are bad: the short version | Antsstyle
2 min read · Sep 3, 2021
This is a short explanation of why NFTs are bad. For more detailed explanations, the end of this article has links to detailed articles I have written on the subject.
In a nutshell, NFTs are bad for two reasons:
There is zero actual value to NFTs. Their sole purpose is to create artificial scarcity of an artwork to supposedly increase its value (it doesn’t do this, but the pretense that it does can be used for illegal purposes by those who recognise that fact).
I have written a much longer, detailed explanation of all the reasons why NFTs (and cryptocurrency) are bad in the article below.
This is the in-depth follow up to this brief article: antsstyle.medium.com
Below are some more articles I’ve written, that are responses to specific articles on the web (Jisu’s article was a highly-shared article full of misinformation about NFTs, and so two of the articles below specifically respond to the points in that).
This is a response to Jisu’s article about NFTs… antsstyle.medium.com
This is a response to the second part of Jisu’s article about NFTs, which can be found here… antsstyle.medium.com
The article in question, for context… antsstyle.medium.com
This article is written to explain blockchain consensus algorithms, and explain why none of the current systems are any… antsstyle.medium.com
1 | Ditch the regex, learn to parse | Although parsing is often described from the perspective of writing a compiler,
there are many common smaller tasks where it’s useful. Reading file formats,
talking over the network, creating shells, and analyzing source code are all
easier using a robust parser.
By taking time to learn general-purpose parsing tools, you can go beyond
fragile homemade solutions, and inflexible third-party libraries. We’ll cover
Lex and
Yacc in
this guide because they are mature and portable. We’ll also cover their later
incarnations as Flex and Bison.
Above all, this guide is practical. We’ll see how to properly integrate parser
generators into your build system, how to create thread-safe parsing modules,
and how to parse real data formats. I’ll motivate each feature of the parser
generator with a concrete problem it can solve. And, I promise, none of the
typical calculator examples.
People usually use two stages to process structured text. The first stage,
lexing (aka scanning), breaks the input into meaningful chunks of characters.
The second, parsing, groups the scanned chunks following potentially recursive
rules. However, a nice lexing tool like Lex can be useful on its own, even
when not paired with a parser.
The simplest way to describe Lex is that it runs user-supplied C code blocks
for regular expression matches. It reads a list of regexes and constructs a
giant state machine which attempts to match them all “simultaneously.”
A lex input file is composed of three possible sections: definitions, rules,
and helper functions. The sections are delimited by %%. Lex transforms its
input file into a plain C file that can be built using an ordinary C compiler.
Here’s an example. We’ll match the strings cot, cat, and cats. Our
actions will print a replacement for each.
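The code listing for this example did not survive in this copy. A minimal lex input file along the lines the text describes could look like the sketch below; the replacement words are invented for illustration, not the article's originals.

    %{
    #include <stdio.h>
    %}

    %%

    cot   { printf("bed");  }
    cat   { printf("dog");  }
    cats  { printf("dogs"); }

Building it by hand takes roughly two steps: lex catcot.l && cc -o catcot lex.yy.c -ll.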
(Alternately, build it in one step with make catcot. Even in the absence of a
Makefile, POSIX make has suffix
rules
that handle .l files.)
The program outputs simple substitutions:
The reason it prints non-matching words (such as “the”) is that there’s an
implicit rule matching any character (.) and echoing it. In most real parsers
we’ll want to override that.
Here’s what’s happening inside the scanner. Lex reads the regexes and generates
a state machine to consume input. Below is a visualization of the states, with
transitions labeled by input character. The circles with a double outline
indicate states that trigger actions.
Note there’s no notion of word boundaries in our lexer, it’s operating on
characters alone. For instance:
That sounds rather like an insult.
An important subtlety is how Lex handles multiple eligible matches. It picks
the longest possible match available, and in the case of a tie, picks the
matching pattern defined earliest.
To illustrate, suppose we add a looser regex, c.t, first.
Lex detects that the rule masks cat and cot, and outputs a warning:
It still compiles though, and behaves like this:
Notice that it still matched cats, because cats is longer than c.t.
Compare what happens if we move the loose regex to the end of our rules. It can
then pick up whatever strings get past the others.
Now’s a good time to take a detour and observe how our user-defined code acts
in the generated C file. Lex creates a function called yylex(), and inserts
the code blocks verbatim into a switch statement. When using lex with a parser,
the parser will call yylex() to retrieve tokens, named by integers. For now,
our user-defined code isn’t returning tokens to a parser, but doing simple
print statements.
As mentioned, a lex file is comprised of three sections:
The definitions section is where you can embed C code to include headers and
declare functions used in rules. The definitions section can also define
friendly names for regexes that can be reused in the rules.
The rules section, as we saw, contains a list of regexes and associated user
code.
The final section is where to put the full definitions of helper functions.
This is also where you’d put the main() function. If you omit main(), the
Lex library provides one that simply calls yylex(). This default main()
implementation (and implementations for a few other functions) is available by
linking your lex-generated C code with -ll compiler flag.
Let’s see a short, fun example: converting Roman numerals to decimal. Thanks to
lex’s behavior of matching longer strings first, it can read the single-letter
numerals, but look ahead for longer subtractive forms like “IV” or “XC.”
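The listing itself is missing here; a sketch of such a lexer, relying on longest-match to prefer the two-letter subtractive forms, might be:

    %{
    #include <stdio.h>
    static int total = 0;
    %}

    %%

    M    { total += 1000; }
    CM   { total += 900;  }
    D    { total += 500;  }
    CD   { total += 400;  }
    C    { total += 100;  }
    XC   { total += 90;   }
    L    { total += 50;   }
    XL   { total += 40;   }
    X    { total += 10;   }
    IX   { total += 9;    }
    V    { total += 5;    }
    IV   { total += 4;    }
    I    { total += 1;    }
    \n   { printf("%d\n", total); total = 0; }
    .    { /* ignore anything else */ }

Link with -ll so the library supplies main() and yywrap(); the program then prints the decimal value of each line of numerals.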
Now that we’ve seen Lex’s basic operation in the previous section, let’s
consider a useful example: syntax highlighting. Detecting keywords in syntax is
a problem that lex can handle by itself, without help from yacc.
Because lex and yacc are so old (predating
C),
and used in so many projects, you can find grammars already written for most
languages. For instance, we’ll take quut’s C
specification for lex, and modify
it to do syntax highlighting.
This relatively short program accurately handles the full complexity of the
language. It’s easiest to understand by reading in full. See the inline
comments for new and subtle details.
One of the biggest areas of improvement between classic lex/yacc and flex/bison
is the ability of the latter to generate code that’s easier to embed into a
larger application. Lex and yacc are designed to create standalone programs,
with user-defined code blocks stuck inside. When classic lex and yacc work
together, they use a bunch of global variables.
Flex and Bison, on the other hand, can generate thread-safe functions with
uniquely prefixed names that can be safely linked into larger programs. To
demonstrate, we’ll do another scanner (with Flex this time).
The following Rube Goldberg contraption uses Flex to split words on whitespace
and call a user-supplied callback for each word. There’s certainly an easier
non-Flex way to do this task, but this example illustrates how to encapsulate
Flex code into a reusable library.
Fixing compiler warnings
If you compile with more warnings enabled, the compiler will complain about
“unused parameter yyscanner” in several functions. Flex’s reentrant mode adds
this parameter to the functions, and the default implementation doesn’t use it.
To fix the warnings, we can provide our own definitions. First, disable some of
Flex’s auto-generated functions. Add these options to your lex input file:
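The exact option list is not preserved in this copy. Since the default definitions that trigger the warning are the reentrant allocator hooks, one plausible set of options to disable them is:

    %option noyyalloc noyyrealloc noyyfree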
Provide the implementations yourself down by words_callback, and add the macro
in a code block up by the %options.
A calling program can use our library without seeing any Flex internals.
To build the program, just link it with words.o.
Now that we’ve seen how to identify tokens with a scanner, let’s learn how a
parser can act on the tokens using recursive rules. Yacc/byacc/bison are LALR
(look-ahead LR) parsers, and Bison supports more powerful modes if
desired.
LR parsers build bottom-up toward a goal, shifting tokens onto a stack and
combining (“reducing”) them according to rules. It’s helpful to get a mental
model for this process, so let’s jump into a simple example and simulate what
yacc does.
Here’s a yacc grammar with a single rule to build a result called foo. We
specify that foo is comprised of lex tokens A, B, and C.
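The grammar file itself was dropped from this copy; the single rule described reads like this:

    %token A B C

    %%

    foo : A B C ;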
Yacc transforms the grammar into a state machine which looks like this:
The first rule in the file (and the only rule in our case) becomes yacc’s
goal. Yacc begins in state 0, with the implicit rule 0: $accept: • foo $end.
The parse will be accepted if we can produce a foo followed immediately by
the end of input. The bullet point indicates our progress reading the input. In
state 0 it’s at the beginning, meaning we haven’t read anything yet.
Initially there’s no lookahead token, so yacc calls yylex() to get one. If
lex produces an A, we follow the state transition to state 1. Because the arrow
is a solid line, not dashed, yacc “shifts” the token to its token stack. It also
pushes state 1 onto a state stack, which now holds states 0 and 1.
State 1 is trying to satisfy the rule which it calls rule 1, namely 1 foo: A • B C. The bullet point after the A indicates we’ve seen the A already. Don’t
confuse the state numbers and rule numbers – yacc numbers them independently.
Yacc continues processing input, shifting tokens and moving to states 3 and 5
if lex produces the expected tokens. If, at any point, lex produces a token not
matching any transitions in the current state, then yacc reports a syntax error
and terminates. (There’s a way to do error recovery, but that’s another topic.)
State 5 has seen all necessary tokens for rule 1: 1 foo: A B C •. Yacc
continues to the diamond marked “R1,” which is a reduction action. Yacc
“reduces” rule 1, popping the A, B, C terminal tokens off the stack and pushing
a single non-terminal foo token. When it pops the three tokens, it pops the
same number of states (states 5, 3, and 1). Popping three states lands us back
in state 0.
State 0 has a dashed line going to state 2 that matches the foo token that was
just reduced. The dashed line means “goto” rather than “shift,” because rule 0
doesn’t have to shift anything onto the token stack. The previous reduction
already took care of that.
Finally, state 2 asks lex for another token, and if lex reports EOF, that
matches $end and sends us to state 4, which ties a ribbon on it with the
Acc(ept) action.
From what we’ve seen so far, each state may seem to be merely tracking progress
through a single rule. However, states actually track all legal ways forward
from tokens previously consumed. A single state can track multiple candidate
rules. For instance:
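The grammar is missing here; reconstructed from the rule numbers discussed below, it looks roughly like this:

    %token A B C

    %%

    foo : x | y ;
    x   : A B ;
    y   : A C ;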
For this grammar, yacc produces the following state machine:
In state 1 we’ve seen token A, and so rules 3 and 4 are both in the running to
reduce an x or y. On a B or C token, the possibilities narrow to a single rule
(in state 5 or 6).
Also notice that our rule foo : x | y doesn’t occur verbatim in any states.
Yacc separates it into 1 foo: x and 2 foo: y. Thus, the numbered rules
don’t always match the rules in the grammar one-to-one.
Yacc can also use peek ahead by one token to choose which rule to reduce,
without shifting the “lookahead” token. In the following grammar, rules x and y
match the same tokens. However, the foo rule can say to choose x when followed
by a B, or y when followed by a C:
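A sketch of a grammar with that shape, matching the description above:

    %token A B C

    %%

    foo : x B | y C ;
    x   : A ;
    y   : A ;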
Note multiple reductions coming out of state 1 in the generated state machine:
The presence of a bracketed token ([C]) exiting state 1 indicates that the
state uses lookahead. If the state sees token C, it reduces rule 4. Otherwise
it reduces rule 3. Lookahead tokens remain to be read when following a
dashed-line (goto) action, such as from state 0 to state 4.
While yacc is a powerful tool to transform a grammar into a state machine, it
may not operate the way you intend on ambiguous grammars. These are grammars
with a state that could proceed in more than one way with the same input.
As grammars get complicated, it’s quite possible to create ambiguities. Let’s
look at small examples that make it easier to see the mechanics of the
conflict. That way, when it happens in a real grammar, we’ll have a better
feeling for it.
In the following example, the input A B matches both x and y B. There’s
no reason for yacc to choose one construction over the other when reducing to
foo. So why does this matter, you ask? Don’t we get to foo either way? Yes,
but real parsers will have different user code assigned to run per rule, and it
matters which code block gets executed.
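The example grammar was lost in this copy; reconstructed from the description (the input A B can complete x, or complete y and then continue with B), it looks like:

    %token A B

    %%

    foo : x | y B ;
    x   : A B ;
    y   : A ;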
The state machine shows ambiguity at state 1:
At state 1, when the next token is B, the state could shift the token and
enter state 5 (attempting to reduce x). It could also reduce y and leave B as
lookahead. This is called a shift/reduce conflict. Yacc’s policy in such a
conflict is to favor a shift over a reduce.
Alternately, we can construct a grammar with a state that has more than one
eligible reduction for the same input. The purest toy example would be foo : A | A, generating:
In a reduce/reduce conflict, yacc chooses to reduce the conflicting rule
presented earlier in the grammar.
While matching tokens, parsers typically build a user-defined value in memory
to represent features of the input. Once the parse reaches the goal state and
succeeds, then the user code will act on the memory value (or pass it along to
a calling program).
Yacc stores the semantic values from parsed tokens in variables ($1,
$2, …) accessible to code blocks, and it provides a variable ($$) for
assigning the semantic result of the current code block.
Let’s see it in action. We won’t do a hackneyed calculator, but let’s still
make a parser that operates on integers. Integer values allow us to avoid
thinking about memory management.
We’ll revisit the roman numeral example, and this time let lex match the digits
while yacc combines them into a final result. It’s actually more cumbersome
than our earlier way, but illustrates how to work with semantic parse values.
There are some comments in the example below about portability between yacc
variants. The three most prominent variants, in order of increasing features,
are: the POSIX
interface
matching roughly the AT&T yacc functionality,
byacc (Berkeley Yacc), and
GNU Bison.
The corresponding lexer matches individual numerals, and returns them with
their semantic values.
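The two input files were lost in this copy. The following pair is a reconstruction of the idea rather than the article's exact code: the token name NUMERAL and the running prev variable used to handle subtractive forms are assumptions.

    /* roman.y */
    %{
    #include <stdio.h>
    int yylex(void);
    void yyerror(const char *s) { fprintf(stderr, "%s\n", s); }
    static int prev = 0;
    %}

    %token NUMERAL

    %%

    input    : /* nothing */
             | input line
             ;
    line     : numerals '\n'    { printf("%d\n", $1); prev = 0; }
             ;
    numerals : NUMERAL          { $$ = $1; prev = $1; }
             | numerals NUMERAL { $$ = ($2 > prev) ? $1 + $2 - 2*prev : $1 + $2;
                                  prev = $2; }
             ;

    %%

    int main(void) { return yyparse(); }

    /* roman.l */
    %{
    #include "roman.tab.h"
    %}

    %%

    M  { yylval = 1000; return NUMERAL; }
    D  { yylval = 500;  return NUMERAL; }
    C  { yylval = 100;  return NUMERAL; }
    L  { yylval = 50;   return NUMERAL; }
    X  { yylval = 10;   return NUMERAL; }
    V  { yylval = 5;    return NUMERAL; }
    I  { yylval = 1;    return NUMERAL; }
    \n { return '\n'; }
    .  { /* ignore anything else */ }

Semantic values default to int, so no %union or memory management is needed; yywrap() comes from the lex library (-ll).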
To review: lex generates a yylex() function, and yacc generates yyparse() that
calls yylex() repeatedly to get new token identifiers. Lex actions copy
semantic values to yylval which Yacc copies into $-variables accessible in
parser rule actions.
Building an executable roman from the input files roman.y and roman.l
requires explanation. With appropriate command line flags, yacc will create the
files roman.tab.c and roman.tab.h from roman.y. Lex will create
roman.lex.c from roman.l, using token identifiers in roman.tab.h.
In short, here are the build dependencies for each file:
And here’s how to express it all in a Makefile.
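The Makefile listing is gone from this copy; a sketch that encodes the dependencies above (exact flags may differ from the original) is:

    # Recipe lines must be indented with a tab character (shown here as spaces).
    YACC = yacc
    LEX  = lex

    roman: roman.tab.o roman.lex.o
        $(CC) -o $@ roman.tab.o roman.lex.o -ll

    roman.tab.c roman.tab.h: roman.y
        $(YACC) -d -b roman roman.y

    roman.lex.c: roman.l roman.tab.h
        $(LEX) -t roman.l > roman.lex.c

    roman.lex.o: roman.lex.c
    roman.tab.o: roman.tab.c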
And now, the moment of truth:
In this example we’ll parse LISP
S-expressions, limited to string and
integer atoms. There’s more going on in this one, such as memory management,
different semantic types per token, and packaging the lexer and parser together
into a single thread-safe library. This example requires Bison.
The parser does the bulk of the work. We just need to pair it with a scanner
that reads atoms and parens.
Finally, here’s how to call the parser from a regular program.
To build it, use the Makefile pattern from roman to create analogous
lisp.lex.o and lisp.tab.o. This example requires Flex and Bison, so set
LEX=flex and YACC=bison at the top of the Makefile to override whatever
system defaults are used for these programs. Finally, compile driver_lisp.c
and link with those object files.
Here’s the program in action:
Internet Request For Comment (RFC) documents describe the syntax of many
protocols and data formats. They often include complete Augmented Backus-Naur
Form (ABNF)
grammars, which we can convert into robust yacc parsers.
Let’s examine RFC 4180, which
describes the comma-separated value (CSV) format. It’s pretty simple, but has
problematic edge cases: commas in quoted values, quoted quotes, raw newlines in
quoted values, and blank-as-a-value.
Here’s the full grammar from the RFC. Notice how alternatives are specified
with “/” rather than “|”, and how ABNF has the constructions
*(zero-or-more-things) and [optional-thing]:
The grammar makes no distinction between lexing and parsing, although the
uppercase identifiers hint at lexer tokens. While it may be tempting to
translate to yacc top-down, starting at the file level, I’ve found the most
productive way is to start with lexing.
We can combine most of the grammar into two lex rules to match fields:
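The listing is missing in this copy; the two field rules might look like the sketch below. The header name csv.tab.h and the token names FIELD and CRLF are assumptions, and copying the field text into a semantic value is omitted.

    %{
    #include "csv.tab.h"
    %}

    %%

    \"([^"]|\"\")*\"  { /* escaped field: quotes, commas, newlines allowed inside */
                        return FIELD; }
    [^,\r\n"]+        { /* non-escaped field */
                        return FIELD; }
    \r?\n             { return CRLF; }
    ,                 { return ','; }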
With FIELD out of the way, here’s what’s left to translate:
Let’s also drop the designation of the first row as the “header.” The
application can choose to treat the first ordinary row as a header if desired.
This simplifies the grammar to:
At this point it’s easy to convert to yacc.
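A sketch of the resulting yacc rules, including the empty field.opt production discussed next (the article's real grammar may differ in details):

    %token FIELD CRLF

    %%

    file      : records
              ;
    records   : record
              | records CRLF record
              ;
    record    : field.opt
              | record ',' field.opt
              ;
    field.opt : /* empty */
              | FIELD
              ;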
Matching blank fields is tricky. There are three fields in a,,b, no way
around it. That means we have to identify some value (either a non-terminal
symbol, or a terminal token) out of thin air between characters of input. As
a corollary, given that we have to honor blank fields as existing, we’re forced
to interpret e.g. a 0-byte file as one record with a single blank field.
We handled the situation with an empty yacc rule in field.opt. Empty rules
allow the parser to reduce when it sees unexpected lookahead tokens. Perhaps
it’s also possible to use fancy tricks in the lexer (like trailing context and
start conditions) to also match empty non-escaped fields. However, I think an
empty parser rule is more elegant.
Three notes about empty rules:
Now that we’ve seen the structure of the grammar, let’s fill in the skeleton to
process the CSV content. From now on, examples in this article will use my
libderp library for basic data
structures like maps and vectors.
The complete parser below combines values from the lexer into full records,
using the vector type. It then prints each record and frees it.
Build it (using the steps shown for earlier examples). You’ll also need to link
with libderp version 0.1.0, which you
can see how to do in the project readme.
Next, verify with test cases:
IRCv3 extends the Internet Relay Chat (IRC) protocol
with useful features. Its core syntactical change to support new features is
message tagging. We’ll write
a parser to extract information from RFC
1459 messages,
including IRCv3 tags.
The BNF from this standard is written in a slightly different dialect than that
of the CSV RFC.
As before, it’s helpful to start from the bottom up, applying the power of lex
regexes. However, we run into the problem that most of the tokens match almost
anything. The same string could conceivably be a host, nick, user, key_name,
and command all at once. Lex would match the string with whichever rule comes
first in the grammar.
Yacc can’t easily pass lex any clues about what tokens it expects, given what
tokens have come before. Lex is on its own. For this reason, the designers of
lex gave it a way to keep a memory. Rules can be tagged with a start
condition, saying they are eligible only in certain states. Rule actions can
then enter new states prior to returning.
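As a generic illustration of the mechanism (not the article's actual IRC grammar), a rule can be restricted to an exclusive start condition declared with %x, and actions can switch states with BEGIN:

    %option noyywrap
    %x TAGS

    %%

    ^"@"           { BEGIN(TAGS); /* message opens with an IRCv3 tag section */ }
    <TAGS>[^ ;=]+  { printf("tag part: %s\n", yytext); }
    <TAGS>[;=]     { /* separators between tags and their values */ }
    <TAGS>" "      { BEGIN(INITIAL); /* tag section over, back to normal rules */ }
    [^ \r\n]+      { printf("word: %s\n", yytext); }
    [ \r\n]+       { /* skip whitespace */ }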
We’ll revisit the lexer to fill in details for assigning yylval. First, let’s
see the parser and its data types.
Returning to the lexer, here is the code with all the details filled in
to construct yylval for the tokens.
Build irc.y and irc.l according to our typical pattern (and link with libderp).
Here’s an example of the IRCv3 parser in action: |
3 | Dark Patterns in UI Copy: 2021 Report | “We value your privacy.” For every website that says it, why do I get the feeling that the complete opposite is true? Literally popping onto screens a couple years ago, this is one of the most blatant examples of slippery marketing copy on user interfaces, where positive spin is used to mask negative outcomes for users. Design researcher Caroline Sinders calls this use of language a ‘content strategy dark pattern’:
A ‘content strategy’ dark pattern is where language and design create misinterpretations
Large steps towards making dark patterns illegal were made in 2019*, but throughout 2020 and even today, the crackdown appears to have led the relationship between language and design to become even more sneaky and strategic. Shady euphemisms are employed to trick users into handing over personal data, and manipulative descriptions continue to deceive people.
From Facebook to Medium, this article calls out increasingly deceitful web copy techniques used today by the masters of the dark arts of design writing.
The purpose of this article is to demonstrate dark patterns in UI copy through examples. Dark patterns often exist to support business goals, and so it may not be a designer or writer’s choice to implement them.
There are different labels and categories for dark patterns depending on where you look. Harry Brignull’s darkpatterns.org is the most well-known, with labels such as ‘Confirmshaming’ and ‘Privacy Zuckering’. Then, there’s a more recent study of “Asshole design”, which highlights 6 strategies of deceitful design, including Nickel-and-Diming, Entrapping or Misrepresenting. Here’s a table from the study:
Whilst the above offer great definitions, to address dark patterns specific to copy on elements in user interfaces, I’ll use the following terms in this article:
This is where anything that can be perceived negatively is rephrased, or rebranded to sound positive. Using a positive tone is widely practiced to make websites easier to understand, and is therefore common practice across many websites and apps. For instance, here’s the ‘Writing Positively’ guide from Mailchimp’s content style guide:
As shown above, the practice of positive writing includes turning negative language into positive language, much like a euphemism, where mild or indirect expressions are used in place of a blunter truth. The goal is to make us feel things – and it happens fairly often on the web:
Medium
‘Paywall’ → ‘Partner Programme’
Amazon
‘Cancellation’ → ‘End Benefits’
Facebook
‘Tracking’ → ‘Personalised Ads’
‘Paywall’ → ‘Partner Programme’
For example, blogging platform Medium often persuades writers to publish stories behind a paywall. However, due to negative associations with the word ‘paywall’, they often replace it with more positive terms and phrases such as ‘PartnerProgram’ or ‘Meter my story’, as highlighted in the following screenshot:
If we take a closer look at the word choice underlined in pink, a few shady issues arise:
Reading between the lines, the option can be translated as follows:
The same checkbox, worded more negatively (or realistically)
Is it a dark pattern?
Because of the difference in the description of the checkbox, and the outcome of checking it, the example above could easily be classified as a dark pattern. Important information (the writer’s story won’t be available to non-paying readers) is obscured through word choices that persuade users down a predefined path that favours shareholder interests.
We define asshole designer properties as instances where designers explicitly assert control over the user’s experience, implementing obnoxious, coercive, or deceitful behaviors that are almost solely in the shareholder’s best interest.
1.2 Cancel? That’s not a word 🤑 (Amazon)
Cancel → End Benefits
Amazon’s cancellation page is another example of positive wording that can mislead. Similar to Medium’s ‘PartnerProgram’ branding for Paywall, Amazon use ‘Prime Benefits’, or ‘Benefits’ as a veil for cancellations. So instead of a negative ‘Cancel Membership’ page, you get the more positive ‘End Benefits’ page. In the following screenshot, every trace of the word ‘Cancel’ is repackaged as ‘End Benefits’:
Amazon avoid using the word ‘cancel’ on their cancellation page.
Again, even though it’s more positive, it becomes less clear – possibly by design. Founder of Creative Good, Mark Hurst, also conveys this in his post “Why I’m losing faith in UX“:
Increasingly, I think UX doesn’t live up to its original meaning of “user experience.” Instead, much of the discipline today, as it’s practiced in Big Tech firms, is better described by a new name.
UX is now “user exploitation.”
In his article, Hurst explains how Amazon have fallen from leaders in User Experience design, to one of the biggest pushers of “user exploitation” design.
This has not gone unnoticed by others either, and Amazon face a few legal challenges:
We are filing a complaint against Amazon for breaching European law. Using manipulative techniques, or #darkpatterns, is endemic online & need to end.
See report & complaint here: https://t.co/wUSrMoUH6E [Thread]
— Finn Lützow-Holm Myrstad (@finnmyrstad) January 14, 2021
For more on this, the BBC has a follow up article on Amazon’s cancellation tricks.
1.3 You Call it Tracking, We Call it ‘Personalisation’ (Facebook)
Tracking → Personalised Service
This third example of shady euphemisms is common across social media websites, like the friendly Facebook. As one of the most aggressive miners of personal data, Facebook intentionally package all of this intrusive behaviour as a feature that benefits users.
For example, cross-website tracking and hidden pixels becomes your ‘Ad preferences’. In fact, there’s no clear mention of tracking or mining – it’s all euphemisms and positive spin that masks what’s happening in order to make users feel in control:
Facebook dodge the word ‘tracking’, in favour of softer but ambiguous alternatives.
The above screenshots are taken from Facebook’s privacy checkup, and although nothing is factually untrue, it raises the question – is information being withheld?
Dark Patterns vs Positive Writing
Despite the good intentions of writing positively for users, as shown in the above examples, there’s also a very dark side to it, and it’s often intentional. It’s acceptable for websites to be persuasive, or have a bias towards their own goals, but when positive wording and euphemisms mask information or mislead users, as Arielle Pardes shows in Wired, it becomes more unethical:
By definition, design encourages someone to use a product in a particular way, which isn’t inherently bad. The difference, Yocco says, is “if you’re designing to trick people, you’re an asshole”.
For instance, the upcoming privacy changes on Apple’s iOS expose that Facebook avoid the word “Tracking” at all costs, despite it being the most accurate term to use for explaining their behaviour:
How Facebook tracking requests may look on iPhones with Apple’s updated privacy changes.
In contrast to their original requests for tracking consent, on Apple devices, Facebook will be forced to use 2 simpler options:
❌ Ask App Not to Track
✅ Allow Tracking
Here’s a screenshot from Apple’s User Privacy and Data Use page on asking permission to track:
Apple on Asking Permission to Track
Despite this being clearer, Facebook doesn’t appear too happy with it as it’s likely to negatively affect their profit margins, so there’s currently a battle going on over privacy in big tech. If you want to dive deeper into this saga, check out Sara Fischer‘s media trends newsletters at Axios.
In a similar vein to these shady euphemisms, let’s move on to see how header text can be used to distract and deflect:
Not too dissimilar from shady euphemisms, in this case, large header text is used to distract or mislead users when they’re confronted with choices, such as agreeing to personalised ads (again), or upgrading an account. The header often says something reassuring to deflect from what’s going on in the UI.
For example, this personalised ads request from Twitter starts by saying “You’re in control”, but the entire modal encourages you to accept tracking for personalised ads (the primary blue button):
It seems to cause mistrust more than anything:
Lmao @ this popup from this morning. It says “You’re in control”, has some words, then gives me two options:
1. Turn on personalized ads
2. Keep less relevant ads
So, the question is, “Ads or ads?” Thanks for giving me control, Twitter. 😂 pic.twitter.com/WCeDPjOqym
— Ashlee Boyer 🦻 (@AshleeMBoyer) July 15, 2020
Here’s another example where the titles aren’t untrue, but they elaborately feign good intention to gain an end. Instagram and Reddit both want us to download their more addictive mobile apps, but disguise it as a benefit to users:
Since the mobile websites are already well-made, as Android Police highlight, these popups could indeed be a ploy to suck you into using their app every day. The popups themselves actually make the websites harder to use:
Reddit’s mobile website is well-made and fast, but for ages, the platform has been pushing anyone who visited that site to the official app instead, complete with an obnoxious banner that shows up every time you open a Reddit link in your phone’s browser.
It’s probably because downloading the app massively benefits the company, as social media apps are often much more addictive than their web counterparts through the use of notifications that aim to grab attention (as opposed to functional notifications). Avery Hartmans at Business Insider explains this in her article on the sneaky ways apps like Instagram, Facebook, Tinder lure you in:
App makers are using deliberate techniques to attract your attention. They aren’t simply relying on you to come to them whenever you have downtime…Instagram sends dozens of push notifications each week and uses “Stories” to attract you.
Conversely, there do exist legitimate cases where a mobile app would be better than the web version, such as that of a writing app, or even when there aren’t resources for a solid web experience. But for these giants, it’s really not the case:
“Don’t you want to view it in the official Reddit app for the best experience? No, no I don’t. And the official reddit app is not the best experience.”
Here’s another header, this time from Medium.com, that also deflects from the real purpose:
The intention here may be good, but due to the tone used, it can also come across riddling, or even arrogant. Famous web developer, Wes Bos, highlights that artful headings such as this often lead to more confusion than benefit (and that may be the intention):
Here, Wes Bos is concerned that users now have to log in to read any Medium article, when in fact they don’t. Because the messaging is consistently indirect, nobody is ever too sure what it really means. To quote Tyson Fury, they’re “going around the bushes and putting their arse in the hedge”.
Here, a user is presented with one or more options, but sentences explaining those options are structured to deflate the negative ones. Continuing with Medium, there’s a lot of self-serving syntax in this simple dropdown:
Similar to the first Medium example in section 1, Medium is again off-handedly convincing users to put articles behind their paywall. Instead of asking directly though, here’s how they structure the request to disguise intentions:
Peter‘s tweet here sums it up perfectly:
I mean, do I want editors to be able to recommend this? Sure! Do I also want it not to go behind a paywall? Absolutely! How can these two things be mutually exclusive?
— Peter Bihr (@peterbihr) December 5, 2019
Usually, a writer would prioritise the most important and impactful information first, so that users are well informed of all implications of their choices. With this in mind, it becomes even more worrying that the checkbox above is opted-in by default. It’s clear that this is intentional when you look at Medium’s help page:
Even the help page is confusing through the use of a double negative statement. They explain that to remove your article from the paywall, you have to uncheck the box.
According to Plainlanguage.gov, double negative statements like this should be avoided for clear, and understandable language:
Let’s not forget Facebook are experts at this one too. The following example originated in 2018, but things haven’t changed much since, as you’ll see below. Here, Facebook frames face recognition as a security feature, shifting ‘benefits’ to the top.
In doing so, they hide what we all know is equally, or even more true – they want to extract more data from you. Jennifer’s commentary sums it up:
Sure #Facebook, I’ll take a milisecond to consider whether you want me to enable #facialrecognition for my own protection or your #data #tracking business model. #Disingenuous pricks! pic.twitter.com/s7nngaHVSq
— Jennifer Baker (@BrusselsGeek) April 20, 2018
For more on how social media sites like Facebook continue to use tricks like this, Wired has a great article on it.
Similar, but more subtle than Brignull’s confirmshaming (where users are guilted into opting into something), here, button text is crafted to persuade you down a preferred path. For example, Amazon try to get people to reconsider canceling subscriptions by adding ‘and End Benefits’ to the cancellation button:
And they also make the opposing option positive: “Keep My Membership and My Benefits”.
Confirmshaming is the act of guilting the user into opting in to something. The option to decline is worded in such a way as to shame the user into compliance.
This page could actually be a lot simpler:
Going back to Hurst’s article on the downfall of UX, he suggests something along the same lines:
What should be a single page with a “Cancel my subscription” link is now a six-page process filled with “dark patterns” – deceptive design tricks known to mislead users – and unnecessary distractions.
This twitter thread from digital policy maker, Finn, delves a bit deeper into the deflective wording used in these buttons:
9. After having clicked «cancel my benefits», we get to the next screen. Here we are told that we can save money by switching to annual payments. Even though we are in the middle of ending the subscription, Amazon wants us to extend it by a year instead. pic.twitter.com/9tPeoaLFpE
— Finn Lützow-Holm Myrstad (@finnmyrstad) January 14, 2021
The button text on the Twitter modal used in a previous example works in a similar way. Instead of an option to decline ads, you’re given the option to ‘Keep less relevant ads’:
As illustrated above, it looks as if simple options have been reworked to portray a friendlier, but more manipulative message. However, at least the description itself is transparent and human. Instead of framing ads into a user benefit (like the Facebook example in section 1), they explain that ads are used to keep their service free ✅.
Banking on research showing that nobody on the internet reads anything, large walls of text are a great way to get users to agree to whatever you need. Here, what might be more fitting on a terms and conditions page is squished into a modal, often with a single choice – to accept. Take WhatsApp’s most recent confusing update for a quick example of this:
As well as the large amount of text, there are 5 links out to pages with even more information to read before agreeing. As tech journalist, Jennifer Baker says, “Who other than a tech journalist has time for reading all that?”
I’ve now delved into the new #Facebook T&Cs… WHO (other than a tech journalist) has time for this??? Obviously what they’re counting on. It’ll take an article rather than a tweet to do this justice. #dataprotection #ePrivacy Grrrr!
— Jennifer Baker (@BrusselsGeek) April 23, 2018
According to The Verge, the WhatsApp example above actually lead to mass confusion. And this isn’t the first time Facebook’s terms and conditions have caused such chaos, showing their preference towards money over user privacy mightn’t have changed much since back in 2012:
In that instance, Facebook was attempting to “borrow” your photos and sell them to third party companies. And like the recent WhatsApp example, they were forced to reconsider.
As surprising as it may sound, people DO pay attention to these “boring” legal agreements, and when they see something that is unclear or confusing, they speak up.
If you’re interested in how privacy policies themselves are perfectly crafted to “to tell you things without actually telling you things”, Shoshana Wodinsky‘s article in Gizmodo is a must read: What Facebook’s Privacy Policies Don’t Tell You. Check out her Twitter for more comprehensive research into privacy issues:
ive spent two years researching the minutiae of whatsapp’s privacy policies / combing through every page its business-facing code / getting into shouting matches w random engineers over this shit
this is the most comprehensive explanation you’ll read 💃 https://t.co/bq08JKTk1V
— shoshana wodinsky (@swodinsky) January 15, 2021
It might be argued that some of the above examples aren’t dark patterns, but just badly written copy or thoughtless errors. Although when you see it coming from an actual writing platform, or a company with smart people working for them like Amazon and Facebook, it becomes hard to believe.
This isn’t an accident. Instead, and this is the point of Decade 3, there’s a highly-trained, highly-paid UX organization at Amazon that is actively working to deceive, exploit, and harm their users.
Tech founder Paul Azorín furthers this when writing that such companies are known to prioritise money over what’s right:
Large tech companies such as Facebook, Google, and Amazon are known for making unethical decisions. Tech companies should focus on what’s right instead of simply what makes money.
From shareholders pressure, to greed, there are many forces pushing companies towards the use of dark patterns. Of the examples above, the transition from ethical to deceiving is most noticeable with Medium, where the burden of $135 million funding turned them from a pleasant to use writing platform, into one riddled with those confusing messages.
Another example where ethics are disregarded for money was when analytics app, Baremetrics was sold by its creator and founder, Josh Pigford, to a venture capital firm. Straight away, the corporate acquirers implemented bizarre messaging and popups to prevent customers from cancelling subscriptions:
Before you use, or subscribe to @Baremetrics, please make sure you read this. This happened today. I went to their app to try & unsubscribe because we feel that Stripe’s own dashboard is good enough for us after they added reporting specifically.
Few clicks around, I saw this: pic.twitter.com/havlVm4X4j
— Tarek Khalil (@cmdkhalilov) December 22, 2020
Now that money seems to be the priority, you have to have a call with ‘non-salesperson’ Brian before cancelling your subscription.
Even though all of this can be frustrating for customers, at the end of the day, these sneaky tricks are business tactics to support the goal of generating profit. Author of How Design Makes the World, Scott Berkun, points this out when suggesting customers of companies like Amazon are happier with cheaper prices and next day delivery than good UX. Similar to a Medium writer who benefits from excellent distribution – they can put up with the downsides and dark patterns because the service is so good.
You can have a great user experience in one sense and be exploited, or exploit others, at the same time.
Despite them being annoying, dark patterns still exist because they’re getting results. The question remains though, at what point are these patterns illegal and unethical? When do they start to ruin a product?
If you disagree with shady practices and dark patterns, here’s a few different ways to push back against sites that use them:
As an ethical designer, or anyone producing UI copy, as Andrea Drugay states, you can try and write your way out of using dark patterns:
The point isn’t that only someone with the title “UX Writer” can write their way out of dark patterns. The point is that anyone who is writing product copy should be able to write their way out of dark patterns.
In her article, ‘The role of UX writing in design ethics’, Andrea suggests the following prompts you can use to push back:
I don’t feel comfortable writing language that misleads users. I suggest we use ___ instead.
This flow seems misleading. How might we get the information we’re looking for without implying the feature already exists?
UX writing best practices typically have the button call-to-action match the verb in the header. This modal would be more clear if the implied action in the title matched the CTA. How about we rephrase it like this: ___?
Read her post for more of these.
As Trine Falbe, author of the book White Hat UX, states, the best way to understand why things are done unethically is to simply ask why:
Ask why something is being done unethically; ask why you are told to make a black hat feature; question the current state of things.
Find out more about ethical design in her post on Smashing, Ethical Design: The Practical Getting-Started Guide.
Alternatively, you can always find an ethical company to work for instead. Ethical freelance collectives like Thea might be a good place to look too:
As a web user, you can opt for ethical alternatives. Instead of Facebook, use Space Hey 👾. Instead of Google Analytics, try Simple Analytics. The sites have curated alternatives for both of these:
For more privacy-focused alternatives, check out this article from mindful UX studio, This Too Shall Grow.
As shown in this article, Dark Patterns haven’t gone away – they’re now subtler and sneakier, and are likely to be just as effective (or they wouldn’t exist). Writing an article like this can help call them out and get people talking about them. Alternatively, just add a comment, or tweet to DarkPatterns.org to get dark practices listed in the hall of shame.
Thanks for reading, I’ll be coming back to edit it (missed a few image captions, alt text, and links).
Also, thanks to Ann and Sophie for the feedback on this article.
*Edited 17th Feb 2021: a bill was proposed to make dark patterns illegal, which may not have been actioned yet. |
1 | Generics in Go: Viva La Revolution | Marko Milojevic
Published in Level Up Coding
11 min read · Nov 30, 2021
One feature to change them all.
How often do we experience radical changes in the programming language of our choice? Some languages introduce changes more often, but some are more traditional than Wimbledon.
One such language is Go. Sometimes, to me, it looked too rigid. “This is not the Go way!” is the sentence I dream of the most. Most of the new Go releases were improvements in the same chosen direction.
In the beginning, I wouldn't say I liked such a path. When there is no excitement, working with something begins to be boring. So much that sometimes I would rather watch Keeping Up with the Kardashians.
(I am joking. One of the reasons I do not have a TV is to avoid any possibility to pollute my beautiful eyes with such TV shows.)
And then… it happened. The Go team announced the news that Generics in Go is becoming our reality. It is not just whispering and infinite elaboration on whether we should do it and how.
Brace yourselves, revolution is coming.
Generics enable us to parameterize types when we define interfaces, functions, structs.
Generics are far from a new concept. They have been used from the first version of Ada, through templates in C++, to the modern implementations in Java and C#.
To avoid any complex definition, let us check a real example. Generics give us the opportunity, instead of writing many Max or Min functions like this:
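The code blocks in this section were lost in this copy; a sketch of the repetitive, non-generic versions being referred to:

    func MaxInt(a, b int) int {
        if a > b {
            return a
        }
        return b
    }

    func MaxFloat64(a, b float64) float64 {
        if a > b {
            return a
        }
        return b
    }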
to declare just one method, like this:
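And a sketch of the single generic replacement. The import path is an assumption: the article expects a standard-library constraints package, while in released Go it lives at golang.org/x/exp/constraints.

    import "golang.org/x/exp/constraints"

    func Max[T constraints.Ordered](a, b T) T {
        if a > b {
            return a
        }
        return b
    }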
Wait, what just happened? Well, instead of defining a method for each type in Go, we used generics — we used a generic type, the parameter T, as an argument for the method. With this minor tweak, we support all orderable types.
The parameter T stands for any type that fulfills the Ordered constraint (later, we will touch on a topic of constraints). So, initially, we need to define what kind of type that T is.
Next, we define where we want to use that parametrized type. Here, we determined that both input arguments and the output are of the type T. If we execute the method by defining T as an integer, then everything here is an integer:
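For example, a call like the following, assuming the Max sketch above:

    // T is resolved to int, so both arguments and the result are ints.
    fmt.Println(Max[int](10, 3)) // 10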
And it does not stop there. We can provide as many parameterized types as we want. And we can assign them to the different input and output arguments, whatever pleases us:
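A sketch with three parameters and invented names:

    // Repack is a hypothetical example; R, S, and T can each be any type.
    func Repack[R, S, T any](r R, s S, t T) (S, T, R) {
        return s, t, r
    }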
Here we have three parameters, R, S, and T. As we can see from the constraint any (which behaves like interface{}) those types can be, well, anything.
So, until now, we should be clear about what generics are and how we use them in Go. Let us concentrate on more exciting consequences.
When this article was published, generics were not part of any stable Go version. Therefore, some adaptations are necessary so we can test them locally.
To enable the usage of generics, I have used Goland from Jetbrains. I found a helpful article on their website for setting up an environment to run code in Goland.
The only difference from that article is that I used the Go source code (https://go.googlesource.com/go) with master branch instead of the one in the article.
On the master branch, we can enjoy in the new package from the standard Go library, Constraints.
Generics in Go is not the same as reflection.
Before jumping to some complex examples, it would be essential to check the benchmark score for generics. Naturally, we do not expect performance as poor as reflection’s; if generics were that slow, we would not need them.
Of course, generics are not anyhow close to reflection. And it was never an intention for it to be. If anything, generics is there to be an alternative for generating the code, at least in some use cases.
So, it means that our expectation is that code based on generics has the same benchmark result as the code executed "classically". So, let us check a basic case:
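The listing is missing in this copy; here is a sketch that matches the description below (the constraints import path is an assumption, as noted earlier):

    package main

    import (
        "fmt"

        "golang.org/x/exp/constraints"
    )

    // Number covers every integer and floating-point type.
    type Number interface {
        constraints.Integer | constraints.Float
    }

    // Transform converts a slice with element type S into a slice with element type T.
    func Transform[S, T Number](in []S) []T {
        out := make([]T, 0, len(in))
        for _, v := range in {
            out = append(out, T(v))
        }
        return out
    }

    func main() {
        fmt.Println(Transform[int, float64]([]int{1, 2, 3})) // [1 2 3] as float64s
    }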
Here are the small methods for transforming one Number type to another. Number is our constraint built on the Integer and the Float constraints from the Go standard library (again, we will tackle this topic).
Number can be any numerical type in Go: any derivative of int, uint, float, and so on. The method Transform expects a slice whose element type is the first parameterized numerical type S, and transforms it into a slice whose element type is the second parameterized type T.
In short, if we want to transform a slice of ints into a slice of floats, we would call this method as we do in the main function.
The non-generic alternative for our function would be a method that expects a slice of ints and returns a slice of floats. So, that is what we will test in our benchmark:
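A sketch of the pair of benchmarks, assuming the Transform function above (the names are invented):

    // In a _test.go file alongside the Transform sketch above.
    import "testing"

    // transformIntToFloat64 is the hand-written, non-generic counterpart.
    func transformIntToFloat64(in []int) []float64 {
        out := make([]float64, 0, len(in))
        for _, v := range in {
            out = append(out, float64(v))
        }
        return out
    }

    func BenchmarkTransformGeneric(b *testing.B) {
        in := []int{1, 2, 3, 4, 5, 6, 7, 8}
        for i := 0; i < b.N; i++ {
            _ = Transform[int, float64](in)
        }
    }

    func BenchmarkTransformClassic(b *testing.B) {
        in := []int{1, 2, 3, 4, 5, 6, 7, 8}
        for i := 0; i < b.N; i++ {
            _ = transformIntToFloat64(in)
        }
    }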
No surprises. Execution time is practically the same for both methods, so using generics does not impact the performance of our application. But, are there some repercussions for structs?
Let us try that. Now, we will use structs and attach methods to them. The task will be the same — converting one slice into another:
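A sketch of the struct-based variant (the type name is invented):

    // Transformer carries the source and destination element types.
    type Transformer[S, T Number] struct{}

    // Transform converts a []S into a []T, like the free function above.
    func (Transformer[S, T]) Transform(in []S) []T {
        out := make([]T, 0, len(in))
        for _, v := range in {
            out = append(out, T(v))
        }
        return out
    }

    // usage: Transformer[int, float64]{}.Transform([]int{1, 2, 3})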
Again, no surprises. Using generics or the classic implementation makes no difference to the performance of the Go code. True, we did not test very complex cases, but if there were a significant difference, we would have already noticed it.
So, we are safe to go.
If we want to test more complex examples, simply adding a parameterized type and running the application is not enough.
If we make a simple example with some variables and no complex calculation, we do not need to add anything special:
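A sketch of that simple example:

```go
package main

import "fmt"

// Max does not compute anything yet; it just hands both inputs back.
func Max[T interface{}](a, b T) (T, T) {
	return a, b
}

func main() {
	fmt.Println(Max(1, 2))     // 1 2
	fmt.Println(Max("a", "b")) // a b
}
```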
Except that our Max method does not calculate the maximum of its inputs but simply returns them both, there is nothing strange in the example above. To achieve this, we use a parameterized type T, defined as interface{}.
In this example, we should not look at interface{} as a type but as a constraint. We use constraints to define rules for our parametrized types and give the Go compiler some background on what to expect.
To repeat: we do not use interface{} here as a type but as a constraint. We define rules for the parameterized type, and in this case, that type must support whatever interface{} does. So, practically, we could also use the any constraint here.
(To be honest, in all the examples I have preferred interface{} to any, as my GoLand IDE does not yet support the new predeclared identifiers (any, comparable), and I would otherwise get an explosion of error messages and broken autocomplete.)
At compile time, the compiler can take a constraint and use it to check whether the parameterized type supports the operators and methods we want to use in the code that follows.
As the compiler does this checking before runtime (and therefore we do not impact runtime, as we saw in the benchmark), it allows only the operators and functions defined for the particular constraint.
So, to see the importance of constraints, let us finish implementing the Max method and try to compare the variables a and b:
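A sketch of the attempt; this version does not compile:

```go
// T is constrained only by interface{}, so the compiler rejects the comparison.
func Max[T interface{}](a, b T) T {
	if a > b { // compile error: operator > not defined on T (exact wording varies by Go version)
		return a
	}
	return b
}
```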
When we try to build the application, we get an error: operator > is not defined on T. As we defined the type T as any, the final type can be, well, anything, so the compiler does not know what to do with the operator.
To solve this issue, we need to give the parameterized type T a constraint that allows such an operator. Thanks to the beautiful Go team, we already have the constraints package, which provides exactly the constraint we need.
The name of the constraint we want to use is Ordered, and after adaptation, our code works like a charm:
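A sketch of the working version, again importing the constraint from golang.org/x/exp/constraints:

```go
package main

import (
	"fmt"

	"golang.org/x/exp/constraints"
)

func Max[T constraints.Ordered](a, b T) T {
	if a > b {
		return a
	}
	return b
}

func main() {
	fmt.Println(Max(1, 2))              // T inferred as int
	fmt.Println(Max(1.5, 2.5))          // T inferred as float64
	fmt.Println(Max[int64](10, 20))     // T stated explicitly
	fmt.Println(Max[float32](1.1, 2.2)) // T stated explicitly
}
```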
By using the Ordered constraint, we got our result. The nice thing in this example is that we can see how the compiler interprets the final type T, depending on the values we pass to the method.
Without defining the actual type in square brackets, as in the first two cases, the compiler can infer the type from the arguments; in this case, int and float64.
On the other hand, if we want to use types other than the default ones, like int64 or float32, we should pass them explicitly in square brackets. Then we give the compiler exact information about what to do.
If we want, we can extend the functionality of the Max function to support searching for the maximum value inside a slice:
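One possible extension (the name MaxSlice is my own choice), continuing the file above:

```go
// MaxSlice returns the maximum element of a non-empty slice.
func MaxSlice[T constraints.Ordered](values []T) T {
	max := values[0]
	for _, v := range values[1:] {
		if v > max {
			max = v
		}
	}
	return max
}

// MaxSlice([]int{3, 1, 4, 1, 5})       == 5
// MaxSlice([]string{"go", "generics"}) == "go"
```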
In this example, we can see two interesting points:
We had a chance to see which constraint we need in order to compare values of some type. With the Ordered constraint, we can use the ordering operators defined on integers, floats, and strings.
If we want to use the == operator, we can use the new predeclared identifier comparable, a special constraint that supports only the == and != operators and nothing else:
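A sketch of both functions mentioned below; the exact constraint on Dummy is an assumption, and each T is scoped to its own declaration:

```go
package main

import "fmt"

func Equal[T comparable](a, b T) bool {
	return a == b
}

func Dummy[T comparable](value T) T {
	return value
}

func main() {
	fmt.Println(Equal(1, 1))     // true, T inferred as int
	fmt.Println(Equal("a", "b")) // false, T inferred as string
	fmt.Println(Dummy(42))       // 42
}
```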
In the example above, we can see how we should use the comparable constraint. Once again, the compiler can recognize actual types even without strictly defining them in squared brackets.
One point worth mentioning: in the example, we used the same letter, T, for the parameterized types in two different methods, Equal and Dummy.
Each T is defined only within its method's scope (or within a struct and its methods); outside that scope, it is not the same T. We can repeat the same letter in different methods, and the types will still be independent of each other.
We can define our own custom constraints, and that is pretty easy. A constraint is expressed as an interface type, so we simply define the interface we need:
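A sketch of such a constraint and a function that uses it (the English type is illustrative):

```go
package main

import "fmt"

// Greeter is an ordinary interface, used below as a constraint.
type Greeter interface {
	Greet() string
}

type English struct{}

func (English) Greet() string { return "hello" }

func Greetings[T Greeter](g T) {
	fmt.Println(g.Greet())
}

func main() {
	Greetings(English{})
}
```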
We defined an interface Greeter so that we can use it as a constraint in the Greetings method. Here we could directly use a variable of Greeter type instead of generics, but it is acceptable for demonstration.
Every type has an associated type set. The type set of an ordinary non-interface type T is simply the set {T} which contains just T itself. The type set of an interface type (in this section we only discuss ordinary interface types, without type lists) is the set of all types that declare all the methods of the interface.
The definition above comes from the type sets proposal. It is already implemented in the Go source code, so we can rely on it wherever we like.
This significant change brought new possibilities: our interface types can also embed primitive types, like int, float64, or byte, and not just other interfaces. This feature lets us define more flexible constraints.
Let us examine the following example:
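A sketch of the example discussed below; the exact signature of the Compare function is an assumption:

```go
package main

import "fmt"

// Comparable is a union of rune, float64, and every type whose underlying type is int.
type Comparable interface {
	rune | float64 | ~int
}

// customInt is a custom type whose underlying type is int.
type customInt int

func Compare[T Comparable](a, b T) bool {
	return a > b
}

func main() {
	fmt.Println(Compare(customInt(10), customInt(3))) // allowed thanks to ~int
	fmt.Println(Compare(1.5, 2.5))
}
```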
We defined our Comparable constraint. And that type looks a little bit strange, right?
The new approach with type sets in Go allows us to define an interface that is a union of types. To describe a union of two or more types, we place them inside the interface and put the | operator between them.
So, in our example, the Comparable interface is the union of types: rune, float64, and… I guess int? Yes, it is int indeed, but here defined as an approximation element.
As you can see in the type sets proposal, the type set of an approximation element ~T is the set of all types whose underlying type is T.
So, just because we use the ~int approximation element, we can pass variables of our customInt type to the Compare method. As you can see, we defined customInt as a custom type whose underlying type is int.
If we did not add the ~ operator, the compiler would complain and refuse to build the application.
So, this is starting to get serious.
We can go wherever we want.
Seriously, this feature revolutionizes the language. New generic code appears constantly, and it will probably have a significant impact on packages that rely on code generation, like Ent.
Starting with the standard library, I expect to see much existing code refactored in future versions to use generics. Generics might even lead to the development of an ORM like the ones we used to see in Doctrine, for example.
For example, let us consider a couple of models built with the Gorm package:
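A sketch of two such models; only the type names come from the text below, while the field lists are illustrative:

```go
package main

import "gorm.io/gorm"

type ProductGorm struct {
	gorm.Model
	Code  string
	Price uint
}

type UserGorm struct {
	gorm.Model
	Name  string
	Email string
}
```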
Imagine that we want to implement the Repository pattern in Go for both models (ProductGorm and UserGorm). With the current stable version of Go, we would have to settle for one of the usual workarounds, such as duplicating the repository code for each model.
Now, with generics, the horizon of opportunities shifts toward a more flexible approach, and we can do something like this:
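A sketch of such a generic repository; the method signatures are assumptions built on the gorm v2 API, and the context package is assumed to be imported:

```go
// Repository is a single implementation that works for any model type T.
type Repository[T any] struct {
	db *gorm.DB
}

// Create inserts a new record of type T.
func (r *Repository[T]) Create(ctx context.Context, value *T) error {
	return r.db.WithContext(ctx).Create(value).Error
}

// Get loads a record of type T by its primary key.
func (r *Repository[T]) Get(ctx context.Context, id uint) (*T, error) {
	var out T
	if err := r.db.WithContext(ctx).First(&out, id).Error; err != nil {
		return nil, err
	}
	return &out, nil
}
```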
So, we have our Repository struct with a parameterized type T that can be anything. Notice that we defined T only in the Repository type definition; the methods attached to the type simply reuse it.
Here we show only two methods, Create and Get, just for demonstration. To make our lives easier, let us also create two separate functions for initializing the different repositories:
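For example (the constructor names are my own):

```go
func NewProductRepository(db *gorm.DB) *Repository[ProductGorm] {
	return &Repository[ProductGorm]{db: db}
}

func NewUserRepository(db *gorm.DB) *Repository[UserGorm] {
	return &Repository[UserGorm]{db: db}
}
```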
These two functions return Repository instances with predefined types; they are shortcuts, essentially. Let us make a final test of our small application:
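A sketch of that final test; the SQLite driver is just a convenient choice for illustration (additional imports assumed: context, fmt, gorm.io/driver/sqlite):

```go
func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		panic(err)
	}
	if err := db.AutoMigrate(&ProductGorm{}, &UserGorm{}); err != nil {
		panic(err)
	}

	ctx := context.Background()
	products := NewProductRepository(db)
	users := NewUserRepository(db)

	_ = products.Create(ctx, &ProductGorm{Code: "A1", Price: 100})
	_ = users.Create(ctx, &UserGorm{Name: "Gopher", Email: "gopher@example.com"})

	product, _ := products.Get(ctx, 1)
	user, _ := users.Get(ctx, 1)
	fmt.Println(product, user)
}
```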
And yes, it works. One implementation for Repository, support for two models. Zero reflection, zero code generation. I thought I would never see something like this in Go.
I am so happy. I think I will cry.
There is no doubt that generics in Go are a colossal change, one that can quickly shift the way Go is used and trigger many refactors across the Go community in the near future.
Although I am playing with generics almost every day, trying to see what more we can expect, I cannot wait to see them in a stable Go version. Viva la Revolution! |
1 | Snyk Manager of customer education, Michele Wiedemer shares productivity hacks | August 9, 2021 In this edition of Calendar Heroes, we talk to Michele Wiedemer, Manager of Customer Education at Snyk, to learn about her style, tools, and methodologies for balancing priorities while developing the education training program for security champions and developers at the open source security platform. Follow Michele on LinkedIn and Snyk on Twitter at @snyksec.
Calendar Heroes are real stories from very busy professionals across all types of roles and industries to learn more about how they manage to make time where there is none. We’re highlighting these stories to help share tips and ideas for working effectively, improving your time management skills, and boosting your productivity.
If you know a Calendar Hero who has awesome productivity hacks that you’d like to recommend we interview or want to be interviewed yourself, let us know! You don’t have to be a Reclaim user to be featured as a Calendar Hero: these stories are about anyone with an interesting approach to managing a complex schedule.
I’m Michele, the new Manager of Customer Education at Snyk, and I’m building an education program for our customers that includes group training activities and self-paced learning paths. I’m a fully remote worker in a globally distributed company. Before that, I was a freelancer for the last 15 years, often working with small startup companies to create software user guides, online help, and video and eLearning content.
The work in my typical week falls into a few main categories:
While I was freelancing, I was often juggling several projects at once, as well as spending time volunteering with my children’s schools. I became a bit of a productivity enthusiast to stay on top of everything. What helped me most was reading David Allen’s book Getting Things Done. I’m not always great about all of the steps of the GTD methodology, but whenever I start feeling overwhelmed, it helps me get more grounded if I focus on the basic steps of the process.
Having a plan really helps with those million random things that don’t neatly fit into a certain project. What I love about the GTD method is that I can capture all of those things in one safe place and know that getting back to them will not significantly derail my other big goals.
One of the first things I did when I started the new job was to figure out which tools to incorporate. I’ve always been a fan of using the right tool for the job, and no one tool does every job well when it comes to productivity.
I’m using Asana to track cross-collaborative projects and associated tasks. I’m using Notion for both informal note-taking and as an internal information repository. I use Lattice to track my quarterly goals and KPIs, as well as agendas for one on one meetings.
I love how tools work together. For example, I can start an Asana task from a Slack message or store links for useful documents or articles I want to return to in Notion.
Because of all of the meetings, my calendar is really my source of truth for how I spend my time. We use Google Calendar, and internally, my teammates can find time on my calendar when scheduling a meeting. I’m also using Calendly to simplify scheduling external meetings.
Within my first couple of weeks at Snyk, someone recommended Reclaim.ai and it has been the best tool to keep my calendar as my main day-to-day grounding of what I should be doing. In addition to syncing with my Slack status, I use both the Habits and Tasks functionality.
There are personal Habits that I want to make sure to include in my week, like a daily walk and time for lunch. I love that I can give Reclaim some parameters about these Habits to make sure they show up on my calendar in the right way.
I had a lightbulb moment when one of my teammates talked about her “big rocks” to refer to focused project work. Each week, I decide which big rocks I want to schedule, estimate how much time I want to spend on them, and let Reclaim Tasks do the rest.
Once Reclaim does the scheduling, I sometimes tweak it a bit. Then when I find time slipping away on undefined Slack work, or feel that I’m going from meeting to meeting, I remind myself that I have some focused work time coming up later in the day or the next day.
Big thinking, small steps. Stay focused on the project in front of you, with just a hint of what’s coming next. |
1 | Roger Penrose and the Big Bang |
Penrose's idea of a conformal cyclic cosmology hypothesizes that our Universe arose from a pre-existing Universe that would leave imprints on our cosmos today. This is a fascinating and imaginative alternative to inflation, but the data doesn't support it, despite Penrose's dubious claims that it does.
Skydivephil / YouTube
One of the greatest scientific successes of the past century was the theory of the hot Big Bang: the idea that the Universe, as we observe it and exist within it today, emerged from a hotter, denser, more uniform past. Originally proposed as a serious alternative to some of the more mainstream explanations for the expanding Universe, it was shockingly confirmed in the mid-1960s with the discovery of the “primeval fireball” that remained from that early, hot-and-dense state: today known as the Cosmic Microwave Background.
For more than 50 years, the Big Bang has reigned supreme as the theory describing our cosmic origins, with an early, inflationary period preceding it and setting it up. Both cosmic inflation and the Big Bang have been continually challenged by astronomers and astrophysicists, but the alternatives have fallen away each time that new, critical observations have come in. Even 2020 Nobel Laureate Roger Penrose’s attempted alternative, Conformal Cyclic Cosmology, cannot match the inflationary Big Bang’s successes. Contrary to recent headlines and Penrose’s assertions, there is no evidence of “a Universe before the Big Bang.”
The quantum fluctuations inherent to space, stretched across the Universe during cosmic inflation, gave rise to the density fluctuations imprinted in the cosmic microwave background, which in turn gave rise to the stars, galaxies, and other large-scale structure in the Universe today. This is the best picture we have of how the entire Universe behaves, where inflation precedes and sets up the Big Bang.
E. SIEGEL, WITH IMAGES DERIVED FROM ESA/PLANCK AND THE DOE/NASA/ NSF INTERAGENCY TASK FORCE ON CMB RESEARCH
The Big Bang is commonly presented as though it were the beginning of everything: space, time, and the origin of matter and energy. From a certain archaic point of view, this makes sense. If the Universe we see is expanding and getting less dense today, then that means it was smaller and denser in the past. If radiation — things like photons — is present in that Universe, then the wavelength of that radiation will stretch as the Universe expands, meaning it cools as time goes on and was hotter in the past.
At some point, if you extrapolate back far enough, you’ll achieve densities, temperatures, and energies that are so great that you’ll create the conditions for a singularity. If your distance scales are too small, your timescales are too short, or your energy scales are too high, the laws of physics cease to make sense. If we run the clock backwards some 13.8 billion years towards the mythical “0” mark, those laws of physics break down at a time of ~10⁻⁴³ seconds: the Planck time.
A visual history of the expanding Universe includes the hot, dense state known as the Big Bang and the growth and formation of structure subsequently. The full suite of data, including the observations of the light elements and the cosmic microwave background, leaves only the Big Bang as a valid explanation for all we see. As the Universe expands, it also cools, enabling ions, neutral atoms, and eventually molecules, gas clouds, stars, and finally galaxies to form.
NASA / CXC / M. WEISS
If this were an accurate depiction of the Universe — that it began hot and dense and then expanded and cooled — we’d expect a large number of transitions to occur in our past history.
When the last of these stages, the formation of neutral atoms, occurs, the photons permeating the Universe, which had previously scattered off of the free electrons, simply travel in a straight line, lengthening in wavelength and diluting in number as the Universe expands.
In the hot, early Universe, prior to the formation of neutral atoms, photons scatter off of electrons (and to a lesser extent, protons) at a very high rate, transferring momentum when they do. After neutral atoms form, owing to the Universe cooling to below a certain, critical threshold, the photons simply travel in a straight line, affected only in wavelength by the expansion of space.
Amanda Yoho
Some ~55 years ago, this background of cosmic radiation was first detected, catapulting the Big Bang from one of a few viable options for our Universe’s origin to the only one consistent with the data. While most astronomers and astrophysicists immediately accepted the Big Bang, the strongest proponents of the leading alternative Steady-State theory — people like Fred Hoyle — came up with progressively more and more absurd contentions to defend their discredited idea in the face of overwhelming data.
But each idea failed spectacularly. It couldn’t have been tired starlight. Nor reflected light, nor dust that was heated up and radiating. Each and every explanation that was tried was refuted by the data: the spectrum of this cosmic afterglow was too perfect a blackbody, too equal in all directions, and too uncorrelated with the matter in the Universe to line up with these alternative explanations. While science moved on to the Big Bang becoming part of the consensus, i.e., a sensible starting point for future science, Hoyle and his ideological allies worked to hold back the progress of science by advocating for scientifically untenable alternatives.
The Sun's actual light (yellow curve, left) versus a perfect blackbody (in grey), showing that the Sun is more of a series of blackbodies due to the thickness of its photosphere; at right is the actual perfect blackbody of the CMB as measured by the COBE satellite. Note that the "error bars" on the right are an astounding 400 sigma. The agreement between theory and observation here is historic, and the peak of the observed spectrum determines the leftover temperature of the Cosmic Microwave Background: 2.73 K.
Wikimedia Commons user Sch (L); COBE/FIRAS, NASA / JPL-Caltech (R)
Ultimately, science moved on while the contrarians became more and more irrelevant, with their trivially incorrect work fading into obscurity and their research programme eventually ceasing upon their deaths.
In the meantime, from the 1960s up through the 2000s, the sciences of astronomy and astrophysics — and particularly the sub-field of cosmology, which focuses on the history, growth, evolution, and fate of the Universe — grew spectacularly.
And we were able to further verify additional predictions of the Big Bang, such as the predicted abundances of the light elements, the presence of a population of primordial neutrinos, and the discovery of density imperfections of exactly the necessary type to grow into the large-scale structure of the Universe we observe today.
The Universe doesn't just expand uniformly, but has tiny density imperfections within it, which enable us to form stars, galaxies, and clusters of galaxies as time goes on. Adding density inhomogeneities on top of a homogeneous background is the starting point for understanding what the Universe looks like today.
E.M. Huff, the SDSS-III team and the South Pole Telescope team; graphic by Zosia Rostomian
At the same time, there were observations that were no doubt true, but that the Big Bang had no predictive power to explain. The Universe allegedly reached these arbitrarily high temperatures and high energies at the earliest times, and yet there are no exotic leftover relics that we can see today: no magnetic monopoles, no particles from grand unification, no topological defects, etc. Theoretically, something else beyond what is known must be out there to explain the Universe we see, but if they ever existed, they’ve been hidden from us.
The Universe, in order to exist with the properties we see, must have been born with a very specific expansion rate: one that balanced the total energy density exactly, to more than 50 significant digits. The Big Bang has no explanation for why this should be the case.
And the only way different regions of space would have the same exact temperature is if they’re in thermal equilibrium: if they have time to interact and exchange energy. Yet the Universe is too big and has expanded in such a way that we have many causally disconnected regions. Even at the speed of light, those interactions couldn’t have taken place.
The leftover glow from the Big Bang, the CMB, isn't uniform, but has tiny imperfections and temperature fluctuations on the scale of a few hundred microkelvin. While this plays a big role at late times, after gravitational growth, it's important to remember that the early Universe, and the large-scale Universe today, is only non-uniform at a level that's less than 0.01%. Planck has detected and measured these fluctuations to better precision than ever before.
ESA/PLANCK COLLABORATION
This presents a tremendous challenge for cosmology, and for science in general. In science, when we see some phenomena that our theories cannot explain, we have two options: devise a theoretical mechanism that accounts for them while preserving every prior success and making new, testable predictions, or simply accept them as unexplained initial conditions.
Only the first approach has scientific value, and therefore that’s the one that must be tried, even if it fails to yield fruit. The most successful theoretical mechanism for extending the Big Bang has been cosmic inflation, which sets up a phase before the Big Bang where the Universe expanded in an exponential fashion: stretching it flat, giving it the same properties everywhere, matching the expansion rate with the energy density, eliminating any prior high-energy relics, and making the new prediction of quantum fluctuations — leading to a specific type of density and temperature fluctuations — superimposed atop an otherwise uniform Universe.
In the top panel, our modern Universe has the same properties (including temperature) everywhere because they originated from a region possessing the same properties. In the middle panel, the space that could have had any arbitrary curvature is inflated to the point where we cannot observe any curvature today, solving the flatness problem. And in the bottom panel, pre-existing high-energy relics are inflated away, providing a solution to the high-energy relic problem. This is how inflation solves the three great puzzles that the Big Bang cannot account for on its own.
E. SIEGEL / BEYOND THE GALAXY
Although inflation, like the Big Bang before it, had a large number of detractors, it succeeds where all the alternatives fail. It solves the “graceful exit” problem, where an exponentially expanding Universe can transition into a matter-and-radiation-filled Universe that expands in a way that matches our observations, meaning it can reproduce all the successes of the hot Big Bang. It imposes an energy cutoff, eliminating any ultra-high-energy relics. It creates a uniform Universe to an enormously high degree, where the expansion rate and the total energy density match perfectly.
And it makes novel predictions about the types of structure and the initial temperature and density fluctuations that should appear, predictions that have subsequently been borne out to be correct by observations. Inflation’s predictions were largely teased out in the 1980s, while the observational evidence that validated it has come in a trickling stream over the past ~30 years. Although alternatives abound, none are as successful as inflation.
While many independent Universes are predicted to be created in an inflating spacetime, inflation never ends everywhere at once, but rather only in distinct, independent areas separated by space that continues to inflate. This is where the scientific motivation for a Multiverse comes from, and why no two Universes will ever collide. There simply aren't enough Universes created by inflation to hold every possible quantum outcome owing to the interactions of particles within an individual Universe.
Karen46 / FreeImages
Unfortunately, Nobel Laureate Roger Penrose, although his work on General Relativity, black holes, and singularities in the 1960s and 1970s was absolutely Nobel-worthy, has spent a large amount of his efforts in recent years on a crusade to overthrow inflation: by promoting a vastly scientifically inferior alternative, his pet idea of a Conformal Cyclic Cosmology, or CCC.
The biggest predictive difference is that the CCC pretty much requires that an imprint of “the Universe before the Big Bang” show itself in both the Universe’s large-scale structure and in the cosmic microwave background: the Big Bang’s leftover glow. Contrariwise, inflation demands that anywhere where inflation ends and a hot Big Bang arises must be causally disconnected from, and cannot interact with, any prior, current, or future such region. Our Universe exists with properties that are independent of any other.
The observations — first from COBE and WMAP, and more recently, from Planck — definitively place enormously tight constraints (to the limits of the data that exists) on any such structures. There are no bruises on our Universe; no repeating patterns; no concentric circles of irregular fluctuations; no Hawking points. When one analyzes the data properly, it is overwhelmingly clear that inflation is consistent with the data, and the CCC is quite clearly not.
For approximately 10 years, Roger Penrose has been touting extremely dubious claims that the Universe displays evidence of a variety of features such as low-temperature-variance concentric circles, which arise from dynamics imprinted prior to the Big Bang. These features are not robust and are insufficient to provide support for Penrose's assertions.
V.G. Gurzadyan and R. Penrose, arXiv:1302.5162
Although, much like Hoyle, Penrose isn’t alone in his assertions, the data is overwhelmingly opposed to what he contends. The predictions that he’s made are refuted by the data, and his claims to see these effects are only reproducible if one analyzes the data in a scientifically unsound and illegitimate fashion. Hundreds of scientists have pointed this out to Penrose — repeatedly and consistently over a period of more than 10 years — who continues to ignore the field and plow ahead with his contentions.
Like many before him, he appears to have fallen so in love with his own ideas that he no longer looks to reality to responsibly test them. Yet these tests exist, the critical data is publicly available, and Penrose is not just wrong, it’s trivially easy to demonstrate that the features he claims should be present in the Universe do not exist. Hoyle may have been denied a Nobel Prize despite his worthy contributions to stellar nucleosynthesis because of his unscientific stances later in life; although Penrose now has a Nobel, he has succumbed to the same regrettable pitfall.
While we should laud the creativity of Penrose and celebrate his groundbreaking, Nobel-worthy work, we must guard ourselves against the urge to deify any great scientist, or the work they engage in that isn’t supported by the data. In the end, regardless of celebrity or fame, it’s up to the Universe itself to discern for us what’s real and what’s merely an unsubstantiated hypothesis, and for us to follow the Universe’s lead, regardless of where it takes us. |
1 | San Francisco’s Tilting Tower (2018) | Is it likely to fall over? In a word: NO.
Based 5% on insider information and 95% on the laws of physics, San Francisco’s 58-story Millennium Tower is in no danger of tipping over. In this age of over-stimulated media, the rabid coverage of this issue has sown doubt in the minds of ordinary citizens about the competence of those of us who develop, design, and build large things.
What actually did go wrong? The building has settled downwards, and is, in fact, tilting. Most reports say the building has settled about 17 inches, and is leaning 14 inches westward and 6 inches northward at its crown. Settlement is normal, (more about that later), but what about the tilt? Let’s do a little math: at the top, the horizontal displacement is 15.2 inches (hypotenuse of a 14:6 triangle), and the building is 645 feet tall, so the Millennium Tower is leaning 0.11 degrees to the west-northwest. How significant is this? At its most precarious, Pisa’s famous tower leaned about 5.5 degrees, but it has been stabilized in recent times to lean “only” 4.0 degrees. Right now, its apex is displaced about 13 feet. The Leaning Tower of Pisa is only 183 feet tall; if the Millennium Tower leaned 4.0 degrees, its top would be displaced 45 feet! London’s 315 foot Big Ben is also leaning, but merely 0.26 degrees, a little over twice the tilt of the Millennium Tower.
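Spelled out, the arithmetic with the figures quoted above runs roughly as follows:

```latex
\sqrt{14^2 + 6^2} \approx 15.2\ \text{in}, \qquad 645\ \text{ft} = 7740\ \text{in}
\theta = \arctan\!\left(\frac{15.2}{7740}\right) \approx 0.11^\circ
645\ \text{ft} \times \tan 4.0^\circ \approx 45\ \text{ft}
```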
That’s enough math, but it does show that San Francisco’s tower is almost not leaning at all, especially compared to Pisa’s tower (and you probably didn’t even know about Big Ben). Yet 60 Minutes showed some San Franciscans maintaining they can clearly see the tower tilting, almost ready to tip over, and advise others to stay away. Sadly, they have fallen prey to suggestion based on the inability to distinguish between a priori truth and the opinion of anyone blurting out a tasty sound-bite.
The story of the tower’s sinking was revealed by a suburban geologist retained by the homeowner’s association, a licensed practitioner, but one without any clear expertise in the design of tall building subsurface structures. He pointed out that the foundation piles do not go down to bedrock, but instead, rely on support from upper stratum sands and clay. Sounds scary, especially when parroted by lawyers, but friction piles embedded in strata above bedrock have been used successfully to support massive buildings in San Francisco for years.
Bedrock, in San Francisco anyway, is over-rated. The word itself suggests solid, reliable, and dependable attributes, and in many large cities, bedrock must be blasted out with dynamite. But the Franciscan Complex in the Bay Area can be removed with a pick-axe. The problem here is geologic complexity; our bedrock is technically called a mélange by geologists, because it is a jumble of metamorphic rocks. There are all kinds of layers of various densities below the surface; strata capable of bearing the weight of a building are sometimes near the surface and sometimes very deep. This geologic complexity is responsible for our stunning topography, and it calls for sophisticated engineering. Bedrock is stronger than sand/clay, and for a tall building in general, fewer piles are necessary when founded in bedrock rather than upper strata material. Economics usually governs the choice between fewer long piles compared to more short ones, but both systems have proved to be reliable.
Days after the Millennium Tower story broke, marketers of nearby buildings under construction were advertising the fact that their foundations were solidly supported on bedrock. This is hyperbole – a large structure can be just as stable with foundations resting on sand or clay.
There is no dispute, however, that Millennium Tower has sunk more than predicted. Most foundations settle during construction, as the site is loaded by the dead weight of the structure, and once occupied, a building may continue to move slightly – usually nobody notices. In this case, something unusual took place, resulting in more displacement than anticipated after the building was completed. The most plausible explanation is a change in the ground water level. Imagine that what lies below the surface in San Francisco is like a chunk of Swiss cheese, and the voids are filled with all kinds of stuff, including water. In fact, the cheese itself is more like a blend of hard and soft, and there are many fissures for water to migrate through underground. For the past decade, a building boom has been driving the economy of San Francisco, and many large, new projects have been built near the Millennium Tower. Excavations for these projects may have caused the ground water to shift, overstressing the clay layers below the tower’s piles, resulting in more settlement than anticipated. Most of the below-grade work near the Millennium Tower has now been completed, and credible experts have concluded that the building is safe. Given the relatively minimal extent of displacement, it is not going to fall over.
Not even in an earthquake.
In fact, most Bay Area residents were not fretting too much about THE BIG ONE until the New York Times ran a piece in April commemorating the 112th anniversary of our famous 1906 earthquake and fire. Following a template honed by Fox News, the authors cited various sources to question the wisdom of allowing San Francisco to build high-rise structures at all, let alone one like the Millennium Tower. San Francisco has been taking a big seismic gamble, they said. The most disturbing quote was from a Caltech professor who said of our buildings: “It’s kind of like getting onto a new airplane that’s only been designed on paper but nobody has ever flown in it.” What could this possibly mean? Since Gothic times, buildings have been “designed on paper” rather than by trial and error, and some of the most remarkable large-scale human creations owe their existence to mathematical abstractions that led to built reality. Designs are tested extensively using computer models, which is, by the way, how airplanes are designed these days, too.
Well, since the days of the Barbary Coast, those of us living on the fault lines have developed coping mechanisms for dealing with the possibility of a major earthquake in our lifetime. Some have left – the most famous being Enrico Caruso, who was here for the 1906 event and vowed never to come back (and didn’t). Right now, the betting line is that a major earthquake of magnitude Richter 6.7 or greater along one of our three active seismic faults has a 72% chance of happening in the next 30 years. In the meantime, most of us will continue to enjoy our extraordinarily beautiful urban backdrop, one that would not even exist without the shifting land whose behavior is impossible to harness. |
1 | What Is Programmable Money? | Alexander Lee 1
Introduction
Discussions of financial technology have recently started to include the idea of "programmable money," though the specific meaning of the term is often unclear. Different perspectives may presume a particular underlying technology or set of features to be a part of a programmable money system, and lack of agreement on these aspects may lead to confusion. To support clearer discussion of this concept in the central banking community and the financial industry more broadly, this research note offers an investigation into the nature of programmable money. This note focuses on the importance of a mechanism guaranteeing the inseparable functionality of the technical components of a programmable money system rather than prescribing the specific nature of those components. This "coherence guarantee" is crucial regardless of specific technical choices and admits a wide range of potential designs for programmable money. The guarantee also facilitates the view of programmable money as a concrete product, which may provide users with greater certainty about its nature and capabilities than alternative service-oriented models that can automate interaction with particular digital value records.
Many observers of financial technology have offered interpretations and discussion of potential use cases of programmable money. 2 While such references to programmable money typically describe it as being enabled by distributed ledger technology (DLT) or blockchain systems, this is not universally the case, and the term remains ill-defined. 3 Two natural components of the definition are a digital form of money and a mechanism for specifying the automated behavior of that money through a computer program (this mechanism is termed "programmability" in this note). However, it is not clear whether these components alone are sufficient for a definition, given that various combinations of similar technology for payments automation have existed for decades. It was only after the advent of public blockchain cryptocurrencies that the term "programmable money" became common parlance. 4 So what is it about these new systems that has prompted the recent spate of references to the term, and does the answer somehow imply that DLT has to be a part of any "programmable money" system?
One facet of successful public blockchain systems that may provide some clarity is how they closely link digital value and programmability in a single system that only functions properly when both are present. In traditional financial technology systems, digital money is typically defined by database entries. Any "programmability" offered for this money, whether internally to the entity maintaining the database or exposed to its customers via an application programming interface (API), involves another technology system built separately from that database and then connected in some fashion. While newer cryptocurrency systems also use a database (often in the form of a blockchain data structure), a key difference is that the records in such blockchains either directly incorporate some programmable script (as Bitcoin records do, for example), or sit alongside a general programming functionality within the system that allows for direct manipulation of those records (the model used by Ethereum, among others). 5 In both designs, the value represented in those systems and the programmability of that value are tightly integrated. There is no notion, for example, of "bitcoins" without an associated script governing their spending conditions, whereas a traditional ledger could certainly hold digital records of money without offering a programming interface to those records. 6
Figure 1. Simplified visualizations of a traditional database using an API for programmability, and a programmable blockchain
The necessary technical components and the notion of tight integration between those components suggests a definition of programmable money as a unified, coherent product that encapsulates both the storage of digital value and programmability of that value. This note uses the term coherence guarantee to refer to a mechanism guaranteeing that the technical components of the programmable money product are "inseparable" and that those components are consistently functional, such that the product is stable and coherent for users. While nominally "separable" technologies (including traditional technologies that predate DLT) may be used to implement the two core technical components depicted above, the coherence guarantee must ensure that they are not separable as far as the overall product is concerned. Without this guarantee, the individual components are nothing novel: Digital money as a product has existed in many forms for years, as has the ability to write computer software. 7 With the guarantee, programmable money may be seen as a new product category sitting alongside existing money products such as central bank deposits, demand deposits, nonbank money, and cash. The benefit of stability that this guaranteed inextricability provides users of the product may not be present in other systems relying on services that can be altered by their providers and which may be subject to outages. The idea of the guarantee itself is broadly agnostic to technical choices, allowing for a range of approaches to the design of the digital money and programmability components using a variety of technologies.
Technical designs for programmable money: possibilities and tradeoffs
There are many ways to approach the technical design of a programmable money system. Such designs may incorporate both traditional technologies and newer alternatives like DLT, with specific technical choices being driven by the goals of the system designers. While a technical design can address the necessary components of a programmable money system (digital money and programmability), a coherence guarantee cannot be provided by technology absent some broader reinforcing incentives (which might be economic, legal, or reputational). It is nevertheless important to understand the possible technical approaches for programmable money system designs, some of which are described below along with attendant tradeoffs.
Recordkeeping for digital money
The most basic building block of a programmable money system is a set of digital records representing transactable value. These records will almost certainly be kept in a persistent database of some variety. Many existing financial technology systems use traditional relational databases for this purpose, such as a bank keeping a digital record of customer accounts, while DLT systems typically use a blockchain-style database to store records of value. A notable difference between these technologies is that traditional databases are often general purpose: They may be used for financial applications as a way to store accounts, but they are designed to be flexible enough to be used to store virtually any type of digital record, and they do not have any "awareness" of the implications of the records they store.
DLT database designs, by contrast, are usually purpose-built for financial accounting of their integrated digital currencies and have intrinsic mechanisms for detecting invalid transactions such as doubly spent value or the spending of greater value than is available to a user. In addition to helping maintain correct value records, such mechanisms can complement programmability in those systems, performing fundamental accounting checks regardless of what type of programmatic instruction is issued. In the public blockchain context, these mechanisms help ensure the proper operation of the system and provide additional surety to users when combined with a mechanism such as proof-of-work, which is designed to make the (correctly recorded) records practically immutable. Deployed in a private context, DLT could potentially offer some of the same specialized recordkeeping benefits to the operator of the system (with the notable exception of immutability), if not necessarily to the system's users. 8
The format in which records of value are kept in a database may have implications for the design of the programmability component of a programmable money system. One recordkeeping possibility is the traditional account format, such as a bank or financial service provider might keep for a customer. The account format has the immediate advantage of being a familiar construct and is the de facto standard for traditional financial technology systems. Some DLT systems use the account format as well, enabling a single account to control a variety of digital assets and to issue programmatic instructions. 9 A potential drawback of the format is that it makes tracing the history of any particular unit of value more difficult, if this is a desired feature of the system.
The unspent transaction output (UTXO) format, first popularized by the Bitcoin system, more readily facilitates this type of historical transaction record. This alternative recordkeeping model records all discrete amounts of value (the "unspent outputs") in existence at the current time, and associates ownership of that value directly with the amounts. 10 A rough analogy from the traditional financial world might be banknotes in circulation: Owners are identified by current bearers, and individual ownership of any banknote may be reassigned by its bearer at any time. In systems like Bitcoin, ownership is asserted through satisfaction of a conditional script associated with each UTXO, typically requiring a valid digital signature from a specific cryptographic private key. Possession of the correct private key is thus tantamount to "possession" of the value, and for this reason bitcoin has sometimes been referred to as a "bearer instrument," though the analogy is somewhat imprecise. 11 Benefits of the UTXO model include conciseness of the records being kept (there is no such thing as an "inactive account" or "empty account" under the UTXO recordkeeping model) and the ability to associate specific (programmable) spending conditions with every discrete UTXO. Drawbacks of UTXO-based recordkeeping include the need to maintain external metadata in order to roll up "account balances" for individual users (if such balances are desired) and the potential abuse of the format to fragment value across many UTXOs, which will need to be tracked by the system. 12
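As a rough illustration of the contrast between the two recordkeeping formats, a minimal sketch follows; the field names are assumptions rather than any real system's schema:

```go
// Account model: value is a balance attached to an identity.
type Account struct {
	ID      string
	Balance uint64
}

// UTXO model: value is a discrete, spendable amount carrying its own spending condition.
type UTXO struct {
	TxID          string // transaction that created this output
	Index         uint32 // position of the output within that transaction
	Amount        uint64
	LockingScript []byte // script that must be satisfied to spend the amount
}
```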
Enabling programmability
In conjunction with technical decisions about how to store transaction and balance records for programmable money, a system designer must also consider how to enable the programming capability for that money. There are several potential methodologies to choose from, some of which are typically used in conjunction with DLT and others that are more often used for traditional financial technology systems. In some cases, these decisions involve mutually exclusive designs for enabling programmability, but there is usually broad flexibility in how programmability can be enabled.
Public blockchain systems have generally followed one of two strategies for enabling programmability, denoted here as the transaction scripting approach and the virtual machine approach. While these approaches are not mutually exclusive and do not dictate a particular choice of recordkeeping format, certain combinations are logically and technologically complementary. Under the transaction scripting approach, a (small) program is attached to every discrete amount of value tracked by the system, indicating how that amount may be spent. This approach is analogous to having a programmatic spending condition attached to every banknote or physical coin in existence, which can be re-programmed when spent. This "digital cash" model is the approach pioneered by Bitcoin, combining the UTXO recordkeeping format with the transaction scripting model, with one transaction script per UTXO dictating how it may be spent. When a UTXO is consumed as an input to a financial transaction, a new script can be associated with every output of that transaction. Users can control any number of UTXOs and can spend them independently by satisfying the associated transaction scripts.
Figure 2: A simplified schematic of the UTXO recordkeeping format combined with the transaction scripting approach to programmability
By contrast, the virtual machine approach embeds programming capability in the system itself in the form of virtual machine instructions that can recognize and manipulate the value stored in the system. This is the approach taken by Ethereum, which allows transactions to contain programmatic instructions and further allows programs written for its own virtual machine (the EVM) to be deployed to and stored in the blockchain. Programs deployed in this manner may persist indefinitely, and their functionality can be repeatedly used by future transactions. These programs are typically called "smart contracts," a term that predates public blockchains and need not imply any contractual obligation in a legal sense when used in the contemporary DLT context. 13 This approach is combined with the account recordkeeping format for recording value in Ethereum and several other DLT systems to allow for programmatic manipulation of account balances.
Figure 3: A simplified schematic of the account recordkeeping format combined with the virtual machine approach to programmability
These two designs present different tradeoffs. Potential benefits of the "digital cash" model using programmable UTXOs are the ability to specify spending constraints on any discrete amount of value and a greater facility to trace the provenance of any particular "virtual banknote." Benefits of the account-based model may include greater fungibility of value and efficiencies from reusing smart contract code for frequently desired program functionality. New technological designs may borrow elements of both models and offer a different range of benefits, and the interpretation of what is "beneficial" may depend on the objectives for the system outside of just the technology.
Financial software systems that do not use DLT, for example programmable payment services like PayPal, have typically enabled programmability through an API layer on top of some combination of underlying technology. Because the API is an abstraction of the system that it provides an interface for, DLT could in fact be used behind an API as well. 14 APIs are an example of a component that does not preclude another technology choice, and these interfaces can be layered on top of one another (and often are in practice). Beyond this flexibility, benefits of APIs include established design patterns and general familiarity among system developers, which may not be the case with newer technologies associated with DLT, such as UTXO-based recordkeeping or smart contracts. Drawbacks of APIs include the potential decoupling of the programmability from the digital representation of money if a discrete API gateway were the sole interface for programmatically interacting with that money (for example, a bank's API for programmability might go down while its databases for querying account balances on its customer portal remain up), and an increasing potential for system failure if numerous critical APIs are layered atop one another. 15
Figure 4: A simplified schematic of a traditional software system combining a database with an external API gateway
Another key consideration in the technical design of a programmable money system is how to prevent abuse of the programmability. Even if the mechanism correctly functions to only allow valid commands to be programmed, valid commands may be combined (maliciously or inadvertently) in ways that lead to undesirable consequences. One major form of potential abuse is a denial-of-service attack, in which large numbers of instructions to execute some programmatic function are issued repeatedly by the attacker, forcing the system to perform wasteful computation at the expense of providing service for legitimate requests. Public blockchain systems have implemented multiple deterrence mechanisms out of necessity, lacking any central operator capable of screening out an attacker's traffic: Fees are typically required for every transaction, introducing an economic disincentive for abusive transactions, and the extent of potential program execution is also limited in some fashion. 16 Private DLT implementations may also deliberately constrain program execution, and APIs for existing financial service are often rate-limited to prevent similar abuse; such techniques could be used in conjunction depending on the overall system design. Different designs may present perceived drawbacks to users of the programming facility (such as offering only a limited set of commands), and at a certain level of restriction, whether the system still constitutes "programmable money" may be called into question. 17 Regardless of exact design, however, some level of restriction is likely a necessity to allow the system to operate correctly for legitimate users.
While the technical design of a programmable money system is important, the technology alone cannot intrinsically provide a sufficient coherence guarantee. The system must also rely on a convincing set of external incentives to uphold the guarantee, lest the overall product not be seen as credible in the eyes of its users. 18 These incentives can be viewed as falling within at least three broad categories of economic incentives, legal incentives, and reputational incentives, though other frameworks are possible. These categories are not mutually exclusive, though certain would-be providers of programmable money may rely more heavily on some of them than on others. In addition, the technical design of a system may lend itself well to reliance on certain types of incentives.
Reinforcing guarantees through incentives
Economic incentives would feasibly apply to almost any offeror of a programmable money product. Contemporary providers of money have a clear economic incentive for simply continuing to offer certain products, such as seignorage from banknote issuance. If a programmable money provider offered a product that had a similar economic benefit to the issuer, this incentive would help guarantee that product's continued provision. While public cryptocurrencies may not have any specific offering entity, they may nevertheless rely on economic incentives to perpetuate the system, as exhibited for example by the subsidies and fees involved in the process of proof-of-work mining, which will be explored further below.
Legal incentives are also likely to apply in some form to nearly all offerors of programmable money, though the structure of these incentives may be highly variable across jurisdictions. From a user's perspective, the nature of what exactly is being legally guaranteed may also be salient. How to legally enforce that a programmable money system be provided on an ongoing basis is unclear, for example, though it is easier to envision a legal requirement that restitution be provided to users of such a system if it were taken offline (at least in cases where the provider is an identifiable entity subject to such requirements in its own jurisdiction, such as a corporation). While restitution in such a situation is certainly preferable to those users receiving nothing at all, it may be the case that some users would prefer to have the programmable money that they believed they had access to, rather than some alternative form of compensation. 19
Reputational incentives are perhaps most variable in importance depending on the offeror of programmable money, and in some cases may factor in only marginally. For example, the creator(s) of Bitcoin remain anonymous, and have different incentives to maintain a reputation than an established company or government looking to launch a programmable digital money might. For certain entities with exceptionally strong reputations, such as governments capable of issuing investment grade debt, or central banks capable of issuing widely accepted fiat currency, an incentive to uphold that strong reputation may in fact be the most important part of a guarantee.
Coherence guarantees in action: practical and theoretical examples
A practical example of a coherence guarantee is that offered by public blockchain systems for their native cryptocurrencies. These systems rely on economic incentives for network participants to guarantee their continued operation and the integrity of the associated transaction ledger. For systems that use proof-of-work consensus, such as Bitcoin and Ethereum, network participants performing transaction validation and construction of the shared ledger receive economic rewards denominated in native cryptocurrency for performing this work (in expectation), and must consume economic resources (in the form of electricity) in order to do so. 20 Because validators maintaining the canonical ledger history will reject any invalid transactions, including double-spend transactions, while any attacks on the network and ledger history which involve mining still incur the cost of performing that work, there is a significant economic cost for dishonest miners. The economic cost of proof-of-work is also where the "immutability" of such public ledgers derives from: In order to change the ledger history, an attacker would have to consume enough resources to produce proofs of work for an alternate section of ledger history that would become the longest version of the blockchain. 21
It is important to note that while these incentives in combination with the technical design of the underlying systems have worked to uphold the coherence guarantee of native cryptocurrencies like bitcoin and ether for several years, the same incentives do not apply to non-native "cryptocurrencies" like ERC-20 tokens built on top of the Ethereum platform. 22 While such tokens may rely on the same consensus mechanism as ether when it comes to recording transactions, they do not derive value from that mechanism in the same manner, because miners on the Ethereum network are paid for performing proof-of-work in ether, not in ERC-20 tokens. Thus, while such tokens may be programmable, they may not represent "money" in the system and may subsequently not fall under this note's definition of programmable money.
Another example of a practical system that could feasibly qualify as programmable money is a privately administered system such as a payment service provider or bank offering programmability for digital money through an API. Such systems can offer many of the same technical capabilities as other programmable money systems for their users, but the incentives underpinning the maintenance of those systems may be more complex and the coherence guarantee less robust, possibly to the point where the label may be inapplicable. For example, a large financial services company may have legal incentives to serve its customers well enough to avoid lawsuits, reputational incentives as a provider of a well-known programmable API for payments, and economic incentives in the form of taking fees for its services, and some customers may find these incentives credibly strong enough to qualify the company's services as programmable money. However, such services can still be discontinued at any time in whole or in part if incentives were to change. 23 The relative strength or weakness of a coherence guarantee for programmable money being offered by a private entity may thus be seen to rely on the particular mix of incentives that the offeror has for maintaining such a product, and these incentives may shift considerably over time.
A more theoretical example of a programmable money system is a hypothetical programmable digital currency offered by a public-sector entity already issuing other forms of money. Central banks of sufficient reputation and credible technical capability to offer a programmable money system may be able to offer a coherence guarantee primarily on the basis of their reputations, with an implied incentive for maintaining those reputations. 24 This approach is not open to most would-be providers of programmable money systems, but it may be the case that the same authority that allows certain governments to issue widely used fiat money (with its implicit guarantee of continued value in the future enabling its use today) will also allow those entities to issue programmable money "by fiat." Although no central bank has yet offered such a programmable digital currency product, it remains a possible development if general interest in central bank-issued digital currencies persists.
A final note on guarantees
The word "guarantee" in the preceding discussion should always be read with implied quotation marks around it. Public cryptocurrency systems, like Bitcoin, have worked in practice for years, but the behavior of validators in the long run when the incentive structure no longer includes subsidies in the form of block rewards remains an open question, and external factors leading to a loss of confidence in such systems could also affect the guarantee. 25 As illustrated in the practical example of privately offered programmable money, such systems may vanish if the offeror decides to discontinue them, technological capability regardless. Central banks could similarly issue and later elect to demonetize a digital currency. Attempting to assess the comparative strength of various coherence guarantees in these situations and others is beyond the scope of this note. Potentially more important than any such hypothetical framework, however, is the credibility of the guarantee for any given programmable money product in the eyes of its potential users. Observing user attitudes towards preexisting services like payment APIs and newer offerings such as open banking APIs and cryptocurrencies may offer insight into this evolving landscape in the coming years.
Conclusion
While "programmable money" as a term may have originated in the public blockchain community, as a concept it need not involve distributed ledger technology at all. There are multiple possible approaches to the design of a technological system offering a digital representation of money with an associated programming facility; what is more important than the specific technical approach is the guarantee that the system will in fact cohere into a unitary product offering, rather than a service offering associated with an otherwise independent non-programmable digital money. This coherence guarantee offers greater stability and consistency of experience to users over "programmability-as-a-service" models, and must be reinforced by suitable incentives, with different providers of programmable money relying on different forms of incentives. The ultimate assessment of the suitability of such incentives and thus the credibility of the coherence guarantee will come from users of these products, and their evaluations of different forms of programmable money may evolve over time as more practical examples are offered to the public.
Bechtel, Alexander, Jonas Gross, Philipp Sandner, and Victor von Wachter (2020, September 29). "Programmable Money and Programmable Payments," Medium. https://jonasgross.medium.com/programmable-money-and-programmable-payments-c0f06bbcd569
Carlsten, Miles, Harry Kalodner, S. Matthew Weinberg, and Arvind Narayanan (2016). "On the Instability of Bitcoin Without the Block Reward (PDF)," Princeton Economics Working Papers.
Harney, Alexandra, and Steve Stecklow (2017, November 16). "Special Report: Twice burned – How Mt. Gox's bitcoin customers could lose again," Reuters. https://www.reuters.com/article/us-bitcoin-gox-specialreport-idUSKBN1DG1UC
Koning, John Paul (2020, November 19). "Programmable money isn't new, we've had it for ages," Moneyness. http://jpkoning.blogspot.com/2020/11/programmable-money-isnt-new-weve-had-it.html
Lee, Alexander, Brendan Malone, and Paul Wong (2020). "Tokens and accounts in the context of digital currencies," FEDS Notes. Washington: Board of Governors of the Federal Reserve System, December 23, 2020.
Lewis, Antony (2020, April 26). "What Actually is Programmable Money?" Bits on Blocks
Nakamoto, Satoshi (2008). "Bitcoin: A Peer-to-Peer Electronic Cash System (PDF)."
Szabo, Nick (1994). "Smart Contracts."
Please cite this note as:
Lee, Alexander (2021). "What is programmable money?," FEDS Notes. Washington: Board of Governors of the Federal Reserve System, June 23, https://doi.org/10.17016/2380-7172.2915. |
3 | The Target Value for Bitcoin Is Not Some $50 or $100. It Is $100K to $1MM (2013) | Bitcoin’s value is at an all-time high again. Following the hype peak and crash in 2011, many seemed to have thought it was just another dotcom fluke. But bitcoin was much more than that, and it has returned with a vengeance – its market cap is now twice what it was in the 2011 peak, and it is nowhere near its potential, which is four orders of magnitude above today’s value.
In this, a lot of people are confused at the fact that bitcoin has climbed 200% since the start of this year alone, and wonder what to make of it. It is currently at $41.50 and climbing fast, and I see a lot of people just looking at the numbers and guessing from charts how things will pan out.
I am seeing guesses of $50, $100, $150, even $1,000. These numbers seem pulled out of thin air from just looking at the charts – nobody seems to have done due diligence from the other direction, from the most fundamental observation of all:
Bitcoin is a transactional currency. As such, it is competing for market share on the transactional currency market.
Talking about bitcoin value is not about happily watching numbers go up and down while having popcorn. This is about identifying a global market, looking at its size and estimating a target market share based on the strengths and weaknesses of the competing product or service under analysis.
When you know the size of the target market, and have an estimate for your projected market share, you can estimate the value of your product or service as a percentage of the value of the total market. I haven’t seen anybody do that for bitcoin.
The total size of the transactional currency market is hard to estimate, but has been pegged at about $60 trillion (the amount of money in circulation worldwide). Seeing how this number is roughly on par with the world’s GDP, it is a reasonable enough number to be in the right ballpark. Based on my four earlier estimates (one, two, three, four), I think it is reasonable that bitcoin captures a 1% to 10% market share of this market.
The low end of 1% would be if it captures international and internet trade. The 10% would be if bitcoin also manages to capture some brick-and-mortar retail trade, which we are already seeing strong signs that it might – operations provide a 3% to 5% extra profit margin on sales when you can cut out the credit card processors, so the incentive to switch is immense: those 3% to 5% cost savings translate to 50% to 100% increased profits, as margins are typically very slim in retail.
Furthermore, some people will undoubtedly invest in bitcoin and keep their portion of bitcoin away from the transactional pool, like all people tend to hoard money if they are able. This decreases the amount of bitcoin that must fulfill the market share, further driving up value for each individual bitcoin. As a rough estimate, let’s assume that only one in four bitcoins is actually used in transactions, and the rest are in some kind of savings or investment plans.
This leads us to a target market cap of 600 billion to 6 trillion USD, to be fulfilled by about 6 million bitcoin, which makes for easy calculations. That means that each bitcoin would be worth $100,000 at the low market cap and $1,000,000 at the high market cap.
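To make the arithmetic explicit, here is the same calculation as a small Python sketch. The numbers are the assumptions stated above (the $60 trillion market, the 1% to 10% share, and roughly 6 million transactional bitcoin), not new data.

# Assumptions taken from the article above.
market_size_usd = 60e12                  # total transactional currency market
share_low, share_high = 0.01, 0.10       # 1% to 10% projected market share
transactional_btc = 6_000_000            # roughly one in four coins assumed to circulate

cap_low = market_size_usd * share_low    # 600 billion USD target market cap
cap_high = market_size_usd * share_high  # 6 trillion USD target market cap

print(cap_low / transactional_btc)       # about 100,000 USD per bitcoin
print(cap_high / transactional_btc)      # about 1,000,000 USD per bitcoin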
In the light of this, present-day projections of $100 that present themselves as “daring and optimistic” actually come across as rather shortsighted and almost dealing with peanuts.
So is the projected market share realistic? Bitcoin certainly has hurdles to overcome – scalability and usability being two of them – but it has done remarkably well in maturing in the two years since I started looking at it. My prediction of a mainstream breakthrough around the year 2019 remains, and it still depends on getting mainstream usability; a target market cap may be reached about a decade after that happens, as a technology typically takes ten years from mainstream breakthrough to maturity.
Now, there are definitely uncertainties in this projection and its assumptions – but it does indicate what kind of ballpark we are talking about.
The author has a significant investment in bitcoin. Specifically, he went all-in two years ago after having run these very numbers. |
1 | Pgcli: CLI for Postgres with auto-completion and syntax highlighting | Ukrainian people are fighting for their country. A lot of civilians, women and children, are suffering. Hundreds were killed and injured, and thousands were displaced.
This is an image from my home town, Kharkiv. This place is right in the old city center.
Please consider donating or volunteering.
BlackLivesMatter
We value the diversity of our community. We strive to amplify the voices of the oppressed to eradicate racism and xenophobia. We ask our community to stand together in support of the Black community.
Pgcli is a command line interface for Postgres with auto-completion and syntax
highlighting.
Source: https://github.com/dbcli/pgcli
Quick Start
If you already know how to install python packages, then you can simply do:
$ pip install pgcli
If you're on macOS you can install it via Homebrew. Just be aware that this will
install postgresql if you don't already have it.
$ brew tap dbcli/tap
$ brew install pgcli
If you're having trouble with the quick start, check the install page for
detailed instructions.
Usage
$ pgcli --help
Usage: pgcli [OPTIONS] [DBNAME] [USERNAME]
Options:
  -h, --host TEXT         Host address of the postgres database.
  -p, --port INTEGER      Port number at which the postgres instance is listening.
  -U, --username TEXT     Username to connect to the postgres database.
  -u, --user TEXT         Username to connect to the postgres database.
  -W, --password          Force password prompt.
  -w, --no-password       Never prompt for password.
  --single-connection     Do not use a separate connection for completions.
  -v, --version           Version of pgcli.
  -d, --dbname TEXT       database name to connect to.
  --pgclirc PATH          Location of pgclirc file.
  -D, --dsn TEXT          Use DSN configured into the [alias_dsn] section of pgclirc file.
  --list-dsn              list of DSN configured into the [alias_dsn] section of pgclirc file.
  --row-limit INTEGER     Set threshold for row limit prompt. Use 0 to disable prompt.
  --less-chatty           Skip intro on startup and goodbye on exit.
  --prompt TEXT           Prompt format (Default: "\u@\h:\d> ").
  --prompt-dsn TEXT       Prompt format for connections using DSN aliases (Default: "\u@\h:\d> ").
  -l, --list              list available databases, then exit.
  --auto-vertical-output  Automatically switch to vertical output mode if the result is wider than the terminal width.
  --warn / --no-warn      Warn before running a destructive query.
  --help                  Show this message and exit.
pgcli also supports many of the same environment variables as psql for login options (e.g. PGHOST, PGPORT, PGUSER, PGPASSWORD, PGDATABASE).
Examples
$ pgcli local_database
$ pgcli postgres://amjith:passw0rd@example.com:5432/app_db
$ pgcli -h localhost -p 5432 -U amjith app_db
note: If your password contains @ or another special symbol, URL-encode it and quote the connection string.
$ pgcli 'postgresql://amjith:%40postgres@localhost:5432/app_db'
$ pgcli -h localhost -U amjith -W '@postgres' -d app_db
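If you are unsure how to encode a password that contains special characters, Python's standard library can produce the encoded form for you (a small sketch; the password shown is just an example):

from urllib.parse import quote

password = "@postgres"            # example password containing a special character
print(quote(password, safe=""))   # prints %40postgres, safe to embed in a connection URL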
Request
If you know how to build Debian or RPM packages for Python applications,
please get in touch. |
11 | Texas governor signs voting restrictions bill into law | Texas Republican Gov. Greg Abbott on Tuesday signed into law a bill that bans 24-hour and drive-thru voting, imposes new hurdles on mail-in ballots and empowers partisan poll watchers.
The restrictive voting measure adds Texas to the list of Republican-controlled states that have seized on former President Donald Trump’s lies about widespread voter fraud and clamped down on access to the ballot box this year. Already, Florida, Georgia and other states have enacted new voting laws.
The election overhaul in Texas comes as Republicans seek to hold onto power in a rapidly changing state where people of color make up virtually all of the population growth – and that growth is concentrated in large cities that tend to vote Democratic.
Democrats had fled the Capitol in Austin for weeks in an effort to stymie the bill – first preventing the passage of a similar measure at the end of the state’s regular legislative session in May, then forcing Abbott to call two special sessions to tackle what the governor called “election integrity.”
“It does make it easier than ever before for anybody to go cast a ballot. It does also, however, make sure it is harder than ever for people to cheat at the ballot box,” Abbott said at an event at which he signed the bill into law.
Opponents of Senate Bill 1 said its provisions will disproportionately restrict voting access for marginalized voters – particularly people of color and those with disabilities.
The new law takes aim at Harris County, the home of Houston, which last year offered drive-thru voting and 24-hour early voting. The bill restricts the hours counties can offer early voting to between 6 a.m. and 10 p.m. And it prohibits tactics like the ones Harris County used in 2020, when a garage at the Toyota Center – the home of the NBA’s Houston Rockets – was among the venues used as a place residents could vote from their vehicles.
The bill also blocks counties from sending unsolicited mail-in voting applications – even to those who are over age 65 and therefore qualify automatically to vote by mail. It also places new rules around mail-in voting, increases protections for partisan poll watchers and sets new limits on those who help voters, including those with disabilities, to cast their ballots.
Marc Elias, the leading Democratic elections lawyer, said immediately after Abbott signed the bill that he had filed a lawsuit on behalf of a group of Texas organizations challenging the law, arguing it violates the Voting Rights Act.
The lawsuit says the new Texas law contains provisions “intended to impose a particular burden on Texas’s Black and Latino communities – exacerbating the marginalization caused by more than a century of discriminatory practices.”
“SB 1 is an appalling, anti-democracy effort by Texas Republicans to construct barriers to voting for people they believe will not support them,” Eric Holder, the US attorney general under then-President Barack Obama, said in a statement. “What makes this bill and similar ones Republicans are pushing across the country even more un-American is that Republicans are using the ‘Big Lie’ about the 2020 election as a pretext to support them. The reality is that these bills have nothing to do with election integrity or security, but rather are discriminatory measures making it harder for all people to vote. These bills will have a disproportionate impact on communities of color.”
In part because Democratic inattention to state-level races has allowed Republicans to build majorities in legislative chambers in many otherwise competitive states, Democrats have proven unable to mount a forceful response this year to the tide of voting restrictions in GOP-led statehouses.
Instead, they are looking to Congress for action. In Texas, state House Democratic lawmakers left Austin and traveled to Washington to urge Congress to approve new voter protections.
However, measures to expand voting rights are stalled on Capitol Hill with Senate Democrats unable to break the 60-vote filibuster threshold – and unable to eliminate the filibuster, due to opposition within the party.
“I was born in segregation,” Democratic state Rep. Garnet Coleman said before Tuesday’s House vote to approve the bill. “We think we’ve made progress, and then all of a sudden there’s a new law that moves us back in time.”
Republicans in Texas, meanwhile, said their election reforms were aimed at making it easier to vote and harder to cheat – a refrain GOP lawmakers have used even though there is no evidence of widespread voter fraud in Texas.
“The common-sense reforms in this legislation strengthen our trust in the electoral process, from voter registration through the final tallying of ballots. Texans can cast their votes with confidence knowing that they’ll be counted and reported accurately,” said Republican state Sen. Bryan Hughes, the provision’s author. |
2 | When it comes to legal cannabis, Canada's doing it right | If U.S. policymakers want to better prioritize public health while legalizing cannabis, they should look to Canada's model for ideas, according to a new research report funded by Stanford University.
Keith Humphreys, PhD, the Esther Ting Memorial Professor of psychiatry and behavioral sciences at Stanford Medicine, commissioned the report in his role as co-director of the Stanford Network on Addiction Policy, in an effort to more closely examine Canada's efforts to control marketing of the drug, its use of government-run cannabis retail stores and efforts to reduce youth access to better prevent harmful drug use.
I talked with Humphreys about the results of the study and how it could be useful in forming future policy in the United States.
The lessons we learned from regulating the tobacco industry have been forgotten in the legalization of cannabis, which has prioritized profit over public health. The potency of the drug is not capped, advertising is ubiquitous and sometimes dangerously fraudulent -- for example claiming that cannabis cures COVID-19 or heroin addiction -- and taxes are too low to cover the health and social harms the drug produces.
U.S. policymakers often ask the Stanford Network on Addiction Policy for models of cannabis legalization that prioritize public health, and we realized there aren't good examples domestically.
Canada, with a stronger social welfare tradition and history of regulating corporations in the public's interest, was a better place to look than the United States.
This report uses public policy analysis and polling to illuminate Canada's unique approach to the legalization of cannabis. In contrast to the U.S., Canada sharply restricted cannabis marketing through legal measures and also limited branding on packages. Such restrictions are valuable because marketing addictive products often leads to overuse.
Canada also does something no state has done, namely allowing bans on risky, high potency products, such as certain edibles and concentrates. These products, which have levels of THC (the principal intoxicant in cannabis) many times that of traditional smoked flower cannabis, present risks that scientists only partially understand, but which are probably most significant for adolescents because of the neuroplasticity of their brains.
Canada allows individual provinces a high level of control over how cannabis is sold. For example, the government can operate retail outlets itself as is done in Quebec.
Multiple states in the U.S. have had government-run retail stores for alcohol for almost a century, so it's a completely viable model for cannabis. States that have government-run alcohol stores have less advertising, better limit youth access and have lower rates of binge drinking. We can reasonably expect parallel benefits for cannabis.
|
1 | Is There Any Good Software to Get Products Data from Online Stores? |
Why Is a Data Scraping Tool a Must-Have for a Company?
Many people around the world are unfamiliar with web mining tools, even though there are plenty of data mining tools on the internet for extracting specific data from websites. Every company deals with tons of data, and managing and converting that data into a useful form is hectic work. If accurate information is not available at the right time, a company wastes valuable time it could have spent making strategic decisions.
In these situations, data extraction and data scraping tools help you make strategic decisions at the right time and reach your goals in a competitive business. These web scraping tools offer many advantages: you can store customer information in a readable format, stay ahead of your competitors, and measure your company's performance. Having this information at your fingertips whenever you need it is critical for every company.
What Is The Best Web Scraping Tool To Extract Data From Different Online Stores?
To survive in this competitive business world, a web crawling tool is critical to a company's operations. Anysite Website Scraper is a powerful tool used to scrape data from many different online stores such as eBay, Walmart, Amazon, Daraz, Etsy, Alibaba, AliExpress, and more. With this web scraping software, you can filter data on the internet and retrieve the information you need. This web harvesting tool is used in various fields such as digital marketing, telemarketing, email marketing, freelancing, and research. You can build your own web scraper with Anysite Scraper, such as an Amazon Product Scraper, eBay Product Scraper, or Walmart Product Scraper. The software can also scrape data from many social media sites and business directories like Facebook, Twitter, Yellow Pages, White Pages, Yelp, and Manta. Below, I describe some data scraping tools built with Anysite Scraper that can scrape data from online stores like Amazon, eBay, and Walmart.
Amazon Product Data Extractor
Amazon Scraper is a piece of software that allows you to choose the exact data you want from the Amazon website and download it as an Excel or CSV file. Amazon Crawler extracts almost all the product information, including seller name, phone number, email address, product price, shipping, sales rank, ASIN, product description, product features, customer reviews, and much more. This Amazon Reviews Scraper allows you to extract the details of a seller and its products into an editable Excel or CSV format.
eBay Product Data Extractor
eBay Scraper helps a growing business reach new heights. eBay Product Scraper is a desktop application built with Anysite Web Scraper to extract data from multiple eBay stores for different business requirements. The eBay Image Extractor can extract the seller name, number, address, email, image link, product name, description, eBay listing link, and much more from eBay stores without coding.
Walmart Data Scraper
Scraping product data from Walmart promptly, easily, and efficiently requires the Walmart Extractor software, which can scrape product information without writing any code. You can conveniently scrape different fields from Walmart such as product title, product title link, price, images, reviews, seller's country, seller email address, seller phone number, and product shipping information.
Conclusion
You should not waste your time and money scraping product data from different online stores on your own. It's better to buy a reliable and trusted web scraper like Anysite Scraper, which provides scrapers for different websites at a low price and is time-saving, cost-effective, easy to use, and capable of scraping bulk data from e-commerce stores, social media sites, and business directories.
Contact Us:
Email: aslogger@ahmadsoftware.com
Phone Number: 03084471774
|
31 | Statement from JetBrains | The TeamCity Blog
Statement on the story from The New York Times regarding JetBrains and SolarWinds
Please make sure you also read the follow-up post from the 7th of January
The New York Times has published a story in which they point to JetBrains being under investigation and somehow related to the SolarWinds breach that recently took place.
First and foremost, JetBrains has not taken part or been involved in this attack in any way. SolarWinds is one of our customers and uses TeamCity, which is a Continuous Integration and Deployment System, used as part of building software. SolarWinds has not contacted us with any details regarding the breach and the only information we have is what has been made publicly available. It’s important to stress that TeamCity is a complex product that requires proper configuration. If TeamCity has somehow been used in this process, it could very well be due to misconfiguration, and not a specific vulnerability. Furthermore, security is our top concern and we notify and manage updates transparently in our Security Bulletin.
Secondly, we have not been contacted by any government or security agency regarding this matter, nor are we aware of being under any investigation. If such an investigation is undertaken, the authorities can count on our full cooperation.
We remain open to answering any and all questions regarding this matter and as always are committed to delivering the best possible products and services to our customers.
Thank you
Maxim Shafirov
Chief Executive Officer
|
1 | Beringia: a lesson on change from our ancestors | |
3 | Six announcements from Google I/O | Six announcements from Google I/O
By James Clayton and Cody Godwin, Technology reporters. Last year Google's developer conference was cancelled. Google I/O was back this year with several new announcements. Here are the things that caught our eye.
Google says it is creating a smartphone camera that more accurately depicts skin tone.
"For people of colour, photography has not always seen us as we want to be seen, even in some of our own Google products" said Google's Sameer Samat.
Google says it's making changes to auto white balance adjustments "to bring out natural brown tones".
The company says it is also working on new algorithms to better distinguish the subject from a background.
The camera will be ready on the new Google Pixel which will be out later this year.
However, Google has itself been subject to accusations of racism recently.
The company was described as "institutionally racist" by influential AI academic Timnit Gebru. She says she was dismissed by the company after a paper she had written about discrimination in AI in December. More details here.
Google Photos will use AI to curate collections to share with the user, similar to Memories on Apple and Facebook.
A big complaint from users of Apple's or Facebook's feature is when people are reminded of breakups or tough times in their lives.
Google says they have taken this into consideration and will allow users to control which photos they do or don't see by letting them remove specific images, people or time periods.
Another feature they're introducing is called "little patterns". It will use AI to scan pictures and create albums based on similarities within them - for example, the same jumper, the same couch or similar cups of coffee.
Google is also using machine learning to create what they're calling "cinematic moments" which will look at two or three pictures taken within moments of each other and creating a moving image, similar to Apple's Live Photos.
Google launched a new feature called "Smart Canvas" - a sort of umbrella platform interconnecting Google Docs, Meet, Sheets, Tasks, and Slides.
One feature announced is an "assisted writing" tool.
This will flag up "gendered terms" to the user and suggest alternatives.
For example, a user who wrote "Chairman" would be asked whether they wanted to use a gendered term - and would be offered alternatives such as "Chairperson" or "Chair".
Google said more details would be revealed "in the coming weeks".
Google described this update to its operating system, Android 12, as "the biggest design change in Android's history".
This update will include new privacy features that will allow users to have more control over how much information apps get from them.
A light on the top of the screen will indicate if an app is using the device's camera or microphone - a feature users of Apple's iOS are already familiar with.
Android 12 can create colour palettes that complement the background image. Users can also opt to share an approximate location instead of an exact one.
Google and Apple have recently faced criticism about their operating systems. The two companies run the operating systems of the vast majority of the world's phones outside of China.
Apple recently announced a new feature to its iOS update that allows users to stop third party apps from tracking them. Google didn't announce an equivalent tool.
Google announced it was working on a new video chat system - where the person you're chatting to appears in front of you in 3D.
The project is called "Starline" and aims to create ultra realistic projections for video chats.
The pandemic has created huge interest in how video conferencing can be improved.
Google's Project Starline uses 3D imaging to make someone on a video call appear as if they're really there. However, before you get too excited, this is still very much a work in progress.
The technology needs multiple cameras, and is unlikely to be compatible with consumer tech anytime soon.
Google AI to help identify skin conditions
Google also unveiled a tool that uses artificial intelligence to help spot skin, hair and nail conditions, based on images uploaded by patients.
The company said it should launch later this year, and that the app has been awarded a CE mark for use as a medical tool in Europe.
More details here.
|
1 | Hasura GraphQL Engine 2.0 | 23 February, 2021. On behalf of the entire team at Hasura, I'm delighted to announce the release of Hasura GraphQL Engine 2.0 (also just called Hasura 2.0). 🎉
Hasura 2.0 is our most significant release over the last few years, but is also entirely backwards compatible with Hasura 1.x. Over the last 85 releases of Hasura across 2 years, we've endeavoured to never break user facing APIs, and this major release is another checkpoint in ensuring that users get new features without having to refactor their work. In fact, Hasura 2.0 == Hasura 1.4, but it was such a big release that we decided to call our next 1.4 release, 2.0.
If you'd like to skip the spiel and just head on straight to trying it out, head to the docs to try 2.0 with Docker or try it for free on Hasura Cloud - our managed service offering of the Hasura GraphQL Engine. Watch this webinar to get an overview of all the new capabilities in our v2 release.
Hasura 2.0 captures fundamental changes inside the GraphQL engine that allows Hasura to serve the needs of a much larger class of mission critical applications.
There are 5 main areas of work and feature additions you'll see in Hasura 2.0:
We first built Hasura with exclusive support for Postgres. This was a deliberate choice so that we could focus our engineering and product efforts and given the rise of Postgres and the Postgres ecosystem exploding this choice paid off well.
With Hasura 2.0 we've set the foundation to unlock 2 key capabilities:
As we've worked with our users, especially enterprise & mission critical workloads, we've noticed increasingly that users often bring multiple new or existing databases, even if its just Postgres. These are 2 of the top use-cases we've seen:
Postgres was always just the beginning at Hasura. We've now refactored our query compilation pipeline so that it can be generalized to any database easily.
Existing or legacy mission critical data is often not on Postgres and the cost of ETL-ing that data to Postgres is infeasible and defeats the purpose of being able to query the source of truth directly - especially for fast-moving operational/realtime data.
And ofcourse, new data workloads also might go into other databases that aren't just Postgres!
We'll be rolling out SQL Server support on Hasura Cloud in phases over the coming days. You can also head to the docs to try Hasura with Docker and your own SQL Server database.
Hasura's philosophy of adding a new database is to keep the final GraphQL API similar, but without trying to normalize all database engines to the same common denominator interface. Hasura aims to bring out the best of a particular database engine while ensuring all the Hasura goodness of GraphQL, authorization and eventing Just Work.
We've started to put together a contribution guide on how you can extend a few typeclasses to add support for your favourite SQL system, and we'll also work closely with the maintainers to keep improving native support for that database and its goodness (its specific types, functions, operators, connection handling etc.).
Over the course of the year, we'll be bringing other types of databases systems into Hasura: document and key-value stores are big on our minds! Let us know what databases you'd like to see support for via our Github so that others can weigh in as well and help us prioritize.
You can also read our excellent article on how we built a GraphQL to SQL Compiler on Postgres, MS SQL, and MySQL.
Hasura supports "joining data" over heterogeneous sources so that API consumers see a semantically unified Graph of their API models. Currently, Hasura supports joining in certain specific directions.
Coming soon, in order of priority, are the following remote join capabilities:
GraphQL APIs have emerged as the best modern API for humans to explore, consume and integrate.
While GraphQL's goal at inception was to provide a specification for automating API composition on UIs and creating a type-safe client-server contract over JSON, its success goes far beyond that. Arguably, and much to the frustration of true GraphQL stans, GraphQL fragments and Relay style GraphQL are not as popular as they ought to be. This is because GraphQL solved a much bigger problem than just automating the tedious portions of API composition and integration on the UI.
GraphQL represents two big shifts in the way we think about APIs:
However, in practice, we've seen three big deterrents to using a GraphQL API in production - even where it's "technically" feasible:
This allows Hasura users to get all the benefits of GraphQL and REST - when the situation demands it. Hasura uses GraphQL as an intermediate representation and converts a RESTful request with its parameters to a parametrized GraphQL query and then executes it, with minimal overhead - query plans are cached after all!
Check out the REST to GraphQL interop specification on Github.
This might seem similar to the idea of "persisted queries" but the key difference here is supporting idiomatic REST. This implies supporting error codes, caching headers, REST style verbs and parameterization (path params, URL parameters, JSON body etc.) and even OpenAPI/Swagger documentation for created endpoints.
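As a rough sketch of the idea (the endpoint path, query, host and header below are illustrative assumptions, not a documented API), a parametrized GraphQL query exposed as a REST-style endpoint could be consumed like any other REST resource:

import requests

# Hypothetical RESTified endpoint wrapping a parametrized GraphQL query such as:
#   query authorById($id: Int!) { author(id: $id) { id name } }
# exposed, for illustration only, at /api/rest/author/:id on a Hasura instance.
resp = requests.get(
    "https://my-hasura.example.com/api/rest/author/42",
    headers={"x-hasura-admin-secret": "<admin secret>"},  # placeholder credential
)
resp.raise_for_status()  # idiomatic REST error handling via HTTP status codes
print(resp.json())       # JSON produced by executing the cached GraphQL query plan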
Hasura contains a sophisticated authorization engine - very similar to RLS style authorization in the database world.
This allows Hasura to extend its authorization policy abilities across data-sources without being restricted to the authorization features supported by particular databases only.
Over time we plan to extend this authorization engine to create a generic and unified RLS style authorization layer across all data sources. With support for authorization on remote GraphQL services we've only just begun!
As data explodes, the need for declarative and "human observable" authorization becomes even more important.
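To give a flavour of what such a declarative rule looks like (a hedged sketch; the field names below are illustrative rather than Hasura's exact permission syntax), a "users may only read their own rows" policy boils down to a boolean expression over row columns and session variables:

# Illustrative row-level permission, expressed as a plain Python dict:
# allow the "user" role to SELECT from "orders" only where the row's user_id
# matches the caller's identity taken from the request session.
select_permission = {
    "role": "user",
    "table": "orders",
    "columns": ["id", "total", "created_at"],
    "filter": {"user_id": {"_eq": "X-Hasura-User-Id"}},
}
print(select_permission)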
A GraphQL layer fronting multiple heterogeneous data sources is an ideal candidate for being a single point of failure. Hasura 1.x was already seamless to run in a scaled up fashion as a cluster of replicas and they would sync with each other automatically.
With Hasura 2.0 we wanted to ensure that we can start bringing in features that make running a Highly Available and "distributed" Hasura cluster along with its interaction with a set of upstream data sources as seamless and easy as possible.
This is no longer about just making Hasura scale up and work, but also to make its interaction with upstream sources fault-tolerant & scalable to prevent the single-point-of-failure problem.
These are the features coming to Hasura soon:
ICYMI, Hasura is a GraphQL server that doesn't require a build step! All of Hasura's configuration is entirely dynamic and API driven. This "metadata" API - which can be locked down in production depending on the use case - makes "change management" extremely convenient.
However, these metadata APIs are also extremely powerful because they allow Hasura users to "fetch" all of the configuration of a Hasura system on the fly.
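For example (a sketch that assumes a metadata endpoint accepting typed JSON commands, with a placeholder host and admin secret), exporting the entire running configuration could look like this:

import requests

# Ask a running Hasura instance for its full metadata: tracked tables,
# permissions, remote schemas, event triggers and so on, as one JSON document.
resp = requests.post(
    "https://my-hasura.example.com/v1/metadata",       # assumed metadata endpoint
    json={"type": "export_metadata", "args": {}},      # assumed command shape
    headers={"x-hasura-admin-secret": "<admin secret>"},
)
metadata = resp.json()
print(list(metadata.keys()))  # top-level sections of the exported configuration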
This unlocks unprecedented metadata capabilities:
While some of the features above aren't available in this release of Hasura 2.0, expect a lot of metadata tooling to come your way!
Here's the one we're most excited about and are working on right now:
Hasura 2.0 is here and along with the features that are a part of this release, it sets the technical foundation for engineers and contributors to innovate and ship rapidly for the coming months and years.
Join me at the Hasura 2.0 launch webinar to go over these features in action, learn more and ask questions!
And of course, in case you missed it, don't forget to register for the awesome (and free!) virtual GraphQL conference: https://graphql.asia/
Here's the announcement post for Hasura 2.0.0-beta.1. |
2 | What happened to the Grid? Does the AI site builder still exist? (2019) | It was October 8th, 2014.
The Grid had just launched a crowdfunding campaign for their highly anticipated website builder, claiming to use artificial intelligence to design and build websites.
Like many upcoming startups, they got covered in TechCrunch and had an eye-catching promo video which enticed thousands of people to reach for their wallets.
A few months later, after closing a $4.6M series A, everything looked promising for the expected Spring 2015 launch.
But when Summer 2015 came around and only 100 of the 50,000+ backers got their hands on a Beta product, concerns seemed to arise…
@thegrid Any update on the beta? It's been over a month since the last update and I am eagerly awaiting to test it out!
— Colin Davis (@davco9200) July 30, 2015
@StringStory I'm on the waitlist for @thegrid. Been waiting very very patiently for many many months now :( only 100 beta users have access.
— Julie Klimt (@JulieKlimt) July 28, 2015
Although the tool had incredible promise, things seemed to take a turn for the worse.
In late 2016, the Grid started to show signs of a failing company:
To make things worse, in early 2019 The Grid took down all customer websites and showed this message on their updated, one-page website:
“Dear people of the Grid, we have placed V2 sites in an archived state and current users will be emailed a tool to acquire & backup their content. User registration will reopen with V3. To keep expectations clear, we will not be uber-regular with life-is-swell emails. We will not be shipping *minimally* viable shells of a product to satisfy suits. We won't talk talk talk about how close we are (though we are) and how game-changing yada yada. You won't be hearing from us in full dose till the new engine hums.”
So what happened? Does the company still exist? Will there ever be a v3? What will happen to the $6M dollars that were invested by founding members?
That’s what we’re here to try and figure out.
If you’ve never used the Grid and would like to see a detailed review of how the product worked, we’d recommend taking a look at this video review by Kaya Ismail, Founder of Wordify.
According to our research, here is a detailed overview of everything that happened to the Grid.
The Grid was founded in San Francisco by a leading team of web developers and designers including Dan Tocchini (CEO and Co-Founder), Brian Axe (Chairman and Co-Founder who was Director of Products at Google AdSense), Leigh Taylor (Lead Designer and the first Medium designer), and Henri Bergius (VP of Engineering).
(source: crowdfundinsider.com)
Several years of R&D, which includes projects such as Grid Style Sheets and NoFlo.
“After the first year & half of R&D at the Grid, with a rough prototype and only 3 engineers, which included me, Facebook courted us for acquisition.” - Dan Tocchini, Founder of the Grid.
When the initial offer of $30-50M became an offer of $10-15M, the team decided to turn it down and pursue a public launch.
(source: huffpost.com)
The Grid launches their crowdfunding campaign.
(source: techcrunch.com)
The Grid closes their Series A for $4.6 million USD, led by AME Cloud Ventures.
(source: crunchbase.com)
The Grid founder Dan Tocchini explains how their main website was built using the product. The presentation starts at 1:20. At this point in the campaign, there are 22,763 founding members.
(source: livestream)
The Grid shares a sneak peek of their upcoming iOS app.
(source: venturebeat.com)
The Grid showcases the first in-app demos of the product.
(source: youtube.com/user/TheGridio/videos)
The Beta product began to roll out to the first 100 backers. The Grid claims to have close to 50,000 founding members. They created a video giving customers additional details about their journey and what could be expected during the Beta program.
(source: twitter.com/thegrid & youtube.com)
The Grid mentions on Twitter that all customers should be online before January 2016.
(source: twitter.com/thegrid)
The Grid holds their first online hangout and discusses improvements in their design systems. At this point, they claim it takes 6-10 seconds to create a website.
The Grid holds their second online hangout and discusses improvements in the user interface. Although they mention there will be several additional hangouts in the series, this was the final hangout publicly shared online.
The Next Web is one of the first major tech websites to review the Grid. The article showcases several screenshots of the product and goes into detail about some of the confusion surrounding the company, such as the ongoing crowdfunding campaign and the fact there is still no date for a public launch.
At this point, the crowdfunding campaign has been live for 10 months and there are over 54,000 founding members.
(source: thenextweb.com)
The Grid releases a new video showcasing how to build a website using their iOS App.
The Grid’s founder Dan Tocchini is interviewed by HuffPost. The Grid has 35 employees, over 61,000 founding members and the product is still in private beta.
The Grid announces that it’s started to roll out auto activations in beta.
(source: twitter.com/thegrid)
Close to 2,000 founding members have access to the Grid’s beta platform.
(source: twitter.com/thegrid)
Over 7,000 founding members have access to the Grid’s beta platform.
(source: twitter.com/thegrid)
Over 15,000 founding members have access to the Grid’s beta platform.
(source: twitter.com/thegrid)
All 61,000+ founding members have access to the Grid’s beta platform.
(source: twitter.com/thegrid)
The Grid turns off beta and launches their public version.
(source: twitter.com/jonadaskin & twitter.com/fabriceangelini)
The Grid launches a “lifetime membership” of $96 to the top 10% of users. Here is a copy of the landing page that explained all the perks of the membership. The website also contained a message explaining why there had been no marketing over the past year:
“10 years of R&D & more than a dozen patents went into making the first AI web designer. 2 years ago we let the world know and shattered the record for consumer software crowdfunding. We didn't sell out, we walked from offers from the biggest website in the world. How could we? This is just the beginning. For the last year no marketing, no ads, we took your feedback and invested everything into making v3 what we've all been waiting for…”
You can still demo their v3 product here.
(source: web.archive.org)
The Grid sold out of lifetime memberships. They also provide additional details about why they created a “Lifetime Membership”.
(source: twitter.com/thegrid)
During a webinar, Dan Tocchini mentions that v3 of the Grid will be released on November 17th, 2017.
Note: Unfortunately the webinar is no longer available as the domain has been taken down.
However, there are still tweets about it.
Dan Tocchini replies to angry customers that are frustrated with the Grid missing their release date.
(source: twitter.com)
The Grid sends their last public tweet.
(source: twitter.com)
Still no update on V3 release. Frustrated customers start voicing criticism all over the internet.
(source: twitter.com)
All customer websites are taken down; thegrid.ai, where customers’ sites were hosted, is no longer functional. The main website confirms this information.
(source: web.archive.org)
The Grid continues to sell pre-paid subscriptions. Their new “Pro Membership” gives you access to 10 sites for $144. They are also offering a demo of the v3 product while letting people join the v3 wait list.
(source: plans.thegrid.io)
First things first: we don’t know what happened. We can only speculate.
The only people who know for sure are the numerous employees whose LinkedIn profiles still show them working at the Grid despite having new roles at other companies.
However, because our company (PageCloud) went through similar challenges as The Grid, we feel like we may be able to comment on what went wrong.
Here’s a little bit of context for those of you who missed it in 2014-2015 when The Grid and PageCloud were both attempting to shake up the website builder market.
According to our research and experience, here are the key points that stopped the Grid from making it in the website builder market:
After running the most successful crowdfunding campaign in website builder history, the Grid faced tremendous pressure to deliver a game-changing product that built websites through AI.
Despite not having a public-facing product, the Grid’s marketing team continued to push out teaser videos that showcased amazing features, further increasing customer demand and expectations.
After missing several deadlines and stirring a wave of customer frustration, the Grid decided to release a product that wasn’t fully featured or ready for user consumption.
When the Grid released their product in the wild, their most important advantage quickly became their biggest weakness.
To build a website, the Grid asked users to pick a color palette as well as a font and layout style. Then, after adding some content, the Grid created a website.
Sounds pretty cool on paper!
However, regardless of how the website was built, there were only two choices when it came to the end product: satisfaction or dissatisfaction.
If users were unhappy with the design, which most were, they went looking for a solution.
Unfortunately, because the Grid used AI for all aspects of the product, users had very limited options when it came to editing the design: they could change the original style choices or tell the Grid that they didn’t like the design and the app would spin up another website.
For a lot of people, this felt like a game of designer roulette. A user would never know if the output would be right for them.
Because of the lack of control and the small amount of time users invested in their sites, many of them felt like the easiest solution was to simply give up.
When the Grid launched, they immediately aimed for the stars.
They wanted users to build full-blown websites entirely through AI.
Instead, they could have aimed for something more achievable, like building a single page or holding back on certain AI elements, like layout, that are extremely difficult to perfect even for professional designers.
For example, they could have offered a few templated layouts and then built-in AI features allowing customers to redesign them over time.
Here’s a basic representation of what we mean:
If users had a sense of accomplishment working with the software, where they could build something simple, they would be more willing to come back over and over to iterate on their website as the product improved.
Users had very little input when it came to their website. Although we love the idea of giving people pre-styled options for things like colors and fonts, users need some freedom to set their own custom styles.
Individuals and businesses using the Grid struggled to respect their own style guides which caused high levels of frustration and churn.
We all know how poor reviews can collapse even the most hyped companies. Remember the legendary sandwich picture that poured gas on the failed Fyre Festival?
Although there were people on Twitter backing the Grid’s product, they were heavily outweighed by the numerous customers who were disappointed with the Grid’s product and communication after it went live in 2015 - 2016.
Here are just a few examples:
Furthermore, many people mentioned that the Grid had started to delete their negative comments and ban them from their social media accounts.
Here’s just one example from a user who created a one hour video review with the title: “The Grid Sucks”.
Needless to say, this turned into a PR nightmare.
As mentioned before, the Grid didn’t focus on a minimum viable product.
They decided to develop everything, all at once. This included an iOS app and Android app that occupied developer resources that could have been working on something else.
Essentially, the Grid was trying to add bells and whistles to a core product that wasn’t finished.
Although the Grid had raised $4.6M and had sold for over $6M in founding memberships, like most startups, the company had a high burn rate and had not reached profitability.
According to a 2015 interview with Dan Tocchini, the Grid had 35 employees spread across 8 time zones. Based on the average salaries of backend developers working with AI, let’s assume an average salary of $100K for all of these employees.
Simple napkin math would show annual costs of $3.5M (35 x $100K) without considering any additional costs for rent, travel, patents, hardware, software or other expenses.
Considering the Grid had been growing their employee base since 2010, it’s not hard to believe that by the end of 2016 the company was under serious financial pressure to raise more money or to renew and grow the existing subscription base.
With poor customer reviews and retention, this would have been no easy task.
Just for the sake of comparison, and to illustrate how expensive software development can be, Wix launched an AI based template builder in 2016. The year before, their development team of close to 600 cost them $68M.
They have a website. They have a product that you can demo, sort of.
However, there are many signs that the Grid is slowly fading away, and we wouldn't bet on them making a historic return any time soon.
Here are just a few examples:
When you sign up for their v3 waiting list, their business address shows a winery just north of San Francisco.
And their contact email is carl@connectedbrains.com, which after a little bit of research, connects back to Carl Chouinard, their former CTO who left the company in December of 2017.
One person on Twitter had this to say:
“Not sure if you’ve been following the situation here on Twitter or not, but I talked to 8 of the original team, they either left due to ethical issues, contract ending, or declined to comment. NONE of them had anything positive to say, all negative, or no comment.”
A former employee, who gave a one star review on Glassdoor, mentioned:
“Out of money and didn't pay out the employees”
There have even been talks of a class action lawsuit against the company.
It’s always unfortunate when such promising tech doesn’t make it.
However, for PageCloud, the worst part is seeing 60k+ customers who lost their investment and were never properly informed about what was going to happen to their websites.
Because of this, we decided to create a unique offer to help the Grid backers get their websites online.
If you show proof of purchase (screenshot) of a payment made to the Grid, we will provide you with a free website for one year ($240 value).
No strings attached. If you don’t like the website after a year, you can simply walk away forever.
To benefit from this promotion, create your free Pagecloud account and reach out to our in-app support team to let them know you had a Grid subscription. They will be happy to provide you with additional information.
If you know people who used the Grid, feel free to share this information with them!
Did you enjoy this article? We’d love to hear your thoughts below. |
1 | The first-ever e-racing cycling world championships will be held this December |
Late last year the UCI and Zwift announced that a virtual racing world championships would be held in 2020. Today, the two organisations revealed more detail, not least that the competition will be held on December 8-9, 2020.
“It’s with great pleasure that we are able to confirm the dates and plans for the first UCI Cycling Esports World Championships,” said UCI president David Lappartient. “The year has certainly been a challenging one for all, but we are now back to enjoying racing and have a new UCI World Championships to look forward to at the end of 2020.
“Virtual races were hugely popular during the period that competitions ceased, and I truly believe in the potential of esports to help grow participation in our sport. This is a historic moment.”
While the race routes are yet to be announced, a joint press release confirmed that “the competition will take place entirely within Zwift’s Watopia world” and that women’s and men’s races will take place “on identical courses with equal distances.” The recent Virtual Tour de France featured the same courses and distances for men’s and women’s races as well.
Race winners will receive a virtual rainbow jersey to wear while riding Zwift, as well as a real-world jersey to wear during esports competitions for the following year.
“2020 has been a big year for esports as it has helped fill the gap left by traditional sport,” said Zwift CEO and co-founder Eric Min. “We look forward to establishing this as a new discipline of the sport – not one to plug gaps, but one that’s truly complementary to other disciplines, whether that be road, cyclocross or mountain bike. There’s a huge opportunity to grow the sport with esports and I’m proud that together with the UCI, we are able to lead the way.”
As with the UCI Road World Championships, national federations will be allocated a certain number of places for the Esports World Championships, “determined by certain criteria including the UCI Road Rankings (as of June 2020), the number of riders in the anti-doping Registered Testing Pool, and a minimum experience level on the Zwift platform.”
On the men’s side, 20 national federations will get an automatic invite to register a team: Italy, Belgium, France, Netherlands, Australia, Spain, USA, Great Britain, Germany, Switzerland, Canada, Denmark, Poland, Austria, Colombia, New Zealand, South Africa, Norway, Ireland, and Japan. On the women’s side, 13 nations will be automatically invited: the Netherlands, Italy, Australia, France, USA, Germany, Belgium, Great Britain, Poland, Canada, New Zealand, South Africa, and Japan. In addition, “wildcard invitations may be awarded to individual riders by the UCI.”
Further details about the 2020 UCI Cycling Esports World Championships will be revealed in the coming months, including the specific courses that will be used. |
2 | Shell executives quit amid discord over green push | Shell executives quit amid discord over green push |
1 | The 100 Sequences That Shaped Animation | Illustration: Giacomo Gambineri
All animation, whether it depicts a whistling mouse, a walking dinosaur, or a leaping superhero, is a kind of magic trick. It’s right there in the name of one of the earliest devices used to project slides: the magic lantern. If you take an image of an open hand and an image of a fist and project the two in sequence, you’ll convey the illusion of a clench. “What happens between each frame is more important than what happens on each frame,” the prominent experimental animator Norman McLaren (who makes the list with his short Neighbours, below) once explained. “Therefore, animation is the art of manipulating the invisible interstices between frames.”
That has largely remained true throughout the medium’s history, both frame by frame and over the course of a two-hour children’s movie. Animated cartoons fool the brain into believing that static images can move; characters are “brought to life” by putting pen to paper or finger to a computer’s trackpad. The medium that began to crawl thanks to the live performances of inventor Charles-Émile Reynaud and illusionist Georges Méliès has now matured into a complex and diverse art form — one that has seen new processes and cultural innovations in every decade since its inception. The characters and intellectual properties it has drawn into existence are as relatable as Daffy Duck and as lucrative as Mickey Mouse. Today, vast audiences understand what artists like McLaren were observing: that the invisible holds a marvelous power over us.
To capture an idea of that power and to narrate its history, we have charted the evolution of animation by considering 100 sequences throughout the medium’s history. We chose the deliberately flexible element of a “sequence” because it felt the most focused: It is often in one inspired moment, more so than a single frame or entire work, that we are able to see the form progress. Focusing on full cartoons would create a bias in favor of studios with the resources to produce theatrical features — but history has shown that many landmark achievements in animation have been produced with a variety of budgets, formats, and lengths. By focusing on sequences, we can let creators and their individual decisions shine in a way full-length works may not.
The arc of this history begins in 1892, the year Charles-Émile Reynaud first used his Théâtre Optique system to screen his moving pictures — to our mind, the first animated cartoons ever produced — for the public (and long after the invention of the magic lantern). From there, we address sequences in every decade well into our own era, touching on a range of formats, innovations, and historical moments, from the patenting of rotoscoping to the invention of the multiplane camera to the rise of anime and everything in between and after.
This list is not intended to be comprehensive. One hundred is a crushingly compact number of slots with which to encapsulate the totality of a medium. That isn’t to say we didn’t try. We arrived at our list after months of discussions and arguments among a brain trust of animation professionals, historians, and other experts. More than 600 nominations were considered based on the criteria we established: Since this list is for an American audience, entries skew toward what influenced American animation; to be eligible, sequences had to have been made available, at some point, to audiences in the U.S., whether in limited screenings, wide release, or bootleg importation. You’ll notice Japan’s output is better represented than that of French or Czech animators, which we felt reflected American audiences’ evolving, decades-long relationship with Japanese animation. We excluded porn, video games, and advertising, reasoning that they didn’t jibe with a list of art intended to be consumed, rather than interacted with. We were especially choosy about which examples of combined live action and animation to use — a gimmick that had been deployed long before Mary Poppins — and how to handle the question of special effects, which we tried to limit to moments when we felt the tools and forms used by animators crossed over most dramatically with those of live-action filmmakers.
Credits
The animation team
Historical expertise provided by Jerry Beck, Amelia Cook, Jason DeMarco, Maureen Furniss, Monique Henry-Hudson, Willow Catelyn Maclay, Linda Simensky, Koji Yamamura
Entries by Rebecca Alter, Elly Belle, Kambole Campbell, Jen Chaney, Amelia Cook, Alex Costello, Marley Crusch, Toussaint Egan, Christopher L. Inoa, Genevieve Koski, Willow Catelyn Maclay, Rafael Motamayor, Sammy Nickalls, Joshua Rivera, Daniel Schindel, Ayoola Solarin, Drew Taylor, Alison Willmore
All of the nominees were subject to the forces of capturing an accurate historical progression: Necessary inclusions meant omissions, some of which may feel crushing as you notice them. Many a beloved character (Mr. Magoo), creator (Mamoru Hosoda), film (Barefoot Gen), or series (Avatar: The Last Airbender) went unrecognized. Such cuts were typically made because while the titles were important to the history of animation, it was often the case that their impact was not showcased in one specific sequence, and we felt it would be disingenuous to present them in that way. We also didn’t want to sanitize the complicated contributions made to the medium by problematic figures; members of our brain trust ultimately decided work by Bill Cosby, John Kricfalusi, and others ought to be reckoned with in any honest history of the form. And finally, the works of white men ended up disproportionately represented here, for similar reasons, since white men have been disproportionately represented in the American animation industry since its formation.
Inevitably, a list like this can only scratch the surface of an art form unparalleled in its elasticity and capacity for wonder. And yet the sequences included here, listed chronologically, speak as much for the evolution of animation as a medium as they do for themselves. The creators of the early, tasteless, minstrelsy-laden shorts on this list could not have imagined how our entries would make vast audiences vibrate with joy — and the basic compact of the craft still holds, firm as ever: Animators continue to fool us into believing still images can move and breathe, and we in turn remain delighted to live between the frames.
Théâtre Optique
Directed by Charles-Émile Reynaud
Before the Lumière brothers’ cinematograph, one of the earliest movie cameras, there was a filmic evolutionary link that now feels all but forgotten. Starting in 1892, three years before the Lumières first exhibited their motion pictures, French inventor Charles-Émile Reynaud presented his animations for audiences at the Musée Grévin in Paris. His Théâtre Optique (or “optical theater”) system was a rough precursor of the technology that would come to define both animation and film projection. The films were made of hundreds of individually illustrated cells connected via strips that were perforated with sprocket holes — a first in film history — and wound around spools, which could be run rapidly before a magic lantern, projecting a moving image for an audience.
Reynaud’s show, Pantomimes Lumineuses, consisted of sets of shorts that he had drawn. The premiere lineup featured “Un Bon Bock” (A Good Beer), about a tavern boy swiping beers from unsuspecting patrons of a country bar, “Le Clown et Ses Chiens” (The Clown and His Dogs), about a clown directing his three dogs through their tricks, and “Pauvre Pierrot” (Poor Pierrot), a riff on the familiar Pierrot, Harlequin, and Columbine characters from the commedia dell’arte. These animated performances were not fully premade stories that Reynaud simply played for his audiences; manually operating the Théâtre Optique, he could play each short at variable speeds and repeat certain moments. He could react to how patrons responded to the shows, having a character perform an encore of a winning gag or trick.
Sadly, Reynaud was not just a cinema pioneer but also an early victim of the exploitation that would rapidly infect that business. He worked for the Musée Grévin under a stunningly unfair contract. Despite the giant success of Pantomimes Lumineuses, he saw little of the profits and eventually went broke. In despair, he destroyed the Théâtre Optique and tossed most of his films into the Seine. Today, only parts of “Pauvre Pierrot” and 1894’s Autour d’une Cabine (Around a Cabin) survive as testaments to his magic.
Star Film Company
Directed by Georges Méliès
The turn of the 20th century saw a much-needed injection of modern filmmaking thanks to the work of experimental French filmmaker, set designer, and magician Georges Méliès, widely regarded as the innovator of special effects in movies. Méliès’s penchant for illusion and stage magic played a vital role in the way he approached his early movies, with a desire to transfer the whimsy witnessed in theaters to film. Méliès is credited with innovating the first split screen, the first double exposure, and the first dissolve effect.
After being mesmerized in 1895 by the Lumière brothers’ groundbreaking moving-picture camera, the cinematograph, Méliès set about designing and re-engineering his own camera and quickly established Star Film Company, with a film studio famously built entirely of glass walls. It was at the studio that Méliès made over 500 shorts, including his most famous work, Le Voyage Dans la Lune (A Trip to the Moon), and lesser-known but just as beloved works such as L’Œuf du Sorcier (1902), also known as The Prolific Magical Egg. Directed by and starring Méliès, that film is an example of early stop-motion SFX: the magician makes an egg appear with a deft sleight of hand, then grows it until it turns into not one but three giant heads, which merge into a goblinesque facade.
The seamless jump-cut editing of the vanishing act and the additional use of double exposure to illustrate the giant heads separating and merging were proto-techniques that would go on to be utilized in animation, and they are still employed today. While many of Méliès’s films have been lost over time, his impact remains keenly felt. In the Oscar-winning 2011 film Hugo — which fittingly won Best Cinematography and Best Visual Effects — Martin Scorsese made Méliès a character, whimsically played by Ben Kingsley, showcasing how creative magic can elevate any motion picture.
Alpha Production Works
Directed by Arthur Melbourne-Cooper
Pre-dating Pixar’s Toy Story by nearly 90 years, this stop-motion sequence of toys coming to life was created by Arthur Melbourne-Cooper in 1908. Cooper was an innovative photographer and filmmaker and a pioneer in the medium who’s credited with creating what is often called the first animated film shown in public, Matches: An Appeal (1899). While it differs in content from his “trick films” featuring matchsticks (of which there are several sports-themed pieces in addition to Appeal’s wartime content), Dreams of Toyland is arguably the British filmmaker’s most iconic work and a stunning example of early animation bookended by live action.
The elaborate scene reveals a chaotic world with toy cars driving recklessly through a busy town, dolls falling off wooden horses, and other playthings (such as a toy bear and policeman) brawling in the streets before piling aboard a double-decker bus and ultimately facing a shocking explosion. The motion is remarkably fluid considering the equipment available at the time, and movement is seen not just among the toys in the foreground but with every item viewable onscreen. There’s little in the way of plot, but the movement itself shows the level of care and dedication taken by Cooper in his experimenting with this new form of art. Based on the movement of shadows from the toys, this scene seems to have been shot on an outdoor stage, further heightening the impressiveness of this piece. With through-lines to Gumby, Wallace and Gromit, Laika’s modern stop-motion offerings, and of course, various “living toy” stories, animation enthusiasts everywhere owe a debt to Cooper’s wild dreams.
Société des Etablissements L. Gaumont
Directed by Émile Cohl
A French caricaturist, cartoonist, and one of the first great animation innovators, Émile Cohl became aware of motion pictures in 1907 and wanted to see if this art of movement and the illusions of light could be adapted to include his own interests of cartooning. The next year, Cohl made Fantasmagorie, whose title is a reference to the “fantasmograph,” a mid-19th-century variant of the magic lantern that projected ghostly images onto surrounding walls. And it forever changed animation.
Fantasmagorie was shot with a vertical mounted camera and consisted of 700 chalk-line drawings, each of which was double-exposed and rendered in a stream-of-consciousness style, which saw the drawings morph in and out of one another and evoked a sense of constant transformation. This short is among the first to outline animation as a character-driven medium, and it follows a lively stick-figure clown who pulls various objects out of his own body. It follows shorts by James Stuart Blackton such as The Enchanted Drawing in 1900 and Humorous Phases of Funny Faces in 1906, which also featured clowns appearing to make faces and move from the torso up; Cohl’s film, by comparison, is a full-body exercise.
The introduction of the clown as a central character has its roots in the carnival stylings of the traveling circus, where movies got their start as an extension of the works of illusion and prestidigitation performed by magicians. The little clown was the first character in animation history and began a trend of character-driven work that we still witness today. Fantasmagorie is also one of the great works of the avant-garde, with its morphing abstract images and Cohl’s emphasis on innovating new techniques and scenarios.
The Box Office Attractions Company
Directed by Winsor McCay
Winsor McCay did not create the animated cartoon as he always claimed, but he was responsible for one of its great “big bang” moments. Composed of 10,000 drawings made by the newspaper cartoonist (with the help of his assistant, John A. Fitzsimmons, who traced the backgrounds) and mounted on cardboard, McCay’s third short laid the groundwork for the next century of animation.
Taking inspiration from his son’s collection of flip-books, McCay became interested in testing whether he could turn his illustrations into short films, his first being based on his most famous comic strip, 1911’s Little Nemo in Slumberland. His second, The Story of a Mosquito, appeared a year later, and both were incorporated into his vaudeville act. Audiences approved, but they didn’t truly believe that they’d witnessed McCay’s drawings move. That is, until he introduced them to Gertie the Dinosaur.
The short marked the first use of key animation, registration marks, animation loops, in-betweening, and, most important, character animation. McCay not only gave Gertie life; he gifted her with a personality. Before Gertie the Dinosaur, characters were blank slates. Now they could cry, which Gertie does when McCay scolds her for disobeying, or eat, drink, or breathe, all of which she does with a playful, elegant charm that many later artists would try to emulate and build empires on.
McCay would continue to work in animation until 1921, stepping away shortly after abandoning a sequel, Gertie on Tour, mainly because his employer, newspaper magnate William Randolph Hearst, wanted him focused on editorial cartoons rather than animation. Most of McCay’s work, in both comics and film, has been lost, but Gertie the Dinosaur is one of the best preserved; it’s been a part of the U.S. Library of Congress National Film Registry since 2011.
Pat Sullivan Studios
Directed by Otto Messmer
While Lady Gaga used “rite of passage” to describe getting a song parodied by “Weird” Al Yankovic, it’s a phrase that can apply to a celebrity being caricatured in animation, too, from BoJack Horseman’s cheeky use of Character Actress Margo Martindale to pretty much any episode of Family Guy or South Park. Even the British royal family is getting the animated satire treatment (blimey) in HBO Max’s upcoming The Prince. But really, it all started with one anthropomorphic black cat hungry for the spotlight.
Consider the seven-minute-long silent-era short film Felix in Hollywood. Created by Pat Sullivan and Otto Messmer nearly a century ago, this little gem can be credited for what’s now a staple of modern-day animated television. In the short, Felix the Cat uses his ample wits to travel to Hollywood, where he shares the silver screen and rubs elbows with real-life industry pioneers and tastemakers like Charlie Chaplin, William S. Hart, Will Hays, Snub Pollard, and Ben Turpin. It was the first animated cartoon to caricature celebrities and, along with them, the contemporary studio system. Felix even earns his “long-term contract” — bestowed by one of the founding fathers of American cinema, Cecil B. DeMille — after a camera crew catches him rescuing an unconscious, tied-up Douglas Fairbanks from a swarm of angry mosquitoes.
The value of Felix’s contract may be nebulous, but the film’s impact is undeniable. Just a decade later, Looney Tunes celebrity caricatures began to emerge as well. In one of the company’s early shorts, Bosko in Person (1933), the titular character created by Hugh Harman and Rudolf Ising finds himself interacting with imitations of Maurice Chevalier, Jimmy Durante, and Greta Garbo.
Directed by Lotte Reiniger
Though Disney would later debut Snow White and the Seven Dwarfs, which was the first animated feature in the U.S., The Adventures of Prince Achmed is the oldest surviving animated feature film, period. Directed by the great Lotte Reiniger, the earliest woman animator whose work is still extant and the first to helm an animated feature, it premiered in Germany over a decade before Disney’s first masterpiece. At that time, Reiniger pioneered silhouette animation as a self-taught artist particularly skilled in shadow play. To create the film, she manipulated cutouts made from cardboard and thin sheets of lead under a camera, similar to Wayang shadow puppetry. Perhaps even more impressively, the piece was animated frame by frame, which took three years. In the scene for the Caliph’s birthday, Reiniger animated the sorcerer’s magical horse, a miraculous steed flying through the air, proving both her fantastic imagination and ability to bring it to life through silhouettes.
It’s also an early use of fairy-tale storytelling, another area Disney’s films would become known for. Prince Achmed specifically tells stories based on One Thousand and One Nights, including the story of Aladdin, which Disney’s studio would return to decades later. Moreover, Reiniger’s style went on to influence even more modern works, including an episode of Steven Universe, “The Answer.”
Despite the fact that Reiniger’s contributions continue to define the medium, sexism has been pervasive within it over time. For years, Lotte Reiniger’s name went largely unsaid in the industry, falling out of the popular canon. Today, there are still too few women with creator credits in animation — but even their success and entry into the medium are owed to those like Reiniger, who opened doors and showed the talent and innovation that women could bring to the table. In proving that women could animate as well as men, Reiniger paved the way for those like LaVerne Harding, the second woman in animation history to receive an onscreen credit (known for work on Woody Woodpecker). Later, Walt Disney would hire Bianca Majolie, responsible for much of the early concept work for Peter Pan, Cinderella, and Fantasia’s “Nutcracker Suite” segment.
Walt Disney Studios
Directed by Walt Disney, Ub Iwerks
Steamboat Willie, the short that introduced the world to Mickey Mouse, served as a watershed technological breakthrough thanks to its use of fully synchronized sound and a fully post-produced soundtrack. It was also born out of heartache. Starting a little more than a year before the short’s release, Walt Disney and Ub Iwerks began producing short films for Universal and producer Charles Mintz featuring a character called Oswald the Lucky Rabbit. When Walt traveled to New York to renegotiate the terms of the deal, he was blindsided. Not only did Mintz offer him less money, but he had slyly started to steal Disney’s employees for his own animation operation. Walt quit, and Ub stood by his longtime partner. But Walt didn’t own the character. Universal did.
As the undoubtedly apocryphal story goes, Walt began brainstorming the idea for Mickey Mouse on the train ride back from his failed meeting in New York. Disney had a dynamite new character, an intellectual property he could own. Walt could just as easily have given up, but instead the recent experience strengthened his resolve.
Iwerks and Disney got to work. Steamboat Willie wasn’t the first Mickey Mouse short they made (that honor goes to Plane Crazy), but it was the first distributed, and its gags incorporated consistent sound and music throughout, a first in the business. When Steamboat Willie hit theaters in November 1928, this labor of love became a sensation, applauded for its technical artistry and entertainment value. And rightfully so — it is still a hoot, and one you can watch on Disney+ right now. And while many of the cultural references have faded from memory (its title is a play on a Buster Keaton movie called Steamboat Bill, Jr.), Steamboat Willie remains a towering achievement of early animation and a testament to Mickey Mouse’s singular, elemental power — a character whose emergence wound up altering the shape of U.S. copyright law.
Mickey is no bland corporate figurehead. Rather, he’s downright rascally — at one point, he cranks the tail of a goat who has eaten sheet music for “Turkey in the Straw” and the tune spills from the goat’s mouth. Only a few seconds of Steamboat Willie have truly been immortalized in the popular consciousness — the opening moments, in which Mickey whistles and steers the boat, have become a signature of the Walt Disney Company. But the entire short is of staggering importance — for its technological advancement, sure, but more so for the introduction of an American icon.
Iwerks Studio
Directed by Ub Iwerks
Disney magic wasn’t made by Walt Disney alone. Many of Disney’s early successes, before movies, were done in collaboration with Ub Iwerks, who helped to create Mickey Mouse. Then Iwerks and Disney had a falling out in 1930, and Iwerks opened his own animation studio.
There, he created the bow tie-wearing Flip the Frog. And Flip’s big-screen debut short, Fiddlesticks, came as the first complete sound cartoon to use the two-strip Technicolor process. It’s important to note that it was not the first cartoon made in color; that distinction is a matter of debate between 1912’s In Gollywog Land (a lost live-action film based on a racist caricature, which used puppet-animated sequences and was made by the Natural Color Kinematograph Company) and 1920’s The Debut of Thomas the Cat (made by the team of Earl Hurd and John Randolph Bray, who are credited with creating cel animation, at great cost and shot using the Brewster Color process, a Technicolor competitor), neither of which popularized the artistic choice. Fiddlesticks is a simple bit of animation: It starts with Flip dancing and then playing the piano accompanied by a familiar-looking mouse in red shorts playing the violin.
But it is still an achievement. Fiddlesticks came two years before Disney’s own Flowers and Trees, which was the first full-color Technicolor cartoon and won an Academy Award. But it was Iwerks who showed that the burgeoning Technicolor process could be applied to the medium. Technicolor was faster and easier than previous coloring techniques for animation, and the finished product was easy for theaters to screen.
Iwerks and Disney eventually settled their differences, and he went back to work at Disney’s studio in the 1940s. Today, Disney recognizes him as a “master of animation and technology,” a title he richly deserves.
Fleischer Studios
Directed by Dave Fleischer
Of course Snow White and the Seven Dwarfs was always going to make this list, but let’s start with the other technically innovative 1930s animated musical adaptation of the fairy tale. This one stars two of the Fleischer brothers’ greatest creations: Betty Boop and Koko the Clown. Koko was developed in 1918 concurrently with Max Fleischer’s invention of the rotoscope technique, which allowed animators to trace over filmed reference footage to achieve fluid, uncannily lifelike motion in their characters. Betty Boop, on the other hand, was created as a send-up of Jazz Age flappers, with a character design naughty enough to match the times.
In the original Out of the Inkwell series, Koko’s filmed movements were acted out by Dave Fleischer while he was dressed as a clown. But in 1933, Fleischer Studios put Betty Boop and Koko the Clown in the seven-minute Betty Boop in Snow-White short animated by Roland C. Crandall, with a rotoscoped set piece in the middle, set to “St. James Infirmary Blues,” performed by jazz artist Cab Calloway. Watching this scene, in contrast with the Disney version of the folktale that would set the template for mainstream animated storytelling, the sheer experimentalism looks like an eerie dispatch from a different, much cooler timeline.
The film was a follow-up to Calloway’s popular Minnie the Moocher Fleischer short from the year prior, which opened with live footage of Calloway dancing before rendering him into a walrus. Here, Calloway seems to moonwalk along the animated landscape as Koko, arms out, singing a blues song about death and decay. When the witch casts her mirror over him, he becomes a ghost, at which point the rotoscoping gives way seamlessly to impossible contortions. The ghost’s limbs pretzel in on themselves, turning at one point into a gold chain, echoing the lyrics. At the time, character animation — think the Fleischers’ Bimbo, Otto Messmer and Pat Sullivan’s Felix the Cat, Walt Disney and Ub Iwerks’s Oswald the Lucky Rabbit and Mickey Mouse — was often rooted in the racist visual language of blackface and minstrelsy. Cab Calloway’s Fleischer shorts, and their use of rotoscope, saw an African American musician able to voice and perform his own art. Playful and surreal, it remains artistically daring nearly 90 years later.
Radio Pictures
Directed by Merian C. Cooper and Ernest B. Schoedsack, animated by Willis H. O’Brien (chief technician)
During the early stages of what would become King Kong, Merian C. Cooper planned to film wild gorillas and intercut that footage with shots of Komodo dragons, so it would appear as if the animals were engaged in a life-or-death battle. Thankfully, he figured out that it would be more economically feasible to go with animation, coming to that conclusion after viewing Creation, an overbudget and eventually canceled action-fantasy film helmed by stop-motion animator Willis H. O’Brien.
A working animator since 1915, O’Brien had already been heralded for groundbreaking stop-motion work before his contributions to Kong. By 1925’s The Lost World, he was already experimenting with ways to make it look as if his creations were sharing the same physical space as the live actors. With Kong, O’Brien was able to push his experiments even further, achieving a milestone not only in the history of stop-motion animation but for the entire field of cinema special effects.
To achieve Cooper’s dream of a giant gorilla, O’Brien and his assistant animator Buzz Gibson combined stop-motion animation with other special effects including miniatures, matte paintings, and rear projection. The result seen in Kong’s introduction to the movie that carries his name makes it appear as if the giant ape, an 18-inch model made out of rubber, latex, and rabbit fur — designed by Marcel Delgado — is towering over Ann Darrow, the helpless blonde the beast falls for. This moment in the 1933 classic would set up the next 90 years of blockbuster movies, everything from the work of Ray Harryhausen, to the animatronics of Stan Winston, to the inclusion of CGI.
Walt Disney Productions
Directed by Burt Gillett
There’s a moment, about three and a half minutes into the Silly Symphony short Three Little Pigs, when the Big Bad Wolf is about to blow down one of the pigs’ houses. He gets himself in the headspace to get a-blowin’ and then prepares physically by breathing in more and more air, his chest heaving and expanding with each gasp. Finally, when his chest is about to burst, he lets out a gust of wind powerful enough to knock down the poor piggy’s home. As drawn by Norm Ferguson, perhaps best known as the creator of Mickey’s dog Pluto, the Big Bad Wolf was a benchmark in terms of character animation. Chuck Jones commented that the film made him realize “something was happening there that hadn’t happened before.” Jones said that it showcased a major principle in character animation, that “it wasn’t how a character looked but how he moved that determined his personality.” He even argued that character animation truly began with the film.
Disney himself agreed. Notorious for finding fault in just about anything he produced, upon finishing the short, Disney exclaimed, “At last we have achieved true personality in a whole picture!” And as a result, Three Little Pigs was hugely influential both inside and outside the studio.
Internally, the short featured an original song by Frank Churchill, Ted Sears, and Pinto Colvig, and the use of original music would become a convention of many Disney shorts that followed. Also, thanks largely to the work done by Freddie Moore, a hugely talented and influential artist at the studio, the storytelling and animation style at Disney began to shift. The “rubber hose” style of animation was out; the more naturalistic and complicated “squash and stretch” style was here to stay. It was a huge success for the studio, too, winning an Academy Award and making a truly unbelievable amount of money; the following year, the studio’s net profit was estimated at more than $600,000 and led to Disney’s expansion. One theater played the short for so long that it started adding whiskers to a poster for the short outside the auditorium; as the run went on, the whiskers would get longer and longer.
Most crucially, Three Little Pigs was one of the first of Disney’s films to feature a story department, which included Ferguson, Art Babbitt, and Dick Lundy. (It was also, not coincidentally, one of the first animated films to be fully storyboarded.) While Disney had established a story department before 1932, the success of Three Little Pigs, the creation of which Disney himself was heavily involved in, made him double down on his desire to create specialized roles for talented people that ran in direct opposition to the studio’s earlier, looser, everybody-chip-in ethos. And that story department would prove crucial in the years ahead as he marched toward a feature-length animated film.
Culturally, though, Three Little Pigs had an even larger impact. It proved that Walt’s work, far from being trifles for kiddies, could be considered high art, and the short, along with Disney himself, was fêted widely by the Hollywood elite and embraced by critics. As a metaphor for the Great Depression, then in its fourth year, it also spoke volumes, with the wolf representing the country’s economic hardship and the industrious, hardworking pig serving as a metaphor for Roosevelt’s New Deal. It became something of an anthem for a beleaguered country; its audio was played over the radio and its plot satirized in the newspaper. When fascism began bubbling up in Europe, the pigs with houses of straw and sticks were repurposed as a desperate warning call to Western nations not taking the Nazi threat seriously. (It should be noted that the original version of the short featured the Wolf dressing up as a “Jewish peddler,” a distressing moment that has been edited out of subsequent versions beginning in 1948; the choice didn’t help Disney’s case when he was accused of anti-Semitism.) And the song, “Who’s Afraid of the Big, Bad Wolf?,” was influential too, inspiring Edward Albee to title his hit play Who’s Afraid of Virginia Woolf?
Walt Disney Productions
Directed by Wilfred Jackson and Walt Disney
Mickey Mouse in color! The Band Concert, a ten-minute tour de force, is mostly notable as Mickey’s first appearance, after 72 cartoons, in color — technically three-strip Technicolor — alongside stalwarts Donald Duck and Goofy and lesser-known characters like Clarabelle Cow, Peter Pig, and Horace Horsecollar.
In the short, Mickey is a conductor trying, desperately, to get through the “William Tell Overture.” Goofy is a clarinetist in the band and Donald is an obnoxious ice-cream salesman who takes out a flute and starts jamming along uninvited. There are a number of notable character moments: Mickey’s reaction when a melting scoop of ice cream slides down his back is still a supremely impressive bit of character animation, and when a rampaging tornado threatens the band and its audience, the benches become anthropomorphic and run away from the disaster.
While it’s largely considered one of the greatest, if not the greatest, Mickey Mouse cartoons ever, Donald is clearly the star of the show. The only character with speaking lines, he is hilarious throughout, something that was commented upon at the time of the movie’s release — “The duck takes over,” a critic for the New York Journal wrote.
Iconic and recognizable, The Band Concert inspired a 1942 short called Symphony Hour, has been referenced in a number of video games over the years, and is the basis for a pair of Disney Parks attractions: Mickey’s PhilharMagic and Silly Symphony Swings, a classic wave-swinger attraction at Disney California Adventure that features characters from the short (including the bee!) painted on the side and whose structure is adorned with a statue of Mickey in his oversize conductor’s coat, stick in hand. (You can even hear the overture as you swing.) Truly a tremendous performance.
Fleischer Studios
Directed by Dave Fleischer
Popeye the Sailor Man made his animated debut in a 1933 Betty Boop short named after him and quickly became Fleischer Studios’ star attraction. The naval pugilist with forearms the size of watermelons had originated in E.C. Segar’s daily comic strip Thimble Theatre, where he was only supposed to make a one-off appearance. By the mid-1930s, Popeye was the most popular character in America, so it only made sense that Paramount Pictures would push Max Fleischer to produce a more ambitious short starring the spinach-chomping strongman.
Popeye the Sailor Meets Sindbad the Sailor was the first Popeye cartoon made in Technicolor as well as the first American animated film to be billed as a feature (running over 16 minutes, it took up two reels), and it is where the Fleischer brothers’ “setback process” was showcased to its full potential.
The Fleischers’ studio had been behind a number of inventions that helped innovate animation during the medium’s early years, but arguably none were more remarkable than the process invented by John Burks. First used in the 1936 Popeye short For Better or Worser, the process gave the illusion that two-dimensional characters were able to maneuver in a three-dimensional space. Over 80 years after its premiere, the process is still effective, the illusion not aging a day.
A vital influence on Ray Harryhausen, who made a Sinbad film of his own, the process would be used for a handful of other shorts and one feature, its heights remaining with Sindbad and its follow-up, Popeye the Sailor Meets Ali Baba’s Forty Thieves. The pair would be the grandest cartoons Fleischer ever produced — until the studio began work on the adventures of a mild-mannered reporter from Metropolis.
Walt Disney Productions
Directed by Wilfred Jackson
Walt Disney’s most impactful accomplishments, especially in the early days of the theatrical shorts, came at the intersection of storytelling and technological advancement. Such is the case with Silly Symphony’s The Old Mill. For years, Disney had wanted more realism and dimensionality in his cartoons, foremost by ensuring that both the backgrounds and characters moved — as in a sequence for the Oscar-winning Three Orphan Kittens from 1935 — and later by tasking animator Ken Anderson, effects animator Cy Young, lighting expert Hal Halvenston, and engineer Bill Garity to come up with a solution. (Part of this was preparation for Snow White and the Seven Dwarfs, and one of the tests was a shot pushing in on the dwarfs’ cottage.)
This led to the invention of the multiplane camera, in which different scenes and characters would be painted on separate panes of glass; the camera would then move “through” the panes at different speeds and at various distances from one another, creating the illusion of dimensionality and depth — a concept Ub Iwerks, by this point long gone from Disney, had been tinkering with for years. Walt Disney biographer Neal Gabler hypothesized that Disney “was anticipating the deep-focus photography that director Orson Welles would use so famously in Citizen Kane.” Whether or not that’s true, the technology was used fabulously in The Old Mill.
A wordless ode to nature, the eight-and-a-half-minute film focuses on the titular mill and the animals that inhabit it as a summer storm approaches. Instead of being jokey caricatures, the animals and their action are rendered in a more realistic manner. They are simplified for visual clarity but never personified like in other shorts. It’s an odd and striking conceit, made all the more beautiful by the design of the animals and the exceptionalism of the effects animation — ripples in water, a swaying spiderweb, the way a flower reacts to columns of light, twinkling fireflies — that bring the whole enterprise to life. While Disney intended the short to be a test run of sorts for Snow White, the feature it more closely resembles, with its emphasis on naturalistic beauty and complex effects animation, is 1942’s Bambi.
There’s an eerie intensity to the short as well, with an emphasis on some of the less cuddly creatures in the mill (those bats!), that lent its tone not only to the “Night on Bald Mountain” segment from Fantasia but also to more modern, horror-adjacent animated triumphs like Scooby-Doo and Over the Garden Wall. There is a reason that, for years, clips from the short would be used in Disney Halloween compilation specials. It really is that spooky. Also, if you ever find yourself on the Walt Disney Studios lot, pop into the Frank G. Wells Building. There, you can see the multiplane camera that was used on The Old Mill, sitting right in the lobby.
Walt Disney Productions
Directed by David Hand (supervising) with William Cottrell, Wilfred Jackson, Larry Morey, Perce Pearce, Ben Sharpsteen
It is impossible to overstate the importance of this movie to animation. The first full-length Disney animated film and the first full-length cel animated feature, period, the hardest thing about including Snow White and the Seven Dwarfs on this list was trying to decide which sequence to highlight.
Disney’s heralded Nine Old Men worked on the film and famously pulled inspiration from many sources, including European fairy-tale illustrations, German expressionism, and silent films. Snow White also turned the tables of influence a bit. Where early animation incorporated the rhythms of jazz music, both metaphorically and literally, Snow White gave the world a song, “Someday My Prince Will Come,” that would become a jazz standard covered by the likes of Dave Brubeck and Herbie Hancock, even inspiring a Miles Davis album more than two decades after the film’s 1937 premiere.
But what’s most notable about the “Someday My Prince Will Come” sequence is the way it establishes, in a very short scene, the core elements for so much of Disney animation going forward. It centers on a princess and her desire for a happy ending. The details in the characterizations of the dwarfs, from the big, moony eyes so many of them share to the individual distinctions, like Grumpy’s snakelike eyebrows and Sleepy’s flabby cheeks, are engaging and precise. The adorability quotient, courtesy of the woodland animals who gather to listen to Snow White, is extremely high. What one gets from watching all these bits of artistry working in tandem is a warmed heart and a lifted spirit. It’s a feeling best described as pure Disney.
Walt Disney Productions
Directed by Ben Sharpsteen and Hamilton Luske
As the first full-length animated feature in the U.S., Snow White and the Seven Dwarfs gets the credit for changing the course of animation history, and rightfully so. But Walt Disney’s wildly ambitious follow-up, Pinocchio, ultimately had much bigger effects on the medium itself. From its ample and ambitious use of the multiplane camera to its reliance on an anthropomorphized animal sidekick to its use of live-action models and celebrity voice-acting, the film planted many of the seeds that would be sown over the next half-century of animation.
But of all Pinocchio’s achievements, it is the film’s special-effects animation, in particular its approach to water, that is most remembered and celebrated. Splitting the difference between stylization and realism, Pinocchio set the standard for water effects in hand-drawn animation and became the model most mainstream animation aspired to up until CG animation made photorealism the new benchmark. The film is lousy with subtly impressive aquatic flourishes, from Jiminy Cricket popping and nearly drowning inside an underwater bubble to Figaro diving into Cleo’s fishbowl, but nothing captures Pinocchio’s marriage of ambition and achievement better than the climactic chase scene, in which Pinocchio and Geppetto escape via sneeze from the gullet of the gargantuan whale Monstro.
Every single frame of the three-and-a-half-minute sequence shows water in motion, a crescendoing symphony of foaming ripples, whirling eddies, and crashing waves achieved through a combination of inked cels photographed over specially toned blue paper and white paint overlays. Walt Disney himself called Pinocchio “the toughest job the animators have ever had” (adding, “I hope I never have to live through another one like it”), and you can see every ounce of that effort paying off in the Monstro chase. A box-office bomb in its day, Pinocchio might have seemed like a stumble for Disney in terms of financial success, but it beat live-action films that year for two competitive Academy Awards: Original Score and Original Song (“When You Wish Upon a Star”).
Walt Disney Productions
Directed by Wilfred Jackson
Fantasia was, undoubtedly, Walt Disney’s biggest gambit — a largely wordless, classical-music-based anthology film that would require theaters to install new equipment in order to accommodate its breakthrough “Fantasound” technology, with individual speakers playing separate musical instruments. In fact, Disney himself saw the film as an ever-evolving, never-complete passion project. (He wanted the film to have its segments switched in and out every few years.) It was Walt at his artsiest and most ambitious. And for the movie’s grand finale, “Night on Bald Mountain,” based on a piece of music by Russian composer Modest Mussorgsky, Walt would plunge the audience straight into hell.
Well, not hell exactly, but close. “Bald Mountain, according to tradition, is the gathering place of Satan and his followers,” the introduction goes. “Here, on Walpurgisnacht, which is the equivalent of our own Halloween, the creatures of evil gather to worship their master.” What is incredible isn’t just that this vision of darkness somehow made it into a Disney animated film aimed at a mass audience — it’s that an animated version of the material had already been made just a few years before by Alexandre Alexeïeff and Claire Parker, a Russian American husband-and-wife animation team based in France, using their own, incredibly complicated technique called pinscreen animation. This “Night on Bald Mountain,” incredibly, is just as creepy. And Disney still managed to one-up it.
Fantasia was meant to show how limitless animation was and how willing Disney, who had made his fortune with cuddly animated characters, was to experiment. You can feel all of that coursing through “Night on Bald Mountain.” Its iconic devil character, the Chernabog, was animated by the deeply talented Billy Tytla, who found inspiration for the character in everything from doodles by Swiss artist Albert Hurter to the face of Bela Lugosi, who came to the studio and posed for animators. (Tytla would leave the studio after the animators’ strike that rocked Disney in 1941.) Rewatching the “Night on Bald Mountain” segment, it’s downright shocking. Not only is the tone midnight black, but the ghouls and goblins summoned by this unspeakable evil are truly grotesque; there are even topless female witches, their bare breasts exposed. Its bleakness is enough to make you wonder how, when the movie was rereleased in 1969 and aimed explicitly at the period’s head culture (just look at the poster), all those stoned audience members felt about that finale.
It is worth noting that at the time of the film’s release, this segment of Fantasia fell under particular scrutiny. Overall, the film was not well regarded, but critics complained specifically about “Night on Bald Mountain” and how Disney chose to showcase evil in abstract terms (a winged demon lording over an army of the undead) while the real thing was making itself very apparent in Europe.
Tellingly, Disney affixed “Ave Maria” as a coda of sorts, saying, “We are portraying good and evil.” Without that segment, Fantasia would have ended on a note of utter hopelessness — the kind of hopelessness Walt would feel, many times, as the responses to Fantasia rolled in. (In Michael Eisner’s memoir Work in Progress, he recounts that the film didn’t make a profit until it debuted on home video.) To those who were willing to get swept up in it, Fantasia proved a heady, hugely inspiring trip indeed. And Walt would finally get his follow-up, of sorts, at the end of 1999, upon the debut of Fantasia 2000.
Fleischer Studios
Directed by Dave Fleischer
Shortly after Action Comics No. 1 introduced the era of comic-book superheroes, Paramount Pictures acquired the film rights to Superman and wanted its animation studio, Fleischer Studios, to bring the character to series. This was a substantially different task than Max and Dave Fleischer were used to, forcing them to trade caricatured humans and animals for realistic-looking characters.
And yet the Fleischer Superman serial ended up as a definitive take on the Man of Steel. We get the origin, a bit of Clark Kent’s daily life as a journalist having to hide his identity, and Superman heroically saving innocents and doing great feats of strength with a smile on his face, all in the ten minutes of “The Mad Scientist,” the first of 17 shorts. The Fleischers’ patented rotoscoping technique seldom looked as smooth as it does here — the brief moment when Superman lands on the ground after saving a building from collapsing and stands tall to stop a laser beam with his bare hands still looks better than most live-action acts of superheroism.
“The Mad Scientist” was a huge success. Not only was it nominated for an Academy Award, but its influence on both Superman comics and action animation is undeniable. As the legend goes, the studio got permission from the comic’s publisher to make the Man of Steel fly because they were unconvinced with how giant leaps looked. Likewise, Superman’s iconic pose — fists on the hips, with the cape waving in the wind — first appeared in this short. And the shorts’ Art Deco architecture and noirlike aesthetic influenced animator Bruce Timm’s now-classic Batman: The Animated Series and, later, his own Superman: The Animated Series.
Ultimately, though, it was also the series that ended the Fleischer brothers’ working relationship. Amid financing troubles with Paramount Pictures, the Fleischers resigned from their own company having produced nine Superman shorts credited to Fleischer Studios, which was later renamed Famous Studios.
Xinhua Film Company
Directed by Wan Guchan and Wan Laiming
The Wan brothers — Chaochen, Dihuan, Guchan, and Laiming — are the founders of Chinese animation (it’s written right there on Wan Laiming’s tombstone), and their first feature-length film began as an artistic act of resistance. Shanghai was under Japanese occupation in the midst of the Second Sino-Japanese War when, in 1939, the siblings decided to make Princess Iron Fan. They wanted to make something that could match Snow White and the Seven Dwarfs, which had been released two years earlier, as well as represent their distressed nation.
The Wans looked to famous source material for their 1941 debut, adapting a section of the 16th-century classic Journey to the West, a novel they’d return to in the ’60s for their best-known work, Havoc in Heaven. Princess Iron Fan expands on an interlude in which the mischievous Sun Wukong and his fellow travelers tangle with a demonic couple over a fan they need to continue on their way. It’s a fight that culminates in a spectacular sequence in which the demon king transforms into a giant bull and chases the characters across the skies and through the woods until he’s defeated with the help of some local villagers.
There’s a slapstick logic to the animation that recalls the earlier Disney shorts, but the drawing style and the opera-tinged soundtrack are distinctively Chinese. Princess Iron Fan would, with a touch of irony, be exported to Japan, where it would inspire a then-teenage Osamu Tezuka to pursue animation as well as the commissioning of the country’s own first full-length animated film.
Walt Disney Productions
Directed by David Hand (supervising)
Though not the first time Disney anthropomorphized animals and far from its last time taking the life of an animal parent, the 1942 feature Bambi stands apart. The death of the young Bambi’s mother is perhaps the most brutal and sudden instance of loss in the studio’s filmography, the mother and child’s hope of escape from a human hunter referred to only as “Man” swiftly and mercilessly cut short with the sound of a single gunshot.
That starkness is only amplified upon realizing that the company still dialed back the bleakness of the scene in the novel the film adapted, Felix Salten’s Bambi: A Life in the Woods. The book was originally considered too grim for the company to adapt, but after the film was put on hold following the release of Fantasia, it was, of course, taken into production. The death sequence in Bambi is emblematic of a recurring trope in Disney films: the death of a parent as a formative part of the film’s narrative. Disney producer Don Hahn attributes this trope to Walt Disney’s personal life. He believes the impulse to include such sequences stemmed from residual guilt surrounding the tragic death of his mother, for which he possibly blamed himself. Whether that real-life tragedy influenced Bambi is unknown, even though the famously autocratic Disney held an almost dictatorial control over all the studio’s early works and was directly involved in the process of adapting Bambi. After all, the mother’s death was already written, and the death of a parent in films starring children — as it is in fairy tales — is a trope in part because of its narrative convenience, however emotionally difficult that convenience may prove. (Even Disney’s own daughter wasn’t thrilled at its inclusion.)
Despite its bleakness, the source novel was well received by critics and is considered both one of the first environmentalist novels and an anti-fascist allegory. As for the film, its most tangible effect, besides influencing similar death sequences in films including The Land Before Time and The Lion King, was on animal rights. The film led Sir Paul McCartney to vegetarianism, at least indirectly, and the origins of the term “Bambi effect” as a stand-in for the prioritization of the safety of “adorable” animals over others are obvious.
Those impacts aside, it’s simply a marvelous sequence. Realistic movement of the deer was a particular goal for the studio; animator Eric Larson referred to its previous animated animals as looking “like big flour sacks.” The film is one of the most striking of Disney’s Golden Age, even without the gothic spires and epic scope of Sleeping Beauty or Fantasia — those deceptively simple forest backgrounds enveloped by snow and punctuated by small streams go a long way. Though the studio is now working to “update” the film by remaking it in CGI, as it has done with so many of its works, the effort will surely be for naught. It won’t ever look as good as this.
Metro-Goldwyn-Mayer Cartoon Studio
Directed by Tex Avery
Animated shorts based on fairy tales were a staple of the medium in the first part of the 20th century. The Walt Disney Company made more than ten short films based on fairy tales during the 1930s alone, and both Disney’s own feature films and those of his competitors followed suit. Yet it’s easy to imagine the audiences of the 1940s getting a bit bored with the same handful of stories animated over and over by different studios.
Enter Tex Avery, animation’s master of screwball comedy, capable of pushing every comedic button in every short to produce maximum laughter. For his 1943 MGM short Red Hot Riding Hood, he changed the script with one of his signature fourth-wall breaks: Instead of a straight adaptation of the story, the characters directly talk to the narrator and ask for a new take. Then the second title card appears and we get an urban, catcalling wolf that pursues Red, now a sexy nightclub performer à la pinup girls Rita Hayworth and Lana Turner. The short is the perfect amalgam of Averyisms, from meta-humor to pop-culture references, to gags with characters pulling objects out of thin air, to incredibly stretchy and contorting bodies. On top of it all, Avery’s signature risqué comedy was practically guaranteed to give the era’s censors panic attacks within a short’s first few seconds.
And while cartoon content would be tamed over the coming decades, Avery’s innovations stuck. His catcalling Wolf, for instance, has received homages and parodies from generations of animators after him. And the sequence in which Red’s grandma, now portrayed as a hip and wealthy woman living in a penthouse, begins chasing the Wolf around her apartment, all the while opening doors that lead outside the building or reveal cement walls, inspired a similar chase sequence in 1988’s Who Framed Roger Rabbit.
Leon Schlesinger Productions
Directed by Bob Clampett
During World War II, nationalism took over multiple animation industries — hell, the first feature-length anime, Momotaro: Sacred Sailors, was about cute animals doing imperialism for the glory of Japan. On the American side, there are a few categories of WWII propaganda cartoons. There are the ones devoted to demonstrating the evils of the enemy (such as Der Fuehrer’s Face). There are instructional films for proper military and/or homefront conduct (the Private Snafu series). There are the ones that are simply about our favorite characters beating the shit out of now-uncomfortable racial caricatures (Commando Duck or Bugs Bunny Nips the Nips).
The Merrie Melodies short “Falling Hare” stands out from this crowd, being non-racist, non-educational, and non-jingoistic, and as such it has endured in reruns over the years without being censored. (To be fair, it probably tripped fewer censorship land mines than the notorious Censored Eleven shorts because it features just two characters at odds with each other.) Mainly, it uses an Army Air Force base as an excuse for airplane shenanigans.
Notably, it also bucks the trend as a Bugs Bunny short in which his adversary consistently gets the better of the wascally wabbit, and the role reversal can be surreal to watch at times. A “gremlin” seeking to sabotage planes puts Bugs through some fantastic physical comedy. In the final sequence, that comedy paradoxically hits new heights as the plane plummets toward the ground. Bugs is made of putty, contorting in mortal agony. The depiction of the falling plane itself is a masterful combination of skillful animation and cost-saving shortcuts on the part of director Bob Clampett; it’s incredible how visceral simply spinning a static shot of the ground is. And it all caps off with one of the greatest punch lines in WB history.
MGM Cartoons
Directed by William Hanna and Joseph Barbera
William Hanna and Joseph Barbera were, like their most famous creation, a match made in animation heaven. In Leonard Maltin’s book Of Mice and Magic, the film historian described how the two complemented each other: “Barbera’s forte was gag comedy. Hanna aspired to be a director and possessed a keen sense of timing.” Hanna was the musical one (and would later write The Flintstones theme), and Barbera’s talent was drawing “like hell” (his words). In their Tom and Jerry shorts, those complementary strengths always rose to the occasion, and the collaborators worked almost exclusively on the two natural enemies for 15 years after their debut short, Puss Gets the Boot.
The Cat Concerto is one of the duo’s zeniths of situation, timing, and gag-based comedy. Tom is a concert pianist animated with a pompous affectation (modeled on the short’s music supervisor, Scott Bradley), and Jerry is a rascal inside the piano looking to ruin Tom’s big performance of Hungarian Rhapsody No. 2. What unfolds is a battle of wits built around Bradley’s dynamite explosion of a score. The Tom and Jerry cartoons were created with the highest craft at MGM, racking up numerous Academy Award nominations and wins in the process, and the escalating violence of the premise was lightning in a bottle for the studio.
Tom and Jerry’s influence has stretched across the decades, from its ubiquity on television over the years to absurdly vicious and bloody spoofs like The Simpsons’ Itchy and Scratchy cartoons. Like Looney Tunes, Tom and Jerry shorts solidified a type of animated violence that feels distinctly American.
United Productions of America
Directed by Bobe Cannon (main) and John Hubley (supervising)
In the wake of the Disney animators’ strike of 1941, several staff members left the company. Among them was John Hubley, who believed that the medium of animation was constrained by Disney’s painstaking approach to realism. Hubley joined the emerging United Productions of America studio, where he would go on to create the iconic Mr. Magoo. He, together with other animators, would also break the mold in American animation and prove that animation could have as much variety as the imagination allows.
UPA introduced the concept of “limited animation,” which brought a modernist design to the medium. The studio used single blocks of solid color and a few lines to indicate a location, influenced by the flattened perspectives and bright colors of Picasso, Matisse, and other modern painters. This style would later be associated with Hanna-Barbera, which used the technique to save time and money, but it was UPA that made the choices that changed how animation was perceived. The creativity that was possible using limited animation was evident in Gerald McBoing-Boing, a short about a boy cursed to speak only in sound effects.
Produced by Hubley, directed by Bobe Cannon, and based on a story by Dr. Seuss, this short is the perfect showcase for the UPA style, masterfully using limited animation to deliver a modernist film that captures mood through a limited color palette and seamless editing between scenes. The short’s best transition comes after the titular Gerald gets home after being bullied at school. The color palette shifts to reflect his dejection, and when he runs away from home, he finds himself surrounded by black, a backdrop that evokes the woodcut backgrounds of the illustrator Lynd Ward.
Directed by Noburo Ofuji
First made in 1927 as a silent black-and-white film, Noburo Ofuji’s Kujira, or Whale, kept visual storytelling at its core in its final version while taking on some fascinating changes thanks to the possibilities of color film. Ofuji’s film was the first piece of Asian animation ever shown at the Cannes Film Festival, and it garnered praise from festival attendee Pablo Picasso (yes, really) as well as the poet Jean Cocteau, who was a member of the jury the year that it was screened.
In remaking the film, Ofuji deployed the unique method of using cutouts of transparent, colored cellophane and silhouetted shadow puppets, assembled on a multiplane animation table used to backlight the frames. This resulted in intentionally flat but fantastically layered frames, each swirling layer of water or sky remaining distinct even as the film quickly moves into visual chaos. Instead of storytelling through dialogue or conventional animated character acting — expression through both body language and facial expression — Ofuji’s obfuscation of the characters, which only exist here as shadows, forces the film to convey its meaning through just body language and movement, composer Setsuo Tsukahara’s tense classical score, and sound effects: crashing waves and thunder; the strained groans of a sailing ship under duress; and occasionally the laughter, screams, and incidental chatter of the ship’s inhabitants.
The film follows a ship as it’s attacked by the eponymous whale and, subsequently, one of the survivors as she staves off assaults by her crewmates. It evokes the work of Herman Melville, with its collisions of man’s vices and folly with titanic marine life, as well as the biblical tale of Jonah, as the survivors of the shipwreck are swallowed by the whale. But Ofuji’s own fable is wholly idiosyncratic in its presentation. Its fairly common themes of humankind’s propensity for violence and the conflict between humans and the natural world become extraordinary in the hands of Ofuji — and its creative ambitions, as that Cannes jury confirmed, were a signal to the world that Japan, sooner rather than later, would become an animation superpower to be reckoned with.
National Film Board of Canada
Directed by Norman McLaren
Neighbours is one of the most important works by animator Norman McLaren and the first short to use live-action actors to make a stop-motion film, a technique called pixilation. The story, an antiwar parable that was greatly scrutinized when it came out in 1952 (McLaren said he was inspired by witnessing “the beginnings of Mao’s revolution” in the People’s Republic of China), plays out over just eight minutes and shows two men fighting over a flower.
In Neighbours, it’s plain to see exactly how McLaren influenced the industry, with each frame picked and displayed with care, beginning with the scene’s coyly counterposed newspapers. The pixilation and editing in Neighbours allow for a number of visual gags that wouldn’t have been possible in a more straightforward live-action film, however appealing its two brigands, Jean-Paul Ladouceur and Grant Munro (who is also credited with innovating the pixilation technique), may be — from seeing them float mid-jump to creating fences out of thin air. Not only does Neighbours build tension and offer a unique way of presenting McLaren’s simple story; it forces the viewer to confront the relationship between animation and live-action film. The short went on to win an Academy Award and a Canadian Film Award.
Over the years, McLaren made many more contributions to the medium, mostly in his experiments combining animation with music. He also founded the National Film Board of Canada’s animation department, which cultivated the artistry of several notable independent animators, and taught animation in China and India. Eventually, he retired to a suburb of Montreal with the love of his life, Guy Glover, a man he met in 1937 and who supported him during bouts of depression. They lived there together until McLaren’s death in 1987.
Warner Bros. Cartoons
Directed by Chuck Jones
Duck Amuck is a classic Merrie Melodies short in that, like so many others, it’s about Daffy Duck being driven absolutely bonkers by the situations in which he finds himself. But as a meta-commentary on how Daffy Duck’s entire existence is beholden to those who created him, it’s infused with the sense of mischief that is so very Chuck Jones, who directed it. And, as written by Michael Maltese, it also serves as a lesson in how animation works and why each element of it matters.
“Whoever’s in charge here: Where’s the scenery?” Daffy asks through a ruptured fourth wall after his background has turned into a blank white space. From there, the backdrops keep changing and Daffy keeps trying to adjust. But eventually everything goes haywire: The sound goes out, the frame collapses and nearly crushes Daffy, and even Daffy himself gets erased more than once by the butt end of a pencil that enters the frame, presumably via some God-like figure.
Every person who worked on Duck Amuck matters, this short tells us, because every piece of a story, if altered or absent, transforms the narrative. That said, special shout-outs go to Mel Blanc for his signature, hilarious escalation of Daffy’s exasperation and to legendary composer Carl Stalling for changing up the music with impeccable timing. The big twist is, once again, very Chuck Jones: Turns out it’s Bugs Bunny, ever the stinker, who’s been sitting at the drafting table and messing with Daffy the whole time. A lot of the works on this list are perfect cartoons, but seriously: This is a perfect cartoon.
Warner Bros. Cartoons
Directed by Chuck Jones
More than almost any other short film on this list, this one needs no introduction. Animation legend Chuck Jones at the height of his creative powers? The final appearance of Elmer Fudd in a Jones-directed cartoon? Bugs Bunny in his best drag performance? A pitch-perfect parody of Richard Wagner’s operas and ballets, the Bugs-and-Elmer formula that had grown kind of stale, and even a send-up of Disney’s Fantasia? It’s no wonder this became the first cartoon selected for the National Film Registry.
The short continued in the vein of Jones’s earlier opera parody, Rabbit of Seville, itself a nod to an earlier Woody Woodpecker short loosely adapting the same opera by Gioachino Rossini. What’s Opera, Doc? was an especially labor-intensive cartoon to make, requiring Jones and his animators to fudge the numbers on their time cards to get it done, claiming the additional weeks were instead allocated toward the easier-to-produce Wile E. Coyote and Road Runner shorts. The added time and effort show onscreen. Maurice Noble’s art direction evokes the limited animation style of rival studio UPA to create a world influenced by the horrors of German silent-film expressionism, featuring jagged towers and buildings and sets that simply couldn’t be replicated in live-action even with the highest of budgets. Meanwhile, Dutch angles are used to give the story of Bugs and Elmer’s last stand a scope worthy of Wagner’s grandiose epic.
Then there’s the real star of the show, Bugs’s lapine femme fatale, the pigtailed Brunhilde. For many people, including RuPaul, Bugs Bunny provided a first introduction to drag queens, and the wascally wabbit never did it better than here, riding atop a morbidly obese yet graceful steed as Brunhilde. It is a definitive entry for the character, whose creative life is the subject of a chapter in Jones’s own illustrated autobiography. “Bugs went through a period of wild awkwardness before settling into the self-contained studied attitudes peculiar to him, so that his every movement is Bugs and Bugs only,” Jones wrote. He described What’s Opera, Doc? as one of the final corners turned in that evolutionary process: “Probably our most elaborate and satisfying production.”
Toei Animation
Directed by Taiji Yabushita and Kazuhiko Okabe
World War II lasted 15 years for Japan, a period during which few foreign cartoons were accessible. After the war, game-changers like Snow White and the Seven Dwarfs and Fantasia flooded Japanese cinemas, overshadowing the domestic industry’s grayscale shorts. Until Toei released Japan’s first full-color animated feature, proving Japan could play the game too.
The head of Toei, the former accountant Hiroshi Okawa, bought respected animation studio Nichido Eiga and rebranded it Toei Animation. He intended to replicate Disney’s business model (and financial success) by churning out features for export beyond both the language barrier and anti-Japanese sentiment. But Japan didn’t have enough animators for that yet, so Toei had to cultivate them to make Panda and the Magic Serpent, which required a staff of more than 13,500 to complete (including a young in-betweener who would later become famous as anime director Rintaro).
The effect was monumental. “I first fell in love with animation when I saw Panda and the Magic Serpent,” said Studio Ghibli’s Hayao Miyazaki, who worked at Toei early in his career with Ghibli co-founder Isao Takahata. “I can still remember the pangs of emotion I felt at the sight of the incredibly beautiful young, female character, Bai-Niang, and how I went to see the film over and over as a result.” Her transformations between her human and serpent form are the emotional cruxes of the film, and Miyazaki’s reaction was far from an isolated experience. Toei had made the biggest anime landmark since Japan’s first feature-length animation 13 years earlier, wartime propaganda Momotaro: Sacred Sailors (1945). Recruiting aspiring animators was easy.
Retention was harder. The so-called Toei University turned young hopefuls into accomplished professionals, creating a massive pool of talent dissatisfied with their wages. Many left, including Miyazaki and Takahata, who became Toei’s competitors. Toei built an animation workforce ripe for poaching, just in time for the anime TV boom of the 1960s.
Walt Disney Productions
Directed by Clyde Geronimi (supervising)
“We took the approach that we were going to kill that damned prince,” Wolfgang Reitherman once said of the dynamic sequence he directed in Sleeping Beauty. Overseen by Reitherman in a film co-directed by several of his fellow Nine Old Men, the fight between Prince Phillip and Maleficent in her dragon form feels like a moment Disney had been building to throughout its history as a studio. This sequence requisitions the best elements of Snow White’s retreat into the forest after she learns of the wicked queen’s plans to kill her and the arrival of the demon in Fantasia’s “Night on Bald Mountain” sequence, combining them into one climactic expression of storytelling bliss. Like many of the studio’s earliest efforts, Sleeping Beauty’s animation techniques owe a lot to the expressionistic quality of silent film. Phillip’s galloping horse and Maleficent’s transformation are clear images of good and evil that are immediately understood and brought to a thrilling conclusion of explosive greens, twisting brambles, and the deadly fires of Hell.
But Sleeping Beauty’s transcendence is not confined to just one scene, and Walt Disney’s decision to film with Super Technirama 70-mm. — the first wide release to utilize the prestige format — only emphasized the film’s glittering assemblage of artistic achievements. There is the expansive, richly painted background work of artist Eyvind Earle. There is also the beautiful color work depicting the three fairy godmothers and their planning of Aurora’s birthday, which Floyd Norman, Disney’s first Black animator, helped create. And there is the arrival of Maleficent, overseen by Marc Davis, whose villainy debuts with a clap of thunder and green fire, dominating the tone and texture of the film’s imagery. That’s not to mention the other remarkable animators who worked on the production in varying capacities, including Chuck Jones and Don Bluth.
Despite all this spectacle, Sleeping Beauty did not find its audience upon release, and it would signal the end of an era for Disney. In the future, the studio would adopt animation shortcuts and digital techniques, and Sleeping Beauty would stand as the last film the studio allowed to be built entirely by the hands of its creators.
Directed by Jiří Trnka
Though not well known in the U.S., Czech animation has a long, rich history. Two of its animators in particular contributed enormously to the medium: Karel Zeman and Jiří Trnka. Zeman was often referred to as the “Czech Méliès” for his use of special effects and animation, and his short Inspirace bears the distinction of being the first film animated using blown-glass figurines, reportedly because of a bet Zeman accepted. His work influenced directors as wide-ranging as Terry Gilliam and Wes Anderson, to say nothing of his contemporary Trnka.
Trnka, a master of puppetry and stop-motion, was called “the Walt Disney of Eastern Europe” in his day owing to his influence on the medium and his fantastical adaptations of literary works. His masterpiece, A Midsummer Night’s Dream, uses framing and posing alone in order to convey the emotions of Shakespeare’s characters, as Trnka refused to alter the hand-carved puppets in any way. The film is set to the balletic music of frequent collaborator Václav Trojan in order to highlight the fluidity and rhythm of the action.
A Midsummer Night’s Dream is a highly unusual film in technical terms. Trnka, who found letterbox presentation of films to be abhorrent, chose to shoot every single frame with two cameras, one in Academy ratio and the other in CinemaScope, essentially recording an in-camera pan-and-scan of the film simultaneously with the theatrical version. The result is a film that feels epic in ways few other animated films of the time did — and many took notice. Stephen Bosustow, one of the founders of United Productions of America, said Trnka was a great influence on UPA’s visual approach, praising the director as “the first rebel against Disney’s omnipotence.”
Jay Ward Productions, Gamma Productions, Producers Associates of Television, Inc.
Produced by Jay Ward and Bill Scott
Running as interstitials in The Adventures of Rocky and Bullwinkle, Fractured Fairy Tales was a pun-laden series of animated shorts by creator Jay Ward that retold classic fairy tales (always narrated by Edward Everett Horton with Daws Butler, June Foray, Bill Scott, and Paul Frees supplying voices) with a toonish, sardonic flair. In particular, we’re highlighting the series’ first short, a brilliant take on the story of “Rapunzel,” which sports significantly lowered stakes (we’re pretty sure the wife in the story isn’t going to actually die if she doesn’t get her rampion), dressed-down dialogue (“Rampion, shmampion, it still looks like weeds to me”), and a spunky Rapunzel who is sick of her hair-related headaches.
Influenced by Dragnet spoof “St. George and the Dragonet,” by Butler and Stan Freberg, Fractured Fairy Tales started with twists on real fairy tales not only by the Brothers Grimm but by Hans Christian Andersen as well; after a while, the creative brains behind it started composing fairy tales of their own. Fractured Fairy Tales “were so distinctive an element of Rocky and His Friends,” wrote animation historian Keith Scott in The Moose That Roared, his book about Rocky and Bullwinkle, “that they remain the strongest memory of the series for many viewers.”
The humor holds up in excellent fashion six decades later — unsurprising, given that Fractured Fairy Tales was one of Ward’s favorites of the show. But beyond its timeless binge-worthiness, Fractured Fairy Tales has also cemented its place in animation history for defying industry norms and influencing generations of subsequent creators.
Compared to the Hanna-Barbera re-creations of the family-sitcom format like The Flintstones and The Jetsons, the Rocky and Bullwinkle humor in general and Fractured Fairy Tales in particular felt jaggedly satirical and occasionally dark. Fractured Fairy Tales paved the way for a film like Disney’s Tangled and especially the Shrek franchise. We can ultimately thank Ward for everyone’s favorite Scottish ogre as well as for a host of later fairy-tale twists, such as Jon Scieszka’s award-winning postmodern children’s book, The Stinky Cheese Man and Other Fairly Stupid Tales.
Shanghai Animation Film Studio
Directed by Te Wei
Te Wei is unlike anyone else featured on this list. The cartoonist and animator did not come to the medium out of passion for the art or a desire to further innovate it; rather, Wei entered the world of animation because a government official ordered him to.
A year after being hired by China’s Ministry of Culture to run the animation division of Changchun Film Studio, he, along with a number of artists, moved to Shanghai to form the Shanghai Animation Film Studio, where together they would pioneer three new animation techniques: paper cutting, paper folding, and Wei’s specialty, ink-wash animation.
Wei and his staff would develop the technique after being challenged by Chen Yi, a high-ranking government official, to create a short that resembled the watercolor paintings of Qi Baishi, who had just passed away. Astoundingly, they met the challenge on their initial attempt: the seemingly simple Where Is Mama both dazzled and baffled animators around the world, as no one could pin down exactly how Wei and his team at SAFS crafted the beautiful short, whose influence is still being seen in China today — look no further than the opening of 2018’s White Snake.
Chinese animation has such a rich history but has had to overcome many hurdles thanks to government interference or indifference. There has not been a true ink-wash animated film since Wei’s final film, 1988’s Feeling From Mountain and Water, and with older animators not passing their techniques on to younger generations, owing to a lack of financial support from the government and the loss of the most talented animators to American and Japanese studios, there are real fears that the technique Wei helped pioneer will soon fade into history.
Hanna-Barbera Productions
Directed by William Hanna and Joseph Barbera
Though cartoons were considered children’s entertainment in the ’50s, William Hanna and Joseph Barbera’s The Huckleberry Hound Show, featuring characters like the titular pooch and Yogi Bear, became a surprise hit with adult audiences, who would even go to bars to watch the show. This surprise success inspired the duo, who had already produced Academy Award–winning Tom and Jerry shorts for MGM, to create a groundbreaking adult-oriented cartoon series for prime-time TV.
The Flintstones was not an instant hit, at least not with critics, but the show quickly grew an audience as it married the tropes and humor of beloved live-action sitcoms like The Honeymooners (which Hanna considered the funniest show at the time) with the kind of visual gags you could only achieve with animation. Hanna-Barbera even hired two of The Honeymooners’ writers, Herbert Finn and Sydney Zelinka, to bring the adult humor that was cracking up audiences in the live-action format to the modern Stone Age world of The Flintstones. The cartoon was the first to include laugh tracks and focus on family issues that got resolved with laughter by the end of each episode, and it would create the template for animated sitcoms that The Simpsons ran with decades later to become an animation juggernaut.
The Flintstones, like most of Hanna-Barbera’s productions, made use of looping “limited animation.” The animators kept characters’ hands at their sides. They looped animation of Fred’s feet as he served as the motor of his own car. Characters passed across the same backgrounds over and over again. Limited animation was pioneered by the UPA studio as a stylistic alternative to the more detailed realism of Disney and Warner Bros., but it was Hanna-Barbera that saw the technique’s potential to save serious time and money. The Simpsons memorably mocked this in later years, but in the ’60s and ’70s, this helped Hanna-Barbera become so efficient at churning out shows that 60 Minutes once referred to the studio as “the General Motors of animation.”
Mushi Production
Directed by Osamu Tezuka
Despite using fewer than 20 images, this sequence from the show’s first episode, “The Birth of Astro Boy,” covers more than 200 frames. It showcases the limited animation associated with Astro Boy’s creator, manga artist/anime boss/cultural giant Osamu Tezuka. Techniques like partial animation, abstract backgrounds, animation loops, and camera movement on still images all convey motion with as little animation as possible.
But while they developed into stylistic conventions now part of anime’s visual language, these techniques weren’t Tezuka’s. Or new. Or unique to Japan. At the time, Astro Boy didn’t look too out of place next to Hanna-Barbera productions like The Flintstones. Tezuka didn’t pioneer limited animation so much as the commercial conditions that forced TV animators in Japan to rely on it.
Tezuka sold Astro Boy’s pilot in the anime TV industry’s formative years. Unfamiliar with the costs involved, buyers made low offers based on known quantities such as animation imports. By accepting an amount he knew fell far short, Tezuka set a harmful precedent that became industry standard. The ripple effect of that decision continues today; where Tezuka switched suppliers to save five yen per cel, animators now receive starvation wages to work in crunch conditions without benefits. Astro Boy made today’s anime industry possible. That’s a complicated legacy.
Astro Boy was also the first anime series to be broadcast on U.S. TV, imported by Fred Ladd, whose work carries its own complicated legacy. It set the precedent for treating imported anime as raw materials. Renamed characters, liberal translation, heavy-handed editing, bowdlerization, and filled silences (as in this sequence, entirely without words in Japanese) were hallmarks of anime localization until the aughts.
Morningside Productions
Directed by Don Chaffey; Visual effects by Ray Harryhausen
Ray Harryhausen didn’t invent the use of stop-motion as a means of creating big-screen special effects. But his contributions to the field were instrumental, and his influence remains immense, as the artist responsible for everything from the proto-kaiju that attacks New York in The Beast From 20,000 Fathoms to Bubo the mechanical owl in Clash of the Titans. Harryhausen did no less than define a whole generation of moviegoing wonder, bringing to screen flying saucers and fighting centaurs and dinosaurs, all of it decades before the advent of computer-generated imagery.
His genius wasn’t just seen in his skill when it came to designing and manipulating miniatures. He also developed a technique, dubbed “Dynamation,” that combined live-action footage with stop-motion photography, using split screen and rear projection. The result was that his animated creations appeared to exist in the real world — and never more famously than in the climactic skeleton battle in 1963’s Jason and the Argonauts, his masterpiece, in which the eponymous Greek hero (Todd Armstrong) faces down seven bony foes who emerge from the Earth at the behest of King Aeëtes (Jack Gwillim).
The intricate sequence, which took four months to film, looks so good because Harryhausen was able to sync the actions of his models up with the actions of the actors, making for an entirely convincing sword fight. First he shot the actors, who had rehearsed with stunt doubles, performing their half of the fight; then he layered his skeleton warriors, each of which had five appendages, into the footage to complete the other half. “You have to make 35 moves when you have seven skeletons on the screen for one frame of film,” he later recalled of the workload. In the end, the fight was a landmark special effect. Those stop-motion skeletons shared the screen with the stars, but more importantly, they interacted with them.
Walt Disney Productions
Directed by Robert Stevenson
The hybridization of animation and live-action photography had existed long before Mary Poppins. Walt Disney himself had made a series of shorts using the technology before he ever invented Mickey Mouse or Oswald the Lucky Rabbit. In those early “Alice comedies,” Disney reversed the gimmick of the popular Fleischer brothers’ Out of the Inkwell shorts, putting a live-action girl in a completely animated world. And he would return to the idea decades later in Mary Poppins, during a sequence in which Bert (Dick Van Dyke) and Mary (Julie Andrews) and the children escape into a chalk drawing.
While in this animated wonderland, they dance, they sing catchy tunes by the Sherman Brothers, and they interact with a small platoon of penguin waiters, charmingly animated by Disney legends Frank Thomas and Ollie Johnston. It feels like something of a throwback today, but at the time it was incredibly cutting edge, mostly thanks to the interactivity between the humans and animated characters and the way the sequence was put together in an era long before blue-screen technology or the kind of compositing popularized by Star Wars.
This feat was accomplished using a combination of sodium vapor lights and a specially designed Technicolor camera responsible for only two strips of film, with a special prism that would capture the sodium vapor light on one strip and everything else on the other. The result was a perfect matte line that allowed for the background and characters to be fully animated and things like the piece of material holding Mary’s hat to her head to be fully transparent. Incredibly, Technicolor was never able to replicate the prism again, leading to decades of cumbersome, overly intricate solutions to the same problem. And looking at the sequence now, it really is as stunning as it ever was, thanks largely to the expressiveness of the animation (at a time when the attention of the head of the world’s most renowned animation studio was drifting away from the medium) and just how good the compositing is. You can tell that Disney wanted to push things forward, and push things he did.
Legend has it that notoriously contentious Mary Poppins author P.L. Travers hated the animated sequence and after the world premiere even urged Disney to cut it. (His reply? “That ship has sailed.”) Yet it remains one of the very best, most lively moments in the movie, let alone a benchmark of live-action/animation combination. In fact, the very same penguin waiters appear at the Ink & Paint Club in Who Framed Roger Rabbit, and a similarly intricate animated sequence was conceived for the long-overdue follow-up Mary Poppins Returns. (Somewhat tellingly, the animation on the sequel wasn’t handled by Disney.) Travers might not have been a fan, but the sequence stands as a favorite of animation fans the world over.
Videocraft International, Ltd. (Rankin/Bass Productions)
Directed by Larry Roemer and Kizo Nagashima (associate)
It’s janky. It’s junky. But it’s also jingle-jangly: Rankin-Bass’s 1964 holiday television special Rudolph the Red-Nosed Reindeer established the mid-century template for American Christmas tradition in all its glorious kitsch. Something about the stop-motion makes it fascinating, year after year, to the very young: the characters are hypertactile, all hair and fur, the story simple but elemental. Where A Charlie Brown Christmas appeals to the bourgeoisie, with its anti-consumerist screed and middle-brow jazz, and How the Grinch Stole Christmas’s Seussian wit positions it as the most effectively classic and timeless, Rudolph is chintzy and childlike and — between Hermie and the Misfit Toys and the disapproving jock dad — remarkably, charmingly queer. Its Abominable Snowman is like Baby’s First Harryhausen, made all the scarier by its lo-fi movements and the compensating hyper-close-ups.
Rudolph the Red-Nosed Reindeer endures for all of these elements and a bang-up folksy Burl Ives soundtrack, and its production history represents a model that American TV animation continues to employ to this day: After storyboarding in New York, the actual “Animagic” animation was outsourced to Tadahito Mochinaga’s team in Japan. The interchange between American and Asian animation outfits, including the problem of who exactly does what labor, would endure — as would that adorable, nasal-voiced reindeer.
Directed by Marv Newland
When he was working on his student project at the Art Center College of Design in Los Angeles in the late ’60s, Marv Newland had no idea that he’d create one of the funniest and most famous animated shorts of all time. When his first project — a live-action film — turned out to be too ambitious for the allotted time, Newland abandoned it and changed course, spending two weeks and less than $300 on Bambi Meets Godzilla.
The minute-and-a-half-long film plays the picturesque, rural “Call to the Dairy Cows” from the 1829 opera William Tell as Bambi grazes in the pasture — that is, until the last haunting note from the Beatles’ “A Day in the Life” (1967) reverberates as (spoiler!) Godzilla’s scaled foot comes crashing down on our protagonist. Decades later, Newland joked that Bambi Meets Godzilla is the “film that ruined my career,” though he went on to work on Gary Larson’s Tales From the Far Side TV special.
Jokes aside, few students can say their school project played in theaters across the U.S. (in this case, before screenings of Philippe de Broca’s King of Hearts). The short’s magic is all in the timing. Of the total 90-second runtime, the film spends the first 48 listing the opening credits and the last 27 on the closing credits, leaving just about 12 seconds in the middle for the “action,” which is just Godzilla’s unmoving, monstrous foot, ensuring the life is truly squashed out of poor, poor Bambi. But the film lives on, getting a makeover in a frame-for-frame HD re-creation in 2013.
Filmation, Hanna-Barbera Productions
Directed by Hal Sutherland, William Hanna, and Joseph Barbera
These two iconic sequences go hand-in-hand, so we’re combining them as one — cheating, perhaps, but it’s simply irresponsible to discuss the chart-topping megahit without highlighting the heavy hitter it hatched, both spinning out of the pages of Archie comics in the late ’60s.
Written by musicians Jeff Barry and Andy Kim and originally recorded by The Archie Show’s fictional bubblegum pop band the Archies (with Ron Dante, Andy Kim, and Toni Wine on vocals), “Sugar, Sugar” skyrocketed to No. 1 on the U.S. Billboard “Hot 100” chart, where it stayed for four weeks. According to Dante, a promoter in San Francisco “took off the label before giving it to the top radio station there. He said, ‘Just play it! It’s a mystery group.’ The guy played it, and the phones lit up.”
The catchy tune (and the originality of the animated music-video concept) inspired a wave of Saturday-morning cartoons to follow suit by incorporating bands and music, like that time Scooby Doo met the Monkees. And while it was the first song by an animated band to reach the charts, it certainly wasn’t the last.
But if the Archies started the trend, Josie and the Pussycats perfected it with their mega-groovy intro sequence. Featuring an all-women rock band including Valerie Brown, the first Black woman main character in a cartoon, Josie and the Pussycats — which debuted half an hour before the Harlem Globetrotters TV series, the first animated show with a majority-Black cast — set the scene for later girl-power animated classics like Jem and the Holograms.
TMS Entertainment
Directed by Osamu Dezaki
Osamu Dezaki was a lion among animators, renowned for his work on such anime as Astro Boy, Dororo, Lupin the Third, and Space Adventure Cobra, and his signature techniques have since become inseparable from the visual language of Japanese animation. His most enduring contribution to the medium comes in the form of his “postcard memories” technique, a stylized form of denouement shot that has been all but unanimously adopted by countless anime directors since the 1970s.
Characterized by a freeze frame resembling a faded pastel-chalk portrait painted on a postcard, hence the name, the “postcard memories” technique is a form of limited animation that’s been used to emphasize humor, drama, romance, action, or melancholy. This last quality is on full display in the closing shot of the 1970 boxing sports anime Ashita no Joe, Dezaki’s directorial debut, where the protagonist Joe Yabuki, following his defeat at the hands of his rival José Mendoza, slumps over in his corner of the boxing ring, deathly still, a faint smile eerily painted across his face. The “postcard memories” technique has since transcended its creator to become one of the most ubiquitous visual tropes of Japanese animation, seen everywhere from Dragon Ball Z to Cowboy Bebop to Kill la Kill and beyond.
Fritz Productions, Aurica Finance Company, Krantz Films
Directed by Ralph Bakshi
Furries themselves often point to Disney’s 1973 version of Robin Hood as a shared, foundational text in furry culture. Arguing against a subculture’s own idea of its history might be anathema, but here, they are wrong to evade the key touchstone of horny anthropomorphic cinema: 1972’s Fritz the Cat. A few short years after the Western auteurist revolution of films like Easy Rider and Midnight Cowboy came a cast of characters that could have been mistaken for Jay Ward kiddie cartoons — until they opened their potty mouths or shed their hippy togs to reveal full feline tits and ass.
Based on Robert Crumb’s underground comix character, Ralph Bakshi’s Fritz the Cat was the first animated film to score an X rating, and it pulsates with a sophomoric “Can you believe we’re getting away with this?” attitude that presages the naughtiness and cynicism of South Park decades later. In what is perhaps the most memorable of many memorable scenes, you see everything that earned Fritz the Cat its reputation: campus feminists being lured into a dirty bathtub orgy, plenty of drug use, and bumbling cops who in this universe are, of course, pigs. Nearly 50 years later, it still evokes the fresh, rebellious excitement of a kid doodling a wang on a bathroom stall for the first time, giddily sordid.
Filmation
Created by Bill Cosby; directed by Hal Sutherland
Much as we’d prefer to leave Bill Cosby out of this — and we really, really would — we can’t. That’s because Fat Albert and the Cosby Kids, which Cosby co-created and starred in, marks an important milestone as the first animated TV series to focus on original Black characters. (The Jackson 5ive and The Harlem Globetrotters, which preceded it, were based on existing people.) The initial special based on Cosby’s stand-up comedy, Hey, Hey, Hey, It’s Fat Albert, featured the designs of Leo Sullivan and the work of six other animators and aired on NBC in 1969, while the long-running series that debuted on CBS in 1972 was produced by Filmation — known for its use of limited animation and adaptations of Archie comics, Star Trek, and other properties.
Like so much of children’s programming during the early ’70s, Fat Albert and the Cosby Kids was built on educational underpinnings. In every episode, Fat Albert, Mushmouth, Rudy, and the rest of the Junkyard Gang learned some kind of lesson. That mission, and the show’s spirit, are best captured in the show’s theme song, which starts with a distinctive bass groove that quickly turns into a “na-na-na, gonna have a good time” party. In the song, Fat Albert declares that he and his friends will be “learning from each other while we do our thing,” while the animation introduces its all Black cast of distinctive personalities. The nicknames of these characters weren’t always positive — “Dumb” Donald, not the best! — but seeing all these Black kids on TV, depicted in a positive light, was significant. The fact that Fat Albert, the overweight center of the series, was the hero and that each of his friends had their own challenges to overcome only adds to the show’s status as a true example of better representation in animation.
Scholastic Rock, Inc.
Produced by George Newall and Thomas Yohe
Schoolhouse Rock! was born on the first Saturday in 1973 with this short, which established the sensibility of one of the most significant works of educational animation in modern history. Co-created by a team from an ad agency, including Thomas Yohe, who provided the drawings that became the basis for the animation, “Three Is a Magic Number” was written and performed by jazz musician Bob Dorough, who would go on to contribute to 32 more memorable Schoolhouse Rock! efforts.
“Three Is a Magic Number” was a catchy song, effective at cementing multiplication tables in children’s heads, and moving in its evocation of holy trinities, triangles, and single-child families. The animation was just as elegant in its simplicity. Combined, the music and those visuals created a distinctive aesthetic that would be expanded upon in “Conjunction Junction,” “I’m Just a Bill,” and many other shorts, which were shown regularly between commercials during ABC Saturday-morning cartoons. “Three Is a Magic Number” and its PSA-esque descendants like Muzzy or Téléfrançais! taught kids math, grammar, history, and science while also serving as an antidote to the increasing barrage of commercials being pitched directly to wide-eyed, sugared-cereal-hungry audiences.
Many generations have been exposed to Schoolhouse Rock! since its debut, thanks to the shorts themselves, viewable on Disney+, as well as the many homages and parodies that wound their way through pop culture. But Gen-Xers were basically homeschooled on “Three Is a Magic Number” and the shorts that followed, to the point where it seems fair to argue that the children of the 1970s became MTV’s earliest music-video-obsessive adopters, in part because Schoolhouse Rock! trained them for the moment.
Jiří Trnka Studio
Directed by René Laloux
There’s nothing else out there quite like Fantastic Planet, that 1973 science-fiction freakout from French filmmakers René Laloux and Roland Topor and the Prague-based Jiří Trnka Studio. To see images from it is to have them forever seared on the brain. Who could forget the giant blue-skinned Draag, with their lidless red eyes and a tendency to keep humans (called Oms) as pets, sometimes indulging the much-smaller species and sometimes subjecting its members to random acts of capricious cruelty? Czech artist Trnka, who died in 1969, was best known for his reliance on puppets and paper in animation, and Laloux had a background in puppetry as well; the result of the latter’s five-year cross-European collaboration with the studio was a film that used paper cutouts and dreamlike backdrops to unique and unsettling ends.
Fantastic Planet is an all-purpose allegory about oppression that at varying times has been read as having a message about slavery, about animal rights, and about the 1968 Warsaw Pact invasion of Czechoslovakia. The truth is that it’s malleable enough to be repurposed for any conflict, as the oppressed Oms learn to use Draag knowledge and technology against their captors. The po-faced story is lightened up considerably by the heavy streak of psychedelia in the imagery, something that’s made the film a treasured party backdrop, especially in the scene in which four adult Draags are shown meditating. As their bodies shift kaleidoscopically into strange, organic shapes while they travel with their minds, it’s clear that what you’re watching is sci-fi, sure, but with an unmissable whiff of substances to it.
Mushi Production
Directed by Eiichi Yamamoto
Belladonna of Sadness was singular at the time of its release in 1973, addressing historical misogyny and the ways that it can compromise or, in some cases, annihilate the bodies of women. Rendered through beautiful watercolor paintings, much of Belladonna is brought to life in close-ups of its protagonist, Jeanne, whose stricken face often reflects the complicated emotions of Lillian Gish’s work as D.W. Griffith’s tragic martyr. This story of extreme misogyny and violence does not imply that Jeanne is a universal figure of womanhood, but it does present her anguish in a journey reminiscent of Eve’s in the Bible. Jeanne has more in common with the tempting snake than the average woman, but the fatalism of her story, and the way it is rendered through oozing, wretched, red-and-black paint and a gaze of furious intent, contains within it an elemental rage toward those who attempt to fracture the psyche of women everywhere.
Belladonna climaxes, in a way, in its prolonged orgy sequence, which plays in direct opposition to earlier scenes of rape. Jeanne draws the villagers, who believe in God, into a world of animal lust and primal instinct, illuminating a central hypocrisy in those who lift up a higher power only to crush those who are deemed filthy or different. Yamamoto’s Jeanne is a seductive figure, and the way she was painted had a lasting effect on how anime heroines were conceived going forward. It’s easy to trace the DNA of this film to the later works of Mamoru Oshii (Ghost in the Shell) or Satoshi Kon (Perfect Blue). This is also the rare animated feature that has found soul mates of form in the likes of such horror films as Jonathan Glazer’s Under the Skin and Rob Zombie’s The Lords of Salem — works also interested in the ways stories of martyred women are woven throughout time.
Academy Productions, Group TAC
Directed by Leiji Matsumoto
Three years before the sci-fi boom led by Star Wars, Space Battleship Yamato sent a salvaged warship through space to save Earth from alien attack. Like Star Trek’s USS Enterprise, the Yamato was named for a real ship. This flashback from the episode “The Opening Gun! Space Battleship Yamato Starts!” animates its very real demise in 1945: bombed and burning, sinking with 3,000 crew members while Japanese soldiers pay their respects. A voice-over says the Yamato’s origin as a warship, born to fight, is a tragedy.
So the director, the prominent manga artist Leiji Matsumoto, was horrified to learn this sequence had aired with a military march. He fought to change the music, insisting, “Young people will not go along with this!” and “If the broadcast station hears this, the program is over.” War was still a delicate subject in Japan, where anti-military protests had filled the previous decade. Using real wartime iconography in a sci-fi setting to tell a very human story required a careful balance — easily tipped by a militaristic soundtrack. He won the fight, changed the cue, and Yamato went on to become one of the most influential anime of all time, both in Japan and in the U.S., where it was localized as Star Blazers.
This sequence didn’t make it into the U.S. adaptation in any form, but neither did more overtly antiwar sequences. At one point, the protagonist weeps for his enemies while surrounded by dead allies, wondering if violence was necessary. Star Blazers created new audio to keep the dead alive and the hero firm in his beliefs. The U.S. — subject of Japan’s anti-military protests — had its own delicate balance to maintain.
Nippon Animation
Directed by Isao Takahata
Space Battleship Yamato signaled the beginnings of fandom as we know it, with teenagers turning up at the studio to show their enthusiasm. But the girls would sometimes admit they preferred Yamato’s rival: Heidi, Girl of the Alps. At the time, most TV anime were about sports or sci-fi, starring boys or beautiful women. Heidi, scheduled opposite Yamato and achieving identical ratings, highlighted the business case for TV anime targeting girls.
It also made the case for prestige TV animation. The penny-pinching conditions Osamu Tezuka accepted with Astro Boy’s undervaluation in 1962 had become the industry norm, but Heidi’s director was Isao Takahata — previously demoted at Toei after ignoring deadlines and budget in pursuit of perfection on his debut feature, The Little Norse Prince (1968).
Heidi’s animators visited the Swiss Alps, shot reference footage, and used up to 8,000 cels per episode (Astro Boy’s average was 2,500, many reused). Impressive anime openings don’t typically represent a show’s animation, but this one does. Heidi’s quality, popularity, and exportability appealed to sponsors, who funded a string of “masterpiece anime” series based on children’s books. These were dubbed and aired around the world, popularizing this anime style.
Hayao Miyazaki (who danced around a car park with a colleague as a reference for Heidi and Peter’s dance in this sequence) described working on Heidi as “a year-long state of emergency,” which he realized was “the danger of television”: maintain that unsustainable state of emergency or sacrifice production quality. He chose to make movies instead.
Soyuzmultfilm
Directed by Yuri Norstein
A little hedgehog is on his way to meet his friend the bear when he spots a white horse in the evening fog and decides to investigate. The horse disappears, and the hedgehog encounters all manner of frights in the fog before eventually finding his way to the bear. Even when the danger has passed, he cannot shake the image of the horse in the fog from his mind. Neither will anyone who watches this astonishing film.
Animation is breathing the illusion of life into two-dimensional objects, and few directors have made this magic as wondrously as Russia’s Yuri Norstein. Despite working with paper cutouts — a form that has more in common with stop-motion than traditional 2-D animation — he brings incredible dimensionality to his films through a variety of tricks, such as his own unique version of the multiplane camera. With Hedgehog in the Fog, he stumped his colleagues around the world with extraordinary environmental effects. How do the animals actually fade into and out of the fog? How did he replicate the fuzziness of fog so effectively with glass and celluloid? (The answer is that he painstakingly manipulated an extremely thin layer of paper between the camera and the planes of the scenes.) The film is at turns beautiful and scary in its evocation of a child’s imagination and the first encounter with the all-consuming strangeness of the world.
Directed by Caroline Leaf
The best-known practitioner of paint-on-glass animation is probably Russia’s Aleksandr Petrov, who’s gotten four Academy Award nominations for his shorts, winning for his 1999 The Old Man and the Sea, a gorgeous adaptation of the famous Ernest Hemingway novel. But the artist most often credited with inventing the technique is pioneering Canadian filmmaker Caroline Leaf, who first used it to make her wondrous 1976 short The Street, based on the story by Mordecai Richler. Leaf has made use of various innovative approaches to animation throughout her career, creating images with sand or by scratching directly on the emulsion of the film itself.
For her paint-on-glass work, she used pigments with retardants mixed in so they wouldn’t dry. After drawing on a white glass background and photographing the result, she’d wipe away the old image with a cloth and redraw the next frame. The result, in The Street (which was also up for an Oscar), is a handcrafted look that conveys the subjectivity and, in the fluidity of how the figures and scenes shift from one moment to the next, the haziness of the recollected past. It’s a style perfectly suited for a story shot through with love and loss — a Montreal man’s memory of a summer when he was a boy and his grandmother was on her deathbed, the whole family keeping vigil nearby while he thinks mostly of the fact that when the woman passes, he’ll finally have his own room.
Bruno Bozzetto Film
Directed by Bruno Bozzetto
The 1976 animated musical Allegro Non Troppo cannot be described as anything less than an emphatic, full-throated “F-U” to Disney’s Fantasia. The magnum opus of Italian animator Bruno Bozzetto, the film bears a title that roughly translates to “Not So Fast,” a plea to criticize not only the optimism of Disney’s aforementioned musical but the Western notion of progress itself. Set to the classical rhythms of Debussy, Dvořák, Sibelius, Vivaldi, and Stravinsky, Bozzetto’s film flips the self-importance of Disney’s orchestral concept into a raucous comedy of irreverence and unbridled self-expression.
The film’s most famous sequence, set to Maurice Ravel’s “Boléro,” depicts a sentient dollop of black protoplasmic ooze writhing from the mouth of a discarded soda bottle before slinking across a barren expanse. Big things have small beginnings, and from the folds of this tiny roiling pustule spawns an entire planetary ecosystem of mammoth monstrosities with squinting eyes and gnashing teeth. A parodic counterpoint to Fantasia’s “Rite of Spring” sequence, Allegro Non Troppo’s “Boléro” imagines prehistory not as a titanic clash of competing forces but as a Boschian acid trip of horrors during which life itself strains to survive. Allegro Non Troppo meets and arguably even surpasses Disney’s Fantasia in terms of their respective ambitions, and the film’s “Boléro” sequence is evidence of that fact.
Hanna-Barbera
Directed by Charles A. Nichols
In the mid-1970s, Hanna-Barbera was unstoppable. Midway through what would be a 30-year reign as one of the most prolific animation studios in television history, the studio made hay by embracing limited animation, a low-budget technique that required animators to animate only what they absolutely had to, and by recycling footage whenever possible. The studio’s success was a victory of quantity over quality, one made easier by the massive portfolio of licensed characters available to it. DC’s Justice League was enshrined by the studio on television for more than a decade as the Super Friends.
The longevity of Super Friends — like many Hanna-Barbera properties, the series would regularly be retooled and renamed — means that wider trends in the television landscape of the time can be seen in its segments. The introduction of Black Vulcan, the first Black superhero on television, is one of them.
A landmark for onscreen diversity is reduced to rote tokenization — Black Vulcan was created for the show when a rights dispute precluded the inclusion of Black Lightning, DC’s first Black superhero. Black Vulcan was one of a rotating cast of ethnic superheroes used interchangeably to team up with one of the “main” Justice League members (in “The Whirlpool,” his first appearance, it’s Aquaman) to try and maximize the value of the superhero IP in animation, which for decades meant selling toys and appealing to audiences in the shallowest way possible.
Directed by Suzan Pitt
Suzan Pitt’s Asparagus was attached as an opening short film to David Lynch’s Eraserhead as the latter was growing into a cult phenomenon on the midnight-movie circuit in the ’70s. Both films glide across an abstract reality of moving images that could only be wrought by the bare hands of their creators. Pitt’s animation used a combination of cut-outs, stop motion, and traditional hand-drawn and painted animation cels. She spent her entire career experimenting with form while finding inspiration through the natural world, and Asparagus is overwhelmed with florid images of vegetation that resemble genitalia, a not-so-subtle metaphor for life and its possibilities of creation.
Pitt animates her film with a gliding, dreamy quality of shape-shifting and effervescent movement. She refuses to cut hard from one image to the next, instead opting for something more fluid with a seductive, liquid effect of disguised image wipes, which give the short a sinking, hallucinatory aura. In Asparagus, when doors and windows open, within those images there are only more images to slip into even further, as if Pitt envisioned her 20-minute short as Alice falling down the rabbit hole if the falling never stopped. The sloping, curving images of Pitt’s animation also feel deliberately feminine in construction and are only amplified by the sensuality of hands cupping phallic imagery that morph and sway with the bobbing of a mouth. Pitt’s work is surrealist but deliberate in its intent, and her straightforward approach to emotions and sensations made all of her work prick the skin of feeling, each film totally inhabited by the soulfulness of her own human spirit.
Pitt died in 2019, but the influence of her artistry and of Asparagus in particular are undeniable in the fields of experimental animation, visual art, and film to the extent that a community formed in her orbit over the years. In a remembrance, her friend and fellow animator Julie Zammarchi recalled asking her deep “questions about art and life” over the years, which Pitt never shied away from. “No subject was off limits or too personal,” Zammarchi said. “She was always generous during these meandering interviews as long as we both kept drawing and painting.”
A film roughly 30 years in the making, French animator Paul Grimault and poet Jacques Prévert’s adaptation of a tale by Hans Christian Andersen probably felt like a bizarre piece of history even upon its release in 1979. What began in 1948 as The Shepherdess and the Chimney Sweep, loosely based on Andersen’s fairy tale of the same name, was released unfinished in 1952 (as The Curious Adventures of Mr. Wonderbird in English-language markets) without the approval of either Grimault or Prévert. Grimault eventually obtained the rights to the film and was finally able to complete it as he originally intended, releasing it as The King and the Mockingbird. And boy, were those intentions bizarre.
In this story of an evil painting of a king coming to life so as to kidnap a shepherdess and force her into marriage, Grimault’s work recalls the bouncy movements and ghoulish, wide-eyed characters of animation from decades prior, particularly that of Max Fleischer. It’s all placed within a film that pushes that style to a surrealist extreme, each wild left turn set against backdrops of a kingdom that alternates between minimal brutalism and Escher-esque labyrinthine architecture. And then the giant robot appears.
Despite Mockingbird’s unpredictability up to this point, the first appearance of the king’s giant robot, hidden beneath the city as a last resort weapon, is still a shock, its empty eyes and cold metal making for a stark visual contrast with the clean, white stone of the rest of the kingdom. More shocking still is the film’s final sequence, when the robot is repossessed and transformed into a tool of the common people and used to raze the pristine, decadent structures of the castle to the ground as the king’s tyranny is finally met with cathartic resistance. The surrealist film, particularly this sequence, is a noted influence on Ghibli co-founders Hayao Miyazaki and Isao Takahata, with Miyazaki explaining how the film made him more aware of how to use space in a vertical manner. The design of the robot itself seems to echo throughout that director’s work, but even in isolation the film remains a powerful one, with its animated tale of totalitarianism rendered in beautiful detail.
Sunrise
Directed by Yoshiyuki Tomino, Ichiro Itano (animator)
When Ichiro Itano was 20 years old, he decided one night to strap 50 fireworks onto his motorbike, light them with a Zippo, and start speeding down his local beach at 80 miles an hour. Why? Because he was told it was dangerous. In the midst of all the smoke, light, and sound emanating from the fireworks, the young daredevil and animator gained the inspiration for what would become one of the most iconic sequences in animation.
The “Itano Circus,” where a single character or object maneuvers through a torrent of missiles (or lasers, body parts, etc.), all in a single shot or within a single cut of the character, often shown from the perspective of the cockpit, is one of anime’s most dynamic, stylish, and visually distinctive tropes. Itano first used the technique in the 1980 series Space Runaway Ideon, and gained even more attention for the technique when he pulled it out for 1982’s Super Dimension Fortress Macross, hence the origin of its other name, “The Macross Missile Massacre.”
Since the 1980s, it has been one of the most copied sequences in animation, with many attempting to mimic or even outdo the innovator. Itano has said that only three animators have successfully pulled off the technique: Yasushi Muraki, Masami Goto, and Neon Genesis Evangelion creator Hideaki Anno. The circus also helped lead to the rise of sakuga culture in anime, where fans become familiar with the work of individual animators, following them from series to series, elevating them to superstar status.
Directed by Jan Švankmajer
No one combines the whimsical and the grotesque quite like Jan Švankmajer. The great Czech stop-motion surrealist turned Alice’s Adventures in Wonderland into a half live-action, half-animated child’s-eye-view nightmare. He transformed a folktale into a dark comedy about a tree stump baby with a taste for human flesh. He depicted two cutlets of meat having a grand and fully consummated romance before getting fried up for dinner. Švankmajer’s sensibility and style have been hugely influential, particularly when it comes to fellow stop-motion lovers the Brothers Quay and Terry Gilliam, who wrote in the Guardian that his films “always leave me with mixed feelings, but they all have moments that really get to me; moments that evoke the nightmarish specter of seeing commonplace things coming unexpectedly to life.”
Despite the number of artists he’s inspired, Švankmajer’s work remains singular and fantastically strange, which is never more evident than in his 1983 short Dimensions of Dialogue. The three-part film involves different variations on faces doing disturbing things to one another, but it’s the opening sequence that’s the most extensive and the most memorable. In it, tooth-clicking profiles formed out of produce, kitchen equipment, and office gear take turns devouring one another and then regurgitating the increasingly chewed-up bits. After several rounds of this, what’s left is a set of identical, more realistic-looking clay heads vomiting each other up ad infinitum — call it a metaphor for anything from the flattening of public discourse under authoritarianism to the tedium of making small talk.
Aurora Productions, Don Bluth Productions, United Artists
Directed by Don Bluth
After working at Walt Disney Productions for nearly a decade, Don Bluth was fed up. Feeling the famed studio’s decline in the ’70s was due to cut corners in the animation process, he left — along with Gary Goldman, John Pomeroy, and a cadre of other Disney talent — to form his own studio, Don Bluth Productions. A staunch advocate for the medium, Bluth wanted to make movies that refused to cut those corners and showed what animation could be at a time when the form was at a box-office low.
This is why The Secret of NIMH, Bluth’s first feature, could be accurately described as “showing off.” When Mrs. Brisby, the mouse hero, meets Jeremy, the clumsy crow voiced by Dom DeLuise, he doesn’t have to be tangled up in thread, but he is, completely. He trips and falls and gesticulates, all while covered in a scarlet cord, dangling with real weight, a touch of painstaking realism in a fantasy world. NIMH is Bluth and his animators beating Disney, but better: a film darker and more emotionally complex, rendered in art that’s impossible not to lose yourself in.
The Secret of NIMH wasn’t a financial success, but its arrival altered the course of American animation. Disney would soon respond to Bluth’s defection with a revitalized slate of films, later canonized as the Disney Renaissance. Bluth’s studio would become the primary competition but mostly in spirit. While Bluth would be responsible for classics like The Land Before Time and An American Tail, financial success would not follow him into the ’90s as Disney’s revitalized juggernaut proved suffocating. It’s a win for Bluth in a way: He thought quality animation would win the day. Audiences didn’t choose his, but they did choose the better animated world he pushed for.
Tatsunoko Production, Artland
Directed by Noboru Ishiguro
This iconic sequence proved you could sell action figures and pop songs at the same time and demonstrated two game-changing innovations: transforming mecha and virtual idols.
Macross co-creator Shoji Kawamori was one of the teenage fans who visited Space Battleship Yamato’s Studio Nue. These visits became regular, and he was working there part time before even starting his mechanical engineering degree. In 1980, he designed a line of toys that transformed smoothly from robots to cars or airplanes and back again, grounded in real world mechanics. These would become Transformers, opening up the most profitable new mecha toy possibilities since Getter Robo first combined robots in 1974.
But the idol content was just as groundbreaking. A Japanese idol’s success relies on their ability to enable parasocial relationships. This framework was never applied to fictional characters until Minmay. After airing, Macross released two albums: a soundtrack and Miss DJ, Minmay’s in-universe radio show complete with adverts, Beatles covers, and in-character interviews. The soundtrack sold music. Miss DJ sold Minmay. She became the first virtual idol, her song from the 1985 film Macross: Do You Remember Love? reaching No. 7 on Japan’s Oricon music charts. It was the first step toward Hatsune Miku (via Sharon Apple, the in-universe virtual idol of 1995’s Macross Plus).
The U.S. adaptation Robotech represented the height of butchering localization practices, combining Macross with two unrelated shows to make one Frankenseries. Nevertheless, it built a dedicated fan following and shaped the animation landscape as Star Blazers had before it.
World Events Productions, Toei Animation
Directed by Franklin Cofod (adaptation), Katsuhiko Taguchi (original)
If it wasn’t for World Events Productions founder Ted Koplar, Toei Animation’s 1981 series Beast King GoLion would have faded into obscurity like many post-Gundam mecha anime of the period. It was by pure chance that Koplar got his hands on a tape of GoLion while searching for programming for KPLR, the independent TV station owned by his father.
Alongside executive producer Peter Keefe, Koplar heavily edited the series, removing the more violent and gruesome aspects of GoLion and rewriting the script so that it could be appropriate for American children. He and his staff would also change the name of the robot from GoLion to Voltron, Defender of the Universe.
A legit pop-culture phenomenon, Voltron is an example where — in the U.S. and elsewhere — the adaptation almost fully eclipsed the source. It was the No. 1-rated syndicated children’s program for three years, mainly remembered today for its formation sequence, where five pilots combine their robot lions to form the all-powerful Voltron — always accompanied by the triumphant score composed by John Petersen. While it was not the first super-robot series to feature a formation sequence (that honor goes to Toei Animation’s Getter Robo), the trope became synonymous with Voltron, referenced in other animated programs ranging from Dexter’s Laboratory to Robot Chicken, to say nothing of its nod on the Wu-Tang Clan’s debut album, Enter the Wu-Tang (36 Chambers). More than anything, Voltron paved the way for other anime series to appear on American televisions, like Dragon Ball Z, and tokusatsu shows like Mighty Morphin Power Rangers.
Warner Records
Directed by Steve Barron
When the Norwegian pop trio A-Ha released their now-iconic single “Take on Me” in 1984, the reception was a deadening thud. Despite the response, lead singer Morten Harket knew they had a hit on their hands; it just needed that extra something to push it over the top. Enter Jeff Ayeroff, then–creative director of Warner Bros. Records. Ayeroff put the band in touch with “Billie Jean” director Steve Barron, who from there enlisted the talents of animators Michael Patterson and Candace Reckinger to bring the song’s music video to life.
After shooting the live-action scenes in London, Patterson and Reckinger took the footage and spent 16 weeks creating the rest of the video. Drawing on Patterson’s experience making his student film Commuter, the pair produced over 2,000 drawings, bringing them to life as a flickering, comic-book-like animation through rotoscoping (making this the first music video to use the technique). The result was nothing short of revolutionary, unlike anything television audiences had seen until then, catapulting the song to No. 1 on the Billboard charts in America and embedding it firmly in the pop-culture subconscious. The video became so popular that viewers would leave MTV on in the background, just waiting for the chance that it would come on. This reception caused a ripple effect that culminated in MTV creating its own dedicated animation block in the early ’90s, Liquid Television, spawning such shows as Beavis and Butt-head and Æon Flux, which in turn set a precedent that would go on to inspire Turner Broadcasting’s own Adult Swim block.
National Film Board of Canada
Directed by Richard Condie
An award-winning short written and directed by Richard Condie in 1985, The Big Snit contrasts a couple’s domestic squabble with world-ending nuclear apocalypse. Laced with absurd humor, and the direct inspiration for the Scrabble scene in The Simpsons episode “Bart the Genius,” this piece manages to be a genuinely funny take on a deadly serious topic. It was produced through the National Film Board of Canada’s animation department — a robust incubator for artists like Cordell Barker, Janet Perlman, Chintis Lundgren, Ryan Larkin, and others — which was founded by animation pioneer Norman McLaren. This critically acclaimed piece of animation, which boasts at least 17 awards and an Oscar nomination, is a prime example of the organization’s impact on the art form.
The boiling lines technique used for the character outlines in this short lives on in fellow Canadian animator Danny Antonucci’s Ed, Edd n Eddy, although it’s unclear if this piece is the constantly twitching trio’s direct inspiration. The more lifelike effect of constant slight movements stands in stark contrast to the film’s overall message, that life is ultimately meaningless and annihilation unavoidable. However, the idyllic ending of The Big Snit shrouds its bleak antiwar message in a colorful collage of flowers and flying Scrabble pieces. The short ends on a relatively high note, despite these dark undertones. Sharon Condie is credited with creating the backgrounds, presumably including the visually maximalist floral afterlife, and the result is not unlike George Dunning’s psychedelic 1968 animation in Yellow Submarine. Maybe the true takeaway from Condie’s cartoon is that love really is all we need.
Directed by The Brothers Quay
Siblings Stephen and Timothy Quay were born in Pennsylvania, but their aesthetic feels so innately European in its influences that they might as well be from across the Atlantic in spirit. The Brothers Quay are often held up as inheritors of the tradition of Jan Švankmajer, to whom they paid homage in a 1984 tribute short, though they’ve insisted themselves that their main inspiration is actually Polish filmmaker Walerian Borowczyk. Regardless, their work, which combines stop-motion and live-action footage with a dark sensibility, feels both familiar in its touchpoints and fresh in how it builds on those references toward something new.
In their best-known work, the 1986 short Street of Crocodiles, a puppet inside a curio box is freed from his strings by a man looking down through a viewing window from the outside. What the puppet finds in its explorations is a decrepit landscape that’s a masterpiece of mood and miniatures — all rusted hardware, clouded mirrors, hollow-headed dolls, a pile of dandelion fluff that reassembles itself into a ball, and a pocket watch that opens to reveal it is filled with meat. In the words of the Quays, the film is a depiction of “mechanical realities and manufactured pleasures,” but it’s also just an eerily beautiful experience that injected new life into the realm of experimental stop-motion. And the commercial, as well — it’s no surprise that Mark Romanek cited it when directing the video for Nine Inch Nails’ “Closer” eight years later.
Pixar Animation Studios
Directed by John Lasseter
Pixar may be one of the most recognized brands in animation today, but its icon status found its start in a little lamp that could. Luxo Jr. was the first short film ever produced by Pixar Animation Studios, shaping and showcasing the qualities that the studio has come to be known for since. At the time, Pixar was a new, small studio where John Lasseter and a team of part-time animators were working with very little funding and low expectations. Though Luxo Jr. was produced merely as a test to demonstrate the Pixar Image Computer’s capabilities, it exceeded expectations and transformed the traditional understanding of what computer graphics were for and what animation could be.
At the time of its release in 1986, the short was the first work of animation to use procedural animation and marked a breakthrough in CGI, using shadow mapping to show shifting light and shadow across animated objects. Its emotional realism established inanimate objects as having lifelike qualities that could inspire both comedy and drama. After it premiered at the SIGGRAPH festival to laughter and applause, it was clear that Pixar had piqued the world’s interest. Later that year, Luxo Jr. became the first computer-animated film to be nominated for an Academy Award for Best Animated Short Film.
“Luxo Jr. sent shock waves through the entire industry,” Pixar co-founder Ed Catmull wrote in his 1998 book Computer Animation: A Whole New World. “At that time, most traditional artists were afraid of the computer. They did not realize that the computer was merely a different tool in the artist’s kit.” The storytelling in Luxo Jr. got Pixar more support and funding and made it possible for the team to turn to feature animation, and eventually Toy Story, the first fully computer-animated feature film, also directed by Lasseter.
Studio Ghibli
Directed by Hayao Miyazaki
While waiting for their father’s bus, sisters Satsuki and Mei get caught in the rain. Soon they encounter another commuter, the local bear/cat/rabbit spirit Totoro, to whom Satsuki lends their spare umbrella. Totoro has some fun with making water fall on the umbrella before his ride comes: an enormous bus shaped like a cat. (Or rather, a cat shaped like a bus?) It is an extremely simple scene and also one of the most beat-by-beat delightful movie moments ever.
Hayao Miyazaki directs this sequence, which has been mimicked or parodied countless times and neatly encapsulates Miyazaki’s style, with a mesmerizing rhythm of pauses and actions. Each mundane gesture — Mei stomping in a puddle while Satsuki makes string figures, a streetlight coming on as night falls, Satsuki hefting Mei onto her back when she wants to nap — gradually builds to the intrusion of the fantastic. Every action has an equal and opposite non-action, a moment of consideration and reflection. This is best exemplified when Totoro figures out that raindrops + umbrella = fun noises, with the buildup climaxing with him making a huge hop (an echo of Mei playing in the puddle) to cause a ton of water to shake off the trees. It’s the simple feeling of passing the time, refracted through a lens that makes it indelible.
Studio Ghibli
Directed by Isao Takahata
One of the most emotionally draining animated films ever was originally released in theaters as part of a double bill with My Neighbor Totoro, which would have made for quite the evening. But despite the drastically different tones of the two films, Miyazaki and Isao Takahata share an innate understanding of animation as a combination of movement and non-movement, one built by their long working relationship before they co-founded Studio Ghibli. That understanding is on full display in the devastating climax of Grave of the Fireflies, in which the deprivation of the two young protagonists closes in on them and teenager Seita’s willful refusal of all aid results in his little sister Setsuko’s death.
Nowhere else is a still drawing of a human so agonizing as in this scene. The conclusion is foregone; the movie starts with Seita dying in a train station and his ghost joining his sister’s, before jumping back in time to show us how things came to this. Seita, having cashed in family savings to get food, thinks there’s still time to save Setsuko from malnutrition. There isn’t. Like Miyazaki’s bus stop in Totoro, the sequence has its own rhythm. Seita is horrified by the mud “rice balls” Setsuko has made, then cuts her a slice of watermelon; the film lingers on the pause before she weakly reaches to take it. Each time it cuts to Setsuko lying on the floor of their makeshift shelter, you expect this to be it. When the moment finally comes, it isn’t with a meticulously animated final breath, but instead just another still shot, with Seita’s voice-over stating that she never woke up.
Grave of the Fireflies is indicative of two related historical impulses: the collective act of a population still processing grief decades after the end of World War II and the individual drive of an artist to depict it. Takahata’s film followed Barefoot Gen, a movie that graphically depicted the bombing of Hiroshima, and though he has denied that Grave of the Fireflies was intended as an antiwar film, he took pains to reproduce the era faithfully, knowing that, having been in fourth grade during the war, he was the only one among his animators who could remember what the landscape looked like. “I’m not out to make a movie that explains the times,” he said, “but I think those aspects should get incorporated somehow.”
Touchstone Pictures
Directed by Robert Zemeckis; animation directed by Richard Williams
Animation and live action had co-existed on the big screen well before Who Framed Roger Rabbit, dating all the way back to the silent era. But Robert Zemeckis’s work of film cartoon noir married the two for the length of an entire feature with such precision and imagination that it felt like the genre had just rocketed into a new stratosphere.
That feeling sets in immediately during the film’s brilliant opening sequence, which begins with “Somethin’s Cookin’,” a hand-drawn short in the Merrie Melodies vein starring Roger and Baby Herman, then pulls the camera back to reveal a live-action set where human director Raoul J. Raoul and his crew are shooting this “cartoon” on a movie set. Even now, it remains astonishing to see how seamlessly the real people interact with the animated characters, a testament to the work of the legendary Richard Williams, animation director on Roger Rabbit and director of the famously unfinished The Thief and the Cobbler, who sharpened every line of sight between toon and person to make sure that an animated rabbit yanking an actual coat looked completely real.
Arriving in 1988 at a time when animation was on a cultural downswing, Who Framed Roger Rabbit, with moments like that initial sequence and its many homages to beloved cartoons of the past, reminded members of its audience of their deep affection for the medium. It also reminded critics just what animation could do. As a result, it paved the way for the next decade of cartoons, which would include The Simpsons, the Disney Renaissance, the revived popularity of Hanna-Barbera and Looney Tunes, and the return to the gleeful slapstick of yore on networks like Kids WB and Nickelodeon. Yes, this motion picture was half–live action. But it was 100 percent a love letter to the humor and magic of animation. (Click here to watch on Disney+.)
Tokyo Movie Shinsha
Directed by Katsuhiro Otomo
Where does one even begin to adapt, let alone reimagine, Katsuhiro Otomo’s 2,000-plus-page cyberpunk epic Akira for the big screen? For the man himself, the answer was simple: with nothing short of a big bang followed by a scintillating high-speed battle through the streets of a futuristic metropolis teetering on the brink of destruction.
The opening 13 minutes of Akira are a master class in cinematic precision. Otomo grabs the audience and thrusts it full force into the film’s world, weaving a motorcycle chase comprised of pulsating light trails, visceral action, and a thunderous Noh-inspired drum score performed by Geinoh Yamashirogumi between parallel sequences of civil unrest, police brutality, and a mysterious agent provocateur being mercilessly ventilated in a hail of gunfire. Everything you need to know is spelled out in those 13 minutes: the strained friendship and simmering rivalry between protagonist Kaneda and his antagonist/foil Tetsuo, a societal rage threatening self-immolation, and a clandestine military government desperately attempting to bury the past while straining to hold it all together.
The chase’s midpoint climax, animated by veteran animation director Koji Morimoto and informally known as the “Akira Bike Slide,” has been replicated nearly countless times on television, in the movies, and in games since the film’s release in 1988 — and especially in animation. Akira is a monolith of contemporary Japanese cinema, a cinematic achievement as historically significant as it is eminently impressive, and Akira’s opening motorcycle chase is nothing if not an enduring testament to the film’s primacy in the history of animation. (Click here to watch on Hulu.)
Gracie Films
Various directors
Starting as a segment on the Tracey Ullman Show before moving on to its own series, The Simpsons proved to American audiences (once again) that animated programs were not merely for children. For over 30 years, the series has given us many quotes and moments that have been referenced in everything from other cartoons to the cesspool that is Twitter. While not as relevant as it was in the ’90s, there is one aspect of The Simpsons that can still demand attention from people who haven’t watched a full episode in years, and that is the couch gag. First appearing in the series’ second episode, “Bart the Genius,” the couch gag gives The Simpsons animators free rein to do what they wish to Springfield’s most famous family within the confines of the couch shot of its intro.
In the last three decades, the couch gag has gone from very simple actions such as the Simpsons performing a dance routine to gloriously outlandish ones that sometimes stretched for more than two minutes. The gag also allowed the family to meet other classic cartoon characters, such as Gumby, Rick and Morty, the Flintstones, and even the version of themselves that appeared on Tracey Ullman.
It also offered an opportunity for outsiders to come in and present gags animated in their own personal style. World of Tomorrow’s Don Hertzfeldt, the U.K. street artist Banksy, The Triplets of Belleville’s Sylvain Chomet, Guard Dog’s Bill Plympton, and even Blade II and Pan’s Labyrinth director Guillermo del Toro have all made versions of the gag. It is perhaps the most iconic aspect of one of the most iconic animated programs in history as well as its most adaptable. (Click here to watch on Disney+.)
Walt Disney Feature Animation
Directed by Ron Clements and John Musker
Very few artists had as widespread an influence on the history of animation as lyricist and playwright Howard Ashman, and he wasn’t even an animator. When Ashman started working with Disney in 1986 after he was commissioned to pen the lyrics for a song in the creative failure that was Oliver and Company, it began a relationship that would help birth the so-called Disney Renaissance and chart a path the animation giant is still following to this day.
With The Little Mermaid, Disney’s output returned to the world of fairy tales and tapped into a feminine yearning for something more that resonated deeply with children everywhere. “Part of Your World,” an expertly crafted song built on rising, bombastic vocals from Jodi Benson, firmly situated Disney in a new Broadway-influenced style of animated musical. In the sequence, the mermaid Ariel retreats to her secret treasure trove, where she collects things from the human world, and like a teenage girl’s bedroom, it is decorated with her hopes and dreams. Much of the animation in this sequence is tailored to the way Ariel moves, with her gorgeous flowing hair seeming to have a life of its own and her skyward gaze emphasizing her longing for more. (It would be the last Disney film fully animated with traditional cels, before the process was replaced by Disney’s Computer Animation Production System, or CAPS.)
“Part of Your World” tapped into something fundamental about girlhood, and those sweeping, beautiful enchantments about being liberated and free from the restrictions of where you grew up, or who you are, or what your body looks like, still strike a chord to this day. Disney knows this too, as it has been tapping into the Ariel model ever since with the likes of Elsa from Frozen. “Part of Your World” was not only the start of Disney’s resurrection; it gave the studio an emotion to inhabit for the next 30 years, and it all traces back to Howard Ashman. (Click here to watch on Disney+.)
Spümcø
Directed by John Kricfalusi
When Nickelodeon debuted its first Nicktoons in 1991, none were as gross, subversive, and just plain weird as The Ren & Stimpy Show. Animated, directed, written, and created by John Kricfalusi, Ren & Stimpy was one of the most popular cartoons of its day. Unlike Doug and Rugrats, which it premiered alongside, Ren & Stimpy was known for its surrealist humor and the outsize personalities of its eponymous, anthropomorphized Chihuahua and cat.
The show’s unsettling essence is encapsulated by its most famous moment, “Happy Happy Joy Joy.” It showcased one of the things that Kricfalusi did so well, which was to reintroduce physical humor and timing into cartoons, reminiscent of the work of Bill Hanna and Tex Avery, but with his subversive twist. The two leads perform a weird butt-slapping dance, followed by Ren bashing himself in the head with a hammer in time to the music while Stimpy bounces around, well, joyfully. It channels the weird antics, cringey behavior, and silliness the show was known for into a catchy tune whose popularity helped propel the show to new heights and does so with odd parodies of classic cartoon tropes.
But, like so many things, the show’s inventiveness is now overshadowed by its creator’s indefensible behavior. In 2018, two women came forward with accounts of how Kricfalusi sexually harassed them when they were teens, accusing him of grooming them and starting a sexual relationship with one when she was just 16. Kricfalusi admitted to the claims and said his behavior was motivated by undiagnosed bipolar disorder and ADHD. Kricfalusi’s sexual harassment of female artists and teenage girls was an open secret of the animation industry at the time.
Despite Kricfalusi’s behavior, Ren & Stimpy’s influence is undeniable. It inspired dozens of imitators and fellow envelope-pushers in American adult animation in the late ’90s and early aughts. Mike Judge credits it with MTV picking up Beavis and Butt-head, and many of the conventions that it created or popularized still endure in today’s cartoons. (Click here to watch on CBS All Access.)
Toei Animation
Directed by Junichi Sato
In the mid-to-late 1990s, Japanese imports began to find their way to American television sets, becoming massively popular among children in particular. The likes of Power Rangers (Super Sentai) and Pokémon were all the rage, and with the advent of Cartoon Network’s after-school programming on Toonami, such shows as Dragon Ball Z and Sailor Moon took hold of the imagination. Both of these series presented superheroics as an act of transformation. In the case of Sailor Moon, that transformation hinged on the power of dress-up, and its most iconic sequence was born from that very idea.
In Japan, the magical-girl anime was nothing new in the 1990s, but it was new to American audiences. There was an indefinable magic about this sequence that still resonates, as the magical-girl transformation has directly inspired the creators of popular shows like Steven Universe and She-Ra and the Princesses of Power. With their transformation, the Sailor Guardians became superheroes who were powerful and elegant in equal measure — and because these characters were only 14 years old, they all existed in that in-between space of girlhood on the precipice of womanhood. This made the sequence all the more relatable to the preteen girls who were obsessed with the series.
During this sequence, the camera spins around lead character Usagi as she becomes Sailor Moon, as if she were a ballerina mid-pirouette. No longer is she clumsy, but rather assured of her movement, and bit by bit she becomes glamorous and strong: tipped fingernails, mascara, a sailor suit, and a tiara — and with a determined look on her face, she is ready to save not only her friends but the entire world. (Click here to watch on Hulu.)
Warner Bros. Animation
Storyboarded by Bruce Timm; animated by Kazuhide Tomonaga
How many animated television shows have managed to condense nearly an episode’s worth of plot beats comfortably into the space of a minute — let alone for that minute to convey one of the most archetypal stories of a character so iconic one could immediately recognize who, or what, they’re watching without any dialogue, title card, or credits to speak of? The intro sequence of Batman: The Animated Series, aside from prefacing one of the greatest animated television series of all time, is itself irrefutably one of the greatest sequences in the history of animation.
“Everything you need to know about Batman is in [that opening],” DC Entertainment president and CCO Geoff Johns said back in 2004. Drawing inspiration from pulp-fiction staples like the Avenger and the Shadow, Max Fleischer’s Superman shorts of the 1940s, and the dark, futuristic architectural designs of Hugh Ferriss, co-creator Bruce Timm and art director Eric Radomski crafted a one-minute pilot short to pitch the series to Warner Bros., which would later be reanimated by Kazuhide Tomonaga of TMS Entertainment to serve as the series’ intro sequence.
The result was an incarnation of the Caped Crusader unlike anything that had been brought to the screen before, drawing on the precedent of Tim Burton’s own feature film (and including its theme), albeit now reimagined through a synthesis of the chiaroscuro stylings of film noir, the architectural audacity of Art Deco, and the angular menace of German Expressionism. Nearly three decades later, the image of a heroic silhouette illuminated by a bolt of lightning against a blood-red sky stands as one of the most iconic depictions of the Dark Knight ever conceived and serves as a benchmark for animated action television to come.
Walt Disney Feature Animation
Directed by John Musker and Ron Clements
Animation is the only medium that could truly keep up with Robin Williams. The zealous, hilarious, endlessly talented actor and comedian first lent his voice to animation in 1992’s FernGully: The Last Rainforest, where he played an unbalanced rapping fruit bat named Batty Koda. The animators at Kroyer Films did a decent job animating Williams’s vocal skills, but they would truly shine when he joined Disney, as the company was redoubling its animated efforts, for the biggest film of 1992, Aladdin.
It takes almost 36 minutes for Genie to make his appearance and less than three to steal the entire picture (though Williams also voices the opening scene’s merchant). His opening song, “Friend Like Me,” written by the late Howard Ashman and composed by Alan Menken, is one of the highlights of the Disney Renaissance. From Williams’s rapid-fire delivery to the off-the-wall and elastic animation from Disney, and especially from the character’s lead animator, Eric Goldberg, the entire sequence is flawless.
Williams’s participation in Aladdin would lead to other studios hiring famous voices for animated characters — Eddie Murphy in Mulan and Shrek, Tom Hanks and Tim Allen in Toy Story, Chris Rock in Osmosis Jones. Williams would not return for the straight-to-video feature The Return of Jafar, as the actor felt betrayed by Disney when the company went back on its word and used his voice for promotional purposes, something Williams was strictly against; he did come back for 1996’s Aladdin and the King of Thieves. Today, Genie is Williams’s most famous and popular character, and it’s now rare to see a big-budget animated film without at least one or two marquee celebrities attached. (Click here to watch on Disney+.)
Touchstone Pictures
Directed by Henry Selick; produced by Tim Burton and Denise Di Novi
Traditional children’s animated entertainment forever got permission to be ghoulish thanks to the opening number of the Tim Burton–produced, Henry Selick–directed The Nightmare Before Christmas, the stop-motion film that brought Ray Harryhausen–style craft into the mainstream as a longform art.
In 1993, in the middle of the Disney Renaissance, The Nightmare Before Christmas showed up on screens and immediately thrust vampires, skeletons, and clowns with tearaway faces into the eyeballs of audiences much more used to Disney princesses and genies with the comedy gifts of Robin Williams. “This Is Halloween” was scary — “Everybody scream!” shouts a creepy talking tree in the Danny Elfman–penned song — and seductively dark. If animated movies of the time were generally for kids who yearned for tiaras and adventure, The Nightmare Before Christmas, released under Disney’s Touchstone Pictures label, was aimed squarely at Wednesday Addams.
The story of Jack Skellington and Halloweentown was made via the painstaking stop-motion process, which was rarely used in feature-length animated motion pictures at the time. When the film opened on Halloween weekend, it rocketed to the top of the North American box office, a success story that brought the format back into vogue. Every stop-motion feature that followed — the Aardman Animation theatricals, Wes Anderson’s Fantastic Mr. Fox, the entire output of Laika studios (including such films as Coraline, also directed by Selick, and The Boxtrolls), and Tim Burton’s subsequent stop-motion films, Corpse Bride and Frankenweenie — owes the Pumpkin King a significant debt.
Across the pond, stop-motion animation director Nick Park and studio Aardman Animations achieved feats of their own the same year with The Wrong Trousers, a short Wallace and Gromit film with an iconic train chase that nods to Indiana Jones. The short is full of visual humor that builds layers of sight gags in every handcrafted, incredibly detailed scene. We would be remiss not to mention it, as it and Nightmare both kick-started an interest in stop-motion in the ’90s. Aardman’s 2000 film Chicken Run remains the top-grossing stop-motion film of all time. (Click here to watch on Disney+.)
Hanna-Barbera Cartoons, Cartoon Network Studios
Directed by Genndy Tartakovsky
Not to be outdone by Nickelodeon and its Nicktoons success, Hanna-Barbera launched a new show on Cartoon Network called What a Cartoon!, the brainchild of producer Fred Seibert. It was a weekly showcase of new animation by new creators and led to the boom of memorable ’90s cartoons on Cartoon Network like The Powerpuff Girls, Johnny Bravo, Courage the Cowardly Dog, and many more. But the biggest to come out of the early days of the show was Dexter’s Laboratory.
Dexter’s Laboratory, created by Genndy Tartakovsky, was the second short to premiere on What a Cartoon! but the first to be greenlit for a full series. (The Powerpuff Girls, created by Tartakovsky contemporary Craig McCracken, premiered a week earlier in February 1995.) Mike Lazzo, the Space Ghost Coast to Coast creator who would later launch Adult Swim, greenlit Dexter’s Lab after a vote from viewers, who loved the weirdly accented little Dexter and how he contrasted with his destructively ignorant sister Dee Dee.
“Changes,” originally titled simply “Dexter’s Laboratory” in the What a Cartoon! anthology, is the short that started it all. Dee Dee sneaks into Dexter’s lab and starts misusing his latest invention: a remote that inexplicably turns people into random animals. The two run amok, turning each other into various animals throughout the lab and then up into their house, as their oblivious mother calls them down for breakfast. The crisp animation and visual humor made the segment stand out, nowhere more so than in the ending — a battle between a tortoise and a snail racing for the remote.
Dexter’s Laboratory received critical acclaim and won multiple Annie Awards, including one for the pilot episode, a testament to Tartakovsky’s talent and commitment as a filmmaker and a proof of concept for the What a Cartoon! anthology format, from which more series were quickly greenlit. The show launched Tartakovsky’s career, and he went on to create the Emmy-winning shows Samurai Jack, Star Wars: Clone Wars, and Primal, not to mention directing the Hotel Transylvania franchise.
Klasky Csupo Productions
Directed by Jim Duffy, Steve Socki, Jeff McGrath
Holiday specials have been around nearly as long as television, with virtually every popular serialized TV show, animated or otherwise, incorporating at least one Christmas-themed episode. But it took until 1995 and the arrival of Rugrats before we saw a Jewish-holiday special in an animated show.
Rugrats, together with Doug and The Ren & Stimpy Show, was part of the first slate of creator-driven animated series that Nickelodeon called Nicktoons, introduced in 1991 as an alternative to the works of the Walt Disney Company and the merchandise-based adventure series of the 1980s. A show about babies for all ages, Arlene Klasky and Gábor Csupó’s series offered a view of the world through a child’s eyes without ever shying away from mature themes, and it quickly grew into a Peanuts for its generation. This episode was written in response to Nick executives’ request for a Hanukkah special, a request granted only one episode later; no other cartoon had ever before depicted Jewish American life at such length or in such depth.
In the episode, the babies attend a Seder with the maternal grandparents of protagonist Tommy Pickles, Ashkenazi Jewish immigrants who talk with heavy Yiddish accents and at times even in full Yiddish phrases. Because this is still a show about babies, the Ten Plagues are toned down a bit, and Moses tells Pharaoh: “Let my babies go!” But the episode still manages to capture the epic scope of the Passover story, with Tommy as Moses parting the Red Sea evoking the famous scene in The Ten Commandments. The episode was a big hit with critics and audiences alike, with the New York Times even reviewing it, and it became the highest-rated episode in the history of Nickelodeon to date. It also proved influential within the network, with Hey Arnold! later including a Bar Mitzvah episode. Rugrats creators Klasky and Csupó went on to define Nickelodeon during the ’90s, producing other hit shows such as Aaahh!!! Real Monsters, The Wild Thornberrys, and As Told by Ginger. (Click here to watch on Hulu.)
Pixar Animation Studios
Directed by John Lasseter
The epochs of feature-length animation are measured in the years before Toy Story and after Toy Story, the first film to be completely animated using CGI and the first collaboration between Disney and Pixar Animation Studios. Following the success of their short Luxo Jr., computer scientists at Pixar were tasked with building the software to design and execute the feature, crafting a new art form. Facing what then seemed like a Sisyphean task, they created an advanced rendering system, named RenderMan.
But learning from Disney’s mistakes in prioritizing art over story, John Lasseter and his team were especially careful to create conversations between characters who would touch people’s hearts. Eight writers received either “screenplay by” or “story by” credits on the film. A huge cast of comedic actors led by Tom Hanks and Tim Allen do the voices. Randy Newman’s award-winning music and the song “You’ve Got a Friend in Me” lent the new and technologically unfamiliar film an inviting warmth. And its themes of changing friendship, jealousy, doubt, and fear resonated with adults and children alike. The simplicity of the film’s characters and narrative arcs belied the workmanship it all required; animating Woody alone meant manipulating 596 articulation variables for Toy Story (and 7,198 for Toy Story 4, released 24 years later).
The seams never showed, though. In one of the most memorable sequences in animated film history, the climax of Toy Story has Buzz Lightyear and Woody racing to catch a truck, cheering each other on as they fly. The scene exhibits the camaraderie of the film, as well as the lifelike emotions and features the animators were able to imbue through new technology.
Following its release, John Lasseter received a special Academy Award for leading the Pixar team, and the film became the first animated feature to be nominated for an Oscar for Best Original Screenplay, Score, and Song. Lasseter’s star continued to rise at Pixar through its acquisition by Disney until 2018, when he left the company in the wake of a sexual harassment scandal. Since Toy Story’s release, there have been more than 250 computer-animated films released around the world, and CGI has eclipsed traditional animation in the blockbuster features market. (Click here to watch on Disney+.)
Character Studio
Developed by Michael Girard, Susan Amkraut, John Chadwick, Paul Bloemink, John Hutchinson, Adam Felt
In the fall of 1996, a 3-D toddler doing what kind of looked like a cha-cha took the internet by storm. It was an ungainly thing, the dancing baby, which was part of the appeal — the absurdity of a diapered tot showing off some elaborate moves matched by the uncanny-valley quality of the animation, which looked like it was aiming more for realism than for the figurative, but really wasn’t getting there. The proud parent was the computer-graphics program that’s now called Autodesk 3ds Max. Wee little sk_baby.max was a sample file meant to show off what a new plug-in called Character Studio could do.
But the kid soon took on a life of its own, especially when video of it was converted into a GIF and the dancing baby went a particularly ’90s version of viral. In the era of CompuServe and AOL, years before social media as we now know it would take hold, this meant that it spread by way of forums, personal sites, and email forwarding. Its true pop-culture ubiquity really arrived only after it became an element on Ally McBeal, where it was given a soundtrack of Blue Swede’s “Hooked on a Feeling” and appeared as a recurring hallucination, a symbol of the Calista Flockhart character’s fears about her career consuming her chances to have a family. After that, it was everywhere, a milestone of early memedom as well as a sign of how central raw and interactive animation would become to the culture of the internet.
Gainax, Tatsunoko Production
Directed by Kazuya Tsurumaki
In Neon Genesis Evangelion, a story about a boy forced to fight monsters from within a monster of his own becomes much more: simultaneously homage, deconstruction, and annihilation. For 24 episodes, Hideaki Anno’s acclaimed series breathed new life into the giant robot genre, equally concerned with deep psychodrama and massive spectacle. Then came the ending, a pair of episodes that abandoned the forward momentum of the plot for an extended impressionist tone poem, a groundbreaking finale born of necessity and catharsis.
In its two-part finale, Evangelion retreats entirely into its protagonist Shinji Ikari’s head, repurposing animation from the entire series to illustrate Shinji’s attempts to escape the throes of a deep depression. Offscreen, the Human Instrumentality Project, a last-ditch effort to save humanity from the monstrous Angels that may also wipe it out, begins. We don’t know how it goes; the series essentially abandons its apocalypse in favor of a story about a boy struggling to stop hating himself.
Anno’s own depression while making his wildly influential series is barely subtext; Shinji’s interiority and self-loathing are centered from the start. Yet the decision to end the show in a cathartic work of avant-garde art is completely breathtaking. Whether due to blown deadlines, a dried-up budget, or some combination of the two, the original plans for Evangelion’s ending changed, and in stripping itself down to the atomic level — literally to a single line on paper — Neon Genesis Evangelion transcends. It rebuilds its world into a more complete one, one where its characters, perhaps even its creator, could be happy, all as the apocalyptic story it was telling comes to a horrible end. (Click here to watch on Netflix.)
OLM, Inc.
Directed by Kiyotaka Isako
In 1997, a strobe of flashing lights in episode 38 of Pokémon, “Dennō Senshi Porygon,” gave hundreds of Japanese children seizures. The series was immediately suspended, and a new set of industrywide guidelines was created to prevent animation from triggering photosensitive epilepsy. In the U.S., this was the first many adults had heard of Pokémon, at a time when the word anime conjured images of nerds, schoolgirls, robots, and tentacles. But there was no media panic. If anything, the incident (which The Simpsons and South Park parodied a couple of years later) boosted Pokémon’s brand awareness.
Pokémon was the turning point for anime’s shift to the mainstream. Like Heidi, Girl of the Alps, and other touchstone anime, it was designed with the perfect cultural balance for export: foreign enough to stand out, but easy to localize. There was a visual language to learn, but the target audience — young Nintendo customers — quickly learned what sweatdrops were, what it meant when eyes became dots or shadows, and so on. The internet was new, but anime fandom was already online. Newbies began to learn that what the Pokémon anime called “jelly donuts” were actually onigiri, a Japanese rice snack. When they saw characters bowing, or wearing yukata, they could look up the significance.
Pokémon flattened the learning curve for appreciating anime and opened a gateway to Japanese culture. This paved the way not just for similar anime like Digimon, Yu-Gi-Oh!, or Monster Rancher, but for the Toonami block of iconic shows like Dragon Ball Z, Sailor Moon, and Gundam Wing.
Madhouse
Directed by Satoshi Kon
The late great Satoshi Kon could depict the fragmentation of identity like few other directors, whether of animation or live action, and this terrifying folie à deux is one of his crowning achievements. When idol singer turned aspiring actress Mima realizes that her manager Rumi is the one who’s been stalking her and murdering men on her behalf throughout the film, Rumi insists that she’s “the real Mima” and attempts to complete this transformation by murdering Mima. A flight through the city streets ensues, in which Mima is pursued by … herself. Or rather, a murderous version of the sugary-sweet, pitch-perfect idol persona she once adopted, the one she’s tried doggedly to leave behind.
Here is a literal rendering of trying to outrun one’s past, the culmination of how Perfect Blue renders an identity crisis as a clash between different projections of a self. The movie refuses to validate any one identity as the “true” one, or to confine an identity to a single individual. Through heart-skipping editing, this chase fragments both Mima and Rumi. In one moment, Mima is fleeing a ballerina-like doppelgänger floating through the air; the next, we see Rumi as she really is, wheezing to keep up with the younger woman. It’s deliberately absurd but loses none of its horror for it.
Kon would revisit these deliberate absurdities, fractured identities, and breathtaking chase scenes throughout his later works, in titles like Millennium Actress, Paprika, and Paranoia Agent. Though his career was unfortunately cut short when he died in 2010, his impact on the Japanese animation community was undeniable. Before his death, he helped establish the Japanese Animation Creators Association, a group that works to improve labor conditions for animators in the country.
Williams Street
Animation directed by C. Martin Croker
One of the first Cartoon Network originals and an unlikely harbinger of the likes of The Eric Andre Show, the talk-show parody Space Ghost Coast to Coast has a legacy far greater than its roughshod resurrection of a 1960s Hanna-Barbera icon could have foretold. Beginning in 1994 on Cartoon Network, the show eventually ended in 1999 — only to return with the birth of the network’s Adult Swim late-night programming block.
Rather than something serious and narrative-focused, Coast to Coast parodied the late-night format and was cobbled together from recycled clips from the original series. The show would pit the oblivious and incompetent Space Ghost and his reluctant co-hosts Zorak and Moltar against very real guests, who would usually suffer through a gauntlet of ludicrous questions and non sequiturs. It laid the ground for years of Adult Swim programming, with such shows as Aqua Teen Hunger Force, Sealab 2021, and Harvey Birdman: Attorney at Law all born from it. Each of those series shared animators with SGC2C as well as its style, using recycled stock footage from Hanna-Barbera cartoons while lampooning them mercilessly.
Even among all those new and long-running shows, so many of the most memorable moments belonged to SGC2C, perhaps the best of which is the introduction of Grandpa Leonard Ghostal — voiced by an extremely game Macho Man Randy Savage in a remarkable bit of casting — portrayed with essentially the same character model and animation as Space Ghost but with a long gray beard and walking stick. Mixing the trademarks of his braggadocio showmanship with an amusing, aged belligerence, Savage’s Grandpa Ghostal proved to be one of the show’s surreal peaks. Typically, interviews on the show would unfold as a series of strange miscommunications, which sometimes were pushed into open antagonism, often to incredible effect, and no sequences in the show did so better than those featuring Grandpa Ghostal. Shortly after his arrival, the character wrests control of the show from his grandson, first threatening the life of Rob Zombie before unwittingly terrorizing poor Raven-Symoné (yes, of That’s So Raven) as he loudly asks if she’s ever sought out the thrill of throwing one of her peers to the mat. All in all, a supremely silly delight and a testament to the show’s bizarre imagination and long-lasting charm. RIP, Macho Man. (Click here to watch on HBO Max.)
Sunrise
Directed by Shinichirō Watanabe; animated by Yutaka Nakamura and Masami Goto
Cowboy Bebop announced itself to the world in an erratic burst of black and white type punctuated by a salvo blast of brass horn trumpets. While the sentences that raced across the screen may have been rendered all but subliminal to first-time viewers when it aired, the message between the space of their words rang out loud and clear: Cowboy Bebop was an anime unlike anything that had come before it. Directed by Shinichirō Watanabe, written by Keiko Nobumoto, and produced by a talented committee of young and ambitious animators and producers under the collective pseudonym Hajime Yatate, Cowboy Bebop remains not only a quintessential work in the canon of Japanese animation to this day but a representative work of the aesthetic and tonal elasticity inherent to the medium of animation itself.
Originally conceived as a science-fiction action show designed to capitalize on the then-renewed popularity of the genre in the wake of Star Wars: Episode I – The Phantom Menace’s release, Watanabe & Co. were given only one clear directive when creating the show: Put a crap-ton of spaceships in it so they could sell merchandise. What the decision-makers at Sunrise and Bandai got was more than they, or anyone for that matter, could have expected: a neo-noir space-western comedy action series that drew from such diverse and far-flung inspirations as French New Wave cinema, Hong Kong action flicks, and mid-century New York jazz. All of which are more than apparent in the series’ iconic title sequence, a synesthetic barrage of stylish images à la Seijun Suzuki’s Tokyo Drifter, sleek geometric designs and transitions channeling the spirit of Saul Bass, and infectious energy brought to life by a thunderous theme song courtesy of composer Yoko Kanno and her band the Seatbelts. While the series may not have become a new genre in and of itself, as its manifesto so boldly proclaims, Cowboy Bebop nonetheless remains a masterwork of animation all its own. ( Click here to watch on Hulu .)
Toei Animation
Directed by Daisuke Nishio
The history of Dragon Ball Z in America is long and at times confusing. The seminal anime had already finished airing in Japan before Funimation licensed the show for an English-language release to be syndicated by Saban Entertainment in 1996. The 67-episode order was heavily edited for content, and despite strong ratings, the production halted in 1998 after two seasons — at which point reruns began airing on Cartoon Network’s Toonami block. This is all to say that when American audiences finally had new episodes in the form of the “Frieza Saga” in September 1999, it quickly became a cultural event.
The fight itself encapsulates everything that made Dragon Ball Z special and unlike anything audiences had ever seen. For one, it’s the longest fight in a show already known for its multi-episode fights (part of its serialized DNA as an adaptation of Akira Toriyama’s manga series in Weekly Shōnen Jump). The Frieza fight comes in at over four hours long, stretching across 20 episodes, and it made kids tune in every day to see how the fight would progress. It also weaves emotional character development and the deaths of beloved characters together with moments of epic action. Then there’s Goku’s “Super Saiyan” transformation, a concept that would become the central focus for the remainder of the series.
Finally, Dragon Ball Z served as a major turning point for anime localization in the U.S. It was the first televised anime import whose success flew in the face of its so-called cultural odor, its otherness from what American audiences were used to. Toonami made an event of Dragon Ball Z’s uniqueness and differences from its American programming, and little effort was made — compared to a show like Pokémon — to scrub the show of those differences. The increasing popularity of internet fan communities helped clarify those differences, as American fans more widely came to learn and understand any cultural nuances that were lost in the translation (or the blood that was edited out for Toonami’s daytime broadcast).
And it paid off. The season-three premiere of DBZ became the highest-rated program ever at the time on Cartoon Network, cementing it as a pop-culture juggernaut and Toonami as a powerhouse of programming, heralding a new era for anime in North America. The fight itself introduced a new generation to the idea that anime was more than just cartoons; they were shows with long-running arcs you had to follow religiously. The “Super Saiyan” transformation instantly became iconic, just as Frieza’s “This isn’t even my final form” became a meme that remains popular more than 20 years later.
Comedy Central Films, Scott Rudin Productions, Braniff Productions
Directed by Trey Parker and Matt Stone
South Park was always criticized for its crude nature, both in its animation and its overall content. So when Trey Parker and Matt Stone spun off a major motion picture from their series, they doubled down on that reputation, framing the entire story around the controversy that erupts when all the kids in South Park wind up seeing the wildly inappropriate Terrence and Philip movie, Asses of Fire, a film summed up in the simultaneously puerile and genius musical number “Uncle Fucka.”
Like the Terrence and Philip bits on the Comedy Central series, the animation of those two Canadian obscenity machines is about as primitive as the contents of a flip-book. The construction-paper aesthetic looks even more cut-out and glued together than it does on Stan or Cartman. More importantly, the sequence fully frees up Parker and Stone to be as nasty as they wanna be, resulting in a song with such lyrics as “Shut your fucking face, Uncle Fucka / You’re the one who fucked your uncle, Uncle Fucka,” and a lengthy interlude that consists of nothing but farts. Every adult in the theater walks out during the number — “What garbage,” says one woman. “Well, what do you expect? They’re Canadian,” her date responds — but Stan, Cartman, Kenny, Kyle, and Kyle’s little brother, Ike, are mesmerized. The whole thing is both a celebration of stupid and a meta-commentary on the impact that too much stupid can have on young minds, and the franchise’s historic TV-MA-worthy irreverence and use of cut-out animation lit a path for dozens of adult animated shows to follow.
J.C.Staff
Directed by Kunihiko Ikuhara
Animation director Kunihiko Ikuhara, dissatisfied with the amount of creative control Toei allowed him while he was a series director on Sailor Moon, formed his own creative group, Be-Papas, in 1996. Its first series, Revolutionary Girl Utena, was created by a super-group of sorts, with Ikuhara and Neon Genesis Evangelion animator Shinya Hasegawa collaborating to rewrite the magical-girl anime as something distinctly queer and transgressive.
After the success of the anime, Be-Papas made a feature-length film to accompany the anime and called it Adolescence of Utena, which took the themes and forms of the series to heights that had never been seen before. At the climax of the film, heroines Utena and Anthy must escape from the angular, boxed-in, constricting world full of harsh lines and cyclical violence. The only way to transcend those boundaries is through literal transformation. Utena’s body shape-shifts into something that can take her love for Anthy and birth it into a brand new world. So, obviously, she becomes a badass pink hot rod, blazing down the open road.
In context, this is an image of liberation for a minority group that is still beholden to conservative ideas in Japan (and America), smuggled through the breathless wonderment of cinematic imagery. Utena’s transformation is also resonant in the scope of transgender imagery, where definition of self can allow you to be anyone or anything you wish. Adolescence of Utena does for the magical-girl anime what Neon Genesis Evangelion did for the mecha anime: lay waste to the rules that came before to craft a bold new language all its own.
DreamWorks Animation
Directed by Andrew Adamson and Vicky Jenson
You can choose to shake your fist at clouds as the world changes around you, or you can “let the world roll you,” as it were. And love it or hate it, we started off the new millennium with the animated landmark that was Shrek. At the time of its production, Katzenberg’s DreamWorks was trying to compete with Disney within the template Disney had defined over decades. As the studio attempted to frame itself as a serious alternative to the Disney Renaissance by presenting a grand, sweeping, painterly take on the biblical epic with The Prince of Egypt, it banished its lagging animators to “the gulag” of Shrek, a process called — no kidding — getting Shreked. (Anyone familiar with the A-team/B-team story of the Pocahontas and The Lion King animators at Disney knows how this story turns out.)
Yet Shrek became a critical and commercial phenomenon, winning the first-ever Academy Award for animated feature, and it was even nominated for a goddamn Palme d’Or, its audacious gamble paying dividends. The movie’s opening sequence encapsulates much of what made Shrek a sea change: It begins with genuinely enduring theme music by Harry Gregson-Williams and John Powell as a storybook opens in a direct reference to the Disney classics. But just as it lulls you in, Shrek himself stops the narration, tears out a page, says “What a load of —” and flushes the toilet, literally wiping his nasty swamp ass with Disney’s decorum. He kicks open an outhouse door, engages in an inverted princess routine of squeezing the life out of forest animals, and does it all set to Smash Mouth’s “All Star.” It was ballsy and audacious, and by refusing to stick to Disney’s limitations, it set the rules for a new millennium. All of us now suffer the consequences: We wouldn’t have Minions’ fart guns, Trolls’ dance parties, and the abomination to God and nature that is Bee Movie without it. With its opening sequence, Shrek announced that the old ways were dead. The years really do start coming — and, alas, they don’t stop coming. ( Click here to watch on Peacock .)
Square Pictures
Directed by Hironobu Sakaguchi
Hironobu Sakaguchi, the franchise’s creator and the director of Final Fantasy: The Spirits Within, envisioned a cycle of innovation between films and games, with the character Aki Ross serving as a virtual actor for other productions. A first-time filmmaker, Sakaguchi pursued realism with Hollywood backing, and the result was a technical marvel. But cost overruns outweighed box-office returns, and the film was deemed a failure.
Photorealism is expensive and tends to yield diminishing returns. Humans can anthropomorphize anything — except non-human things that look just humanlike enough to read as inhuman; that’s where you enter the uncanny valley. Someone who talks to their Roomba like a pet will cringe at the crying and kissing in this Spirits Within scene, a visceral response. You cannot cross the uncanny valley, at least not yet. Motion-captured performances of the most charismatic actors in Hollywood have led to flops from The Polar Express (2004) to Gemini Man (2019). You can only go back, like when Shrek (2001) made human characters more cartoony after children cried in test screenings.
Improving CG animation requires a continuous cycle of artistic innovation and technological advancement. After Spirits Within, Japanese producers largely rejected CG, using it only sparingly and functionally for the next decade. Since CG became cheaper and foreign money increased budgets in the 2010s, Japan has been playing catch-up. Now titles like 2017’s Land of the Lustrous — which influenced Spider-Man: Into the Spider-Verse — have started to reveal a uniquely Japanese CG aesthetic informed by 2-D anime practices. But getting here took time and talent cultivation that could have begun in 2001.
Studio Ghibli
Directed by Hayao Miyazaki
Many children’s films are purposefully designed to bombard the senses and maintain a vise-grip on those with even the shortest attention spans. But the films of Hayao Miyazaki consciously move away from that. In his own words: “If you just have nonstop action with no breathing space at all, it’s just busyness. But if you take a moment, then the tension building in the film can grow into a wider dimension.” That concept of ma, of stillness in the midst of action, is essential to all of Miyazaki’s work, and its most influential examples live in Spirited Away.
After rushing headfirst into an uncanny otherworld, young protagonist Chihiro finds herself at a bathhouse for spirits. Following a lightning-fast descent to its boiler room, longtime Miyazaki collaborator Joe Hisaishi’s beautiful score disappears, and Chihiro (and in turn the viewer) simply observes as little soot spirits go about their work as it unfolds over three cuts with no dialogue. Later, Chihiro has a moment to stop and simply cry. This kind of stillness is something that those initially responsible for the Western localization of Ghibli films struggled with — such that earlier films like Castle in the Sky initially had these spaces removed in their dubbed versions, which were filled with extra quips or a more protracted soundtrack cue — yet it is essential to Miyazaki’s work. His films breathe.
The greatest moment of calm in Spirited Away is a sequence about two-thirds of the way through the film. A train appears over a seemingly endless, shallow sea — yet another mystical doorway, leading even further beyond the boundary that Chihiro already crossed. But where her arrival in the spirit world was almost instantaneous, this journey is more decompressed. That stillness is associated with Chihiro’s brave decision to shoulder great responsibility, and so ma becomes part of maturity, in a sense. Hisaishi’s score gently envelops the sequence as Chihiro and her companions, including the no-longer-antagonistic spirit No Face, quietly sit among faceless humanoid commuter spirits, without a word exchanged as the frame holds the characters still in the carriage, the shallow sea racing by behind them.
The film revels in the wordlessness of the journey, only the score swelling in the background as different places pass by, and the sun sets, and the audience, along with Chihiro, is brought further and further from the chaos of the bathhouse. Its meaning remains mysterious, maybe even to Miyazaki: Perhaps the sequence represents the restfulness of escaping the working day, or the passage of adulthood, or the quiet certainty inherent in the acceptance of responsibility, or fate. Regardless of the lesson, the moment is moving in its embrace of respite and contemplation, and in the years since, even Western animators have learned to incorporate more patience in their pacing as a result — but never with the same grace as the master. That a children’s film could prove so meditative and trusting in the patience of its audience felt miraculous in 2001. It feels just as miraculous now. ( Click here to watch on HBO Max .)
New Line Cinema, WingNut Films
Directed by Peter Jackson
Animated special effects entered a whole new era in Peter Jackson’s Middle Earth. When The Lord of the Rings: The Two Towers fully introduced the character Gollum, it was through an animation technique called “performance capture,” better known as motion capture. Though motion capture had originally come out of the medical industry, where it was used for studying joint-related illnesses and observing movement, it had begun to be used in video games.
Using special cameras that recorded actor Andy Serkis’s movements and expressions, Jackson’s team was able to conjure Gollum, a swamplike creature who hops like a frog and smirks like a predator. Serkis would wear a special suit and computers would translate visual data into a totally new creature. (The technology would also be used for orcs, though many of them were created using prosthetics.)
In a scene that showcased the abilities of motion capture to translate unique characteristics from an actor into a whole new character, Gollum interrogates Sméagol, his former Hobbit self. Using the technique, Serkis revealed Gollum as a sort of Russian nesting doll, revealing Sméagol’s tics and how they turned into Gollum’s.
In the last decade, the method has grown in popularity, evangelized by Serkis, who has directed motion-capture sequences in films like Avengers: Age of Ultron and in films of his own, like Mowgli: Legend of the Jungle. Without the techniques used to create Gollum, the landscape of blockbuster cinema in the last 20 years would look radically different. Gollum showed both audiences and filmmakers that a computer-generated character created through motion capture could convey the range and depth of an actor’s performance, if executed elegantly. ( Click here to watch on HBO Max .)
Adelaide Productions, Rebel Base Productions
Directed by Kalvin Lee
The Boondocks, Aaron McGruder’s 2005 animated sitcom adapted from his syndicated comic strip of the same name, was no stranger to controversy when its first season premiered on Adult Swim. But nothing the series had done before, or even after, could compare with the reception the show garnered in the wake of its ninth episode, “Return of the King.” Premiering on what would have been his 77th birthday, the episode, narrated by series protagonist Huey Freeman, tells the story of a Dr. Martin Luther King Jr. who, in an alternate reality, instead of being murdered on April 4, 1968, was merely incapacitated, thrust into a 32-year-long coma, only to reawaken to a radically changed yet still fundamentally unjust world post-9/11.
As the Freeman family attempts to help King rediscover his purpose, he is forced to grapple not only with the apathy of his own people but with an uncanny reality where his very image and status as a martyr for peace have been co-opted by surreptitious forces beyond his control to serve consumerist agendas entirely antithetical to his own. The standout scene, and the reason for its inclusion on this list, is the episode’s climax, where Dr. King, having pushed his way through to the podium of his own political rally and exasperated with those in attendance, erupts into a blistering expletive-filled speech condemning everything from BET and Soul Plane to the complacency of a people and a nation that have lost their way. The most memorable passage of King’s speech, taken from the lyrics of a track by rapper and series collaborator Asheru titled “N**gas,” features King using the N-word as a means of getting across his frustrated and heartbroken message.
The scene catapulted the episode into the eye of a firestorm of criticism, earning the series the avowed scorn and condemnation of the Reverend Al Sharpton and, later, a Peabody Award. Though King resigns himself to retirement, his speech nonetheless sparks a second civil-rights revolution that, nearly 14 years later, feels as incendiary and timely as when it first aired. It demonstrated both Aaron McGruder’s and Adult Swim’s willingness and capacity for taking big creative swings that pay off, while crafting a watershed moment in the history of Black American representation in cartoons. There is no other show quite like The Boondocks, and there is no other work of animated television quite like “Return of the King.” ( Click here to watch on HBO Max .)
Directed by Jodie Mack
Jodie Mack is one of the most overlooked greats of contemporary animation, a master of using unconventional materials in her work — everything from still photos to craft objects to computer boards to fabric and much more. Yard Work Is Hard Work is her magnum opus, a half-hour musical about falling in love … and then having to face the realities of domestic life under economic hardship. Made out of pictures cut mainly from magazines, advertisements, and other cultural images through which our expectations of the good life are shaped, it’s full of piercing irony.
This early sequence exemplifies this, as the two leads — an average all-American guy and gal — fantasize about the kind of romance they’re after. They start out singing from separate positions but in perfect harmony, underlining the homogeneity of the ideal they aspire to. The free-flowing, simple (incredibly catchy) lyrics capture youthful idealism beautifully. We will clean up and wipe off the dirt / We will have dinner and we’ll have dessert / We’ll laugh and we’ll flirt, we’ll crunch and we’ll munch / We will have breakfast and we will have lunch / Munch m-munch m-munch munch munch munch … we’ll hang out a bunch! Through it all, Mack pulls out one innovative visual or transition after another, layering on suggestive meanings through the sources of the pictures she uses.
Bridgit Folman Film Gang, Les Films d’Ici, Razor Film Produktion
Directed by Ari Folman
Animation and documentary have mainly intersected in the form of short films over the decades, along with features that made significant use of animated segments (such as In the Realms of the Unreal or Chicago 10). Then came Waltz With Bashir, fully animated, a hit which made history as the first animated film to be nominated for the Oscar for Best Foreign Language Film (now Best International Film). Director Ari Folman first shot the entire thing in live action on a sound stage before animators then used storyboard renderings of the footage as the basis for the animation — a technique related to but not quite the same as rotoscoping. This is not a gimmick but a deliberate distortion of reality to fit with the film’s themes of questioning one’s own memory and sense of what is and isn’t real.
Folman served in the IDF during the Israeli invasion of Lebanon in 1982 and made the movie to document his process of filling in the significant gaps around that time which he realized existed in his memory. The lynchpin of this process is an ambiguous but nebulously sinister dream he has, of swimming with some other soldiers at the beach while flares burn overhead. The story is his slow realization of what this dream really represents, and it is continually recontextualized until all the pieces fit together. Waltz With Bashir has helped pave the way for documentaries to explore the contrast between “reality” and subjective experience with animation, further seen in features like Tower and Is the Man Who Is Tall Happy?
Directed by Don Hertzfeldt
The work of Don Hertzfeldt fluctuates between the freakishly abstract and the harshly mundane, and his first feature film, It’s Such a Beautiful Day, exists somewhere between those two extremes. The film, an absurdist experimental collage of philosophical musings and deadpan humor, tells the surprisingly harrowing story of a life potentially approaching its end. Even though Hertzfeldt is now a two-time Academy Award winner and the only filmmaker to win Sundance’s Grand Jury Prize for Short Film twice, the American animator continues to work independently and on his own terms. His work, self-distributed via Vimeo (with his shorts shared at no cost on YouTube) is emblematic of the virile state of today’s independent animation scene and has attracted nearly fanatical support from alternative and mainstream audiences alike — not to mention animators. It’s not, that is, just his work that is notable: How he works, outside of the animation studio–industrial complex, matters too.
Still, what a work It’s Such a Beautiful Day is. Though its protagonist, Bill, appears at first glance to be a simple soul, the world he navigates is anything but simple. Live-action imagery is combined with animation on a backdrop of plain white paper rimmed by black, asymmetrical frames, as though each frame is being viewed through a telescope. These sensibilities are all in service of a story that, while immensely heart-rending in its depiction of isolation and loss of memory, is still remarkably down to earth. That is, until the end.
By the end of the film, Bill is clearly dying, but the narrator, voiced by Hertzfeldt, refuses to believe it. Hertzfeldt breaks apart his own story as it’s being told in search of providing his viewers, and himself, some catharsis. The narrator presents an alternative ending, where Bill instead survives his illness and travels the globe, learning the ways of the world anew and attempting to unlearn his past complacency. His cheating of death is taken to the most exaggerated melancholic endpoint possible: Bill outlives the human race and then Earth itself, even eventually observing the deaths of stars and the universe before the screen cuts to black. It’s difficult to put the emotional effect into words — Hertzfeldt manages to both reaffirm and escape the bleakness of the film’s first “ending.” But that’s what’s so beautiful about It’s Such a Beautiful Day — its seemingly rudimentary presentation is both deceptively complex and unspeakably moving.
Frederator Studios, Cartoon Network Studios
Directed by Larry Leichliter, Adam Muto, Nick Jennings
You know that quote about how the Velvet Underground didn’t sell a lot of records but inspired a lot of bands? You could say the same about everyone who worked on Adventure Time, except Adventure Time was extremely popular when it aired. Pendleton Ward’s seminal series follows the adventures of Jake the Dog and Finn the Human in the Land of Ooo.
Endlessly malleable, Adventure Time could be about anything — which is why, as a grander story began to take shape one standalone 11-minute episode at a time, what it chose to be about was astonishing. Quietly asserting that Ooo was a post-apocalyptic world and not a fantasy one, Adventure Time proved surprisingly interested in whose post-apocalypse its stories were set in. Is Finn the only human? Are the show’s villains bad or broken? And why does this world of magic and whimsy just feel like it’s missing something?
In “I Remember You,” Adventure Time finally lets the weight of over a hundred episodes of implication play out between two of its most tragic characters, revealing a shared history between recurring antagonist the Ice King, and Marceline, the 1,000-year-old vampire. Through song — an Adventure Time staple — countless details woven into the background of many episodes are anchored in the lives of two characters living with the consequences of unresolved trauma.
The animators and writers responsible for “I Remember You” — among them Rebecca Sugar (Steven Universe), Kent Osborne (Summer Camp Island), Patrick McHale (Over the Garden Wall) — would spiral out of Adventure Time’s creative primordial Ooo, ushering in a new era of idiosyncratic and emotional animation. ( Click here to watch on Hulu .)
CoMix Wave Films
Directed by Makoto Shinkai
Makoto Shinkai has been heralded over the past decade as one of Japan’s most vital directors of animation, a vanguard of a new generation of animators and a creator whose works position him as an heir apparent to the likes of Hayao Miyazaki. This comparison, however, on its surface, feels reductionist. While it is true that several animators, Shinkai included, have been named possible successors and standard-bearers to Miyazaki’s legacy, and while the two do share an affinity for magical realism, Shinkai’s approach is to juxtapose it as an element apart from the urban spaces which his characters populate, while Miyazaki employs it as a force that is constantly in tune with the pastoral environments which frequently feature as the primary settings of his films.
Your Name., Shinkai’s 2016 breakout film, which became the highest-grossing Japanese film when it first premiered, is a perfect encapsulation of his sensibilities as a director, particularly his affinity for star-crossed lovers entangled in existentially precarious situations like the body-swapping predicament that protagonists Taki and Mitsuha find themselves in. From its photorealistic backgrounds to its animation, writing, and sound design, Your Name. is a gorgeous film from front to back — which makes the distinction of this particular sequence all the more noteworthy. Taki’s trip, both literal and metaphorical, through time is a stirring dreamlike odyssey that sees a drawing on a cave ceiling transform into the wisping tail of a brilliant comet streaking and warping like a ribbon through the expanse of time and space, tethering Taki as he experiences first-hand a beautiful lucid vision of Mitsuha’s birth, the passing of her mother, and the subsequent estrangement between her and her father.
Yoshitoshi Shinomiya, a frequent collaborator of Shinkai’s, is the chief animator credited for this sequence, rendering the scene in a saturated pastel chalk aesthetic contrasted with deep shadows and ethereal light leaks, crafting a mood that feels both otherworldly and intimate. It is inarguably the standout scene of Your Name., the most beautiful sequence of animation in a film with no shortage of beautiful sequences, and a feat of artistry that is as emotionally resonant as it is visually compelling. In short, it is the quintessence of all that Makoto Shinkai has to offer as a director. Shinkai may or may not be the next Miyazaki, but he is undeniably the Makoto Shinkai of his generation and a creative force that heralds a bright and broad horizon for the future of anime.
Science Saru
Directed by Masaaki Yuasa
A streaming competitor summed up the Netflix anime strategy in 2018: Outbid no matter what. Anime fans are a good investment, clustered across platforms, demographics, and geographies. Word of mouth incites FOMO, working fast and spreading far. All Netflix needs is the right content to draw these fans in.
Devilman Crybaby features the kind of grim, dark grotesquerie that was popular in the straight-to-VHS market of the 1980s, where the original Devilman (1987) took off. This extremely NSFW sequence in the new series is longer than the original, more explicit and transgressive, and would have all but buried Crybaby in television’s graveyard time slots. Instead, it found a perch on Netflix and became one of the most acclaimed anime titles of 2018, responsible for respected veteran director Masaaki Yuasa’s biggest audience to date — 90 percent of it outside Japan.
When Netflix licenses anime as a Netflix Original, it buys the right to stream it on Netflix first and exclusively, globally, promoted alongside other Originals. (It’s done this with new shows as well as old, like Neon Genesis Evangelion .) Streaming platforms aren’t subject to broadcast rules or restrictions, so creators can be as graphic or unconventional as they like. The Netflix subscriber base is so large, so diverse, the audience for anything is already there. The algorithm just has to identify them.
Netflix currently frustrates anime fans by slapping the Originals label on anything it secures for exclusive distribution, whether it was involved in the show’s production or not. But its influence in the anime market is growing and pushing other streaming platforms to compete by catering to anime fans with their own output. Crybaby also demonstrated the value of giving auteurs like Yuasa the resources and autonomy to create unfiltered artistry that fans can’t get elsewhere. ( Click here to watch on Netflix .)
Columbia Pictures, Sony Pictures Animation
Directed by Bob Persichetti, Peter Ramsey, Rodney Rothman
The much-buzzed-about postmodern take on the Spider-Man mythos, Spider-Man: Into The Spider-Verse, was a thrilling revitalization of the increasingly repetitive superhero movie genre. It’s also groundbreaking thanks to its creative approach to 3-D animation, mixing in the styles of hand-drawn 2-D animation and even the Ben-Day dot texture of classic comic-book printing. Directed by Rodney Rothman, Peter Ramsey, and Bob Persichetti, the whole film is riveting, but one of its sequences has been picked apart far more than the rest, and with good reason.
A moment that has, according to Andy Leviton, been part of the film almost since the very beginning was Miles’s “Leap of Faith,” the scene where he becomes Spider-Man. The sequence cuts back and forth between him preparing to take a very literal “leap of faith” that Peter Parker told him about earlier in the film and finally suiting up in a DIY, spray-painted version of the Spider-Man suit. It’s the film’s emotional crux and is handled marvelously as a mostly visual piece of storytelling — both the loud moments and the grace notes, all gorgeous to behold. Miles’s fingers still cling to the glass in unconscious fear, and it breaks away in shards as he jumps. The flailing of his limbs straightens into a precise dive, emphasized by the film’s use of comic-booky sequential panels spliced into the shot. And in perhaps one of the film’s most talked-about visuals — the virtual camera is framed upside down so, in the words of Rothman and Phil Lord’s script, he’s not falling, but rising.
Notably, despite this being the big moment for Miles, he is animated on twos (meaning his movement is drawn in 12 unique frames per second, each held for two frames) rather than on ones (24 unique frames per second, which is the standard). The Spider-Verse animators alternated between animating on ones and twos depending on the scene. In Miles’s case, the fluidity of movement aligns with his confidence as the film progresses, a clear use of the animation toolkit to illustrate story and character development. Even though he’s on twos, the “Leap of Faith” is still an invigorating moment, one that captures the jittery nerves that come with self-actualization and expands upon Stan Lee and Steve Ditko’s original idea for Spider-Man — that under a mask, anyone could be a hero.
That sentiment has rarely been captured with the artistry and nuance shown in Spider-Man: Into the Spider-Verse, to say nothing of seeing it channeled through the experiences of a fully realized Afro-Latino hero (who happens to have great taste in music). It’s now the job of tomorrow’s animators to push the medium forward again. We have faith. ( Click here to watch on Netflix .)
Cartoon Network Studios
Directed by Joe Johnston (supervising), Kat Morris (supervising), Liz Artinian (art)
The story of Steven Universe, a boy who inherits an intergalactic war from his late mother, Rose Quartz (a.k.a Pink Diamond), is also a struggle for self-knowledge and identity. That all comes to a head in the series’ four-part finale, “Change Your Mind,” as Steven confronts White Diamond, the architect of the Gems’ tyrannical empire, who seeks to remake everything in her image while denying both Steven’s existence and his very name. Part of Steven’s fight to save the universe is also a fight to get White to listen to him, all while coming to understand himself better along the way — and the moment where Steven finally succeeds is simply beautiful, not to mention beautifully animated.
In one of the show’s more horrifying moments, Steven is split into two by White Diamond, who plucks the magical gem he inherited from his mother out of his navel. White expects Pink to come back, but instead, Steven’s gem reforms halfway, as a pink-hued mirror image of himself. Terror soon gives way to affection as the two halves unite in one final act of the show’s concept of “fusion.” In anime influences like Dragon Ball Z, fusion is used in the pursuit of physical strength; here, creator Rebecca Sugar & Co. reenvision it as an act of empathy and intimacy. While Steven had previously fused with other renegade Gems and his closest friend, Connie, he now fuses with his doppelgänger in an act of self-understanding, a moment beautifully hand-animated on ones (24 frames per second) by none other than storied animator James Baxter, as his two halves embrace and dance and fuse together, complete, in one fluid cut.
Baxter, whose remarkable career spanned many a Disney Renaissance film, had previously worked with Cartoon Network on episodes of Adventure Time, for which Sugar was a director, storyboard artist, and songwriter. Here, he animates the show’s emotional climax in a manner that doesn’t so much break from the show’s style as it infuses it with a classical style of movement. The magic of a moment stems from its visuals as much as it does from its metaphor — one of self-affirmation and self-love that encourages the viewing of Steven’s journey as a trans allegory. Across its five seasons, the series had wrestled with its own network for the ability to openly portray intimacy between LGBTQ+ characters, so to have this moment, rather than a final battle, stand as the climax to the series is something special. By the time this aired, Steven Universe had already influenced several ongoing series, like She-Ra and the Princesses of Power (another extremely queer sci-fi) and OK K.O.! Let’s Be Heroes (brainchild of Steven Universe alum Ian Jones-Quartey), not to mention the portfolios of animation students everywhere.
But the sequence itself feels revolutionary in a way the rest of the show felt like it was building toward for 160 episodes. So much of the defining work by animators on this list is visible in maximalist displays of outsized blockbuster effort. Steven Universe could have ended on such a virtuoso set piece. Its animators had the chops, and made a full-blown movie musical later in the same year. What they chose to represent in “Change Your Mind” instead was internal and rooted in self-love. They found what Norman McLaren would call the “invisible.”
Correction: An earlier version of this list mistakenly referred to Floyd Norman as “Norman Floyd.” Vulture regrets the error.
Eric Vilas-Boas is the entertainment editor at Observer. John Maher is the news editor at Publishers Weekly. Together they ran the animation journalism publication The Dot and Line . |
26 | Watching Age-Restricted Videos | To give you an age-appropriate experience on YouTube, content that isn’t suitable for viewers under 18 is age-restricted.
Age-restricted videos are not viewable if you:
If you are in Australia, the European Union (EU), European Economic Area (EEA), Switzerland, or the United Kingdom, you may be asked to verify your age to watch age-restricted videos.
In line with the Audiovisual Media Services Directive (AVMSD), you may be asked to verify your date of birth to watch age-restricted videos. AVMSD covers all audiovisual media, including video sharing platforms.
Follow the prompts to submit an image of a valid ID or credit card. Learn more about how age verification works.
You may be asked to verify your date of birth to watch age-restricted videos. This added step is informed by the Australian Online Safety (Restricted Access Systems) Declaration. The declaration requires platforms to take reasonable steps to confirm users are adults in order to access content that is potentially inappropriate for viewers under 18.
Follow the prompts to submit an image of a valid driver’s license, Proof of Age card, or passport. Learn more about how age verification works. |
1 | Easy to Use React Form Library – React Bare Forms | joegasewicz/react-bare-forms |
4 | Using GPT-3 for plain language incident root cause from logs | Plain Language root cause summaries. Try it for free!
This project is a favorite of mine and so I wanted to share a glimpse of what we've been up to with OpenAI's amazing GPT-3 language model. Today I'll be sharing a couple of straightforward results. There are more advanced avenues we're exploring for our use of GPT-3, such as fine-tuning (custom pre-training for specific datasets); you'll hear none of that today, but if you're interested in this topic, follow this blog for updates.
You can also see some real-world results from our customer base here.
I believe the future of monitoring has to include autonomous root cause tools. Deployment frequency and complexity make the "new" issue the driver of Mean-Time-To-Resolve (MTTR) today. You may always know when something breaks, but you won't always know what. Problem is, root cause often hides in log files... but since the volume of logs has grown enormous and it's not always clear what to search for when a new issue arises, we saw a need for a smarter tool to help with root cause.
If you structure logs really well (we do this with machine learning), including free-text logs, even when there are only one or two examples for a given event type, you can start talking confidently about a "rare" event. Severity lets you talk about a "bad" event. So, you can imagine one log stream expressing a "rare" event, and then another log stream expressing an event that is both "rare" and "bad". A simple point process model lets us talk naively about how likely it is that such a pair of events should show up as close together in time or closer, given patterns of "rare" and "rare"+"bad" events in those two streams. Watching the data for a while will let you estimate parameters for this model, and you can start picking out bouts of such event pairs. Zebrium takes such a bout, and generates an incident report from it. The results we are seeing…
Our approach works really well at generating root cause reports: if there's a root cause indicator in the logs, it will almost always make its way into a concise root cause report. Our approach has proven robust to different kinds of applications and logs; it requires no training or rules, pre-built or otherwise.
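As a deliberately naive illustration of that pair-of-events reasoning (a toy sketch, not Zebrium's actual model), you can treat each stream's rare events as an independent Poisson process and ask how unlikely it is that a "rare" event would land this close to a "rare"+"bad" event purely by chance; in Python, with made-up rates:

import math

def coincidence_probability(rate_per_sec, delta_sec):
    # For a Poisson process with this average rate, the probability that at
    # least one of its events lands within +/- delta_sec of a reference event.
    return 1.0 - math.exp(-2.0 * rate_per_sec * delta_sec)

def looks_like_incident(rare_rate_per_sec, observed_gap_sec, threshold=0.01):
    # Flag the pair if a gap this small would be very unlikely under independence.
    return coincidence_probability(rare_rate_per_sec, observed_gap_sec) < threshold

# A "rare" event type seen roughly once a day, landing 30 seconds from a
# "rare"+"bad" event in another stream:
print(looks_like_incident(1.0 / 86400.0, 30.0))  # True -- unlikely to be chance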
However, there's a big challenge remaining here. The last mile of delivering the benefits of autonomous Root Cause Analysis (RCA) is translating even the concise reports we generate into something the user can understand. The user who needs the quick RCA is not always the person who can understand the oblique root cause indicators in a technical log file. Who is to consume these reports, and what composes them?
This challenge is even more stark for a Managed Service Provider (MSP) or enterprise. At a small company, engineers may be responsible for front-line triage. At a large company or MSP, there will be a front-line, typically junior person who needs a quick RCA; they can't be expected to know all the different intricacies of application logs and what those logs mean. To bring value over the last mile to them, we need plain language.
Now that I've set the stage, let me show you a couple of examples that I think make it clear how transformational GPT-3 can be in the context of our problem. I think you'll agree with me that autonomous RCA seems a lot more real when it's translated into plain-language, and surely a lot more accessible by a lot of users.
GPT-3 is a state-of-the-art autoregressive transformer language model, with parameters in the hundreds of billions. It is produced, trained, and maintained by OpenAI, and it has really brought state-of-the-art, deep NLP to developers’ fingertips.
The most basic procedure for using GPT-3 looks like this -
1) Construct a “prompt” for the model – this is a sort of open-ended document that leaves the model to “continue the story”, as it were; the continuation or response is called a “completion”. For the below two-line examples, I used the following simple prompt construction:
An expert examined the logs for the incident.
The first log message was: <first_line_goes_here>
The last log message was: <second_line_goes_here>
The expert described what had happened, in plain English:
2) Select model (I used davinci) and parameters - for the below examples, I used the following parameters to the davinci GPT-3 model:
3) Call the API; post-process the completion for dangling or repeated phrases, to taste. Serve warm. (A minimal sketch of these three steps follows below.)
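To make the three steps concrete, here is a minimal Python sketch of the flow using the openai library's Completion endpoint with the davinci model. The parameter values and the post-processing rule are illustrative assumptions, not the exact settings used for the examples below:

import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT_TEMPLATE = (
    "An expert examined the logs for the incident.\n"
    "The first log message was: {first}\n"
    "The last log message was: {last}\n"
    "The expert described what had happened, in plain English:"
)

def summarize_incident(first_line, last_line):
    # Step 1: build the prompt from the report's "FIRST" and "WORST" log events.
    prompt = PROMPT_TEMPLATE.format(first=first_line, last=last_line)
    # Steps 2 and 3: call the davinci model (parameter values here are guesses).
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=64,
        temperature=0.4,
        stop=["\n\n"],
    )
    text = response.choices[0].text.strip()
    # Post-process, to taste: cut at the last full stop to drop a dangling phrase.
    if "." in text:
        text = text[: text.rfind(".") + 1]
    return text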
We shut down the Postgres database backing our Atlassian stack. Thousands of events, largely errors, ensued in the logs. Zebrium dutifully generates a root cause report, based on an ML-driven auto-detection. The report looks like this:
As you can see, Zebrium pulled about a dozen log events into the report; the first 9 are shown in the image. You’ll notice that the first event is colored green and the last event purple; these are the “FIRST” and the “WORST”, and we use these to characterize the incident in the incidents list. Here’s what that looks like:
OK, so Zebrium did a pretty good job here. It autonomously picked these two log events out of thousands to display to the user on the list page. Often, the “FIRST” event reflects root cause and the “WORST” reflects follow-on problems; you’ll note here that the first is not even an error, but it’s rare, and happens too close to other incident events, according to our model, to be a coincidence.
Now in this case, it’s pretty clear to someone who has looked at a log file before that the database was shut down by the administrator. You’ll also notice, though, above the log events, a field called “DESCRIPTION”: this was generated by GPT-3. It says:
“The database server was stopped by the administrator.”
We passed in the “FIRST” and “WORST” events, as described above, to GPT-3. Notice that the first line says that the “PostgreSQL RDBMS” was “Stopped”. The second line says that there was a “PSQLException” indicating that a connection was “terminating due to administrator command”.
The word “administrator” appears on a different line than the word “PostgreSQL”, and nowhere in the log messages are the words “database” or “server” to be found at all.
The algorithm is doing the following:
1.) Inferring that the statements on the two lines are related
2.) Inferring that they are both speaking about a “database server”
3.) Putting the two statements together: “stopped by the administrator”
Essentially, this is the sort of summary someone would put into a ticketing system; here, it’s being done automatically, thanks to an appropriately selected pair of log messages coupled with a language model.
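For this incident, the filled-in prompt would look roughly like the following; the two log lines here are paraphrased stand-ins for the report's "FIRST" and "WORST" events, not verbatim copies of them:

An expert examined the logs for the incident.
The first log message was: systemd[1]: Stopped PostgreSQL RDBMS.
The last log message was: org.postgresql.util.PSQLException: FATAL: terminating connection due to administrator command
The expert described what had happened, in plain English:

GPT-3's completion of a prompt shaped like this is the one-sentence description quoted above.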
Example two: Out of memory
In this case, and using the same parameters as were used in the previous example, we take a root cause report for what turns out to be an out-of-memory issue to GPT-3 and see what sort of summary it gives us.
This particular report was generated in response to a signal from a monitoring tool that saw endpoint latencies spike too high. The system was configured with Zebrium to provide root cause observability across the stack. Here is the report Zebrium ML generated:
As you can see, Zebrium pulled about a dozen log events into the report; the first 9 are shown in the image. You’ll notice that the first event is colored green and the last event purple; these are the “FIRST” and the “WORST”, and we use these to characterize the incident in the incidents list. Here’s what that looks like:
In this case, it’s not so clear that the problem is due to an out-of-memory issue, unless the reader is familiar with Linux and the OOM killer facility. There are a lot of technical parameters in these events as well, obscuring what was really at the heart of the problem.
Taking a look above the log messages again, in the “DESCRIPTION” field, let’s see what GPT-3 gave us:
“The memcached process was killed by the OOM killer because it was consuming too much memory. The OOM killer is a Linux kernel feature that allows the system to kill a process when the system is running out of memory.”
What this summary tells us explicitly in plain-language is that the system was running out of memory. This is in fact the root cause of the original incident, tallying up a successful root cause summary.
For an L1 or L2, the Zebrium root cause report on its own may not be enough. That person won’t necessarily know that OOM means out-of-memory, or that the “FIRST” message isn’t talking about an application issue entirely within memcached.
In the second example, I would have liked to have seen some recognition that the first line implies that the system was out-of-memory, and therefore the second line was due to related problems (in fact, the application server was paged out). It’s a lot to ask, but I think it might be possible, given some combination of more expressive prompts, fine-tuning, and presenting GPT-3 with other select data from the report (in this case, the report also included metrics anomalies showing a plunge in swap free).
You may have noticed that GPT-3 gave us a lot more detail about the Linux OOM killer than was mentioned in the log messages themselves. I believe it was able to provide this context in the same way it knew that PSQL referred to a database server: it was trained on the internet. There is likely enough information in the public domain that, as GPT-3 was being trained, it picked up enough cursory knowledge of these sorts of systems to add additional value.
To test my hypothesis, I spent some time fiddling quite unscientifically with the prompt and model parameters, trying to get it to expound on the OOM example as described above. I also gave the model a much larger budget of response tokens. Eventually, I teased out this response:
The root cause of the problem is that oomkiller was invoked because of a large number of allocations. The kernel's OOM killer is triggered when the system is out of memory and needs to free some memory. Since this action can kill processes, it is protected by a flag (oom_adj) which can be set or cleared by user space applications. When this flag is set, the kernel will kill processes for which there are no more than one page left in their memory cgroups (cgroups are used to control resource usage on a per-process basis).
By default, Linux uses an algorithm called "RSS" (Resident Set Size) to decide whether or not to trigger the OOM killer. This algorithm calculates how much physical memory each process has reserved and compares it with its current virtual size. If there's enough memory available, then RSS will not touch any process even if they have been consuming too many resources for too long; but if there isn't enough memory available, then RSS will trigger the OOM killer and start killing processes until there's enough physical space again.
So, there’s obviously a lot of specialized knowledge locked away in the trained model, just waiting to be unlocked by the right inputs. Really awesome.
GPT-3 has a lot more capability than we’ve tapped into yet. Even so, I hope I’ve convinced you that the autonomous future of monitoring is already upon us. If you would like to participate in the beta program for the GPT-3 feature, please email us (our production Autonomous RCA platform does not yet include the GPT-3 feature). Follow this blog for my next articles on our use of GPT-3, where I promise to go into more technical depth. |
8 | What if everyone in the world went vegan? [video] | Your browser isn’t supported anymore. Update it to get the best YouTube experience and our latest features. Learn more |
1 | Samsung Merges TV, Home Appliances and Mobile Under a New CEO | Samsung replaces CEOs of 3 key business units
Samsung has led the global TV market in sales for 15 years. Also, it has grown to dominate the home appliance sector with its BESPOKE customizable brand, introduced in June 2019, which allows customers to tailor their own configurations of material, color and modules.
Even though Samsung has two powerful end product brands ― BESPOKE for home appliances and Galaxy for mobile devices ― the company has not been able to create enough synergy between them, because their respective business units operated independently. Samsung has already taken steps to fuse the two brands through the BESPOKE edition of the Galaxy Z Flip3 foldable phone released in October. By offering as many as 49 color options, Samsung is letting consumers make their own versions of the clamshell-style foldable smartphones. Under the new vice chairman, who has been recognized for making Samsung the absolute leader in the TV business, the company is expected to take steps to appeal more to younger consumers through a convergence of the BESPOKE and Galaxy brands.
The combined SET business means that Samsung may target a software company for acquisition, due to its need to establish a powerful end product ecosystem as Apple has done. For years, Samsung has reiterated its intention to acquire companies with the potential to strengthen its presence in the IT industry. The electronics giant has not specified its acquisition targets or the specific timing of a possible deal, but industry insiders believe Samsung is interested in acquiring a chip foundry firm. However, chances are also increasing that Samsung may turn its eyes to software industry firms as the combined Galaxy and BESPOKE ecosystem will require strong software that can help the products from the two brands create more value than being just linked to each other.
Michael Fritzell, a Singapore-based analyst at Asian Century Stocks, said Samsung's competitive edge over other companies comes from its hardware capability. But he pointed out that the tech giant needs to improve its competence in software to better compete against Apple.
"Samsung's American competitor Apple has defended its market share by building an ecosystem of devices via clever software integration. And I also find it instructive that Japanese competitor Sony Corporation today almost makes all of its profits from its gaming console, Sony PlayStation. The PlayStation product is also an entire ecosystem that offers more than just hardware," the analyst told The Korea Times, Tuesday.
"The software enables a technology company to lock in the customer and for devices to work together in harmony. If Samsung could build a software ecosystem around its devices rather than relying on Google Android, it would be better able to defend itself from Chinese competitors. Unfortunately, I don't see a strong ecosystem in Samsung's devices that could help ensure customer loyalty," he added. |
3 | Gene editing of crops and livestock may soon be permitted in England | Gene editing of crops and livestock may soon be permitted in England for the first time under a consultation launched by the government on Thursday.
Ministers said changing the current strict rules, which originate from the EU and make gene editing for crops and livestock almost impossible, would bring widespread benefits to consumers and farmers, including healthier food, environmental improvements and better animal welfare.
But some environmental and animal welfare groups raised concerns that loosening the rules could lead to lower animal welfare, for instance if the technology was used to promote faster growth over animal health, or to enable livestock to be kept in crowded conditions.
Gene editing involves cutting and splicing sections of DNA within a single genome to bring about changes that were previously possible only through lengthy selective breeding of plants and animals. This is a different process from genetic modification, which involves introducing DNA from one species into another, and which will continue to be subject to a near-total ban.
George Eustice, the secretary of state for environment, food and rural affairs, said: “Gene editing has the ability to harness the genetic resources that mother nature has provided, in order to tackle the challenges of our age. This includes breeding crops that perform better, reducing costs to farmers and impacts on the environment, and helping us all adapt to the challenges of climate change.”
Through gene editing, crops could be developed that require fewer pesticides or fertilisers, or which have enhanced nutritional properties. For instance, tomatoes that can lower blood pressure have recently been licensed for sale in Japan. Animal genes could also be edited in ways that would allow the breeding of livestock that was resistant to key diseases, which would reduce the need for antibiotics and so the likelihood of developing resistant superbugs.
However, Peter Stevenson, chief policy adviser at the campaigning group Compassion in World Farming, said the ways in which livestock had been bred for profitable traits in the past suggested the development of gene editing would be harmful to animals. He pointed to genetic selection for broiler chickens, whereby the fast growth rates gave rise to leg abnormalities and lameness, and in laying hens, selecting for high egg production caused osteoporosis, leaving the hens vulnerable to bone fractures.
Breeding animals resistant to diseases would only encourage farmers to stock them more intensively, he added, leading to overcrowding and lower animal welfare. “This is pushing us down the industrial farming route,” he warned. “It is entrenching an antiquated system of farming that we would do better to abandon.”
Gareth Morgan, head of farming at the Soil Association, said: “We question the speed with which the government is using Brexit to pursue a deregulatory agenda in this area. It is vital that citizens and farmers who do not wish to eat or grow gene-edited crops or animals are offered adequate protection.”
Prof Gideon Henderson, chief scientist at Defra, said the government had made clear its commitment to upholding animal welfare standards: “The motivation for this is not lowering animal welfare standards – it’s about the benefits.”
Gene editing has been made possible through the development of tools such as Crispr-Cas9, which allows scientists to finely target sections of DNA, to remove or change them, or to turn certain genes on or off. Developed in 2012, Crispr is cheap and has become widely used among scientists.
But in 2018 the European court of justice controversially ruled that gene editing was essentially the same as genetic modification and should be subject to the same tight rules. GM crops are subject to a near-total ban in the EU, though a few have received permits.
Henderson said allowing gene editing in England should not affect trade in agricultural products with the EU, the biggest market for British farmers. “It will have to be taken into consideration in our exports to the EU – there are ways for gene-edited crops to be labelled, so they can be targeted to markets where we can sell,” he said. “It will not impede trade and may enhance it significantly in some cases [with other countries].”
Many scientists welcomed the government consultation, which will run for 10 weeks until 17 March. Huw Jones, professor of translational genomics for plant breeding at Aberystwyth University, said: “We need food and agriculture, but we also need it to stop harming the planet. A combination of better land management and better crops can do that. In its simplest form, gene editing is merely a speedier way to find the genetic variation made by natural processes.”
Mick Watson, professor of bioinformatics and computational biology at the Roslin Institute at the University of Edinburgh, said: “As well as improving animals’ ability to respond to disease, gene editing could also be used to create fitter, healthier animals with higher standards of animal welfare. [This] could place cutting-edge technology at the heart of UK livestock improvement.”
The National Farmers’ Union also welcomed the consultation, to be set out in detail on Thursday during the online Oxford Farming Conference. Tom Bradshaw, vice-president of the NFU, said: “New precision breeding techniques such as gene editing have the potential to offer huge benefits to UK farming and the environment and are absolutely critical in helping us achieve our climate change net-zero ambition.” |
6 | On Free Software, Education in China and the Covid-19 Pandemic |
I am a secondary school student from Shanghai, China. This email
discusses the problems I discovered in the Chinese educational system,
in terms of students' right to freedom in computing and options to
control the COVID-19 pandemic from the standpoint of a person living in
China.
When COVID-19 broke out in 2020, students were required to watch lecture
videos produced by the city's education department for twenty minutes,
then join the Tencent Meetings room to discuss in their own class for
10--15 minutes.
Watching the videos wasn't an issue for me. Our apartment has cable TV,
where the videos are broadcast; there was also a website that played the
livestream without JavaScript. However, Tencent Meetings presented a
problem to me.
At the time, I ran Arch Linux. (Currently, I run Hyperbola
GNU/Linux-libre, a Free Software-only distribution, which would have
made this even harder.) Tencent Meetings, claiming to support "all
operating systems and platforms", only supports Windows and macOS. (I
wonder how they passed the resolution to display that statement; I
believe that they have many programmers who use GNU/Linux.) (As of
October 2021, a classmate noted that there is a "Linux version".) School
required Tencent Meetings, therefore I went through a hard process to
set up QEMU running a Windows 7 virtual machine---I believed that 7
would be slightly better than 10 in terms of privacy, though as always
with nonfree software, I can't really know for sure. It was slightly
unstable, which is an annoyance, for example the connection from the
Windows audio server to pulseaudio would stop working from time to time,
but it was acceptable. Though my setup was okay (from the perspective of
my school), it left me in a psychological crisis about education and
freedom. More on that later.
Offline classes resumed in May 2020, as most of China has minimal cases
of COVID-19. This freed me from using a proprietary
non-privacy-respecting bloated piece of software in a virtual machine,
but it did not free me from teachers' requirement to use WeChat (think
of it as the equivalent of WhatsApp in China), Xiaoheiban (a proprietary
classroom information distribution system), or other pieces of nonfree
software.
Similar to the beliefs stated in the GNU Education project, I believe
that schools and education are a means of sharing information and
knowledge. I understand that meeting software and lesson management
software are used as means of distributing knowledge, rather than the
knowledge being distributed themselves. However, I believe this doesn't
lead to the argument that the mandate of proprietary software usage is
just, for the three reasons below.
1. There are always going to be curious students who wonder how the
technology works. Proprietary software denies them this right.
2. The usage of proprietary software when young may implant dependence
on it in the future.
3. Education is a right and a responsibility. Mandating nonfree software
in education adds unjust responsibilities on students.
Point 1 and 2 are explained well in the Education section of the GNU
website, therefore I am not going to focus on them. Focusing on the
third point:
Under laws of almost all countries, citizens have the right to an
education. Traditionally, this involves going to school, meeting
teachers and classmates, listening to classes, taking notes, passing
exams (I have strong opinions that exam systems ought to change to
better represent individual talents, but this is out of scope of this
memo.) and finishing homework. Students lose a slight bit of their
time and freedom of movement (as in, it's not easy to move to a house
100 miles away from school), in exchange for being educated.
However, with schools requiring the use of nonfree software, in effect
students are required to give up their privacy, and digital freedom,
both crucial rights in modern society, as the effect of needing to use
nonfree software. The right to education has effectively turned into an
exchange for other basic rights. This is not acceptable.
Furthermore, in countries like China, 9 years of education is mandatory
for children. I understand this law as a means to the goal of creating
a knowledgeable and educated society, which is good. However, when
mandatory education mandates nonfree software, it reduces to "children
are required to use nonfree software". So, being a child here is pretty
unlucky, because there goes your right to privacy, your independence,
and your freedom, because of a law that's supposed to help society.
We need to stop using nonfree software in education.
In the beginning of this email, I mentioned COVID-19. You might be
wondering how China fully put the pandemic under control in just 5
months, which is seemingly impossible if all you know is how the US
dealt with this situation.
The answer is that China is implementing strict contact tracing. This
is extremely easy because of the prevalence of surveillance. Many would
argue that this is a benefit of surveillance, which I believe to be true.
However, no comparisons were given between losing privacy and increasing
the risk of infection. Briefly inspecting this idea in my head, it's
really hard to think about---privacy and freedom are important in the
long term, at the cost of many lives in the pandemic. The lives of
these dead are gone---they lose not only privacy and computing freedom,
they lose their lives, which costs them their opportunity to pursue
their dreams in this world, and they have no freedom of choice, speech,
etc., as they aren't alive. Once again, this is hard to wrap my mind
around, therefore I would especially like to invite the community to
discuss this.
The contact tracing system used is not Free Software. At first I didn't
understand why (except for the explanation that they want to profit from
harming citizens), but I noticed that the authenticity and accuracy of
the system may be affected if users are allowed to modify their
software. This seems to be the core of some problems with regards to
software freedom---here, the user is not running software to complete
their tasks. Rather, it's the government's way to maintain public
safety, therefore I believe that whether users should be able to modify
software in these conditions is up to discussion. Back to the point,
since a green-code proof from the system is needed to get in a lot of
places, a person basically needs to use proprietary software to live a
normal life (to get into coffee shops, for example).
In America and other countries, things aren't that good either. For one,
the pandemic isn't controlled well. As a consequence, a lot of places
require negative COVID tests to do stuff. A thread on the LibrePlanet
mailing list discusses this issue, as a lot of these tests require
nonfree software on users' phones. Note that this thread spans several
months, as it is a hot discussion, so look in the September and
October archives too. The thread explains the implications clearly, thus
I am not discussing it here.
https://lists.gnu.org/archive/html/libreplanet-discuss/2021-08/msg00008.html
Additionally, I heard that some US courts require Zoom for online cases,
therefore it seems that a person's right to judicial justice comes at the
cost of digital freedom. I can't confirm this, but if that's true, I'm
truly disappointed in the judicial system, even though I'm not a US
citizen.
I am looking forward to a freer society, or at least one where the above
problems get solved.
Sincerely,
Andrew Yu
Verbatim redistribution of this memo is allowed worldwide, but
changing it is not allowed, as this is not a technical memo, rather,
a politico-philosophical opinion paper. |
1 | Bagatto: An extensible, transparent static site generator | Bagatto is a static site generator written in Janet.
It is inspired most directly by Garrett Smith's LambdaPad, an SSG
in Erlang. LambdaPad falls more to the code side of the config--code
spectrum, and Bagatto follows that philosophy. Thus, it's designed to
expose the full expressive power and extensibility of the language it's
written in. Janet is a lisp that's designed for simplicity and ease of
embedding, and thus it's a very good fit for this model.
To create a Bagatto website, you should create a single Janet source
file. Because you're writing normal source code, you have the full
power of the Janet language at your disposal. Bagatto tries to keep
the "magic" to a minimum; in all cases, it tries to make the process
of loading source files and of generating new files completely
transparent, inspectable and extensible.
In Bagatto, a website consists of two things: a data specification,
and a site specification.
A data specification describes all the inputs into the site
generator. These are of two main types: either Janet values or
references to other source files, e.g., JSON configuration files or
Markdown articles. When Bagatto evaluates a data specification, it
loads all of the files and parses them for arbitrary
attributes. It includes some baseline attributes, like path and
contents, but allows the site author to specify or implement other
attribute parsers. For instance, it comes with a parser function that
will read the YAML frontmatter from a Markdown file and include those
as additional attributes.
Here's an example data specification:
(def data {:config {:attrs {:title "A Demo Bagatto Config"}}
           :posts {:src (bagatto/slurp-* "posts/*.md")
                   :attrs parse-post}
           :static {:src (bagatto/* "static/*")
                    :attrs bagatto/parse-base}
           :config-json {:src "config.json"
                         :attrs bagatto/parse-json}
           :config-file {:src "config.jdn"}})
A site specification describes all of the outputs of the site
generator: the paths and contents of all the files that the generator
should create. For static files like CSS and images, this might be as
simple as copying the original file to a new path. For the generated
content of the website, this will include rendering templates by
using the attributes in the input step.
Here's an example site specification:
(def site {:post-index {:path index-path
                        :contents render-post-index}
           :posts {:each :posts
                   :path make-post-path
                   :contents render-post}
           :static {:each :static
                    :path make-static-path}})
A demo project can be seen in this
repository. This consists of a simple module
which includes some source files and some rendered pages.
To run it, navigate to the demo directory and then run the bag command on the index.janet module.
Bagatto outputs the path of each file it copies or creates.
We can then open up site/index.html in a web browser and click
around. Beware: it's pretty ugly. Both index.janet and the template
files it references are very thoroughly commented; hopefully they can
provide a whirlwind introduction.
See the Manual for a deep dive. |
2 | Alexa Seems to Be Getting Worse with Time | I loved my Amazon Echo when they first came out. I love to cook, and I love to listen to music, so my Amazon Echo naturally won a spot in my kitchen when I first got it. Later on, it got upgraded to a Sonos One. Both speakers had voice control features that I loved, and used all the time. Hands-free timer setting and music control? Sign me up! However, Alexa’s shine has started to wear off - especially in the last six months. What’s changed?
Just in the last week or two, I’ve noticed a pretty serious bug in the Alexa software: its timers are not playing alarms when they expire! I can set a timer on my Sonos One using Alexa, open the Alexa app, and watch the timer count down to “00:00:00”. I expected hitting “00:00:00” to result in a tone from the speaker, like it’s done reliably for years. Instead: silence. As if to add insult to injury, the timer stays in the Alexa app, stuck forever at “00:00:00”. I’m not certain if this is a Sonos problem, or an Alexa problem - I see this behavior on my Sonos One, but not on my Sonos Move. Whatever the case, it’s very literally burnt my biscuits. I do, primarily, use it as a kitchen timer, and it’s hard not to notice a bug that ruins your food!
Alexa unquestionably has a much lower success rate understanding my wife’s voice than mine. At first, I chalked this up to a matter of training, and understanding how the voice recognition technology works. I’ve participated in the development of a few voice speakers. I know that the best results come when you face the speaker, and speak slowly, clearly, and without pausing. Even with coaching to do this, Alexa either fails to understand my wife, or ignores her completely, at a rate of at least twice my own. Unfortunately, I think I know the reason for this, and it’s not a pretty one. Voice assistant software naturally has a better success rate when tested against speech signals with lower frequency ranges. As a result, it naturally has a better chance of understanding deeper voices, which predisposes it to understand men better than women. I thought this was just paranoid suspicion on my part, but I’m not the only one who has noticed this phenomenon. Not to mention that it’s hard to argue with a plot of Voice Command Success Rate against average input frequency. The trend just goes down, down, down as the average input frequency increases. It’s a disappointing bias set by technology.
I also get the impression that the quality of voice interactions has gone downhill over time. No longer does Alexa magically find the thing I want to do - whether that’s set a timer, add an item to a grocery list, or start playing a particular album I want to hear. I find myself leaning, more and more, on my smartphone to queue up music. Alexa is just not up to the task on any given day.
I have to admit that this might not be a real problem - or, rather, not a change in the voice recognition software. The only thing I know for sure has changed in the last six months is that I’m home a lot more. Working from home has given me many more opportunities to use my voice speaker. I’d estimate that, pre-COVID, I used the voice control functionality maybe five times a day, in two discrete blocks: in the morning, before leaving for work, and in the evening, after getting back home. Working from home has blown that schedule out of the water. Instead of a two-hour window at the start and end of each day, I’m at home _all the time_. This gives me more opportunities to use my voice speaker. Correspondingly, it gives more opportunities for Alexa to fail to meet my expectations.
Having previously worked at a company that makes voice speakers, I understand that there’s such a thing as an acceptable failure rate of voice transactions. (The terms of art are “False Accepts” and “False Rejects”, if you’re curious.) This leads me to wonder: is Alexa’s voice assistant technology getting worse, or am I just using it enough to expose its warts?
For example: let’s say I use the speaker an average of once per hour. If I’m home for four hours a day (like I was pre-pandemic), that averages out to four voice commands per day. If a voice command fails, on average, one time in twenty, this is likely an acceptable failure rate. It’s easy to forget a failed voice command if it only happens, on average, once a week or so.
Now, though, I’m always home. That gives me sixteen waking hours in which to issue voice commands. If we’re holding constant the once-per-hour rate of voice commands, and the one-in-twenty rate of failed voice transactions, then failures of the voice assistant start becoming a daily occurrence, rather than a weekly one. I suspect that this passes some sort of psychological threshold: even though the overall failure rate has not changed, I notice more failures just because the time between failures decreases. Since it’s now a daily occurrence instead of a weekly one, I have fewer other minutiae going on in my life to help me forget Alexa’s shortcomings.
I’d imagine the folks at Amazon would consider this a pretty serious problem, even though their underlying metrics for Alexa’s accept/reject rates likely haven’t changed at all. Since Alexa is a consumer good, perception of its quality is almost as important as the reality of it! |
50 | Quantum mechanics and our part in creating reality | A new interpretation of quantum mechanics sees agents as playing an active role in the creation of reality. Blake Stacey outlines the case for QBism and its radical potential.
The pandemic shut down our university when I was in the middle of giving a lecture. We had been anticipating the possibility for a few days, but it was still impeccable timing. I finished my spiel, out came the phones, and suddenly we weren't going to see each other post-spring break after all. For the rest of the term, I did what so many teachers found themselves doing: gamely trying to soldier on. I scrounged and borrowed a whiteboard, easel and webcam, set myself up in the nicest light the house had to offer, and did my best to convey graduate-level physics to an audience of tiny rectangles. And like so many other teachers, I learned there's nothing like a radical change of circumstances for driving one to re-evaluate what the essential ideas of a subject must be. In my case, this was complicated by the minor detail that the course I was teaching involved a lot of quantum mechanics, and the physics profession hasn't yet figured out what exactly the essential ideas of quantum mechanics are.
Oh, we know how to do the calculations. Nobody could have designed a laser or a computer chip if we didn't know that much. But the story that our textbooks tell is implicitly a tale of defeat, in a subtle way. They drop a chapter or three of mathematical arcana upon the poor student, not out of cruelty, but because we can't yet do any better. Complex numbers, matrix algebra, partial differential equations, spectral theory --- not only do the topics grow intimidating rather quickly, they also (if we are scrupulously honest) look rather arbitrary. Out of all the mental contrivances that the Mathematics department can serve up, why does quantum physics rely upon such a particular selection, and why do we employ those tools in the way that we do?
It is difficult to avoid turning philosophical about such matters. Questions like "What is the relation between our mathematical abstractions and physical reality?" feel as though they ought to be followed by a "like, dude". The history of attempts to answer such questions is complicated and contentious and written in no one place. Sometimes, the ideas themselves seem as if they are retreating from clarity. At other times, one wonders if philosophers and physicists wish to write as though clarity were the enemy.
I first started to care about the "interpretation" of quantum physics several years after I began using it. Many physicists don't care about such things at all, or they grow out of it rather than into it. It does indubitably feel strange that we would have to "interpret" a scientific theory; such language seems like it would belong more to critics of free verse or abstract art. ("What I feel the sculptor is trying to say...") But sometimes, when we're trying to develop the theory in new directions, or find the clearest way to teach it to the next generation, or we've just had one too many late nights wondering what it all means, we have to get our fingers philosophical.
Sometimes, the ideas themselves seem as if they are retreating from clarity. At other times, one wonders if philosophers and physicists wish to write as though clarity were the enemy.
After navigating the various viewpoints on offer, I found myself drawn to one that had only recently been articulated, the QBism laid out by Christopher Fuchs and Ruediger Schack. QBism has elements that are radical --- perhaps subversive, even --- while at the same time showing how some things we do as part of "weekday physics" are philosophically respectable after all. And, beyond providing a story to tell about the equations we already have on the books, it points to the tantalizing possibility that we can discover where those equations come from.
The QBist take on quantum mechanics is that, at its core, quantum mechanics is a theory of actions and consequences. A QBist looks for objectivity on a different level than the adherents of many other interpretations do. And the kind of lesson that we think the equations are whispering about reality is, in some quarters, downright scandalous. We resort to jargon like "normative structural realism" and "participatory realism" to give our intuitions shape and form. No existing school of philosophy seems quite right for where the physics wants to go; we'll agree with many predecessors, but often with a caveat or a qualification. Perhaps the best place to start is with that capital B.
The Q in QBism came from Quantum, of course, and the B originated with "Bayes". In the wide spectrum of ways to think about probability, "Bayesianism" encompasses a variety of schools of thought which hold that a probability is a value that an agent asserts, a quantitative expression of a degree of belief. Probabilities encode expectations, and without someone around to do the expecting, there would be no probabilities. Before there were weather forecasters, there were no forecasts, even though the world had plenty of weather. In the proto-QBist days, around the turn of the millennium, the idea was just that the probabilities in quantum physics could be understood in a Bayesian way. But "Bayesian" is a broad label, and those early attempts were not very good at narrowing it down; nor, as further investigation revealed, were they internally self-consistent. That early "Quantum Bayesianism" took several more years to mature into QBism.
QBism regards quantum mechanics as a "user's manual". In this interpretation, quantum mechanics is about what happens at the interface between an agent and the rest of nature. Most of the mathematical entities employed in the theory, like the "wavefunctions" of which so much has been said, boil down to being bundles of expectations. Whose expectations? Yours, or mine, or those of whoever has picked up the user's manual and is trying to benefit from its guidance. Expectations for what? For the consequences of the user's own actions. What kind of actions? Any kind, in principle. Very often, physicists think of a "quantum measurement" as something that requires a laboratory to pull off. But in principle, the act of smelling a rose has every right to be considered a "quantum measurement". It is only that roses are big and we manipulate them clumsily, so one's expectations about a rose will be too fuzzed out for invoking quantum mechanics to be worthwhile in practical terms.
QBism has elements that are radical --- perhaps subversive, even --- while at the same time showing how some things we do as part of "weekday physics" are philosophically respectable after all.
Following the genre conventions of information-theory books, let's name our agent Alice. She has a system of interest before her --- perhaps an atom, perhaps an ion-trap quantum computer or a rose or a loaf of sourdough bread. She contemplates the possible actions she might take upon the system. By using quantum mechanics, she can assign probabilities to the possible consequences of each action, in a self-consistent way. Then she makes a choice and reaches out, taking action and experiencing the result. She can then update her expectations for future experiences in accord with this measurement outcome --- with the new fact that she, in synergy with the system, has brought into being. Prior to the measurement, Alice's uncertainty was not due to ignorance of an outcome already there, waiting to be uncovered, but rather her recognition that the fact of the outcome did not yet physically exist. It is this last realization, the principle that measurement outcomes aren't just waiting to be read off but instead require participation to elicit, that opens up the radical new possibilities of quantum physics.
This is at least an internally coherent narrative about the mathematics, which is the first thing we ask of an interpretation. But what does it say about nature that quantum mechanics is such a good user's manual for you or me or Alice to employ? Why this particular sage advice for swimming in the madness and salt of the world? Here we move into the realm of speculation; it is one thing to provide a narrative and another to successfully extract a lesson from it.
Consider again Alice measuring an atom. Her wavefunction for the atom encodes her personal expectations for future experiences, and her changing her wavefunction for it upon obtaining a novel experience is a transformation within her. But both Alice and the atom participate in the measurement event; both partake in the creation of a new fact for the pair of them. And if the atom can participate in such ongoing acts of creation when the other player is an agent, surely it can do so when the other player is not. That is to say, whatever fundamental capacity for creation the atom brings to an event, it should bring whether or not the other participant is a conscious agent, let alone a trained quantum mechanic. As one of our papers said, "Certainly QBism has creation going on all the time and everywhere; quantum measurement is just about an agent hitching a ride and partaking in that ubiquitous process."
It is this last realization, the principle that measurement outcomes aren't just waiting to be read off but instead require participation to elicit, that opens up the radical new possibilities of quantum physics.
This kind of imagery has predecessors. It's not unlike Karen Barad's notion of "intra-actions", or Alfred North Whitehead's "actual occasions" and "throbs of experience". William James wrote of "new being com[ing] in local spots and patches". And John Archibald Wheeler went all in. For him, the generation of a measurement outcome was the "elementary quantum phenomenon". "Is the entirety of existence," he would ask, "rather than being built on particles or fields of force or multidimensional geometry, built upon billions upon billions of elementary quantum phenomena, those elementary acts of 'observer-participancy,' those most ethereal of all the entities that have been forced upon us by the progress of science?" He would say, "In some strange sense the quantum principle tells us that we are dealing with a participatory universe." But what exactly is that "quantum principle"? There, Wheeler said, physics lacks a good answer: "We understand any other principle of physics in enough completeness to summarize it, beginning with a good name, in a dozen words---but not this."
Now, for all our sympathies with Wheeler, QBism does diverge from his vision and terminology in some ways. For one, "observer" is to us far too passive a word; it carries the connotation of leaning back, not of pounding the pavement and wearing out the shoe-leather. That's why we talk instead of agents, as I did above. So, “agent-participancy”, then.
But Wheeler did have his finger on the right question. We need to nail down that "quantum principle"! The next level of sophistication, the next stage of understanding, is surely to abstract away the "agent-". What can we say of the situations where the "ubiquitous process" has nobody along for the ride, no agent Alice to partake? The first step to answering that, we think, is to quantify just how involved an agent is in eliciting an outcome. If Alice is a good user of quantum theory, then her expectations for one measurement must tie together with her expectations for another, rather than being a wild free-for-all. Exactly how the theory says her beliefs should mesh is an indicator of how her participation matters. Indirectly, it is a clue to what participation means.
So far, this is still only imagery. But by teasing apart the mathematics of quantum theory, unraveling the convenient conventions from the deep enigmas, perhaps the imagery can be made more precise and more evocative than ever before. |
117 | To secure the supply chain, you must properly fund it | Yesterday, a new 0day vulnerability dropped in Apache Log4j. It turned out to be worse than the initial analysis: because of recursive nesting of substitutions, it is possible to execute remote code in any program which passes user data to Log4j for logging. Needless to say, the way this disclosure was handled was a disaster, as it was quickly discovered that many popular services were using Log4j, but how did we get here?
Like many projects, Log4j is maintained only by volunteers, and because of this, coordinating a security response is naturally more difficult: an embargo is easy to manage if you have a dedicated maintainer to run it. In the absence of a dedicated maintainer, you have chaos: as soon as a commit lands in git to fix a bug, the race is on, with security maintainers scurrying to reverse engineer what the bug you fixed was, which is why vulnerability embargoes can be helpful.
It turns out that, like many other software projects in the commons, Log4j does not have a dedicated maintainer, while corporations make heavy use of the project, and so, as usual, the maintainers have to beg for scraps from their peers or from the corporations that use the code. Incidentally, the GitHub Sponsors profile of one of the Log4j maintainers is here, if you would like to contribute some money to his cause.
When corporations sponsor the maintenance of the FOSS projects they use, they are effectively buying an insurance policy that guarantees a prompt, well-coordinated response to security problems. The newly established Open Source Program Offices at these companies should ponder which is more expensive: $100k/year salary for a maintainer of a project they are heavily dependent upon, or millions in damages from data breaches when a security vulnerability causes serious customer data exposure, like this one. |
3 | The California Exodus: How to Exit the State with Your Fortune Intact |
Spurred by tax hikes, ever-increasing regulations and the recent departures of high-profile tech companies, many high net worth individuals are eyeing the exit door.
Claims of a so-called “California Exodus” have begun again, spurred by tax hikes, ever-increasing regulations and the recent departures of high-profile tech companies and other emerging industries.
It’s not just billionaires like Elon Musk or celebs like Joe Rogan turfing for the allegedly greener pastures of states like Texas and Florida. Many of my clients are low profile, high net worth individuals eyeing the exit door, and there are others who have already made a break for it.
This isn’t just some concern of the uber wealthy. Single Californians pay a 9.3 percent rate once their income exceeds $58,635, which is enough of a bite to make anyone consider a move elsewhere. Couple this with the surge in popularity of remote work and suddenly the “California Exodus” seems a lot less theoretical.
But as The Eagles point out, “Hotel California” isn’t an easy place to leave.
The beautiful weather, the outdoor recreation and social opportunities, the culture—these are difficult things to walk away from. But perhaps even more frustrating are the hurdles to a Golden State exit—committing to a move takes time, money, determination and forethought.
Luckily, the greater your forethought, the less of the rest you should need. Here, I’ll provide advice for tidy beginnings to ending your relationship with the state.
Understand What “Leaving” Actually Means
The tax realities are such that California ranks among the most aggressive states when it comes to proving one’s residency. Many people want the benefit of being in California without being considered residents of California, and the state knows it. So, if you’ve committed to leaving, you need to understand what that actually means, and you need to set yourself up such that if you’re ever challenged on an audit, you can be confident you’ll prevail.
An important step one is selling your California home, or getting it rented out to an unrelated third party. The Franchise Tax Board will look at the continued ownership of a personal residence as evidence that you haven’t cut ties with the state, even if you really aren’t living there. And they might still be skeptical if you’ve transferred the deed to a family member or close friend.
If you own a home in California while owning a home elsewhere, and you’re spending the lion’s share of the year outside of California, the tax board will expect you to have the documentation to back that up.
Heading Off the California Franchise Tax Board
When you file your nonresident return, you’ll be asked to answer several questions about your residency, the date you became a nonresident, whether you own real estate in California, the number of days spent in California, etc.
Don’t try to be clever with your answers. If you have reason to think your return might seem suspicious, you need to have the documentation to back up your story, as your fears might be well-founded. For example, a former resident who moves to Nevada (the nearest state without an income tax) while showing a multimillion-dollar capital gain in the period immediately after their move out of state might as well put an “audit me” sign on their California tax return.
This will be an inevitability for some high-income earners leaving the state.
A Proper Move Takes Time
You can’t walk into a financial planner’s office after reading this and say “I’m moving tomorrow, and I’ve got all this income coming in the day after that, which I don’t want to pay a California tax on.”
A residency audit is one of the more invasive audits you can go through because they’ll ask for every credit card statement and look at the location of every charge made on that credit card. They’ll also check to see if payees of checks and debits from your checking account are California-based. It’s an awful experience to go through, and an expensive one, requiring a knowledgeable CPA who can navigate through it.
Residency audits also happen to be very profitable for the state, and like a dog with a bone, they might not relent on a chase even if a taxpayer has a compelling case. I’ve had a number of high-profile clients with solid cases for non-residency and troves of documentation to back it up, yet the state still aggressively pursued residency claims, likely in the hopes that the taxpayer would want to put the matter behind them and settle.
No surprise then that so many of these audits end in settlement. The best course is to avoid this alternative altogether, and the best way to do that is pre-planning, with plenty of advanced notice for your financial planner.
Mark A. Pariser is a certified public accountant. His practice emphasizes the proactive management of the tax and financial affairs of a variety of people who work in film, television, music and technology. His clientele also includes touring acts, international executives and entrepreneurs and other high net worth individuals and their businesses. |
1 | Biden to host Manchin in Delaware to discuss finalizing spending bill | President Joe Biden hosted critical moderate Sen. Joe Manchin and Senate Majority Leader Chuck Schumer at his home in Delaware on Sunday in a push to finalize an agreement on a sweeping economic and climate package, a White House official told CNN.
As a critical week for Biden’s agenda begins, the House is looking at voting on the bipartisan infrastructure package on Wednesday or Thursday, according to a source briefed on the plans, and having a detailed agreement on the larger social safety net package before then to help convince progressives to vote for the bipartisan measure.
It’s unclear what the final price tag will be on the larger plan, but the source said that Manchin, a Democrat from West Virginia, has informed Democratic leaders he’s open to $1.75 trillion.
The President is pressing Democrats to come to an agreement before he departs for Europe this week and has shifted sharply to bring the months-long negotiations to an end in recent days, as Democrats on both sides of Pennsylvania Avenue press to clinch a deal on the cornerstone piece of their domestic legislative agenda. The invite was a rare move for Biden; unlike former President Donald Trump, he has not made a habit of inviting lawmakers to his house for meetings.
Biden, Schumer and Manchin had a “productive discussion” about the President’s agenda and the trio “continued to make progress,” according to a White House readout of the breakfast meeting. The group “will have their staffs work on follow-ups from the meeting, and agreed to stay in close touch with each other and the wide range of members who have worked hard on these negotiations,” the readout said.
House Speaker Nancy Pelosi had expressed optimism that the particulars of the bill would be finalized during Sunday’s meeting among Biden, Schumer and Manchin, but the gathering had been unlikely to result in a final deal or even a preliminary agreement on a framework for Democrats to unveil, officials familiar had said. Instead, it had been meant to discuss still-outstanding issues in the talks and hopefully move forward in certain areas where Manchin remains a hold-out, which include climate provisions, Medicare expansion and paid leave.
Manchin’s trip to Delaware was first reported by Politico.
Manchin has been one of two key holdouts on the package, and has succeeded in scaling back the scope of the bill on several fronts, from the overall cost to key elements ranging from climate provisions to the length and scope of the expanded child tax credit.
But several of Manchin’s concerns have yet to be reconciled, as Democrats battle over key elements like the push for an expansion of Medicare for hearing, vision and dental coverage, as well as a more modest national paid leave program proposal, and whether it will make it into the final proposal, according to people familiar with the negotiations.
The talks were scheduled for the morning at around the same time first lady Jill Biden was departing Wilmington for a policy-focused trip to Michigan.
This story has been updated with additional reporting. |
135 | Attempts to make Python fast |
Posted on Hacker News was an Implementation Plan
for making CPython (the official Python implementation) faster. The author claims a 5x speedup
is possible for the low cost of $2 million USD.
The four-step plan includes:
creating an adaptive interpreter
improvements to internal types
creating a JIT compiler
extending the JIT compiler
We have witnessed other attempts at making Python fast, each achieving its own degree
of success in terms of performance and compatibility. For posterity I started keeping a
list of them here, in no particular order. (A small timing harness you can run under
several of these implementations appears after the list.)
Seq
Seq’s performance is usually comparable to that of C or C++, and can often be even better once domain-specific compiler optimizations are applied.
https://seq-lang.org/
Pyston 2
[Pyston] version 2 is 20% faster than stock Python 3.8 on our macrobenchmarks.
https://blog.pyston.org/2020/10/28/pyston-v2-20-faster-python/
Pyston
Pyston is a performance-oriented Python implementation built using LLVM and modern JIT techniques.
https://github.com/pyston/pyston
Unladen Swallow
An optimization branch of CPython, intended to be fully compatible and significantly faster.
https://code.google.com/archive/p/unladen-swallow/
Stackless Python
Stackless Python is an enhanced version of the Python programming language. It allows programmers to reap the benefits of thread-based programming without the performance and complexity problems associated with conventional threads.
https://github.com/stackless-dev/stackless
PyPy
A fast, compliant alternative implementation of Python.
https://www.pypy.org/
Jython
Jython is approximately as fast as CPython–sometimes faster, sometimes slower. Because most JVMs–certainly the fastest ones–do long running, hot code will run faster over time.
https://www.jython.org/
HotPy
The HotPy virtual machine is a high-performance virtual machine for Python.
https://code.google.com/archive/p/hotpy/
Iron Python
Performance is comparable to CPython - much faster for some things … but slower for other things.
https://wiki.python.org/moin/IronPython
Psyco
Psyco is a Python extension module which can greatly speed up the execution of any Python code.
http://psyco.sourceforge.net/
2c-python
Using the generated binary code gives a speed boost from 2 to 4.5 times.
https://github.com/DarrenRainey/2c-python
Cython
Easily tune readable Python code into plain C performance by adding static type declarations.
https://cython.org/
Nuitka
Nuitka is more than 2 times faster than CPython …
http://nuitka.net/pages/overview.html
Pyc
Pyc is a python compiler intended for high performance computing and programming-in-the-large
https://sourceforge.net/projects/pyc/
Shedskin
For a set of 75 non-trivial programs …, measurements show a typical speedup of 2-200 times over CPython.
https://code.google.com/archive/p/shedskin/
Numba
Numba makes Python code fast
http://numba.pydata.org/
Parakeet
Parakeet was a runtime accelerator for an array-oriented subset of Python.
https://pypi.org/project/parakeet/
Cannoli
Cannoli is a compiler for a subset of Python 3.6.5 and is designed to evaluate the language features of Python that negatively impact performance.
https://github.com/joncatanio/cannoli
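As a rough, do-it-yourself comparison (my own sketch, not taken from the post or from any of the projects above), the same pure-Python script can be run under several of these implementations and timed. The workload below is arbitrary and only meant to exercise the interpreter loop; substitute something closer to your real code before drawing conclusions.

# timing_harness.py -- run the same file under different interpreters,
# e.g. `python3 timing_harness.py` and `pypy3 timing_harness.py`,
# then compare the best-of-five times it prints.
import platform
import sys
import time

def workload(n):
    # A CPU-bound, pure-Python loop: no C extensions, so the interpreter
    # (or its JIT, if it has one) does all of the work.
    total = 0
    for k in range(n):
        total = (total + k * k) % 1_000_003
    return total

def main():
    n = 5_000_000
    times = []
    for _ in range(5):  # best of five runs to reduce noise
        start = time.perf_counter()
        workload(n)
        times.append(time.perf_counter() - start)
    print(f"{platform.python_implementation()} {sys.version.split()[0]}: "
          f"{min(times):.3f}s")

if __name__ == "__main__":
    main()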
Happy hacking! |
8 | ‘Roaring Kitty’ Sued for Securities Fraud over GameStop Rise | Securities Law Feb. 17, 2021, 8:46 PM
Anthony Lin, Bloomberg News
Keith Gill, one of the most influential voices that pushed GameStop on the WallStreetBets Reddit forum, was hit with a lawsuit that accused him of misrepresenting himself as an amateur investor and profiting by artificially inflating the price of the stock.
The proposed class action against Gill, who adopted the YouTube nickname “Roaring Kitty,” was filed Tuesday in federal court in Massachusetts. The suit said Gill was actually a licensed securities professional who manipulated the market to profit himself. Gill touted GameStop shares through an extensive social media presence on Youtube, Twitter and Reddit, where he used a more profane ... |
4 | Geekbench Underloads M1 Pro/Max CPUs | One of the first things you want to know about any new processor or chip with processor cores is its performance. Is it faster than equivalent processors made by Intel or AMD, and is an M1 Pro faster than the original M1? Over the last year, I’ve been looking at different ways of measuring this for Apple’s M1 chips, and this article and its sequels summarises some of the lessons so far.
My starting point is running widely used benchmarks in Geekbench 5 on the 8-Core Intel Xeon W processor in my iMac Pro. Here’s what I see in Activity Monitor’s CPU History window for a typical test run.
In each of these CPU History windows, time passes from left (oldest) to right (newest) for each of the panels, with red representing system load and green the app load. In this case, Geekbench ‘single core’ tests were run for the period starting about a third of the way across each panel, then the ‘multi-core’ tests cut in just after half way, and are reflected on all the cores, until they complete and load drops to almost zero. Being an Intel CPU, the cores on the left with odd numbers are ‘real’, and those with even numbers on the right are virtual cores achieved in Hyper-Threading.
In fact the ‘single core’ tests are distributed across all eight cores, but look as if their total represents something approaching 100% load on a single core, confirmed by the figure given in Activity Monitor’s main window. The ‘multi-core’ tests only attain 100% briefly on all cores, but average well over 50% throughout, and were sufficient to bring the iMac’s fans up to speed. Load distribution is also fairly even and follows a similar pattern on each core shown.
My conclusion is that the resulting benchmark doesn’t fully assess the capacity of all eight cores, but it’s probably not far off.
When Geekbench 5 runs the same CPU tests on my M1 Mac mini, the picture is quite different.
The single-core tests are run on just two of the Performance (P) cores, where they seldom reach a total of 100% load, but exceed 50% much of the time. While the multi-core tests do load all eight of the cores, they only reach 100% for brief periods at the start and end of the tests, and for much of the time barely reach 50%, although they’re spread evenly, on P and E cores.
Try that on an M1 Pro running on mains power, and the problems are even more apparent.
Single-core tests are distributed across the first cluster of four P cores, and probably amount to a total of significantly less than 100%. The multi-core tests, though, never reach 100% on any of the ten cores, and much of the time fall well short of 50%, although they appear similar in pattern and evenly balanced across the cores, including the E cores.
If we expect a CPU benchmark to reflect maximum capacity of the cores to take load, there’s a wide gulf between the results on the Intel Xeon and Apple’s M1 chips. There are, of course, a host of reasons which could account for this, from inefficient code generation for the ARM cores to inaccuracies in Activity Monitor. Unfortunately, it’s extremely hard to assess why this occurs.
Assuming that the Geekbench performance figures are linear, with twice the performance being reflected as twice the figure (as claimed by Primate Labs), one way to get a better idea is to run multiple copies of the tests to reach the target 100% load. When I first tested my M1 Pro, it returned a result of 1772 for single core, and 12548 multi-core even though none of those tests came close to using 100% of any of its cores. When two copies of Geekbench 5 were run at the same time, started within a couple of seconds of one another, the single core score remained unchanged, and the two multi-core scores were 9828 and 8845, a total of 18,673.
During the initial single core tests, total load exceeded 100% across all four cores in the first cluster. When the multi-core tests were running, 100% was reached for substantial periods at the start and end of that phase, and in between load was well over 50%.
The final test in this series was to run three copies of Geekbench simultaneously, which returned single core scores of 1682-1717, only slightly lower than for a single run, and multi-core scores of 7162, 7061 and 6428, totalling 20,651.
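To make the arithmetic behind that linearity assumption explicit, here is a small sketch (mine, not part of the original test runs) that sums the multi-core scores reported above and compares each total against the single-run figure.

# Multi-core Geekbench 5 scores quoted in the text for the M1 Pro.
single_run = [12548]             # one copy of Geekbench running alone
double_run = [9828, 8845]        # two copies started together
triple_run = [7162, 7061, 6428]  # three copies started together

baseline = sum(single_run)
for label, scores in (("1 copy", single_run),
                      ("2 copies", double_run),
                      ("3 copies", triple_run)):
    total = sum(scores)
    # If the score scales linearly with work done, the sum approximates
    # the combined throughput achieved by the concurrent runs.
    print(f"{label}: total {total} ({total / baseline:.2f}x the single run)")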
The CPU history shows much fuller load on the cores during the multi-core testing, although even then load wasn’t sustained at 100% throughout.
This isn’t a claim that the Geekbench score for an M1 Pro should be raised to over 20,000, but it suggests that, if these benchmarks were able to make fuller use of the cores in the M1 Pro, they’d be more likely to deliver a score of over 18,000. That relies on such high loading being possible, which also needs demonstration.
My last CPU History for today doesn’t rely on Geekbench, but on some test loads which I’ve been developing in my own app AsmAttic. Each of these tests is a mixed benchmark consisting of integer and floating point operations run millions of times in a tight loop. For the first half of this chart, the P cores were loaded with one copy of the task, which was run fairly evenly across the four cores in the first cluster, with the E cores and the second cluster of P cores largely inactive.
Just after half way, when the P cores had completed that initial task, the two E cores were loaded successively with two copies of the same iterative task, so that with both copies running they reached 100% load. Towards the end of that, I loaded the P cores with multiple copies of the same task, bringing the first cluster to 100%. In the final phase, I loaded eight copies of the same task onto the P cores, and managed then to achieve 100% load across all the cores in both clusters. Not only is it possible to attain 100% core loads using these synthetic tasks, but this can also be seen in real-world apps, for instance when using AppleArchive for compression.
What’s also interesting here is that, despite the great variation in loading of the cores, when run on the P cores 10^8 iterations of the test took 14.2 to 18.9 seconds, quite a tight range considering the differences in total core load during execution.
My next step is to use synthetic loads to compare different M1 chips, and different conditions, including power options, which I’ll describe in the next article. |
2 | MySQL-Wtsp: Manage a MySQL Database from WhatsApp | Aziz403/MySQL_WhatsApp |
3 | Neocortex Supercomputer to Put Cerebras CS-1 to the Test | Over the next year, we should get a good sense of how the Cerebras CS-1 system performs for dual HPC and AI workloads between installations at Argonne and Lawrence Livermore labs, EPCC, and the Pittsburgh Supercomputing Center (PSC).
While HPC/AI blended workloads are at the top of R&D priorities at several supercomputing sites and national labs, it is the PSC installation that we are most keen to watch. This is because the Neocortex system at the center was purpose-built to explore the convergence of traditional modeling and simulation and AI/ML, with the Cerebras architecture as the foundation of the HPE-built system.
This week, PSC announced that the Neocortex supercomputer is open for wider end user applications, which means we can expect to see a fresh wave of published results in areas ranging from drug discovery, molecular dynamics, CFD, signal processing, image analysis for climate and medical applications, in addition to other projects. While there are no other accelerators on board the Neocortex system for direct benchmarking, work on the PSC system will yield insight into the competitive advantages of a wafer-scale systems approach.
All of this comes at a time when it is unclear which non-GPU accelerator for AI/ML applications on HPC systems will start grabbing share. At the moment, there is a relatively even distribution of AI chip startups at several labs in the U.S. in particular with Cerebras and SambaNova sharing the highest count (of public systems) and Graphcore gaining traction with recent wins in Europe in particular, including systems at the University of Bristol for shared work with CERN.
Despite progress getting novel accelerators into the hands of research centers, it is a long road ahead to challenge GPUs for supercomputing acceleration. Nvidia has spent well over a decade porting and supporting HPC applications and has plenty of hooks for AI/ML. In short, it is going to take massive power and performance gains to get some centers to buy into novel architectures for AI acceleration anytime soon after years of GPU investments. Nonetheless, what we are watching is what architecture plays the HPC/AI convergence game best. And PSC seems to think it has picked a winner in Cerebras via the (relatively small, $5 million) Neocortex machine.
Each Cerebras CS-1 is powered by one Cerebras Wafer Scale Engine (WSE) processor, designed to accelerate training and inference with results emerging on real-world HPC applications (albeit those translated to TensorFlow). The Cerebras WSE is the largest computer chip ever built, containing 400,000 AI-optimized cores implemented on a 46,225 square millimeter wafer with 1.2 trillion transistors (versus billions of transistors in high-end CPUs and GPUs).
Neocortex will use the HPE Superdome Flex as the front-end for the Cerebras CS-1 servers. HPE’s line on this is that the specially designed system will enable flexible pre- and post-processing of data flowing in and out of the attached WSEs, preventing bottlenecks and taking full advantage of the WSE capability. The HPE Superdome Flex itself will have 24 terabytes of memory, 205 terabytes of flash storage, 32 Intel Xeon CPUs, and 24 network interface cards for 1.2 terabits per second of data bandwidth to each Cerebras CS-1.
“The Neocortex program introduces a spectacular cutting-edge supercomputer resource to accelerate our research to combat breast cancer,” said Dr. Shandong Wu, Associate Professor of Radiology and Director of the Intelligent Computing for Clinical Imaging (ICCI) Lab at the University of Pittsburgh, who leads a team employing deep learning on Neocortex for high-resolution medical imaging analysis for breast cancer risk prediction. “The program also has a very responsive support team to help us resolve issues and has enabled great progress of our work.”
With Neocortex, users will be able to apply more accurate models and larger training data, scale model parallelism to unprecedented levels and, according to PSC, avoid the need for expensive and time-consuming hyperparameter optimization. A second initiative will focus on development of new algorithms in machine learning and graph analytics.
Dr. John Wohlbier of the Emerging Technology Center at the CMU Software Engineering Institute, another early user, says “As early users, we are working to use Neocortex to train novel graph neural algorithms for very large graphs, such as those found in social networks. We are excited to explore to what extent will AI-specific ASICs, such as the one in Neocortex, enable model training and parameter studies on a much faster timescale than resources found in a typical datacenter or the cloud.”
Over the course of 2021 we’ll be tracking published results for comparison between Cerebras CS-1, SambaNova’s DataScale systems, and Graphcore’s MK2 and its various installations. The opening of Neocortex is the first step in having another productive point of comparison. |
1 | Hyper-Realistic Color Pencil Drawings Perfectly Recreate Lustrous Blobs of Paint |
Hyperrealistic Colored Pencil Drawings Perfectly Recreate Lustrous Blobs of Paint
By Sara Barnes on August 10, 2017
At first glance, the work of artist Cj Hendry looks like pictures of oil paint photographed on a smooth canvas. But, look again—they’re actually a series of hyperrealism drawings called Complimentary Colors. The Australian artist—a former finance student—has produced the luscious blobs using only colored pencils. Thanks to her expert handling of the medium, she has layered the dry pigment so that it has the sheen and viscosity you’d expect from paint.
The vibrant series is a departure from Hendry’s typical style. Prior to starting Complimentary Colors, she worked exclusively in black and white, with subject matter depicting objects in pop culture—like a Chanel perfume bottle and Kanye West’s face on a $100 bill.
So, why the change? It was thanks to the fashion brand Christian Louboutin. They commissioned Hendry for an exhibition of the same name that appeared during Art Basel Hong Kong. To prepare the original pieces for the show was a meticulous process. It took anywhere from a “day or two” up to four weeks to complete one drawing, because of the effort that goes into creating a single hue. Hendry explained she used “12 different colors from black to brown to white. It [was] way harder than I thought.”
Artist Cj Hendry fools the eye with her hyperrealistic art.
Using colored pencils and a lot of patience, she perfectly recreates luscious blobs of oil paint.
Here's Hendry in action, putting finishing touches on some silky pink paint.
Cj Hendry: Website | Instagram
All images via Cj Hendry. |
1 | Plant-based meat was the rage. Now plant-based seafood is taking the spotlight | Companies like Beyond Meat Inc (BYND) and Impossible Foods are seeing strong sales. Earlier this year, Target launched its own line of 30 vegan food options. Even giant Tyson Foods Inc (TSN) is getting in on the act with its recent introduction of vegan bratwurst and burger patties.
Plant-based meat sales are soaring, but is there a similar appetite for plant-based seafood?
A host of small companies certainly hope so, and they’re crowding into the alternative seafood space. Sales in the sector grew 23% from 2019 to 2020, according to data from The Good Food Institute, a nonprofit whose mission is “to accelerate alternative protein innovation internationally,” according to its website. In the first half of 2021, the institute said, $116 million has been invested in the seafood space, exceeding 2020’s total of $90 million.
Plant-based seafood could help alleviate the demand for fresh-caught fish, curbing overfishing that can decrease fish populations, reduce biodiversity in the ocean and harm habitats, according to the Natural Resources Defense Council. About 5 billion pounds of seafood are consumed every year in the United States, according to Molly Masterton, the group’s fisheries director, with more than 90% of it harvested, farmed, or processed in countries that lack rigorous seafood management laws.
“Giving consumers more sustainable, affordable, and traceable choices” is a “positive development” for the oceans and the environment, Masterton said. Adding more plant-based seafood options to the market could help take pressure off some of the fish species that are in high demand, she added.
Many plant-based options on the market list ingredients that include seaweed, peas or beans of some kind. Good Catch, a plant-based seafood startup, for example, uses peas, chickpeas, lentils, soy beans, fava beans and navy beans in its products, which include fish sticks, fish fillets, fish burgers, crab cakes and fish cakes.
Good Catch recently completed a round of funding that secured $26.35 million with investments from companies including Louis Dreyfus Company, Unovis Asset Management and Big Idea Ventures. “The only truly sustainable seafood is seafood that allows fish to remain in the ocean,” Chad Sarno, co-founder and chief culinary officer at Good Catch, told CNN Business.
Thai Union Group, one of the world’s largest canned tuna providers, owns a number of tuna brands including Chicken of the Sea, which sold its first can in the US in 1930. Now Thai Union is focusing on plant-based seafood products, according to the company’s website. Earlier this year it launched plant-based seafood in the Thai market, with offerings that include crab dumplings, crab meat and fish nuggets.
OmniFoods, a Hong Kong startup known for its fake pork product “OmniPork,” is also getting in on the plant-based seafood trend, launching a new line of products that include alternatives to fish fillets, fish burgers and cuts of tuna.
Nestle is also focusing on seafood options, particularly in Europe. In August, the company launched Vuna, a vegan alternative to tuna fish. “The demand for plant-based products is really growing strongly in Europe, but also elsewhere,” a Nestle spokesperson said. The company is seeing growth in the sector over the last year and is looking to quickly bring new products to the market; Vuna, for example, was developed in about nine months, the spokesperson added.
In September 2019, Tyson Foods’ venture capital arm invested in New Wave Foods, a plant-based fish company that sells seaweed-based shrimp.
“Approximately 90% of shrimp in the US are imported and more than half come from shrimp farms which contaminate and pollute our oceans,” Michelle Wolf, co-founder of New Wave Foods, told CNN Business. So Wolf and her team got creative and made a shrimp alternative out of plant-based ingredients like seaweed extract, mung bean protein and sunflower oil.
While Wolf wants people to integrate the alternative seafood options into their lifestyles, she doesn’t expect consumers to completely shift their diets and replace traditional seafood with plant-based options. “Even if it’s once a week, those small changes can build up across a population and have positive impacts on public health and our environment at large,” she said. |
1 | How Did Modern Sanitation Lose Its Way? | Commentary, 31-08-2018
Abstract: The world is in the throes of a sanitation crisis today with massive amounts of faecal sludge and wastewater from homes, commercial establishments and industries being discharged into rivers and other water bodies without proper treatment. Drinking water, which is pumped to homes and buildings, is being used to flush solids in toilets. The ensuing wastewater is often not fully treated and little effort is made to extract fertilizers or bio-energy from it. However, there was a time in ancient India when recycling nutrients in human waste was deeply understood and implemented. The practices encoded in Indic texts could have been used to extract global ‘Best Practices’ for sanitation and wastewater management.
A sanitation crisis is unfolding today as large amounts of wastewater (used water) from homes, commercial establishments, industries and farmlands are flowing into rivers and other water bodies without proper treatment. Faecal sludge and solid wastes are also finding their way into the food chain. This has led to serious impacts on human health, education, and economy. One part of the crisis is rooted in the practice of using clean water to remove human wastes from toilets, which began with the popularization of flush toilets in the early 20th century. [1]. As piped water began to reach more homes in cities, it was not long before flush toilets became the norm and a big share of clean water was turned to wastewater (to flush the solids). In one press of the flush handle (which typically uses 10-20 litres of clean water), it became possible to send unwanted wastes out of sight. The ensuing sewage or domestic wastewater is often not treated at all or is only partially treated to remove harmful substances such as pathogenic bacteria, pharmaceuticals, hormones etc. Also, there is hardly any effort to extract fertilizers or bio-energy from sewage or solid waste.
However, there was a time in ancient India when hygiene, disease-prevention, environmental protection and the need to recycle nutrients in human waste was deeply understood and implemented. Sanskrit texts such as Sushruta Samhita, Charaka Samhita, Manusmriti, Vayu Purana, Vaastu Shastra, Kamasutra, Arthashastra and others encoded the principles, which could easily have been used to extract global best practices needed for healthy ecosystems in our planet.
The modern sewage disposal system consists of pipes (sewers), which collect domestic wastewater from toilets, bathrooms, kitchens, yards and other places where water is used, in order to transport them to centralized treatment plants. [1] At the plants, the sewage undergoes preliminary, primary, secondary and sometimes advanced treatment depending on what quality of treated effluent is desired. Apart from the huge cost of digging underground and installing sewer lines, considerable energy is required to separate solids from water (which should not have been mixed in the first place) at treatment plants after which the treated product water is sent to rivers, lakes, seas or disposed of on land. The separated solids called sludge or biosolids are sent to landfills or incinerated and in a few cases are converted to fertilizer.
Unfortunately, in many places around the world, due to their huge cost, sewers and wastewater treatment plants either do not exist or they function improperly; therefore untreated sewage is sent directly to water bodies or land leading to the outbreak of diseases, destruction of aquatic life and economic losses. When clean water is provided to a community, it ironically leads to more pollution if the used water generated is not managed properly. According to the United Nations World Water Development Report 2017, on average, high-income countries treat about 70%, upper-middle-income countries treat 38% while the lower middle-income countries treat 28% of the municipal and industrial wastewater they generate. In low-income countries, only 8% undergoes treatment of any kind. Thus, globally, over 80% of all wastewater is discharged without treatment. [20]
The western style water closet (WC) has in fact, proven detrimental to life on this planet, which is acknowledged by many.
“The water closet was seen as a victory for public health care, because excrement was moved away immediately from houses and yards. But where was the human excreta going through the sewer pipes? That is yet another question. It went to bodies of water. When it comes to the health of the humans and the state of the environment, the WC was not an improvement; it was a major step backwards. Nutrients were getting out of the use of agriculture and nature’s metabolism into the bodies of water.”
— Juuti P.S. (2007). Environmental History of Water. IWA Publishing (Sim 2016)
Today, cities are struggling to find the finances for building sewers and treatment plants. Meanwhile, even when the finances are available, environmental engineers are agonizing over how to efficiently separate solids from wastewater using the least energy, how to chemically treat the solids so that nutrients like phosphorus can be recovered, how to recycle wastewater so that there is lesser burden on the freshwater resources of the world.
The food we eat contains nutrients, which plants have taken from the soil, and the excess nutrients are excreted by the body. Thus, our urine contains valuable phosphate, nitrogen, potassium and other nutrients, which need to go back to nature in order to complete the cycle. Unfortunately, there are only rare instances of these nutrients being recovered for agriculture via the modern sewage disposal system. Importing fertilizers is causing a drain on many countries such as India because phosphorus reserves are confined to a few countries. [17] Currently India imports about 90% of its requirement for phosphate fertilizers. [8] According to 2011 data, the US imports about 85% of its potash and 50% of nitrogen fertilizers. [5]
To put it simply, humans are going through the elaborate process of mining and processing to manufacture fertilizers, applying them to crops, and then losing it all via wastewater. Lakes and streams, which receive an excess of nutrients from wastewater are suffering from serious ecological effects such as oxygen depletion and fish-kills. [17] Also, synthetic fertilizer manufacturing is an energy-intensive industry and is expected to increase to feed an ever-growing population. [22]
The importance of hygiene has been highlighted in many ancient Indic texts. Whether it was general hygiene in terms of bathing or washing hands and feet, or whether it was sexual hygiene, many Sanskrit texts make it clear that a lack of hygiene leads to diseases. It was also understood that disease-causing pathogens in feces needed to be isolated from human settlements and nutrients in wastes had to be safely recycled back to nature. Under no circumstances were human wastes, blood or hazardous substances to be allowed to contaminate water.
“The river having water polluted with soil and feces, insects, snakes and rats and carrying rainwater will aggravate all doshas. Slimy, having insects, impure, full of leaves, moss and mud, having abnormal color and taste, viscous and foul smelling water is not wholesome.”
— Charaka Samhita, Sutrasthanam 27.213, 214[9]
The Manusmriti has allotted many verses to disallowing the pollution of rivers, water bodies, ploughed lands, cow pens, brick altars or even holes inhabited by living creatures. It prohibits urinating while facing the wind for obvious reasons. The text also lists out the impure substances produced by human bodies thereby indicating that after touching all these things, one would need to be cleansed. Among the impurities are oily exudations, semen, blood, urine, feces, nasal mucus, earwax, phlegm, tears and sweat. The mention of avoiding ploughed lands is important because this indicates an understanding that the direct application of human feces to crops is not a healthy practice. [11, 12, 13, 15]
Let him not throw urine or feces into the water, nor saliva, nor clothes defiled by impure substances, nor any other impurity, nor blood, nor poisonous things.
Manusmriti, 4.56 [15]
Far from his dwelling let him remove urine and ordure, far let him remove the water used for washing his feet, and far the remnants of food and the water from his bath.
Manusmriti, 4.151 [15]
Vedic people living in villages walked a long distance from their houses to relieve themselves primarily because they knew the dangers of contamination of food and water by fecal matter. Defecation in the open was followed by covering up of the feces with soil so that no insects would spread infection. The left hand was used for anal washing with some water carried to the spot in a small container and this hand was cleansed thoroughly either with plant-based soaps or with sand. The left hand was not used for eating because it was associated with the touching of impure substances.
Since drinking water was obtained from wells, lakes, tanks and ponds, it was important to keep these sources clean and away from contamination. Rules for preserving the purity and sacredness of water sources are found even in the ancient epic of Mahabharata. The Sanskrit treatise Arthashastra from the third century BCE mentions severe punishments for polluting water sources. [11]
In ancient India, it was possible for rural people to find isolated places far away from habitation as well as from water bodies where, after defecation, the wastes would be safely absorbed into the soil without polluting water or land. Another advantage was that no manual scavenging was needed. With burgeoning populations, it became harder to find such sequestered places, as a result of which the practice of defecating outdoors began to increase the risk of disease. Open defecation cannot be sustained beyond a certain density of population. Even just five decades ago, population densities were not too high. There is an erroneous assumption in some quarters that India’s open defecation problem is rooted in a lack of hygiene that springs from ancient Hindu texts. [7] Such an assumption betrays a lack of awareness of the scientific approach to health and disease that was prevalent in Vedic times. In fact, the whole problem of squalor in Indian cities today stems from a lack of awareness of Hindu texts and a lack of design of sanitation in accordance with Vedic principles of hygiene.
Unlike rural dwellers, city-dwellers in ancient India were well acquainted with latrines and sewerage. The people of the Indus-Sarasvati civilisation were the earliest to use latrines, soak pits, cesspools, pipes and channels for wastewater disposal. Both centralized and decentralized systems have been found in different archaeological sites, some of which go back 8,000 years. Some toilets just had holes in the ground while others had seats. In the centralized disposal model, which has been found in Mohenjo-daro, Harappa, Chanhu-daro, Lothal and other places in Greater India, terracotta pipes carried both bathwater and effluents from pour-flush toilets into the street drains. The effluents were collected in pits lined with clay bricks so that solids could settle, and when a pit was three-quarters filled, the supernatant liquid could flow into a wider drain downstream. It can be guessed that the pits were emptied manually and the biosolids sent for disposal to specific sites. Also, the pits were located at the intersection of several drains so that clogging could be prevented. Meanwhile, in the decentralized disposal model found in Kalibangan, Banavali and other places, the effluents were transferred from households via U-shaped channels into perforated jars placed in the main streets. The biosolids were emptied from the jars periodically. [2]
There is a mistaken belief that urban wastewater management systems were only found in Indus-Sarasvati civilization. In fact, terracotta pipes and brick lined soak pits have been found in Takshshila, Delhi, Ujjain and Arikamedu, which are all located in different parts of India. [2] Further excavations might reveal more ancient towns with well-developed wastewater management systems. Arikamedu, which was a noted centre for textile production has the unique distinction of featuring industrial wastewater systems dating to the first century before the common era. [2] The port city had close trading relations with Rome.
However, the majority of India’s population lived in rural areas or in the periphery of cities, and therefore relieving oneself in open areas far away from housing was more prevalent than the use of toilets. After the Muslim occupation of India, it became common for harems inside forts to have rows of toilets for the women, since the royal womenfolk were not allowed to go outside. It has been speculated that the practice of manual scavenging got intensified during this period, since more and more people were needed to carry the excreta from households. Typically, captured slaves were used for the purpose. [6]
After the European colonizers began to impose in India the so-called modern flush toilets and sewerage systems that they were using in their home countries, the sanitation landscape of the country began to change. The cantonments where the British lived looked neat but the sewage they produced was polluting the countryside. Unfortunately, this system of so-called sanitation was considered modern and became so fashionable that it spread vigorously to the rest of the country even after the colonizers were expelled.
One of the worst epidemics associated with poor sanitation is cholera. While the disease is mentioned in ancient Indian medical texts, it appears that there were only sporadic cases and not epidemics. [4] Besides, those texts already cautioned people to refrain from polluting water sources or drinking water that was contaminated. The first cholera pandemic in India occurred in British-occupied Bengal in 1817 when large numbers were affected in the British army and spread to other places. [4, 16]
It is interesting that the miasma theory prevalent in Europe promoted the dumping of human waste into water bodies. According to this obsolete theory, diseases are caused by bad air or foul odours. The water closet or flush toilets were hailed by Londoners because it removed smells from housing in accordance with the miasma theory. Excreta was disliked only for its smell not for its potential to cause disease. The more it was dumped in the Thames, the safer the residents of London felt [16]. “The city’s sewer commissioners proudly noted the huge volume of human waste that the city’s toilets efficiently deposited into the river; twenty nine thousand cubic yards in the spring of 1848 and eighty thousand cubic yards by the winter of 1849.” [16] The Times reported in 1858 that death rates had declined because the Thames had grown fouler. In other words, the connection between water pollution and disease had not been made yet. There was total ignorance about the fact that when the tide in the North Sea rose, the filth in the Thames was going right back upstream to the intakes of the drinking water companies. [16]
“Nevertheless under the powerful influence of miasmatism, when cholera struck, Londoners believed it was not because too many flush toilets dumped human waste into the river, but because too few did. Flush toilet sales, in the years after the 1832 epidemic, enjoyed “rapid and remarkable growth” according to an 1857 report. Flush toilet sales enjoyed another spike after an 1848 outbreak of cholera. So many Londoners installed flush toilets over the 1850s that the city’s water use nearly doubled between 1850 and 1856”[16]
— Sonia Shah, Tracking Contagions from Cholera to Ebola and Beyond, 2016, Sarah Crichton Books, New York
Given the tendency to use minimal water and to follow the age-old concept of isolation of wastes, it is possible that if Indians had stayed connected with their ancient knowledge, they could have naturally gravitated towards ecological sanitation or ecosan or composting toilets. Generating fertilizers and/or bioenergy from wastes could also have been a natural corollary to such toilets. A system, which sends wastewater from toilets to rivers as sacred as the Sarasvati or Ganga could simply not have originated in ancient India.
In other words, had engineers globally devised a set of ‘Best Practices’ out of the Vedic strictures of not mixing human wastes with clean water, not sending wastewater to freshwater bodies and ensuring that nutrients were safely recycled back to nature, we would have a cleaner, healthier and liveable world today.
“The most environmentally friendly type of toilet is the compost toilet, especially the dry compost model in which urine is collected separately. Urine diluted with water can be used as fertilizer and composted solid waste can be used for soil improvement. The annual amount of urine for one individual could be used to produce 200 kilograms of grain. This method not only recycles the materials in the urine but it also prevents them from getting into groundwater and watercourses. The whole process is manageable by people themselves. The separating model also has a less distinctive smell. It is notable that in the 19th century there was already dry compost and compost toilets in cities combined with different transport systems. Choosing the water closet as the primary system in the late 19th and early 20th centuries ended the product development of dry compost and compost toilets for over a hundred years.”
— Juuti P.S. and Wallenius K.J. (2005). Brief History of Wells and Toilets. Pieksamaki.
There has been much controversy over the poor treatment of the jaatis in India that were involved in manual scavenging. However, it must be remembered that throughout history, the task of handling wastes and faeces has never been a dignified one. [18] Until as late as the 20th century, human excrement had to be removed physically from cesspits and privies in Europe. [16,18] The European lower-caste people who did the dirty job were called gongfermours (French) or gong farmers in English. [18] The gong farmers of England were only allowed to work at night, so they were also called nightmen. They came into respectable neighbourhoods in the dead of the night, emptied cesspits and carted away the wastes to the boundaries of the cities. They were required to live in certain areas at the fringes of the city and could not enter the city during daytime. There were severe penalties for breaking this rule. Even after water closets arrived on the scene, their contents flowed into cesspits for a long time and needed to be cleaned out by nightmen. [18]
Worldwide, until modern systems of transporting and handling sewage and sludge using advanced technologies came into existence, workers in this sector were ostracized from society. [18]
Today’s cities are stuck with a water supply and wastewater disposal system, which cannot be redesigned with ecological principles unless huge amounts of capital expenditure are incurred. Over the decades, the thinking has gravitated towards making a wrong design a little more right. Therefore, many cities are adopting measures such as installing low-flush toilets, recycling of wastewater for non-potable uses, extracting biosolids etc. A few cities such as Windhoek (Namibia) and Singapore are recycling used water even for potable use. But the fact remains that in the poorer parts of the world, the cost of laying a network of sewers and treatment plants is quite unaffordable.
Meanwhile, there is also a parallel move to install ecosan toilets in rural areas, which has been successful to varying degrees. Champions of ecosan toilets are carrying out tiny revolutions in various parts of the world and demonstrating how sanitation can be turned into a virtuous sustainability business, which generates fertilizers and/or energy. In 2011, the Bill and Melinda Gates Foundation announced the “Reinvent the Toilet Challenge” to researchers around the world to develop innovative and financially profitable systems to manage human waste, which would be off-grid and cost less than US$0.05 per day. [14] Millions of dollars of grants have been awarded to researchers so far and the Challenge is now in its third phase. [14] About US$ 7 million is currently being used to develop an off-grid, energy-neutral sanitation system by Duke University, along with RTI International, Colorado State University and several other partners. [14] The project is currently undergoing testing in India. Kohler, a leading toilet manufacturer in the US, has been awarded US$ 1 million to test closed-loop toilets in poor countries. [14] According to Kohler, these toilets will be stand-alone units that take in wastewater, then disinfect and purify it to be reused for toilet flushing. [14]
For now, the world will continue to be divided between those connected to the sewerage grid and those who are not. It is impossible to reverse the centuries of mismanagement of human waste. We have moved very far from the wisdom of ancient rishis when it comes to answering the call of nature. However, there is a window of opportunity to find the right paradigm for those who are not connected to sewer networks.
*Sahana Singh is Editor of Asian Water Magazine (www.asianwater.com.my) and Director, Indian History Awareness and Research (IHAR), a Houston-based think tank. She writes on water management, environment and Indian history, and has recently authored a book titled “The Educational Heritage Of Ancient India – How An Ecosystem of Learning Was Laid To Waste”.
The author would like to thank public policy expert Dr Kallidaikurichi Seetharam for his valuable inputs; Arnab Bhattacharya for his help with references and the IHAR team for their review of this work.
The paper “Best Practices in Indic Hygiene, Sanitation And Environmental Protection – How Did Modern Sanitation Lose Its Way?” was originally presented at Waves 2018 and has been republished with permission.
Sahana Singh writes on environmental (water) issues, current affairs and Indian history. She is a member of Indian History Awareness and Research (IHAR), and has recently authored “The Educational Heritage of Ancient India – How An Ecosystem of Learning Was Laid to Waste”. |
2 | France helicopter crash: Five killed in Alps | France helicopter crash: Five killed in Alps
Image: Private company Service Aérien Français runs air services across France (EPA)
A helicopter has crashed in the French Alps killing five of the six people on board, officials say.
The aircraft, owned by a private company, was carrying out a rescue mission when it went down near the town of Bonvillard in the Savoie area.
The cause of the crash is unclear, but officials say it could have been caused by poor weather.
The alarm was raised by the pilot who managed to escape from the helicopter and was found seriously injured.
The helicopter belongs to Service Aérien Français, a private company that conducts search-and-rescue missions and other air services across France.
In a tweet, French President Emmanuel Macron shared "support from the nation to the families, friends and colleagues of these French heroes".
What do we know about the crash?
French authorities said the helicopter - a Eurocopter EC135 - was carrying an air rescue crew on a training mission when it fell from an altitude of 1,800 metres (5,900ft).
The surviving pilot raised the alarm at around 19:10 local time (18:10 GMT) after managing to escape from the helicopter.
The crew onboard consisted of two pilots - one in training - along with two winch operators and two mountain rescue workers.
Police union Synergie-Officiers has named Amaury Lagroy de Croutte as one of the victims. Since 2018, Mr de Croutte had led a local mountain unit of the CRS, France's riot police.
Three helicopters have been sent to the site as part of a rescue team of over 40 people. But the aircraft have been unable to reach the site due to fog. Instead, the pilot was recovered by a group of rescuers who approached on foot.
French Interior Minister Gérald Darmanin has said he will visit the site of the incident on Wednesday. |
3 | The financial crisis: A foreshadow of the corona crisis (April, 2020) | Historical analogies are a way for us to find our footing. Psychologists regard them as a key plank in the learning process. ‘History doesn’t repeat itself, but it often rhymes,’ they say. So it’s no surprise that the coronavirus crisis draws comparison with a range of historic events.
The disease itself has been compared with the flu pandemic of 1918. Although there have been other flu pandemics since, the ‘Spanish flu’ of 1918 is known for its heavy death toll of 20-50 million. Like all analogies there are differences, and a stand-out one here is that the 1918 flu targeted the young, while coronavirus seems to target the old. Oddly, the 1918 flu was largely forgotten as a historical event, something that seems unlikely to happen this time given the way life is being experienced under coronavirus.
The response has also been compared with wartime. Angela Merkel said (on 18 March), “Since the Second World War, there has been no challenge to our nation that has demanded such a degree of common and united action.” Meanwhile, in the UK, commentators have summoned the spirit of the Blitz to ease the burden from social restrictions. Of course, this time both nations are fighting a common enemy. And Keep Calm and Carry On is being used a lot more now than it was back then.
The suddenness is being compared with 9/11. Worldviews changed that day, just as they may be changing again. 9/11 heralded a fear of terror on a scale never before confronted. Like today’s virus, the terror did not discriminate. It is hard to believe that eight weeks ago the stock market was making new highs and the prospect that the state could legislate against basic human freedoms to the extent it has was ridiculous. In the space of a few weeks travel restrictions escalated from a few flights being cancelled from ‘high risk’ places like Beijing and Northern Italy, to flights being cancelled because demand for non-essential travel collapsed, to flights being cancelled because countries literally closed their borders.
From an economic perspective, the analogy is the global financial crisis of 2008. Company finances are being ravaged and monetary authorities are rolling out many of the packages to ease strains in the financial system that they moth-balled after the 2008/09 financial crisis. In fact, the CEO of Marriott International, Arne Sorenson, went as far as to say on 20 March that “Covid-19 is having a more severe and sudden financial impact on our business than 9/11 and the 2009 financial crisis combined.” On Thursday 26 March, US jobless claims jumped to 3.28 million, four times the prior peak in records that have been compiled since 1967. JPMorgan economists expect US GDP to be down 40% in 2Q with unemployment hitting 20%.
Other analogies abound: the Great Depression of the 1930s; natural disasters; and Prohibition – which amputated from the economy an entire industry in the way that industries such as hospitality are being poleaxed today.
Of them all, the financial crisis of 2008 seems to offer the best framework. But not for its steer on the economic fallout, rather for the spotlight it shines on our understanding of the virus itself.
One of the challenges in the early stages of the financial crisis was dimensioning the losses that were building. No-one knew how big they were, nor where they were. The Financial Crisis Inquiry Commission report recalls that in early 2007 “Goldman marked mortgage-related securities at prices that were significantly lower than those of other companies.” The divergence came to a head in April 2007 when Goldman valued a Bear Stearns hedge fund’s positions at between 65 cents and 100 cents on the dollar. A few weeks later it revalued the positions, going as low as 55 cents on the dollar. Ralph Cioffi, the manager of the Bear Stearns fund, resisted. He proposed using fair value marks based on his team’s models, implying losses that were significantly less than those using Goldman’s marks.
The different approach to marks back then is reflected in the different approach to testing today. On 1 March Nassim Taleb tweeted this. His point was that the “losses” were by then endemic; it was just that some countries were measuring them more accurately than others.
There is a marked-to-market problem.
Low incidence = low testing.
Only places that mark to market properly seem to be Singapore and Bella Italia. pic.twitter.com/seSrdFy3X8
— Nassim Nicholas Taleb (@nntaleb) March 1, 2020
Several weeks later the lack of wholesale testing still makes it difficult to dimension the true scale of the problem. By now it is apparent that the number of confirmed cases (2 million globally at the time of writing) is not a valid reflection of the number of actual cases. But how many actual cases there are remains unknown.
Like with the Bear Stearns hedge funds over a decade ago, some countries are also incentivised to under-represent their caseload. One example is Japan, where the number of confirmed cases began to ramp up immediately after the Tokyo Olympics had been officially postponed, leading some to speculate the number had been artificially suppressed. China too was criticised for under-reporting the severity of the disease when it first broke out in Wuhan in December 2019.
As well as questioning the validity of valuations being employed, the financial crisis also exposed flaws in the models used to underpin various financial structures. “Financial institutions and credit rating agencies embraced mathematical models as reliable predictors of risks, replacing judgment in too many instances. Too often, risk management became risk justification.” (Financial Crisis Inquiry Commission report).
Right now, policy is being steered by a range of epidemiology models that are being used to predict the spread of the virus. The problem with these models – as with all models, including those used on Wall Street in 2007/08 – is that they are only as good as the assumptions they rest on. “All models are wrong,” said statistician George Box, (“but some are useful.”) When it comes to epidemiological models there is a broad range of uncertainty around many of the inputs.
That uncertainty can be categorised in three buckets.
The first is uncertainty in the inputs, such as the number of people infected. As outlined above, this remains a key unknown. Some models use statistical techniques to infer total infections, such as the one from the Centre for Mathematical Modelling of Infectious Diseases which extrapolates from fatalities; others use sampling such as in Iceland. Even the hardest variable of all – fatalities – is subject to uncertainty. The UK’s Department of Health and Social Care (DHSE) reports daily the number of people who died in hospital before 5pm the previous day. But what about those who didn’t die in hospital, but died in care homes or at home? (My neighbour was allowed home from hospital to die – where does he appear in the DHSE dataset?) And what about those who died anyway from other causes – those who died with Covid rather than from Covid (like a friend’s father)? There is also typically a lag between when the death occurs and when it is recorded in the data – incurred but not reported is the jargon used in the insurance industry. If that lag is consistent it may not impact the output of the model much, but any change to the lag can skew resulting projections materially.
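To see why the infection count is such a soft number, consider the kind of back-calculation some of these models rely on. The sketch below is purely illustrative: the daily death figure, fatality rate and lag are assumed, not taken from any of the models cited.

```python
# Illustrative back-calculation of infections from reported deaths.
# All parameters are assumptions for the sketch, not published estimates.
reported_deaths_today = 800     # hypothetical daily death count
assumed_ifr = 0.009             # assumed infection fatality rate of 0.9%
assumed_lag_days = 21           # assumed delay from infection to death

implied_infections = reported_deaths_today / assumed_ifr
print(f"implies ~{implied_infections:,.0f} new infections "
      f"around {assumed_lag_days} days ago")
```

Small changes to the assumed fatality rate or lag move the implied infection count by large amounts, which is exactly the sensitivity described above.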
The amount of data being generated through this crisis is immense. Unlike in 1918 when ‘20-50 million’ died, by the time this is over we will know precisely how many died. But it may take hindsight to discern the signal from the noise. When I look back at my notes from the financial crisis, there were a lot of datapoints that were fleetingly useful yet turned into dead ends.
Data quality is further compromised when comparing between countries. Datasets between countries are not really compatible with each other, yet with every political system on the planet confronted with a common challenge, there is a tendency for commentators to compare responses. While the overall shape of the spread looks similar, mortality rates remain stubbornly different. There are many explanations, including healthcare capacity, population density, global connectedness, prevalence of underlying conditions like obesity, lockdown initiation, and demographics. But it may not be until we are further along the curve that robust comparisons can be made. Singapore had the virus under control before seeing a second wave emerge in April. Germany is being seen as a leader in testing, yet its regime is not without criticism. Sweden has resisted a lockdown on the scale mandated in other European countries; the jury is still out on its strategy.
The second source of uncertainty stems from how the models have been calibrated. The Imperial College model is a bottom-up “individual-based simulation model”. The Institute for Health Metrics and Evaluation (IHME) model is top-down “curve-fitting tool to fit a nonlinear mixed effects model to the available… cumulative death data”. Both models were initially calibrated using data from Wuhan. In Imperial’s case “based on fits to the early growth-rate of the epidemic in Wuhan, we make a baseline assumption that R0=2.4 but examine values between 2.0 and 2.6.” In IHME’s case “the value of the covariate multipliers… was assumed to closely follow the fit obtained from data from Wuhan, which is the time series to reach a stable state in the training dataset.”
Yet Wuhan may not be representative of how the virus spreads in other places and may skew the models to the pessimistic side. Clearly as more data accrues from other parts of the world the models’ performance should improve. So-called VAR models employed by financial institutions before the financial crisis purported to predict with at least 95 percent certainty how much a firm could lose if market prices changed. But the models failed for their reliance on assumptions that were based on limited historical data.
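A minimal SIR-type simulation (not the Imperial or IHME model, and with an assumed five-day infectious period) is enough to show how much the projected peak moves across the range of R0 between 2.0 and 2.6 examined above.

```python
# Minimal discrete-time SIR sketch; not the Imperial or IHME model.
# The five-day infectious period is an assumption for illustration.
def sir_peak_infected(r0, infectious_days=5, days=365):
    gamma = 1 / infectious_days
    beta = r0 * gamma
    s, i, r = 1 - 1e-6, 1e-6, 0.0       # fractions of the population
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

for r0 in (2.0, 2.4, 2.6):
    print(f"R0={r0}: peak prevalence ~{sir_peak_infected(r0):.1%}")
```

Even in this toy version, moving R0 from 2.0 to 2.6 shifts the projected peak prevalence by roughly ten percentage points of the population, which is one reason small calibration differences matter so much.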
The third source of uncertainty comes from feedback loops between the models and behaviour. What, for example, is the realised impact of policy intervention on behaviour? It seems that in the first three weeks of lockdown in the UK, compliance has been higher than anticipated – schools had been expected to operate at 20 percent capacity, but have been running at 2 percent; the furlough scheme had been expected to cost £10bn, but recent estimates are that it will cost £40bn. These compliance rates may not be constant as we move into the second three weeks.
Another feedback loop emerges from the unintended consequences associated with behaviour driven by the model. The best example of this outside of epidemiology is in car design. Studies have shown that motorists typically drive faster when wearing seatbelts and closer to the car in front when their vehicle is fitted with anti-lock brakes. Epidemiological models don’t currently incorporate widespread mask-wearing as a parameter, at least in the UK, where it hasn’t been advised yet, but if they did, they would presumably also have to accommodate an increase in risk behaviour as more people use masks as cover to take risks they otherwise wouldn’t.
The key here is that models are not static – they are dynamic. Their dynamism sometimes breeds confusion as if they should spit out an unchanging target. Republicans on the House Oversight Committee are calling for a hearing to review the ‘modelling platforms’ used to project the extent and impact of the coronavirus pandemic, claiming they exhibit “conflicting data”.
There are echoes of the Y2K bug in some of the criticism being levelled at modellers. An estimated US$300bn was spent prior to 1 Jan 2000 in solving a problem that never materialised. Planes didn’t fall out of the sky, power systems didn’t fail and bank accounts weren’t wiped out. Yet because the alternative history in which that US$300bn wasn’t spent was never exposed, so-called ‘fear mongers’ got a bad name. That bad name persisted through Brexit and into the current crisis.
The rise of artificial intelligence means that the world runs on models like never before. But just when they should be at their most useful, there is a danger that they lapse into their most useless. Even the simple earnings model of the kind I have been grinding out for 25 years is under scrutiny. Jamie Dimon, Chairman and CEO of JPMorgan said on his recent earnings call:
“There are no models that have GDP down 40%, unemployment growing this rapidly… There are also no models have ever dealt with a government, which is doing a PPP program, which might be $350 billion, it might be $550 billion, unemployment where it looks like 30% or 40% of people going unemployment but higher income than before they went on unemployment. What does that mean for credit card or something like that or that the government is just going to make direct payments to people? … And I think people – you’re making too much mistake trying to model it. When we get the end of the second quarter, we’ll know exactly what happened in the second quarter…”
Perhaps the key flaw inherent in financial models in 2007/08 relates to correlation. A major assumption backing many of the securitisation models in circulation at the time was that house prices would not decline simultaneously across the whole of the US. After all, they hadn’t before. When house prices started to fall nationwide and defaults increased, it turned out that mortgage-backed securities were in fact much more highly correlated than the rating agencies had estimated. Eventually, in late 2008, Moody’s was forced to introduce an asset correlation assumption two to three times higher than it used before the crisis. By then it had downgraded most of its ratings on CDO structures.
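The effect of that correlation assumption can be reproduced with a toy one-factor default model. Everything below is assumed for illustration (a 5% default probability, 500 loans, a shared "house price" factor); the point is only that raising the correlation leaves the average loss unchanged while fattening the tail that the ratings relied on.

```python
# Toy one-factor default model; all parameters assumed for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def loss_distribution(correlation, pd=0.05, n_loans=500, n_sims=10_000):
    threshold = norm.ppf(pd)                       # default if asset value falls below this
    market = rng.standard_normal((n_sims, 1))      # shared factor (e.g. national house prices)
    idio = rng.standard_normal((n_sims, n_loans))  # loan-specific noise
    assets = np.sqrt(correlation) * market + np.sqrt(1 - correlation) * idio
    return (assets < threshold).mean(axis=1)       # portfolio default rate per scenario

for rho in (0.05, 0.30):
    losses = loss_distribution(rho)
    print(f"rho={rho}: mean loss {losses.mean():.1%}, "
          f"99th percentile {np.quantile(losses, 0.99):.1%}")
```

Raising the correlation input two to three times, as Moody's eventually did, changes the tail of this distribution materially even though the average loss is identical.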
Health system capacity is similarly built on the assumption that people do not get sick simultaneously across a population. Nor does it account for the additional layer of correlation that emerges when healthcare staff themselves get sick at the same time, restricting the supply of healthcare provision while demand is going up. On 5 April the UK Secretary of State for Health and Social Care reported that 5.7% of hospital doctors were off sick or otherwise absent because of the virus; a doctors’ survey suggested a much higher figure of 14.6%.
A lot of modelling of optimal critical care capacity has taken place, but it tends to optimise for efficiency rather than for resilience and understates the likelihood of a rise in correlation. At midnight on Thursday 27 February there were 4,122 adult critical care beds available in England, according to NHS data, of which 81.1% were occupied. That occupancy rate had ranged between 75.3% and 88.1% over the ten years since August 2010. Since then, demand has clearly gone up. Fortunately, UK healthcare capacity is not yet full. The ability to create new capacity quickly is helpful, as the fit-out of the NHS Nightingale Hospital at the ExCeL Centre demonstrates, although the availability of additional ventilators remains an issue.
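On the reported figures alone the headroom is thin; a quick calculation over the quoted occupancy band shows how few beds the historical range leaves free.

```python
# Headroom implied by the occupancy figures quoted above.
critical_care_beds = 4122
for occupancy in (0.753, 0.811, 0.881):
    free = critical_care_beds * (1 - occupancy)
    print(f"at {occupancy:.1%} occupancy: ~{free:.0f} beds free")
```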
The financial crisis taught that buffers are needed to compensate for the risk of correlation. Banks established additional capital and liquidity buffers following their crisis. It seems clear that those same buffers were not in place with respect to critical care beds, ventilators, personal protective equipment and other elements of healthcare provision. It is perhaps no coincidence that Goldman Sachs had at its disposal millions of masks that it was able to donate to healthcare systems in the U.S. and Europe, which it had acquired over a number of years following prior epidemics like SARS “as a part of our operational risk management efforts”.
When officials let Lehman go bankrupt on Monday 15 September 2008, they didn’t anticipate the chaos it would unleash. Andrew Ross Sorkin relays in his book, Too Big to Fail, the conversation that Hank Paulson, US Secretary of the Treasury, had with President Bush that morning:
Paulson said he was cautiously optimistic that investors would be able to accept the news but warned him that there could be further pressure on the financial system… Paulson walked Bush through the Fed’s plan to keep Lehman’s broker-dealer functioning so that it could complete its trades with other banks. “We’re hoping that over the next couple of days, they can unwind this thing in an organized way,” he said. While Paulson was clearly more disturbed than the president about Lehman’s bankruptcy, he expressed his elation about Bank of America’s decision to buy Merrill Lynch, a sign, he suggested, “of strength” in the market that might “mitigate” the possibility of panic.
But the possibility of panic wasn’t mitigated. Over the next few days shockwaves reverberated throughout the financial system. On Monday officials spent the day looking for ways to support AIG. On Tuesday Lloyd Blankfein, CEO of Goldman Sachs “told Paulson about a new problem he was seeing in the market: Hedge funds that had traded through Lehman’s London unit were suddenly being cut off, sucking billions of dollars out of the market.” On Wednesday Tim Geithner, President of the New York Fed, started the day “thinking about what fresh hell the day would bring. He was most anxious about the latest shocking development: A giant money market fund, Reserve Primary Fund, had broken the buck a day earlier… Investors had started liquidating their accounts, which in turn forced managers to impose a seven-day moratorium on redemptions. Nobody, Geithner worried, knew just how extensive the damage could end up being.”
The two most widely cited explanations as to why Lehman wasn’t bailed out are that moral hazard made it politically unacceptable, and that legal structures were not in place to do it. But a third is an error of omission: policymakers lacked a detailed understanding of the linkages at play within the financial system. The ultimate cost involved in cleaning up the chaos – the bailout of AIG, the support given to other financial institutions, the structures put in place to underpin money market funds – was a multiple of what it would have cost to bail out Lehman. One estimate puts the cost at US$498bn. This compares with losses that Lehman creditors suffered of an estimated US$145bn which is the maximum loss the US government may have incurred had it bailed out Lehman (the actual loss would likely have been lower had the resulting chaos been averted).
The message here is clear and it is one Matthew Jackson makes in his book, The Human Network:
From a network perspective, stopping contagions at earlier points is always easier and cheaper than letting them play out and then trying to clean things up afterward.
There are many features of a financial network that make it prone to contagion. Adam Kucharski lists some of them in his book, The Rules of Contagion. Two in particular are relevant to coronavirus. First, rather than connections being scattered evenly, a handful of firms typically dominate the network. This is apparent in the representation of the 66 largest banks in the Fedwire system shown in the chart:
Source: The Topology of Interbank Payment Flows, Federal Reserve Bank of New York Staff Report no. 243, March 2006
Second, financial networks are typically ‘disassortative’ which means that highly connected elements, rather than forming clusters between themselves, are mostly linked to less connected elements. This can lead to a more widespread contagion. Lehman was undoubtedly a dominant player in the financial network; when it failed it had trading relationships with over a million counterparties. Although the post-crisis rallying cry centred around banks being ‘too big to fail’ the more accurate representation was that many were ‘too connected to fail’.
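The disassortative, hub-dominated structure is easy to see in a toy graph. The sketch below is illustrative only (the network is invented, not Fedwire data): a star-like graph in which one hub connects many small counterparties has a strongly negative degree assortativity coefficient.

```python
# Toy hub-dominated network (invented, not Fedwire data).
import networkx as nx

G = nx.star_graph(20)                        # node 0 is the hub, 20 spokes
G.add_edges_from([(1, 2), (3, 4), (5, 6)])   # a few links among the periphery

# Negative value => disassortative: the hub mostly connects to poorly connected nodes.
print(nx.degree_assortativity_coefficient(G))
```

On this toy graph the coefficient comes out strongly negative, the same signature the article attributes to real financial networks.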
Disease can propagate through a network of people in a similar way. Epidemiologists have long known that some people are simply more connected than others and some places more ‘hot’. In the SARS epidemic, this was a major factor: 20 percent of cases caused almost 90 percent of transmission. In the current coronavirus epidemic, the spread occurred very quickly through a religious sect in South Korea. Indeed, across the world, various hubs have become sources of ‘super-spreading’ events – churches, ski resorts, hospitals, a Biogen company conference in Boston. Yet it’s not known how: whether the cause is asymptomatic carriers, people with symptoms that linger but who are not sick enough to isolate, or people who shed an unusual amount of virus. And from a population perspective, unlike SARS, the precise distribution of super-spreaders is not currently known either.
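One standard way to capture super-spreading is a negative binomial offspring distribution with a small dispersion parameter k. The values below are assumed for illustration (an R0 of 2.5 and a SARS-like k of 0.1, neither taken from the article), but they reproduce the pattern of a small share of cases driving most transmission.

```python
# Negative binomial offspring distribution with assumed R0 and dispersion k.
import numpy as np

rng = np.random.default_rng(0)
R0, k = 2.5, 0.1    # assumed values; small k means heavy overdispersion

# numpy's parametrisation: mean = n * (1 - p) / p, so p = k / (k + R0).
offspring = rng.negative_binomial(n=k, p=k / (k + R0), size=100_000)

offspring_sorted = np.sort(offspring)[::-1]
top_20pct = offspring_sorted[: len(offspring_sorted) // 5].sum()
print(f"top 20% of cases account for {top_20pct / offspring.sum():.0%} of transmission")
```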
After the crisis, financial regulators learned the importance of super-spreaders from epidemiologists. Andy Haldane, currently Chief Economist at the Bank of England, argued in 2009 that financial ‘super-spreaders’ need to be actively targeted. Subsequently, the most connected banks were formally tagged (as ‘Global Systemically Important Financial Institutions’) and required to hold more capital; ring-fences were put in place around bank activities; and the network was more regularly mapped.
Even though these ideas emerged from epidemiology, how granularly they are employed in the current crisis is unclear. Most epidemiological models assume a homogeneous network structure with people drifting around and interacting with each other in fairly evenly distributed ways. For example, the Imperial model assumes that social distancing reduces contact outside the household by an even 75 percent.
Since the financial crisis, financial institutions have had to regularly log their (trading) positions with regulators. People, on the other hand, don’t have to log their (physical) positions with regulators, at least in Western liberal democracies. But coronavirus could be changing that. Apple and Google recently announced that they are collaborating on building surveillance capability into their smartphones. Given such specific and real-time information on how people interact, a much more accurate model of virus transmission can be crafted. According to CNBC, “the way the system is envisioned, when someone tests positive for Covid-19, local public health agencies will verify the test, then use these apps to notify anybody who may have been within 10 or 15 feet of them in the past few weeks.” Apple and Google have insisted that governments will not be able to require citizens to use the software and that users will have to opt-in. But the more people that opt in, the more accurate the network representation. At 100 percent adoption there’s no need for a model at all – the territory will have displaced the map.
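At its simplest, the notification logic is just a look-back over a rolling contact log. The sketch below is a deliberately naive illustration, not the Apple/Google protocol, which relies on rotating anonymous identifiers rather than names.

```python
# Naive contact-notification sketch; not the Apple/Google protocol.
from collections import defaultdict
from datetime import datetime, timedelta

contact_log = defaultdict(list)     # person -> list of (other person, time of contact)

def record_contact(a, b, when):
    contact_log[a].append((b, when))
    contact_log[b].append((a, when))

def contacts_to_notify(positive_person, now, lookback_days=14):
    cutoff = now - timedelta(days=lookback_days)
    return sorted({other for other, when in contact_log[positive_person] if when >= cutoff})

now = datetime(2020, 4, 15)
record_contact("alice", "bob", now - timedelta(days=3))
record_contact("alice", "carol", now - timedelta(days=10))
record_contact("alice", "dave", now - timedelta(days=30))   # outside the look-back window
print(contacts_to_notify("alice", now))                     # ['bob', 'carol']
```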
Fortunately, one way in which financial networks are unlike the networks through which coronavirus can spread is in the number of ways in which transmission can occur. In the financial crisis, transmission occurred through lending relationships, through shared exposure to trading positions, as well as indirectly through fear and panic. In this way, it was more like HIV than coronavirus. HIV can spread via sexual relationships, needle exchange, blood transfusion. As far as we know, the only way coronavirus can spread is through respiratory droplets.
When the outbreak began, I thought the analogy to 2008/09 would extend to measures of what to look out for as a leading indicator of the recovery. Back then the flow of new non-performing loans was a useful metric to track and its peak presaged the recovery. Today’s analogue would be the rate of change in confirmed coronavirus cases. Now I’m not so sure. Too much uncertainty overshadows these numbers, and the resultant numbers are anyway a function of the trade-off taken to lock down the economy. Fatalities represent a more robust number but they necessarily lag.
The real leading indicator for recovery is a peak in uncertainty – new NPL formation was simply the proxy. And the thing that will reduce uncertainty now is more testing, including serological testing to reveal how many people have had the disease and by extension how many remain susceptible. Unusually financial markets are discounting less uncertainty than broader society because the financial regulators worked quickly to reduce the range of outcomes by supporting asset prices. The VIX ‘uncertainty gauge’ peaked on 16 March and has been coming down since. This is different from 2008/09 when the markets discounted a much higher degree of uncertainty than broader society.
By the time we come out of this, the world will have changed. How fleetingly is a subject of much debate. Some hold the view that life will revert to normal, much as it did after the 1918 flu pandemic. Others have written that the experience may leave generational scars that will influence how we behave for years to come. And a third perspective is that the crisis will simply turbocharge trends that were in train anyway.
Looking back at the financial crisis, several developments seem likely.
First, just as banks were made to sacrifice efficiency for resilience after their crisis, other segments of the economy will be made to do so now. For banks this shift came through their requirement to hold more capital and more liquidity. New financial securities were devised to provide them with contingent capital that would kick in at the point of maximum need (securities that are being stress tested in the current environment). Banks’ historic tendency towards procyclicality – whereby they would take on more risk at the same time as everyone else and vice versa – was tackled by regulators through the introduction of ‘countercyclical’ buffers.
It seems clear that, like banks before them, other players in the economy will hoard more cash after this crisis than they did before it, either voluntarily or through regulation. The impact this has on the economy could be significant. Moreover, as is apparent looking at bank valuations post-crisis, the equity value of resilience is a lot lower than the equity value of efficiency. That's not entirely consistent with theory. The argument goes that higher resilience ➡ less risk ➡ lower cost of capital, so even though returns would be lower, valuations need not be. That argument didn't play out, and banks are left in this crisis trading on lower valuations than they did in the last one. The implications for overall market valuations, as more segments of the market choose resilience over efficiency, are significant.
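To make the theoretical argument concrete, here is a hedged illustration using the standard single-stage valuation relation (the numbers are mine, not the author's): a bank's price-to-book ratio can be approximated as P/B ≈ (ROE − g) / (COE − g), where ROE is return on equity, COE is the cost of equity and g is long-run growth. With ROE of 12%, COE of 10% and g of 2%, P/B ≈ (0.12 − 0.02) / (0.10 − 0.02) = 1.25. If resilience cuts ROE to 9% but investors reward the lower risk with a COE of 7.6%, P/B ≈ (0.09 − 0.02) / (0.076 − 0.02) = 1.25 again: lower returns, unchanged valuation. One common explanation for why this did not play out for banks is that their cost of equity never fell by enough to offset the drop in returns.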
Second, the financial crisis put a big spotlight on how a part of the world people had paid little regard to actually works. Ben Bernanke, Chairman of the Federal Reserve, told the House Financial Services Committee on 24 September 2008:
“People are saying, ‘Wall Street, what does it have to do with me?’ That is the way they are thinking about it. Unfortunately, it has a lot to do with them. It will affect their company, it will affect their job, it will affect their economy. That affects their own lives, affects their ability to borrow and to save and to save for retirement and so on.”
The same could be said of public health policy today. Healthcare is a public good. Private health insurance does not provide immunity from the disease nor from the economic and social fallout that stems from it. We want to go about our lives confident that our neighbours are no more likely to be carriers of disease than they are to be security threats. Public healthcare is more than just a safety net, it is a core element of our societal infrastructure: this crisis shows that we need it collectively as well as individually. Of course, in countries like the UK, the health system is publicly owned, so the asymmetry at the heart of the financial crisis – that banks kept their profits but socialised their losses – is not present. But a debate around resource allocation will inevitably arise, with those at the centre being held to account.
Third, the financial crisis of 2007/08 was not a single event, but a series of crises that rippled through the financial system and ultimately the economy. This point is often lost on those that seek to establish a single cause. As prevailing market conditions change, bad business models have a tendency to be exposed. Warren Buffett said, “Only when the tide goes out do you discover who’s been swimming naked.” So far, the coronavirus crisis has led to a distinct crisis in the oil market. However, few other shoes have dropped that haven’t been picked up by central banks. That could be a function of the economy freezing or it could be because insufficient time has elapsed. It seems likely that more shoes will drop and the further they are from the epicentre, the more concerning.
Fourth, as discussed above it became apparent during the financial crisis that regulators did not have an accurate map of how the financial system worked. The same is true now of how social mixing works. Reports suggest that technology may be deployed to scale up contact tracing as part of a wider exit strategy. The implications for personal privacy are significant. But just as banks agreed to comply in a revised contract with the regulators, people may agree to comply with this. The trade-off between personal privacy and collective health is one that we may be asked to make.
Perhaps the biggest takeaway is a reminder not to fight the last battle. So much work has gone into avoiding a specific repeat of the 2007/08 financial crisis that we forgot to draw more general conclusions from that experience. Those lessons – always mark-to-market, question your models, watch out for correlation, build buffers and map your network – have never been more relevant.
Microscopic view of coronavirus. Source: Getty Images
|
1 | Retrial: A library for Gradle build security | jack-bradshaw/Retrial |
2 | Demystifying “SSH-RSA” in OpenSSH Deprecation Notice | A detailed look at what changes and what remains the same
Dorai Ashok S A
Published in Level Up Coding
5 min read · Oct 30, 2020
OpenSSH is an implementation of the SSH 2 protocol by developers of the OpenBSD project. It is ubiquitous and is the most widely deployed SSH software on servers. In the original SSH 2 protocol (RFC 4253), SHA1 was the recommended hashing algorithm. Since then, over the years and through various updates, SHA2 has become the recommendation for Data Integrity Algorithms (RFC 6668), Key Exchange Algorithms (RFC 8268) and Public Key Algorithms (RFC 8332). While OpenSSH has added support for newer algorithms based on SHA2, it has to deprecate the older SHA1-based algorithms at some point. OpenSSH will be deprecating the public key algorithm “ssh-rsa” in a near-future release. The following notice has appeared in release notes since version 8.2:
It is now possible[1] to perform chosen-prefix attacks against the SHA-1 hash algorithm for less than USD$50K. For this reason, we will be disabling the “ssh-rsa” public key signature algorithm that depends on SHA-1 by default in a near-future release.
[1] “SHA-1 is a Shambles: First Chosen-Prefix Collision on SHA-1 and Application to the PGP Web of Trust” Leurent, G and Peyrin, T (2020) https://eprint.iacr.org/2020/014.pdf
SHA1 (Secure Hash Algorithm 1) was a widely used cryptographically secure hashing algorithm until it was broken for all practical purposes in 2017. Since then, there have been ongoing efforts in the industry to move towards a secure algorithm such as SHA2. There is no single strategy that can be adopted by all. To give a few examples, the CA/Browser Forum had been working on SHA1 deprecation in TLS certificates for years, with the multi-year validity of TLS certificates only complicating things. They have since reduced the lifetime of certificates to help manage future deprecations. Version control systems such as Git and Mercurial have their own short-term and long-term plans, but the transition is far from complete.
Unless you are using an implementation of the SSH 2 protocol other than OpenSSH, you will likely not be affected and can safely ignore this deprecation notice.
Algorithm Negotiation
The deprecation notice is pretty clear that only the signature algorithm is deprecated. To understand what that means, we need to go back to the original RFC and look at how algorithm negotiation works.
SSH 2 is a well-designed protocol that allows the client and server to negotiate the algorithms used at various levels; “ssh-rsa” will be listed under host key algorithms. The -vv option of the OpenSSH client can be used to get the host key algorithms supported by both client and server:
$ ssh -fNvv user@host |& grep "debug.*host key algorithms"

Sample output:
debug2: host key algorithms: ssh-rsa,ssh-dss (OpenSSH 5.3p1)
debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 (OpenSSH 7.4p1)
Also, the option -Q sig can be used to list the signature algorithms supported by the OpenSSH client, including the disabled ones. Once deprecated, the host key algorithms list will not include “ssh-rsa”, just as “ssh-dss” is absent from the host key algorithms of OpenSSH 7.4.
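For example, a quick way to check the local client is to query it directly. This is a minimal sketch; the -Q sig query is only available in reasonably recent OpenSSH releases and the exact output varies by version:

$ ssh -Q sig | grep rsa
ssh-rsa
rsa-sha2-256
rsa-sha2-512

If rsa-sha2-256 and rsa-sha2-512 show up alongside ssh-rsa, the client can already negotiate the SHA2-based signature algorithms, and removing “ssh-rsa” from the defaults should pass unnoticed.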
Confusion
Unfortunately, “ssh-rsa” can mean multiple things in the SSH 2 protocol, and only the signature algorithm is being deprecated. An attempt to clarify this can be found in RFC 8332.
Public key algorithm
As an algorithm name, “ssh-rsa” is used as part of host key verification and in the public key authentication method for user authentication, where signing and verification are performed with a SHA1 hash. This is what will be deprecated. It affects only implementations of the SSH 2 protocol that need to work with newer versions of OpenSSH.
Public key format and blob
As a key format, “ssh-rsa” has no relation to SHA1. It can be found in files such as authorized_keys, known_hosts and id_rsa.pub, and as an encoded string within the public key blob. These won't be deprecated.
$ cat tmp.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPyjl9euMQ4Crj/0VyP69+ltELAM4Wt0GyG8y3ENEtpa/Qv0XcJ1IZ8l3lRRWt5+ame2LKQJwInK1xo3UqL+JdCA1OX9h1ap8wOWEm6ZHiehB0JNe7BgIwPYl69qLpv48Xywtz28BahxZPSDd7k5NxiH4HIUbau3tHlvsO2LOqj9pQOPEDh+GdmMcgEv0ZQMY9B6uKJqI+RdiDgWHNDUW+pFwRi2xzMFQqPCqC07ykKMI8G/Nl3Q7RQuDiRw9AhO/BrdF1NEa3I4fyg09nPkBP351kBrLl17VPgoVP24VZJkZSojEKnp4KkIhGLTfg+5TqI6kx36blHZpx3g8txAQt
$ cut -d" " -f2 tmp.pub | base64 -d | head -c16 | hexdump -C
00000000  00 00 00 07 73 73 68 2d 72 73 61 00 00 00 03 01  |....ssh-rsa.....|
00000010
Impact
There will likely be zero impact for most users, who only use OpenSSH. The public key algorithms based on SHA2, i.e. rsa-sha2-512 and rsa-sha2-256, were added to OpenSSH in version 7.2, released on 2016-02-29. The following table shows the releases of popular Linux distributions that provide OpenSSH 7.2 or newer.
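If you are not sure which versions are in play, a rough check of both ends is straightforward. In this sketch, "host" is a placeholder, the client version string shown is just an example that varies by distribution, and the server banner trick depends slightly on your netcat variant:

$ ssh -V                                # local client version (printed to stderr)
OpenSSH_8.2p1 Ubuntu-4ubuntu0.5, OpenSSL 1.1.1f  31 Mar 2020
$ nc -w2 host 22 </dev/null | head -1   # first line the server sends is its banner
SSH-2.0-OpenSSH_7.4

Any client at 7.2 or newer already supports rsa-sha2-256 and rsa-sha2-512, so the deprecation of “ssh-rsa” should go unnoticed.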
Other implementations
The deprecation mostly impacts other implementations of the SSH 2 protocol that need to work with newer versions of OpenSSH, such as PuTTY, libssh2 and its dependents. In all likelihood, internal implementations of the protocol within various organizations will be affected too.
Programs that hold the private keys and implement the SSH agent protocol, such as ssh-agent and gpg-agent, are also affected by this deprecation. They will need to support the newer signature algorithms based on SHA2 through signature flags.
Takeaway
It is not yet clear what “near-future release” means or when the actual deprecation will happen. However, when it does happen, the algorithm will only be disabled by default, and individual users can enable it for host key verification with the option HostKeyAlgorithms=+ssh-rsa, if needed.
There isn't much clarity on how to enable the “ssh-rsa” signature algorithm for user authentication if it also gets disabled by default. It is possible that, through the discovery process, the OpenSSH client may decide to use the deprecated algorithm. If that doesn't happen, try the option PubkeyAcceptedKeyTypes=+ssh-rsa, as sketched below.
When authenticating with an RSA key against a server that does not implement the “server-sig-algs” extension, clients MAY default to an “ssh-rsa” signature to avoid authentication penalties. When the new rsa-sha2-* algorithms have been sufficiently widely adopted to warrant disabling “ssh-rsa”, clients MAY default to one of the new algorithms.
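Putting the two options together, a per-connection sketch might look like the following. The host name is a placeholder, and on newer OpenSSH releases PubkeyAcceptedKeyTypes has been renamed PubkeyAcceptedAlgorithms (the old name is kept as an alias), so treat this as illustrative rather than definitive:

$ ssh -o HostKeyAlgorithms=+ssh-rsa \
      -o PubkeyAcceptedKeyTypes=+ssh-rsa \
      user@legacy-host.example.com

The same settings can be made persistent for a single host in ~/.ssh/config:

Host legacy-host.example.com
    HostKeyAlgorithms +ssh-rsa
    PubkeyAcceptedKeyTypes +ssh-rsa

Scoping the override to the one legacy host keeps the stronger defaults everywhere else.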
It is important to note that the “ssh-rsa” signature algorithm will be deprecated only in newer OpenSSH versions, and future releases of operating systems and distributions may still have “ssh-rsa” enabled by default for a while. Unless you are using an implementation of the SSH 2 protocol other than OpenSSH, you will likely not be affected and can safely ignore this deprecation notice.
At 0th Root, we are providing a solution, 0th Root Secure Network (0SNet), that secures organizations' internal web applications with TLS client certificates. Do check out our product; it is easy to deploy and is also available as images on AWS, GCP and Azure. |
12 | The Housing Affordability Crisis Migration | For months, we have seen in the data how the large-scale shifts coming out of the Pandemic have impacted the housing markets around the country. In terms of rents, tenants have left big expensive places, such as San Francisco, Silicon Valley, Boston, or Manhattan, thereby leaving behind high vacancy rates, plunging rents, and massive churn by the stayers-behind that are chasing that free upgrade to a nicer apartment, as landlords are trying to keep their units filled.
And we have seen in the data that this outflux has created, conversely, a large-scale influx in the destination places, usually less expensive markets, and have driven up rents in those markets.
But there is a troubling component to this shift. Rents are plunging in places with some of the highest household incomes, and they’re soaring in places with much lower household incomes, thereby shifting the housing affordability crisis to people who can least afford it – and it has done so in a matter of months.
The table below lists the 16 counties (of the 100 largest counties) with year-over-year rent declines in February between 10% and 24%. It also shows the median 1-BR asking rent and the median household income, based on a new study by Zumper on the affordability issues that the shifts have caused.
Just to make a point about nothing being universal, there is also a county on the list with a large year-over-year rent decline and one of the lowest rents and one of the lowest household incomes in the US ($41,800): Hidalgo County in Texas.
In terms of median household incomes, San Francisco ($124k) and the Silicon Valley counties of Santa Clara ($133k) and San Mateo ($138k) top the list, and they also top the list in terms of rent declines. For these folks that stayed behind, renting has gotten a lot less unaffordable, so to speak.
And most of the 16 counties that had year-over-year rent whoppers between +10% and +22% are the ones with relatively modest median household incomes, compared to the most affluent counties. Particularly stunning is the 22% rent surge in Detroit and Indianapolis where median household incomes are at the low end of the spectrum, and this is going to hurt.
There is another element here. Some of the counties on this list are geographically near very expensive housing markets with high household incomes, such as the counties of Kern and San Bernardino to the Los Angeles and San Diego urban areas; or the counties of Sacramento and Fresno to the San Francisco Bay Area, indicating that people move to less expensive cities that are within a few hours’ drive from where they used to be.
Given the huge populations of places such as New York City, Los Angeles, or the Bay Area, even a relatively small outflow can easily distort the rents in smaller, less expensive markets. And as rents drop in expensive cities and soar in cheaper cities, the difference between the two collapses to where, at some point, the move isn't worth it anymore.
The chart shows the difference between 1-BR rents in San Francisco and Sacramento, which has collapsed by 50%, and the difference between 1-BR rents in San Francisco and Fresno, which has collapsed by 46%:
What has essentially taken place is a shift of renters from the most affluent counties to the less affluent counties, and once that shift started gaining momentum, rents surged in those receiving counties.
Some of these people who moved retained their jobs but had shifted to working from anywhere; and they brought their high household incomes with them, and they’re used to the much higher rents in their expensive markets, and they’re bidding up the rents and they still think they’re getting a deal.
But at the receiving end of these shifts are renters with relatively modest household incomes that now have to compete with an influx of renters with high household incomes, and they’re now facing massive and unaffordable rent increases.
|
1 | Vodafone, Amazon partner to launch 'edge computing' in UK | Vodafone, Amazon partner to launch 'edge computing' in UK June 16, 2021 11:31 AM UTC An Amazon logo is seen at its centre in Darlington, County Durham, Britain September 3, 2020. REUTERS/Lee Smith
STOCKHOLM, June 16 (Reuters) - Vodafone (VOD.L) said on Wednesday it has partnered with Amazon Web Services (AWS) (AMZN.O) to launch "edge computing" services for its business customers in the UK.
"Edge computing" uses augmented reality and machine learning to analyse bulk data where it was gathered - whether factory floor, oil rig or office space - before moving it to remote servers in the cloud. To work, it needs fast data transfers of the kind that 5G will provide.
The launch follows Vodafone's trials with companies in a range of areas, including sports technology, autonomous transport, biometric security, remote virtual reality, and factory automation.
Vodafone said under optimum conditions, the latency - time required for data to travel between two points - could be as low as 10 milliseconds, compared with an average of 75 milliseconds for 4G.
"Edge Compute and 5G is a combination no other service provider can deliver in Europe, which means we can offer something unique to our customers," said Anne Sheehan, business director, Vodafone UK.
"We have already seen new services being developed by our trialists – the potential for completely new ideas enabled by this combination is massive."
The company will initially offer low-latency "edge computing" services to customers in London and the surrounding area, as well as towns and cities including Oxford, Cambridge, Bristol and Cardiff.
Customers in Scotland and the northern regions of England will get the service in 2022. AWS has edge services in Tokyo, Daejeon, South Korea and 10 cities across the United States.
Reporting by Supantha Mukherjee, European Technology & Telecoms Correspondent, based in Stockholm; editing by Jason Neely
|
4 | BBC presenter Lisa Shaw died of Covid vaccine complications, coroner finds | An award-winning BBC radio presenter died as a result of complications from the AstraZeneca coronavirus vaccine, a coroner has concluded.
Lisa Shaw, who worked for BBC Radio Newcastle, died at the city’s Royal Victoria Infirmary in May, a little more than three weeks after her first dose of the vaccine developed by academics at the University of Oxford.
The inquest heard that Shaw, 44, had been admitted to hospital after doctors investigating her complaints of headaches found she had suffered a brain haemorrhage.
Karen Dilks, the senior coroner for Newcastle, gave a narrative conclusion. “Lisa died due to complications of an AstraZeneca Covid vaccine,” she said.
Shaw, who was referred to by her married name, Lisa Eve, during the hearing, started complaining of headaches a few days after her vaccination. She eventually visited a hospital A&E department in Durham, where she was diagnosed with a blood clot.
She was transferred to the Royal Victoria Infirmary where she received a number of treatments, including cutting away part of her skull to relieve the pressure on her brain, but despite those efforts she died on 21 May.
Her husband, Gareth Eve, attended the inquest with other members of the family.
Tuomo Polvikoski, a pathologist, told the coroner Shaw was fit and healthy before receiving the vaccine. Asked about the underlying cause of the fatal clotting on her brain, he said the clinical evidence “strongly supports the idea that it was, indeed, vaccine induced”.
“Based on available clinical information, it seems to be the most likely explanation,” he said.
Shaw’s death came weeks after the UK’s vaccine advisory panel restricted use of the Oxford/AstraZeneca vaccine to the over-40s, after rare reports of recipients developing unusual blood clots with low platelets. A number of other countries imposed similar restrictions or suspended use of the vaccine entirely.
Deaths linked to the clots are even rarer. There have been 72 deaths in the UK after 24.8 million first doses and 23.9 million second doses of the AstraZeneca vaccine.
Dr Alison Cave, the chief safety officer with the Medicines and Healthcare products Regulatory Agency which approves vaccines for use in the UK, said the benefits of Covid jabs outweighed the risks and urged people to come forward for vaccination if they are eligible. She said: “Lisa Shaw’s death is tragic and our thoughts are with her family.
“As with any serious suspected side effects, reports of fatalities are evaluated by us, including an assessment of post-mortem details if available. We will be reviewing the coroner’s verdict.”
The family issued a statement, which read: “This is another difficult day in what has been a devastating time for us. The death of our beloved Lisa has left a terrible void in our family and in our lives.
“She truly was the most wonderful wife, mum, daughter, sister and friend. We have said all we want to say in public at this time and ask to be left alone to grieve and rebuild our lives in private. Thank you.” |
1 | The Controversy Around Tether USDT – What Are the Tether Problems? | Many of us may have used or come across the name bitcoin at one point in our lives. It is arguably one of the most famous forms of cryptocurrency going around today and will have a huge impact on the way people transact in the future.
But the real problem with bitcoin and cryptocurrencies is their high volatility. As long as these coins are not stable, we cannot really use them as a medium of exchange. Think about it: would you like to receive your salary in bitcoin, or pay someone else's salary in bitcoin?
To address this, stablecoins were created to keep cryptocurrency valuations stable. The most widely used and popular stablecoin is Tether USDT.
But there's a huge controversy around USDT and the company behind the coin, Tether Limited. So, let's have a look at the main issues with this coin, and whether it could be a huge pyramid scheme that will collapse at some point.
As noted before, there are many types of cryptocurrencies in the market today. Founded in 2014, Tether is among these digital assets and is currently the fourth-largest digital coin by market capitalization, with a total market cap of around $71.1 billion.
However, unlike Bitcoin and these other cryptocurrencies, Tether is a bit different. It is referred to as a stablecoin, meaning it is tied to a fiat currency or a commodity (Tether Gold). In other words, Tether is tied directly to the price of a real-world asset, which in this case is the U.S. dollar.
Because other cryptocurrencies fluctuate frequently, sometimes with dramatic volatility, this form of currency gives crypto investors a smoother path to stability and more assurance about their investment.
It may come as a surprise to many of us to hear that Bitcoin is not the most traded cryptocurrency in the digital market today. Initially launched in 2014, Tether is the backbone of the digital economy: it dominates digital trading and ranks as the most traded asset in the market. For example, Bitcoin clocked $34.12 billion in volume, while Tether recorded approximately $84.8 billion as of November 3, 2021.
Unfortunately, the most important coin in the market today is also mired in controversy. Leading market participants and critics have expressed concern over the transparency of the coin and the fundamental problem of issuing such large numbers of USDT. Following several investigations and audits of the company's holdings, reports have emerged about its unwillingness to hand over all its data to clear up these concerns. That has left critics suspecting that the coins are not backed the way the company claims.
The controversy has landed Tether in hot water with the United States Department of Justice, which has launched an investigation into possible fraud by its executives. On July 21, 2021, reports noted that investigators were scrutinizing whether the company had concealed important information about bank transactions directly linked to crypto.
Notably, this is not the first time Tether has found itself in this kind of situation. The company has been shrouded in controversy since its start, largely due to speculation that it amounts to a sort of Ponzi scheme. After all, critics argue, a private company is unlikely to hold that amount of USD, meaning that if something goes wrong in the future, USDT could end up with no value.
What's more, in 2018 the company failed to complete an audit that could have confirmed its tokens were backed by actual fiat currency. What followed were controversial reports about its possible role in artificially inflating the bull run of the previous year, helping Bitcoin reach a record high in 2017.
The recent controversies are once again painting a negative picture of the company. Critics of USDT now claim the currency lacks genuine backing by the U.S. dollar. Their central argument is that it artificially props up the market, and that this will eventually lead to its demise and the collapse of the entire crypto market.
Despite Tether Limited's claims, there are reasons to believe USDT is not 100% backed by the U.S. dollar. The company itself has given enough cause to think the backing is not purely in dollars. Also, let's be realistic about it: holding that much cash is simply not plausible.
As quoted in Forbes, the company made a series of changes to its transparency statement following criticism about what backed the coin. The first version referred to a "professional audit" of the reserves; it was later changed to "professional verification," which also drew criticism. The company finally settled on reporting "daily" reserves backing the currency.
However, some investors remain enthusiastic about the crypto markets and believe in Tether's reserves. USDT remains the main channel through which dollars move in and out of crypto exchanges, even though the backing does not appear entirely credible.
Despite the many controversies, Tether moves on. The investigations are still ongoing, which makes it impossible to know the truth of the matter for now. To be sure, Tether remains one of the most significant coins in the market right now, and if you are a crypto trader, you more or less need USDT. In that sense, having this coin in the crypto market is a good thing.
The bad, and perhaps the ugly, side of all this is what an eventual collapse would spell for investors. Some think a day will come when it collapses completely. If that happens, the future of the cryptocurrency market itself is not assured either, although many believe cryptos can replace money in the future. As a backbone of the digital economy, Tether would probably take the entire crypto market down with it.
Come what may, this battle will go on, but Tether will keep strengthening its market value to support the growing interest in digital investment worldwide.
|
1 | In pictures: Container ship blocks Egypt's Suez Canal | In pictures: Container ship blocking the Suez Canal finally on the move
The huge container ship is four football fields long (EPA)
After almost a week blocking the Suez Canal, a 400m-long (1,300ft) container ship is finally on the move again.
The Ever Given, operated by Evergreen Marine Corp, became stuck last Tuesday during a sandstorm.
For days it was lodged diagonally on one of the world's key shipping lanes, causing traffic to build up and other ships to be rerouted.
But now, after an operation involving tug boats and dredging, the vessel is fully refloated and heading north.
The Ever Given became stuck a week ago, wedging itself diagonally across the canal (EPA)
Officials from the Suez Canal Authority visited the stranded ship last week to work out a rescue plan (EPA)
Tug boats were deployed to shift the 200,000-tonne vessel (EPA)
Dredgers cleared approximately 30,000 cubic metres of sand from around the ship's hull (EPA)
Nearly 400 other vessels became stuck in a queue behind the ship (EPA)
On Monday, after days of effort, the ship was at last dislodged (AFP)
The Ever Given continued its passage north (Reuters)
Last year the Suez Canal was used by an average of 51.5 ships per day (Getty Images)
The Suez Canal, pictured here in 2017, is 193km (120 miles) long and incorporates three natural lakes (Getty Images)
|