| Column | Type | Range / distinct values |
| --- | --- | --- |
| id | int64 | 3 to 41.8M |
| url | string | lengths 1 to 1.84k |
| title | string | lengths 1 to 9.99k |
| author | string | lengths 1 to 10k |
| markdown | string | lengths 1 to 4.36M |
| downloaded | bool | 2 classes |
| meta_extracted | bool | 2 classes |
| parsed | bool | 2 classes |
| description | string | lengths 1 to 10k |
| filedate | string | 2 values |
| date | string | lengths 9 to 19 |
| image | string | lengths 1 to 10k |
| pagetype | string | 365 values |
| hostname | string | lengths 4 to 84 |
| sitename | string | lengths 1 to 1.6k |
| tags | string | 0 values |
| categories | string | 0 values |
37,828,740
https://wptavern.com/wordpress-reverts-live-preview-button-on-plugins-after-developer-backlash
WordPress Reverts Live Preview Button on Plugins After Developer Backlash
Sarah Gooding
Last week WordPress meta contributors implemented a “Live Preview” button for plugins in the official directory, with the intention of allowing users to safely test any plugin in one click. The button went live across all of WordPress.org’s 59,000+ plugins but took plugin developers by surprise as it was pushed through without any communication or input from stakeholders. The implementation was premature and failed to take into consideration the many different types of plugins that appear to be broken due to inadequate support in the Playground testing environment. Five weeks ago, Automattic-sponsored Meta team contributor Steve Dufresne commented on the ticket, “Adding that it’s likely still the case (someone else can confirm), not all plugins work in the Playground so we should build in an opt-out mechanism.” This suggestion was roundly ignored by other participants on the ticket and the Playground previews went live. It became immediately apparent that this was done without thorough testing as many plugin authors reported the previews created an unfavorable, broken experience for users. “Who decided to release Preview without posting on make.wordpress.org/plugins/ with some advanced warning to plugin devs?” WordPress developer Alan Fuller asked, starting a discussion in the #meta Slack channel. “Was that a #meta decision? Can it be reverted and due notice given?” Retired Plugins team rep Mika Epstein identified three major use cases that were missed, which she estimates will impact 30-40% of plugins not working in the Playground environment: - It won’t work for add-on plugins (ie. anything for Woo) because we have no way to identify plugin dependancies, and the sandbox won’t know to install the ‘parent’ plugin - It won’t work (well) for anything that requires a lot of customization (WooCommerce itself) - It won’t work AT ALL for anything that’s a server integration (Memcached, Redis, etc). - Multisite Participants noted that DEBUG is also set to True, allowing unrelated warnings and notices to be displayed to the visitor. “It stinks to work really hard on a plugin and then have some preview show up that makes it look totally broken when it’s not,” WordPress developer Ben Sibley said. “This feature is a neat idea, but it needs a lot more work. We’ve gone decades without live previews; why was there suddenly a rush to launch this today when it’s demonstrably unreliable? “As others have stated, this should be rolled back immediately and switched to an opt-in feature. Once it’s rolled back, work on giving plugin devs information about how the preview works so we can decide if it’s right for us or not. There is no rush to release this without proper communication and testing!” Newsletter Glue co-founder Lesley Sim requested the feature be opt-in, contending that the average user won’t have patience if something appears broken and will assume there is a problem with the plugin, not the directory or the playground. “So it ends up reflecting badly on the plugin developer, which can be really stressful for them if it means a loss in (potential) revenue/installs (yes, I understand that many people think this shouldn’t be a key concern, but it is the reality for many small plugin devs) or if they have additional support burden as a result of this feature, which is completely out of their hands,” Sim said. 
After others echoed these concerns, Automattic-sponsored contributor Alex Shiels, who implemented the feature, said he didn’t expect it would be controversial and that he was “over-optimistic about how smoothly it would work.” He deployed a commit that added an opt-out toggle so plugin committers could disable the Live Preview button. “The reason I didn’t communicate prior to deploy is, there was discussion on the ticket for a month prior; and because Playground has been live for several months now,” Shiels said. “Every published plugin in the directory has already been available for running in the Playground since well before this ticket. All I did was make it easy to get there with a single click. Apologies for catching you all off-guard.” Others requested WordPress.org implement a customizable demo link URL in the readme file, instead of turning Playground previews on for all plugins, along with many more suggestions for making the environment better for showcasing plugins. After continued pushback urging Shiels to make the feature opt-in instead of opt-out, he removed the button on Friday, October 6. “I do want to emphasize that a lot of the worry and concern wasn’t about the fact that a plugin was broken in Playground,” plugin developer Aurooba Ahmed said. “Most of us know if our plugin works in playground or not, it was that a very apparent feature was pushed to the plugin repo that affects how users evaluate plugins, without discussion and feedback from enough of the key stakeholder audiences. “I look forward to seeing how the feature is iterated upon (because ultimately it’s a fantastic concept) so that it can be useful in all the right ways for all stakeholders.” In the meantime, users who enjoy having quick access to Playground may want to check out the Chrome browser extension created by LUBUS, a development agency. It adds a “Playground” button to theme and plugin pages on WordPress.org so users can test-drive extensions with one click. When adding the opt-out toggle, Shiels commented on the ticket that plugins broken in Playground were broken before the ticket was opened and will remain that way even if the plugin does not opt into the Live Preview button. “I know the Playground team is hard at work on addressing bugs and compatibility issues there,” Shiels said. “And I intend to further improve the Live Preview support in the plugin directory to make things better for users and plugin developers alike. Many of your concerns can be addressed using Blueprints which will allow configuring and installing dependencies, importing demo content, and other neat things. I’ll work on making Blueprint support available as soon as I’ve confirmed some engineering details with the Playground team.” There is more work to be done before this feature is ready for rollout. The live preview button is currently disabled while contributors iron out compatibility issues. Another example of how techy/dev/coders are treated as more important than the end user, the customer, the consumer. Put consumers FIRST! Most of them won’t spend the time testing plugins in their environment, as all of them know that after installing a plugin and removing it, there is a lot of junk in terms of files/database entries that is left behind. This is one more example of how the environment serves coders instead of CONSUMERS. We need to put consumers/customers FIRST!
true
true
true
Last week WordPress meta contributors implemented a “Live Preview” button for plugins in the official directory, with the intention of allowing users to safely test any plugin in one cl…
2024-10-13 00:00:00
2023-10-10 00:00:00
https://wptavern.com/wp-…t-6.29.19-pm.png
article
wptavern.com
WP Tavern
null
null
4,260,435
http://news.cnet.com/8301-1023_3-57474126-93/facebook-stock-drops-on-news-of-decline-in-user-base/
CNET: Product reviews, advice, how-tos and the latest news
Jon Reed
Best of the Best Editors' picks and our top buying guides Upgrade your inbox Get CNET Insider From talking fridges to iPhones, our experts are here to help make the world a little less complicated. ## More to Explore ## Latest ### Today's NYT Connections Hints, Answers and Help for Oct. 13, #490 32 seconds ago ### Today's Wordle Hints, Answer and Help for Oct. 13, #1212 34 seconds ago ### Today's NYT Strands Hints, Answers and Help for Oct. 13, #224 37 seconds ago ### Today's NYT Mini Crossword Answers for Oct. 13 14 minutes ago ### Best Internet Providers in Indianapolis, Indiana 1 hour ago ### Best Latex Mattress of 2024, Tested and Hand-Selected by Our Experts 1 hour ago ### Best Internet Providers in Lakewood, Colorado 1 hour ago ### Best Internet Providers in Houston, Texas 2 hours ago ### Best Internet Providers in Jacksonville, Oregon 2 hours ago ### Best Internet Providers in Johnson City, Tennessee 3 hours ago ### Best Internet Providers in Hawaii 4 hours ago ### Best Walmart Holiday Deals Still Available: Last Chance for Big Savings on Tech, Home Goods and More 4 hours ago ### Best Internet Providers in Honolulu, Hawaii 4 hours ago ### The Best Spots in Your Home To Help Indoor Plants Grow 5 hours ago ### Best Internet Providers in Jacksonville, North Carolina 5 hours ago ## Our Expertise Lindsey Turrentine is executive vice president for content and audience. She has helped shape digital media since digital media was born. ## Tech ## Money ## Crossing the Broadband Divide Millions of Americans lack access to high-speed internet. Here's how to fix that. ## Energy and Utilities ## Deep Dives Immerse yourself in our in-depth stories. ## Internet Low-Cost Internet Guide for All 50 States: Despite the End of ACP, You Still Have Options 10/05/2024 ## Sleep Through the Night Get the best sleep of your life with our expert tips. ## Tech Tips Get the most out of your phone with this expert advice. ## Home ## Daily Puzzle Answers ## Living Off Grid CNET's Eric Mack has lived off the grid for over three years. Here's what he learned.
true
true
true
Get full-length product reviews, the latest news, tech coverage, daily deals, and category deep dives from CNET experts worldwide.
2024-10-13 00:00:00
2024-10-12 00:00:00
https://www.cnet.com/a/i…t=675&width=1200
website
cnet.com
CNET
null
null
8,436
http://mindfulentrepreneur.com/blog/2007/03/25/are-you-100-committed/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
897,811
http://www.ecornell.com/l-entrepreneur-video-contest
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,309,750
http://neutrondrive.logdown.com/posts/233083-google-drive-support-with-multiple-accounts
Logdown, blog things with Markdown
null
## The missing blogging platform for hackers Blog the richest content, with the least effort. Take a Tour ## Revolutionary Editor with Powerful Features Supports GitHub Flavored Markdown and LaTeX with in-editor preview, plus a handy image-uploading interface. Blogging with code blocks, tables, and even math equations has never been this easy. It auto-detects changes and prevents the window from closing, so your draft thoughts are saved up to the last second. Try the Demo now. ## Features done right Be the first to try out Logdown. Get Started Now
true
true
true
Logdown, the missing blogging platform for Hackers
2024-10-13 00:00:00
2024-01-01 00:00:00
http://logdown.com/images/og_earth_l.jpg
null
null
null
null
null
39,781,219
https://arstechnica.com/science/2024/03/health-experts-plead-for-unvaxxed-americans-to-get-measles-shot-as-cases-rise/
Health experts plead for unvaxxed Americans to get measles shot as cases rise
Beth Mole
The Centers for Disease Control and Prevention and the American Medical Association sent out separate but similar pleas on Monday for unvaccinated Americans to get vaccinated against the extremely contagious measles virus as vaccination rates have slipped, cases are rising globally and nationally, and the spring-break travel period is beginning. In the first 12 weeks of 2024, US measles cases have already matched and likely exceeded the case total for all of 2023. According to the CDC, there were 58 measles cases reported from 17 states as of March 14. But media tallies indicate there have been more cases since then, with at least 60 cases now in total, according to CBS News. In 2023, there were 58 cases in 20 states. "As evident from the confirmed measles cases reported in 17 states so far this year, when individuals are not immunized as a matter of personal preference or misinformation, they put themselves and others at risk of disease—including children too young to be vaccinated, cancer patients, and other immunocompromised people," AMA President Jesse Ehrenfeld said in a statement urging vaccination Monday. The latest data indicates that vaccination rates among US kindergarteners have slipped to 93 percent nationally, below the 95 percent target to prevent the spread of the disease. And vaccine exemptions for non-medical reasons have reached an all-time high. The CDC released a health advisory on Monday also urging measles vaccination. The CDC drove home the point that unvaccinated Americans are largely responsible for importing the virus, and pockets of unvaccinated children in local communities spread it once it's here. The 58 measles infections that have been reported to the agency so far include cases from seven outbreaks in seven states. Most of the cases are in vaccine-eligible children aged 12 months and older who are unvaccinated. Of the 58 cases, 54 (93 percent) are linked to international travel, and most measles importations are by unvaccinated US residents who travel abroad and bring measles home with them, the CDC flagged.
true
true
true
The US hit last year’s total in under 12 weeks, suggesting we’re in for a bad time.
2024-10-13 00:00:00
2024-03-19 00:00:00
https://cdn.arstechnica.…7669-scaled.jpeg
article
arstechnica.com
Ars Technica
null
null
40,695,781
https://medium.com/@marginaliant/a-third-of-my-online-college-students-are-ai-powered-spambots-now-what-91c6e34b5d11
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
17,202,761
https://sysadmincasts.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
4,606,441
http://www.rookieoven.com/2012/10/03/finding-a-voice-for-your-startup/
Finding a voice for your startup
Cally Russell
# Finding a voice for your startup Cally Russell | Wednesday October 3rd 2012 I had the great pleasure of attending the Social Media Week Glasgow retail event hosted by Incentive Media and Harper McLeod last week. The event focused on how you could use social media in regards to a retail business and it got me thinking about the ‘voice’ adopted by companies, young and old, on social media. Before setting off on my own ventures I worked for an award-winning global PR agency and one of the most important things I took from my time there was how vital it is to ensure your message comes across with the correct ‘voice’. ### What do I mean by ‘voice’? How your message sounds to others, the tone within it, the level of knowledge that is portrayed and, most importantly, a message people feel they can connect with. This is something I am putting into practice with my startup Mallzee. We’re a company that’s about online retail and changing the way people shop. Our audience via social media is one that is style conscious and with this audience getting the ‘voice’ right is vital. That’s why for everything that is user-facing, whether it’s a tweet, a flyer or an email to our prelaunch list, we think about the ‘voice’ and make sure it fits our company and, most importantly, our users. The voice we use isn’t mine. My voice simply doesn’t fit. My tone isn’t the same as our customers’ and a lot of the time our interests aren’t the same. ### Find yours We know who our customers are and we have identified keywords for our voice: Young, Popular, Chic, On Trend and High End. From here, everyone on the team thinks about these words whenever they’re doing something user facing. This is vital. That’s how our customers talk so that’s the voice we need to adopt. It’s difficult at times to make sure we always use this ‘voice’ but we know that it needs to be right or we could put off potential users. What I’m trying to say is that sometimes your own voice and your company’s voice aren’t the same thing. That’s not a bad thing, but it’s worth doing something about it. Getting your voice right gives you a much better connection to your users and potential users, and gives you an ideal opportunity to create a bond – no matter the industry. That’s what it’s all about, right? Connecting with people in a manner that they feel comfortable with and that gives you the best possible opportunity to keep their custom or win them over. So what is your voice? Have you defined it yet?
true
true
true
Blog post by Cally Russell on the RookieOven blog about Finding a voice for your startup. Read about Scottish startups and the tech community from founders.
2024-10-13 00:00:00
2012-10-03 00:00:00
/images/project-meta.png
article
rookieoven.com
RookieOven
null
null
10,492,098
http://www.au.af.mil/au/awc/awcgate/cst/bh_2013_moore.pdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,228,307
http://www.mozilla.org/en-US/firefox/23.0.1/releasenotes/
Firefox 23.0.1, See All New Features, Updates and Fixes
null
# Firefox Release Notes Release Notes tell you what’s new in Firefox. As always, we welcome your feedback. You can also file a bug in Bugzilla or see the system requirements of this release. ## Get the most recent version Download Firefox — English (US) Your system may not meet the requirements for Firefox, but you can try one of these versions: - **Download Firefox** - **Download for Linux 64-bit** - **Download for Linux 32-bit** - **Firefox for Android** - **Firefox for iOS**
true
true
true
null
2024-10-13 00:00:00
2013-08-16 00:00:00
https://www.mozilla.org/…4ad05d4125a5.png
website
mozilla.org
Mozilla
null
null
26,310,078
https://github.com/Who23/nchook
GitHub - Who23/nchook: 🪝 A hook into MacOS's notification center.
Who
A hook into macOS's notification center, to run a script when a notification is sent. With Homebrew: `brew install who23/formulae/nchook`. Start the nchook daemon with `brew services start nchook`. Create or symlink an executable file at `~/.config/nchook/nchook_script`. This script will be run as `nchook_script APP TITLE BODY TIME`. Note that TIME is a unix timestamp in seconds.
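The README leaves the contents of the hook script entirely up to you; the only documented interface is the `APP TITLE BODY TIME` argument order. As a minimal illustrative sketch (any language would do), a Python script that appends each notification to a log file could look like this:

```
#!/usr/bin/env python3
# Hypothetical ~/.config/nchook/nchook_script: logs every notification to a file.
# nchook invokes it as: nchook_script APP TITLE BODY TIME
import sys
from datetime import datetime
from pathlib import Path

app, title, body, ts = sys.argv[1:5]
when = datetime.fromtimestamp(int(ts))  # TIME is a unix timestamp in seconds

log_file = Path.home() / "notifications.log"
with log_file.open("a") as log:
    log.write(f"{when.isoformat()} [{app}] {title}: {body}\n")
```

As the README says, the file must be executable (for example, `chmod +x ~/.config/nchook/nchook_script`) so the daemon can run it.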
true
true
true
🪝 A hook into MacOS's notification center. Contribute to Who23/nchook development by creating an account on GitHub.
2024-10-13 00:00:00
2021-02-24 00:00:00
https://opengraph.githubassets.com/7da68a0ce7ad31ea65d58000e0ab44fcfdac1cc03fedb8a1018d1114f41a4150/Who23/nchook
object
github.com
GitHub
null
null
25,027,581
https://instafunc.net
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
90,023
http://www.nytimes.com/2007/12/16/technology/16goog.html?ex=1355461200&en=e8b94d40d6584db4&ei=5090&partner=rssuserland&emc=rss
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,493,793
http://analysisdesignmatrix.com/ADM0002-0003-0006-0000-WindowsWixInstaller.html
ADM0002-0003-0006-0000 Windows WiX Installer
null
**Introduction** Most applications for the Windows platform need to be installed before they can be run. When an application file is downloaded and saved on a computer it usually has an extension of ".msi". A MSI file contains the executable, auxiliary files and configuration information that updates the Windows registry to handle the running application. A relatively easy way to construct a MSI installer file is provided by the WiX (Windows Installer XML) toolset . The Matrix Windows WiX Installer model describes and generates a XML file which is processed by the WiX toolset to produce a MSI Windows Installer package that will deploy an application. This example generates the actual XML file for the Matrix Model Compiler (Learning Edition) and demonstrates non-code Model Compiler generation. Matrix is available as a free download . **Overview** Windows installer packages can be very complex. The Windows WiX Installer model explained here describes just a simple subset of WiX elements. Not only does Microsoft employ WiX to install its own software it is also commonly used by many other companies. The WiX Toolset is free from Microsoft. WiX provides a level of abstraction above what would otherwise be required to write a MSI installer package which would typically involve many low-level function calls. The Windows WiX Installer model raises the abstraction level again, admittedly not by a great deal. In other words, it models WiX elements very closely but does show how the XML tags relate. **Model Walkthrough** The Domain Bridge Diagram (DBD) shows the Matrix Installer domain. The model is presented here as an isolated domain or rather as a stand-alone model but normally it would form a low lying architectural domain of a Software Architecture meta model or Model Compiler that is designed to target the Windows platform. **Realm:Analysis_Of_Application** The Entity Relationship Diagram (ERD) shows the relationships between the various entities in the system. For the Matrix Installer domain, all the entities correspond to WiX XML elements and all relationships are described by the rather bland but true term of "Includes". The WiX XML file is very hierarchical. By following the relationship directions it can easily be seen which elements are at the top of the hierarchy and which are at the bottom and which are in between. Reflexive relationships on both the Directory and Feature entities indicate that their object instances can be embedded within themselves. For example, a Directory can be created at installation time that contains other Directories. Reflexive relationships also indicate that events are to be sent from objects of an entity to objects of the same entity in asynchronous models or recursive process includes are to be used in synchronous models. The notation used by the diagrams below is based on Shlaer-Mellor Method notation (similar to UML) . To see the full size ERD, click on the diagram. **Domain:Matrix_Installer** The Windows WiX Installer model is an asynchronous model and as such presents certain problems that arise out of the fact that the Matrix language is inherently concurrent but what is required is to sequentialise all text generation for output to a single file. A pattern that can be seen in the collection of STDs below is that of a state that asks many objects in another entity to do something via events. In this case write text out to a file. 
Obviously, the order in which the text is output from the objects can be important and so a mechanism must be found to control the normally concurrent aspect of the modeling technique. It can be seen in the Matrix model code (but not shown here) that extra attributes all called "Processed" and extra relationships called "Writing" are included to sequentialise the operation of the model. Eventually, a pattern statement will be developed to automatically instrument these extra attributes and relationships, which will hide their implementation from the developer. This technique is also found in asynchronously written Model Compilers. It is sometimes acceptable for a state to fire-and-forget events at a whole bunch of objects where the order happens to be irrelevant. In most cases the order is important. A greater problem is caused by the order of event processing which is inherently indeterminable. During processing, events may be intermixed with previously generated events in situations where the order of event handling is crucial. On the STD picture below, click once to get an expanded view of the diagram. Clicking again will reveal the same diagram, this time annotated to show the source of actual events and their destination among the collaborating entity STDs. An event-based notation is used. **Domain State Transition Diagrams** **Matrix Model** There are three different versions of the Windows WiX Installer model. The Matrix model, test scenario and generated source code files are available on GitHub: **Windows_Wix_Installer** - The asynchronous Matrix Windows WiX Installer model which splits functionality among several state machines. **Prof_Asyn_Windows_Wix_Installer** - The professional edition asynchronous model which uses process include statements to arrange the model's code in several files according to entity. **Prof_Sync_Windows_Wix_Installer** - The professional edition synchronous model that uses one event, one STD and process include statements, some of which are invoked recursively. Populating the model with configuration data (M0 Real World Objects) by hand can be laborious. However, a Domain Specific Language (DSL) could be created as part of a separate application to greatly ease this process. **Expected Output** All three versions of the model produce the identical XML WiX source file (.wxs). The file runs to over 6,000 lines.
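The walkthrough above describes the generated .wxs file as a strict hierarchy of WiX elements, with Directory and Feature able to nest inside themselves. Purely as an illustration of that hierarchy, and not of the Matrix model or its output, a hand-written Python sketch that emits a skeletal WiX source file might look like the following (element names follow the standard WiX v3 schema; the attribute values are placeholders and several required attributes are omitted):

```
# Illustrative only: builds a minimal WiX-like element hierarchy with ElementTree.
import xml.etree.ElementTree as ET

wix = ET.Element("Wix", xmlns="http://schemas.microsoft.com/wix/2006/wi")
product = ET.SubElement(wix, "Product", Name="ExampleApp", Version="1.0.0",
                        Manufacturer="Example Co", UpgradeCode="PUT-GUID-HERE")
ET.SubElement(product, "Package", InstallerVersion="200", Compressed="yes")

# Directory hierarchy: directories can nest inside directories (the reflexive
# relationship on the Directory entity mentioned in the article).
target = ET.SubElement(product, "Directory", Id="TARGETDIR", Name="SourceDir")
pf = ET.SubElement(target, "Directory", Id="ProgramFilesFolder")
install = ET.SubElement(pf, "Directory", Id="INSTALLFOLDER", Name="ExampleApp")

# A Component holds the files to install; a Feature groups components.
component = ET.SubElement(install, "Component", Id="MainExecutable", Guid="PUT-GUID-HERE")
ET.SubElement(component, "File", Id="AppExe", Source="ExampleApp.exe", KeyPath="yes")
feature = ET.SubElement(product, "Feature", Id="MainFeature", Level="1")
ET.SubElement(feature, "ComponentRef", Id="MainExecutable")

ET.ElementTree(wix).write("ExampleApp.wxs", xml_declaration=True, encoding="utf-8")
```

In the actual model this output is produced by collaborating state machines writing text in sequence; the sketch only shows how the elements nest inside one another.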
true
true
true
null
2024-10-13 00:00:00
2017-01-01 00:00:00
null
null
null
null
null
null
16,010,748
http://www.partym.org/issues/taxes/index.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
40,985,389
https://unocss.dev/presets/attributify
UnoCSS
Anthony Fu
# Attributify preset This enables the attributify mode for other presets. ## Installation `pnpm add -D @unocss/preset-attributify` `yarn add -D @unocss/preset-attributify` `npm install -D @unocss/preset-attributify` ``` import presetAttributify from '@unocss/preset-attributify' export default defineConfig({ presets: [ presetAttributify({ /* preset options */ }), // ... ], }) ``` TIP This preset is included in the `unocss` package, you can also import it from there: `import { presetAttributify } from 'unocss'` ## Attributify Mode Imagine you have this button using Tailwind CSS's utilities. When the list gets longer, it becomes really hard to read and maintain. ``` <button class="bg-blue-400 hover:bg-blue-500 text-sm text-white font-mono font-light py-2 px-4 rounded border-2 border-blue-200 dark:bg-blue-500 dark:hover:bg-blue-600"> Button </button> ``` With attributify mode, you can separate utilities into attributes: ``` <button bg="blue-400 hover:blue-500 dark:blue-500 dark:hover:blue-600" text="sm white" font="mono light" p="y-2 x-4" border="2 rounded blue-200" > Button </button> ``` For example, `text-sm text-white` could be grouped into `text="sm white"` without duplicating the same prefix. ## Prefix self-referencing For utilities like `flex` , `grid` , `border` , that have the utilities same as the prefix, a special `~` value is provided. For example: ``` <button class="border border-red"> Button </button> ``` Can be written as: ``` <button border="~ red"> Button </button> ``` ## Valueless attributify In addition to Windi CSS's attributify mode, this preset also supports valueless attributes. For example, `<div class="m-2 rounded text-teal-400" />` now can be `<div m-2 rounded text-teal-400 />` INFO Note: If you are using JSX, `<div foo>` might be transformed to `<div foo={true}>` which will make the generated CSS from UnoCSS fail to match the attributes. To solve this, you might want to try `transformer-attributify-jsx` along with this preset. ## Properties conflicts If the name of the attributes mode ever conflicts with the elements' or components' properties, you can add `un-` prefix to be specific to UnoCSS's attributify mode. For example: ``` <a text="red">This conflicts with links' `text` prop</a> <!-- to --> <a un-text="red">Text color to red</a> ``` Prefix is optional by default, if you want to enforce the usage of prefix, set ``` presetAttributify({ prefix: 'un-', prefixedOnly: true, // <-- }) ``` You can also disable the scanning for certain attributes by: ``` presetAttributify({ ignoreAttributes: [ 'text' // ... ] }) ``` ## TypeScript support (JSX/TSX) Create `shims.d.ts` with the following content: By default, the type includes common attributes from `@unocss/preset-uno` . If you need custom attributes, refer to the type source to implement your own type. ### Vue Since Volar 0.36, it's now strict to unknown attributes. 
To opt-out, you can add the following file to your project: ``` declare module '@vue/runtime-dom' { interface HTMLAttributes { [key: string]: any } } declare module '@vue/runtime-core' { interface AllowedComponentProps { [key: string]: any } } export {} ``` ### React ``` import type { AttributifyAttributes } from '@unocss/preset-attributify' declare module 'react' { interface HTMLAttributes<T> extends AttributifyAttributes {} } ``` ### Vue 3 ``` import type { AttributifyAttributes } from '@unocss/preset-attributify' declare module '@vue/runtime-dom' { interface HTMLAttributes extends AttributifyAttributes {} } ``` ### SolidJS ``` import type { AttributifyAttributes } from '@unocss/preset-attributify' declare module 'solid-js' { namespace JSX { interface HTMLAttributes<T> extends AttributifyAttributes {} } } ``` ### Svelte & SvelteKit ``` declare namespace svelteHTML { import type { AttributifyAttributes } from '@unocss/preset-attributify' type HTMLAttributes = AttributifyAttributes } ``` ### Astro ``` import type { AttributifyAttributes } from '@unocss/preset-attributify' declare global { namespace astroHTML.JSX { interface HTMLAttributes extends AttributifyAttributes { } } } ``` ### Preact ``` import type { AttributifyAttributes } from '@unocss/preset-attributify' declare module 'preact' { namespace JSX { interface HTMLAttributes extends AttributifyAttributes {} } } ``` ### Attributify with Prefix ``` import type { AttributifyNames } from '@unocss/preset-attributify' type Prefix = 'uno:' // change it to your prefix interface HTMLAttributes extends Partial<Record<AttributifyNames<Prefix>, string>> {} ``` ## Options ### strict **type:**`boolean` **default:**`false` Only generate CSS for attributify or class. ### prefix **type:**`string` **default:**`'un-'` The prefix for attributify mode. ### prefixedOnly **type:**`boolean` **default:**`false` Only match for prefixed attributes. ### nonValuedAttribute **type:**`boolean` **default:**`true` Support matching non-valued attributes. ### ignoreAttributes **type:**`string[]` A list of attributes to be ignored from extracting. ### trueToNonValued **type:**`boolean` **default:**`false` Non-valued attributes will also match if the actual value represented in DOM is `true` . This option exists for supporting frameworks that encodes non-valued attributes as `true` . Enabling this option will break rules that ends with `true` . ## Credits Initial idea by @Tahul and @antfu. Prior implementation in Windi CSS by @voorjaar.
true
true
true
The UnoCSS preset that enables the attributify mode for other presets.
2024-10-13 00:00:00
2024-09-14 00:00:00
https://unocss.dev/og.png#1
website
unocss.dev
Antfu7
null
null
3,145,613
http://divisbyzero.com/2011/10/07/the-danger-of-false-positives/
The danger of false positives
Dave Richeson
As I mentioned earlier, I’m teaching a first-year seminar this semester called “Science or Nonsense?” On Monday and Wednesday this week we discussed some math/stats/numeracy topics. We talked about the Sally Clark murder trial, the prosecutor’s fallacy, the use of DNA testing in law enforcement, Simpson’s paradox, the danger of false positives, and the 2009 mammogram screening recommendations. I made a GeoGebra applet to illustrate the dangers of false positives. So I thought I’d share that here. Here’s the statement of the problem. Suppose Lenny Oiler visits his doctor for a routine checkup. The doctor says that he must test all patients (regardless of whether they have symptoms) for a rare disease called *analysisitis*. (This horrible illness can lead to severe pain in a patient’s epsylawns and del-tahs. It should not be confused with *analysis situs*.) The doctor says that the test is 99% effective when given to people who are ill (the probability the test will come back positive) and it is 95% effective when given to people who are healthy (it will come back negative). Two days later the doctor informs Lenny that the test came back positive. **Should Lenny be worried?** Surprisingly, we do not have enough information to answer the question, and Lenny (being pretty good at math) realizes this. After a little investigating he finds out that approximately 1 in every 2000 people have analysisitis (about 0.05% of the population). **Now should Lenny be worried?** Obviously he should take notice because he tested positive. But he should not be too worried. It turns out that there is less than a 1% chance that he has analysisitis. Notice that there are four possible outcomes for a person in Lenny’s position. A person is either ill or healthy and the test may come back positive or negative. The four outcomes are shown in the chart below.

| Test result | Ill | Healthy |
| --- | --- | --- |
| Positive | true positive | false positive |
| Negative | false negative | true negative |

Obviously, the two red boxes are the ones to worry about because the test is giving the incorrect result. But in this case, because the test came back positive, we’re interested in the top row. For simplicity, suppose the city that is being screened has a population of 1 million. Then approximately (1000000)(0.0005)=500 people have the illness. Of these (500)(.99)=495 will test positive and (500)(0.01)=5 will test negative. Of the 999,500 healthy people (999500)(.05)=49975 will test positive and (999500)(.95)=949525 will test negative. This is summarized in the following chart.

| Test result | Ill | Healthy |
| --- | --- | --- |
| Positive | 495 | 49975 |
| Negative | 5 | 949525 |

Thus, 495+49975=50470 people test positive, and of these only 495 are ill. So the chance that a recipient of a positive test result is sick is 495/50470=0.0098=0.98%. That should seem shockingly low! I wonder how many physicians are aware of this phenomenon. You can try out this or other examples using this GeoGebra applet that I made. I really like the use of GeoGebra to illustrate this. This is one of those questions (not uncommon in probability) where you are fighting against “common sense” (which sometimes turns out to be nonsense :). I modeled a similar problem in Fathom for use with a senior high school class a few years back & describe it here: http://www.mathrecreation.com/2009/03/false-positives.html Hi, Your analysis reminds me of Bayes’ theory, does it not? But I really like your game-like table approach of explaining the problem. I have never thought of this.
I show this in my class as an application of Bayes’ rule – although now I am wondering if the formal algebraic manipulations actually do more to obscure rather than illuminate the intuitive reasons for this phenomena. Leonard Mlodinow discusses the same thing in “The Drunkard’s Walk”, but this line of argument is a revelation of the ideas behind Baye’s Theorem. Finally, GeoGebra applet is a great idea. can you please put download link for the geogebra worksheet as some of us do not have working Java. Your analysis reminds me of Bayes’ theory, does it not?When I’ve written about this phenomenon, I’ve called the disease “Bayesianitis” ! Hi, I don’t understand why, in your conclusions, when you count people who test positive, you consider 500 + 49975 instead of 495 + 49975. Obviously the percentage concerning “the chance that a recipient of positive test result is sick” rises a little, but I would know if I was wrong! Thank u, your posts are always a shaking reading. Thanks for catching that. It is fixed now.
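Since several commenters bring up Bayes’ rule, here is a short, illustrative Python version of the same calculation (not part of the original post; the numbers are the ones used above):

```
# Reproduces the false-positive calculation from the post using Bayes' rule.
prevalence = 1 / 2000          # P(ill) = 0.05%
sensitivity = 0.99             # P(positive | ill)
specificity = 0.95             # P(negative | healthy)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_ill_given_positive = sensitivity * prevalence / p_positive
print(f"P(ill | positive) = {p_ill_given_positive:.4f}")   # about 0.0098, i.e. 0.98%

# The same thing by counting, as in the post's city of one million people:
ill = 1_000_000 * prevalence                        # 500 people
true_pos = ill * sensitivity                        # 495
false_pos = (1_000_000 - ill) * (1 - specificity)   # 49,975
print(true_pos / (true_pos + false_pos))            # 495 / 50,470, about 0.0098
```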
true
true
true
As I mentioned earlier, I’m teaching a first-year seminar this semester called “Science or Nonsense?” On Monday and Wednesday this week we discussed some math/stats/numeracy topic…
2024-10-13 00:00:00
2011-10-07 00:00:00
https://divisbyzero.com/…t-1-53-34-pm.png
article
divisbyzero.com
David Richeson: Division by Zero
null
null
1,308,512
http://scienceblogs.com/pharyngula/2010/04/they_arent_doing_the_right_tes.php
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,769,300
https://medium.freecodecamp.com/lessons-learned-from-leading-women-in-tech-organizations-37eca542b2a5#.5fdyduuo2
Lessons Learned from Leading Women in Tech Organizations
Freecodecamp
By Alaina Kafkes Without keeping life’s stressors in check, any well-intentioned community leader could crumble as an individual while simultaneously attempting to serve their community. This article is part-guidebook, part-reflection on my leadership of women in technology communities at Northwestern. I’ve codified some of the lessons I’ve had to learn — sometimes the hard way — into eight points of advice for future leaders. Though I speak from my experience as a woman in leadership, many of these points are *not* unique to women. Other underrepresented minorities may find them relatable, too. #### My Background in Code and Community-Building My interests in coding and community-building have always gone hand in hand. Before I even declared my computer science major, I joined the executive board of Northwestern University’s Women in Computing (WiC). After an unforgettable experience at #GHC15, I sought to build WiC into a women in technology *community* rather than a club. I piloted an undergraduate-graduate mentorship program. I connected groups of WiC members with local technology-oriented non-profits like She Is Code. Soon after being chosen as co-president of WiC, I approached my fellow WiC co-president with an ambitious idea: founding a newcomer-friendly, interdisciplinary women’s hackathon right here in Chicago. And just a few days ago, we released hacker applications for the inaugural BuildHer. I consider myself lucky to have played a part in the growth of WiC and the instantiation of BuildHer. When I graduate Northwestern, I firmly believe that these organizations will continue their upward trends in attracting and satisfying members. I feel immense joy when I envision future generations of Northwestern women advancing the community-building missions of WiC and BuildHer. Last week, WiC opened up applications for its executive board — the first one I won’t be a part of. Although I have many incredible memories from WiC and BuildHer, it would be irresponsible of me to treat the role of leader as the “paragon” amongst women in technology. Yes, many women in the WiC community seemed to know and admire me, but leading women in technology communities did not make me “perfect” — far from it. At times, I struggled to balance my commitments, and I felt like I couldn’t give my all to WiC and BuildHer, let alone my academics, my professional opportunities, my friends and family, and my well-being. **But serving as a women in technology community leader has taught me so much more than it has taken from me.** Without further ado, here’s my advice for future community-builders and leaders. **Lesson #1: Running women in technology communities can lead to the erasure of the leader’s academic and/or vocational passions within technology.** Championing the cause of women in technology can feel all-consuming for individuals, and this sentiment is only exacerbated amongst community-builders. Because I spent so much of my energy shaping WiC and supporting its members, I did not seek out opportunities to pursue other technological interests beyond advocating for women in technology. My primary identity in technology communities became “woman” or “leader” even though I have a whole host of tech-y passions such as health tech and blockchain. Though I stand in solidarity with *all* women in technology, I believe that serving as a leader in organizations like WiC and BuildHer temporarily usurped my passions and diminished my identity. 
Only after I realized this and re-evaluated how I spent my time was I able to rekindle my other computing interests. **Lesson #2: Select future leaders based on passion, vision, and empathy, not raw technical ability.** Meg — my BuildHer co-founder — opened my eyes to these first two criteria as we sifted through applications for potential BuildHer co-organizers. One of her insights in particular struck me: successful, dedicated women in technology leaders should care as much — if not more! — about the cause as about coding itself. Choosing our fellow BuildHer co-organizers based off the aspirations and dreams they shared with us has blessed us with a board that has devoted themselves to making the inaugural BuildHer successful. Case in point: a few BuildHer co-organizers stayed up all night to paint Northwestern’s famous rock with the BuildHer logo and hashtag to increase our campus presence. Aside from passion and vision, I firmly believe that empathy is another key trait to select for in women in technology leaders. I’ll discuss that in more detail below. **Lesson #3: All leaders lead, but the best leaders *must* empathize.** This is especially true for community-driven organizations like WiC. I am proud of the many milestones that WiC hit while I was co-president — how we grew WiC from 10 attendees to over 70 at our biggest events, for instance — but they cannot hold a candle to the heart-to-hearts that I’ve had with WiC members about our hopes, dreams, struggles, and fears. I believe that my greatest impact on WiC — or, perhaps on Northwestern’s computer science community as a whole — has been convincing dozens of women that they belong in this hyper-competitive albeit exciting and rewarding industry. **WiC Whiteboards: Why do you support women in STEM?** _Thank you to everyone that came out to take pictures and express their support for women in STEM!_ (www.facebook.com) In ten years, people are more likely to remember the personal impact someone had on them as a leader rather than the specific initiatives implemented by that person. This principle applies beyond the scope of running women in technology communities, so I hope to carry it with me throughout life. **Lesson #4: There *is* a secret formula for increasing participation in women in technology communities.** I’ve found a few common characteristics shared by WiC’s most attended events between 2015 and 2017. They are: - Plan, advertise, and execute a killer back-to-school event that gets members hyped about forthcoming events and opportunities - Host events frequently and at consistent times/locations - Brainstorm events that will share a skill, teach a new concept, or otherwise add value to your community - Encourage regular, candid feedback from both executive and general members to incorporate into future event planning Oh, and don’t forget to throw a non-technical social event from time to time! According to WiC’s most recent feedback survey, community tops the list of things WiC members wish to gain by being a part of WiC. **Lesson #5: Use your network to connect people to each other and to the overarching community.** Though I knew almost every WiC member by name, not everyone else did. Often, I would experience major déjà vu when talking to a WiC member, only to realize that I’d had a similar conversation with another WiC member recently!
I would capitalize on these opportunities to create and strengthen friendships within WiC by introducing members with related interests to one another. Besides building a interwoven community, connecting members with related interests can also drive innovation and entrepreneurship. Recent example: I connected two sophomores who wanted to further explore health tech, and now they are coding a menstrual health app at the Feinberg School of Medicine. How rad is that? When members of a community bond, they tend to stick with the greater community, too; even if someone pursued tech out of genuine interest, the friendships that they will build in organizations like WiC can convince them to stay in tech. **Lesson #6: Be wary of hyper-competitiveness.** Personal accounts that I’ve heard from the women of WiC suggest that being among the only women in a class, research group, or other community can amplify the pressure to be the “women in tech paragon.” I put this term in quotes because, truly, there’s no such thing; each member of WiC and BuildHer adds value to these communities thanks to their unique passions, commitments, backgrounds, and goals. Pressure to be the exemplary woman in tech — to prove oneself in a male-dominated field — infects almost every woman. I’ve even seen some of the negative repercussions of this syndrome at Grace Hopper. But how can a leader detect the difference between a competitive individual and someone striving to be the “women in tech paragon?” My advice to women in tech leaders is to read the writing of other women in technology prolifically, and share pieces that put in perspective the feelings of loneliness and unworthiness that may inspire the “women in tech paragon” syndrome. I’ve shared articles (like this one) over the BuildHer email and in WiC’s private Facebook group in order to provide further support for aspiring women in tech at Northwestern. **Lesson #7: Make newcomers and less experienced members feel welcome.** Don’t just say hi to any new faces at every event; ask for their name and if they’d like to connect with your women in technology organization over email, on social media, et cetera. Reach out to women in introductory computer science courses by asking a professor or teaching assistant to share your event calendar or contact information over Piazza. For tech talks or other skill-building events, teach a beginner-level skill at the beginning and then move onto a more intermediate- or advanced-level topic for the remainder of the event. This brings me to my final point… **Lesson #8: Be inclusive.** Intersectionality is much more than a buzzword; understanding and catering to intersectional identities is a must in women in technology communities. Being a “woman” is just one aspect of a person’s identity, so not all members of a women in technology organization will have the same needs. Some steps that WiC has taken to better serve intersectional identities within our membership include organizing a Friday night shabbat during #GHC16 and advertising fellowships like CODE2040… but we could do *so* much better. That’s why for BuildHer, the co-organizers have committed to bringing in speakers that reflect the full spectrum of diversity amongst BuildHer hackers — not just one type of woman. Leading WiC and founding BuildHer has been an honor, as is sharing some of the wisdom that I have accrued along the way. Future women in technology leaders: know that organizing and building a community may not be all rosy, but it is quite rewarding. 
No matter what you do with this advice, I hope you soar. *If this piece resonated with you, please share & recommend. Any questions? Reach out to me on Twitter. Thank you Amy Chen, Steven Bennett, Rohan Verma, Alyss Noland, and Maxcell Wilson for sharing your feedback as I wrote and revised this piece.*
true
true
true
By Alaina Kafkes Without keeping life’s stressors in check, any well-intentioned community leader could crumble as an individual while simultaneously attempting to serve their community. This article is part-guidebook, part-reflection on my leadershi...
2024-10-13 00:00:00
2017-02-28 00:00:00
https://cdn-media-1.free…JDT6IfvtS-w.jpeg
article
freecodecamp.org
freeCodeCamp.org
null
null
29,976,476
https://infopages.traveldoc.aero/information/coronavirus
- InfoPage
null
To help us improve our customer experience we collect and analyze information about our website traffic. By clicking on "Accept" you agree to set cookies storing information relating to all technologies outlined in our Privacy Policy. You can remove your consent at any time by clicking on Remove Cookie Consent at the foot of the page.
true
true
true
null
2024-10-13 00:00:00
2024-01-01 00:00:00
null
null
null
null
null
null
3,242,975
http://lcamtuf.blogspot.com/2011/11/tangled-web-is-out.html
"The Tangled Web" is out
null
The discount code `939758568` gets you 30% off. No Starch provides a complimentary, DRM-free PDF, Mobi, and ePub bundle with every paper copy; you can also buy the e-book edition separately. Kindle and other third-party formats should be available very soon. More info about the book itself, including a sample chapter, can be found on this page. Are there any plans yet for translation to other languages? It's up to individual publishers to buy the rights, translate, and publish it in other countries. About the best advice I can give is that if you want to see it happen, you should probably check with your local IT publisher :-) Amazon is quoting December the 12th for availability here in the UK. The sample chapter was superb (not sure I ever thought I'd be applying that superlative to a chapter on HTTP). I love the characterisation of some of the more esoteric HTTP request methods as "thought experiments". FWIW, if you're in the UK, you can order a print copy from No Starch at 40% off, with shipping around ~$10. Probably comparable in price, and you get a free PDF right away.
true
true
true
Okay, okay, it's official. You can now buy The Tangled Web from Amazon , Barnes & Noble , and all the other usual retailers for around $30....
2024-10-13 00:00:00
2011-11-15 00:00:00
https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_vWGKss8UzWl3I2FHc40Vp4z_SxlqpWZ3bpEFU9wmb3HO_nDL0uiXRbVN55vZZ31JycovaX5pe2ZMRccTys0kIHiDNCL_ro9LDwSOenVVyFu6TwqSkUvQ=w1200-h630-p-k-no-nu
null
blogspot.com
lcamtuf.blogspot.com
null
null
3,681,116
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Appendix D
null
Numerical Computation Guide | ## What Every Computer Scientist Should Know About Floating-Point Arithmetic Note This appendix is an edited reprint of the paperWhat Every Computer Scientist Should Know About Floating-Point Arithmetic, by David Goldberg, published in the March, 1991 issue of Computing Surveys. Copyright 1991, Association for Computing Machinery, Inc., reprinted by permission. ## Abstract Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising because floating-point is ubiquitous in computer systems. Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on those aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating-point standard, and concludes with numerous examples of how computer builders can better support floating-point. Categories and Subject Descriptors: (Primary) C.0 [Computer Systems Organization]: General -- instruction set design; D.3.4 [Programming Languages]: Processors --compilers, optimization; G.1.0 [Numerical Analysis]: General --computer arithmetic, error analysis, numerical algorithms(Secondary)D.2.1 [Software Engineering]: Requirements/Specifications -- languages; D.3.4 Programming Languages]: Formal Definitions and Theory --semantics; D.4.1 Operating Systems]: Process Management --synchronization.General Terms: Algorithms, Design, Languages Additional Key Words and Phrases: Denormalized number, exception, floating-point, floating-point standard, gradual underflow, guard digit, NaN, overflow, relative error, rounding error, rounding mode, ulp, underflow. ## Introduction Builders of computer systems often need information about floating-point arithmetic. There are, however, remarkably few sources of detailed information about it. One of the few books on the subject, Floating-Point Computationby Pat Sterbenz, is long out of print. This paper is a tutorial on those aspects of floating-point arithmetic (floating-pointhereafter) that have a direct connection to systems building. It consists of three loosely connected parts. The first section, Rounding Error, discusses the implications of using different rounding strategies for the basic operations of addition, subtraction, multiplication and division. It also contains background information on the two methods of measuring rounding error, ulps and`relative` `error` . The second part discusses the IEEE floating-point standard, which is becoming rapidly accepted by commercial hardware manufacturers. Included in the IEEE standard is the rounding method for basic operations. The discussion of the standard draws on the material in the section Rounding Error. The third part discusses the connections between floating-point and the design of various aspects of computer systems. Topics include instruction set design, optimizing compilers and exception handling.I have tried to avoid making statements about floating-point without also giving reasons why the statements are true, especially since the justifications involve nothing more complicated than elementary calculus. 
Those explanations that are not central to the main argument have been grouped into a section called "The Details," so that they can be skipped if desired. In particular, the proofs of many of the theorems appear in this section. The end of each proof is marked with the z symbol. When a proof is not included, the z appears immediately following the statement of the theorem. ## Rounding Error Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation. The section Relative Error and Ulps describes how it is measured. Since most floating-point calculations have rounding error anyway, does it matter if the basic arithmetic operations introduce a little bit more rounding error than necessary? That question is a main theme throughout this section. The section Guard Digits discusses guarddigits, a means of reducing the error when subtracting two nearby numbers. Guard digits were considered sufficiently important by IBM that in 1968 it added a guard digit to the double precision format in the System/360 architecture (single precision already had a guard digit), and retrofitted all existing machines in the field. Two examples are given to illustrate the utility of guard digits.The IEEE standard goes further than just requiring the use of a guard digit. It gives an algorithm for addition, subtraction, multiplication, division and square root, and requires that implementations produce the same result as that algorithm. Thus, when a program is moved from one machine to another, the results of the basic operations will be the same in every bit if both machines support the IEEE standard. This greatly simplifies the porting of programs. Other uses of this precise specification are given in Exactly Rounded Operations. ## Floating-point Formats Several different representations of real numbers have been proposed, but by far the most widely used is the floating-point representation. 1Floating-point representations have a base (which is always assumed to be even) and a precisionp. If = 10 andp= 3, then the number 0.1 is represented as 1.00 × 10-1. If = 2 andp= 24, then the decimal number 0.1 cannot be represented exactly, but is approximately 1.10011001100110011001101 × 2-4.In general, a floating-point number will be represented as ± (1) .d.dd... d×, whereed.dd... dis called thesignificandand has2pdigits. More precisely ±d0. d1d2...dp-1×represents the numbere The term floating-point numberwill be used to mean a real number that can be exactly represented in the format under discussion. Two other parameters associated with floating-point representations are the largest and smallest allowable exponents,emaxandemin. Since there areppossible significands, andemax-emin+ 1 possible exponents, a floating-point number can be encoded in bits, where the final +1 is for the sign bit. The precise encoding is not important for now. There are two reasons why a real number might not be exactly representable as a floating-point number. 
The most common situation is illustrated by the decimal number 0.1. Although it has a finite decimal representation, in binary it has an infinite repeating representation. Thus when = 2, the number 0.1 lies strictly between two floating-point numbers and is exactly representable by neither of them. A less common situation is that a real number is out of range, that is, its absolute value is larger than × or smaller than 1.0 × . Most of this paper discusses issues due to the first reason. However, numbers that are out of range will be discussed in the sections Infinity and Denormalized Numbers. Floating-point representations are not necessarily unique. For example, both 0.01 × 10 1and 1.00 × 10-1represent 0.1. If the leading digit is nonzero (d00 in equation (1) above), then the representation is said to benormalized. The floating-point number 1.00 × 10-1is normalized, while 0.01 × 101is not. When = 2,p= 3,emin= -1 andemax= 2 there are 16 normalized floating-point numbers, as shown in FIGURE D-1. The bold hash marks correspond to numbers whose significand is 1.00. Requiring that a floating-point representation be normalized makes the representation unique. Unfortunately, this restriction makes it impossible to represent zero! A natural way to represent 0 is with 1.0 × , sincethis preserves the fact that the numerical ordering of nonnegative real numbers corresponds to the lexicographic ordering of their floating-point representations.3When the exponent is stored in akbit field, that means that only 2- 1 values are available for use as exponents, since one must be reserved to represent 0.kNote that the × in a floating-point number is part of the notation, and different from a floating-point multiply operation. The meaning of the × symbol should be clear from the context. For example, the expression (2.5 × 10 -3) × (4.0 × 102) involves only a single floating-point multiplication. FIGURE D-1 Normalized numbers when = 2,p= 3,emin= -1,emax= 2 ## Relative Error and Ulps Since rounding error is inherent in floating-point computation, it is important to have a way to measure this error. Consider the floating-point format with = 10 and p= 3, which will be used throughout this section. If the result of a floating-point computation is 3.12 × 10-2, and the answer when computed to infinite precision is .0314, it is clear that this is in error by 2 units in the last place. Similarly, if the real number .0314159 is represented as 3.14 × 10-2, then it is in error by .159 units in the last place. In general, if the floating-point numberd.d...d×is used to representez, then it is in error byd.d...d- (z/)eunits in the last place.p-14,5The termulpswill be used as shorthand for "units in the last place." If the result of a calculation is the floating-point number nearest to the correct result, it still might be in error by as much as .5 ulp. Another way to measure the difference between a floating-point number and the real number it is approximating isrelative error, which is simply the difference between the two numbers divided by the real number. For example the relative error committed when approximating 3.14159 by 3.14 × 100is .00159/3.14159 .0005.To compute the relative error that corresponds to .5 ulp, observe that when a real number is approximated by the closest possible floating-point number (2)d.dd...dd×,ethe error can be as large as 0.00...00' ×, where ' is the digit /2, there areepunits in the significand of the floating-point number, andpunits of 0 in the significand of the error. 
This error is ((β/2)β^-p) × β^e. Since numbers of the form d.dd...dd × β^e all have the same absolute error, but have values that range between β^e and β × β^e, the relative error ranges between ((β/2)β^-p) × β^e/β^e and ((β/2)β^-p) × β^e/β^(e+1). That is,

(1/2)β^-p ≤ 1/2 ulp ≤ (β/2)β^-p

In particular, the relative error corresponding to .5 ulp can vary by a factor of β. This factor is called the wobble. Setting ε = (β/2)β^-p to the largest of the bounds in (2) above, we can say that when a real number is rounded to the closest floating-point number, the relative error is always bounded by ε, which is referred to as machine epsilon.

In the example above, the relative error was .00159/3.14159 ≈ .0005. In order to avoid such small numbers, the relative error is normally written as a factor times ε, which in this case is ε = (β/2)β^-p = 5(10)^-3 = .005. Thus the relative error would be expressed as ((.00159/3.14159)/.005)ε ≈ 0.1ε.

To illustrate the difference between ulps and relative error, consider the real number x = 12.35. It is approximated by x̄ = 1.24 × 10^1. The error is 0.5 ulps, the relative error is 0.8ε. Next consider the computation 8x̄. The exact value is 8x = 98.8, while the computed value is 8x̄ = 9.92 × 10^1. The error is now 4.0 ulps, but the relative error is still 0.8ε. The error measured in ulps is 8 times larger, even though the relative error is the same. In general, when the base is β, a fixed relative error expressed in ulps can wobble by a factor of up to β. And conversely, as equation (2) above shows, a fixed error of .5 ulps results in a relative error that can wobble by β.

The most natural way to measure rounding error is in ulps. For example, rounding to the nearest floating-point number corresponds to an error of less than or equal to .5 ulp. However, when analyzing the rounding error caused by various formulas, relative error is a better measure. A good illustration of this is the analysis in the section Theorem 9. Since ε can overestimate the effect of rounding to the nearest floating-point number by the wobble factor of β, error estimates of formulas will be tighter on machines with a small β.

When only the order of magnitude of rounding error is of interest, ulps and ε may be used interchangeably, since they differ by at most a factor of β. For example, when a floating-point number is in error by n ulps, that means that the number of contaminated digits is log_β n. If the relative error in a computation is nε, then

(3) contaminated digits ≈ log_β n

## Guard Digits

One method of computing the difference between two floating-point numbers is to compute the difference exactly and then round it to the nearest floating-point number. This is very expensive if the operands differ greatly in size. Assuming p = 3, 2.15 × 10^12 - 1.25 × 10^-5 would be calculated as

x = 2.15 × 10^12
y = .0000000000000000125 × 10^12
x - y = 2.1499999999999999875 × 10^12

which rounds to 2.15 × 10^12. Rather than using all these digits, floating-point hardware normally operates on a fixed number of digits. Suppose that the number of digits kept is p, and that when the smaller operand is shifted right, digits are simply discarded (as opposed to rounding). Then 2.15 × 10^12 - 1.25 × 10^-5 becomes

x = 2.15 × 10^12
y = 0.00 × 10^12
x - y = 2.15 × 10^12

The answer is exactly the same as if the difference had been computed exactly and then rounded. Take another example: 10.1 - 9.93. This becomes

x = 1.01 × 10^1
y = 0.99 × 10^1
x - y = .02 × 10^1

The correct answer is .17, so the computed difference is off by 30 ulps and is wrong in every digit! How bad can the error be?
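Before Theorem 1 answers that question, the toy model below (an illustration added here, not from the original text) reproduces the two versions of 10.1 - 9.93 just shown. A value is held as a 3-digit decimal significand plus an exponent, the smaller operand is shifted right while keeping `guard` extra digits, and the difference is converted back to a double only for printing; the helpers sub3 and pow10l are ad hoc names, and the final rounding back to p digits is omitted because it does not matter for this example.

```c
#include <stdio.h>

/* Toy beta = 10, p = 3 subtraction with a configurable number of guard
   digits.  A significand m (100..999) with exponent e stands for
   m * 10^(e-2). */
static long pow10l(int k) { long r = 1; while (k-- > 0) r *= 10; return r; }

/* Compute (mx,ex) - (my,ey), assuming ex >= ey and ex <= 2 + guard. */
static double sub3(int mx, int ex, int my, int ey, int guard) {
    int shift = ex - ey;                     /* align y to x's exponent */
    long scale = pow10l(guard);              /* extra digits carried    */
    long x = (long)mx * scale;
    long y = (shift <= guard)
           ? (long)my * scale / pow10l(shift)        /* nothing lost    */
           : (long)my / pow10l(shift - guard);       /* low digits lost */
    long d = x - y;
    return (double)d / (double)pow10l(2 + guard - ex);  /* for display  */
}

int main(void) {
    printf("10.1 - 9.93, no guard digit : %g\n", sub3(101, 1, 993, 0, 0));
    printf("10.1 - 9.93, one guard digit: %g\n", sub3(101, 1, 993, 0, 1));
    return 0;
}
```

With no guard digit the program prints 0.2, the answer that is wrong in every digit; with one guard digit it prints the exact 0.17.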
## Theorem 1 Using a floating-point format with parametersandp, and computing differences usingpdigits, the relative error of the result can be as large as-1.## Proof - A relative error of - 1 in the expression x-yoccurs whenx= 1.00...0 andy= ...., where = - 1. Hereyhaspdigits (all equal to ). The exact difference isx-y=-. However, when computing the answer using onlyppdigits, the rightmost digit ofygets shifted off, and so the computed difference is-p+1. Thus the error is--p-p+1=-( - 1), and the relative error isp-( - 1)/p-= - 1. zpWhen =2, the relative error can be as large as the result, and when =10, it can be 9 times larger. Or to put it another way, when =2, equation (3) shows that the number of contaminated digits is log 2(1/) = log2(2) =pp. That is, all of thepdigits in the result are wrong! Suppose that one extra digit is added to guard against this situation (aguard digit). That is, the smaller number is truncated top+ 1 digits, and then the result of the subtraction is rounded topdigits. With a guard digit, the previous example becomes x= 1.010 × 101y= 0.993 × 101x- y = .017 × 101and the answer is exact. With a single guard digit, the relative error of the result may be greater than , as in 110 - 8.59. x= 1.10 × 102y= .085 × 102x-y= 1.015 × 102This rounds to 102, compared with the correct answer of 101.41, for a relative error of .006, which is greater than = .005. In general, the relative error of the result can be only slightly larger than . More precisely, ## Theorem 2 Ifxandyare floating-point numbers in a format with parametersandp, and if subtraction is done withp+ 1 digits (i.e. one guard digit), then the relative rounding error in the result is less than 2.This theorem will be proven in Rounding Error. Addition is included in the above theorem since xandycan be positive or negative.## Cancellation The last section can be summarized by saying that without a guard digit, the relative error committed when subtracting two nearby quantities can be very large. In other words, the evaluation of any expression containing a subtraction (or an addition of quantities with opposite signs) could result in a relative error so large that allthe digits are meaningless (Theorem 1). When subtracting nearby quantities, the most significant digits in the operands match and cancel each other. There are two kinds of cancellation: catastrophic and benign. Catastrophic cancellationoccurs when the operands are subject to rounding errors. For example in the quadratic formula, the expressionb2- 4acoccurs. The quantitiesb2and 4acare subject to rounding errors since they are the results of floating-point multiplications. Suppose that they are rounded to the nearest floating-point number, and so are accurate to within.5ulp. When they are subtracted, cancellation can cause many of the accurate digits to disappear, leaving behind mainly digits contaminated by rounding error. Hence the difference might have an error of many ulps. For example, considerb= 3.34,a= 1.22, andc= 2.28. The exact value ofb2- 4acis .0292. Butb2rounds to 11.2 and 4acrounds to 11.1, hence the final answer is .1 which is an error by 70 ulps, even though 11.2 - 11.1 is exactly equal to .16. The subtraction did not introduce any error, but rather exposed the error introduced in the earlier multiplications. Benign cancellationoccurs when subtracting exactly known quantities. 
Ifxandyhave no rounding error, then by Theorem 2 if the subtraction is done with a guard digit, the differencex-y has a very small relative error (less than 2).A formula that exhibits catastrophic cancellation can sometimes be rearranged to eliminate the problem. Again consider the quadratic formula (4) When , then does not involve a cancellation and . But the other addition (subtraction) in one of the formulas will have a catastrophic cancellation. To avoid this, multiply the numerator and denominator of r1by (and similarly for (5)r2) to obtain If and , then computing r1using formula (4) will involve a cancellation. Therefore, use formula (5) for computingr1and (4) forr2. On the other hand, ifb< 0, use (4) for computingr1and (5) forr2.The expression x2-y2is another formula that exhibits catastrophic cancellation. It is more accurate to evaluate it as (x-y)(x+y).7Unlike the quadratic formula, this improved form still has a subtraction, but it is a benign cancellation of quantities without rounding error, not a catastrophic one. By Theorem 2, the relative error inx-yis at most 2. The same is true ofx+y. Multiplying two quantities with a small relative error results in a product with a small relative error (see the section Rounding Error).In order to avoid confusion between exact and computed values, the following notation is used. Whereas x-ydenotes the exact difference ofxandy,xydenotes the computed difference (i.e., with rounding error). Similarly , , and denote computed addition, multiplication, and division, respectively. All caps indicate the computed value of a function, as in`LN(x)` or`SQRT(x)` . Lowercase functions and traditional mathematical notation denote their exact values as in ln(x) and .Although ( xy) (xy) is an excellent approximation tox2- y2, the floating-point numbersxandymight themselves be approximations to some true quantities and . For example, and might be exactly known decimal numbers that cannot be expressed exactly in binary. In this case, even thoughxyis a good approximation tox-y, it can have a huge relative error compared to the true expression , and so the advantage of (x+y)(x-y) overx2- y2is not as dramatic. Since computing (x+y)(x-y) is about the same amount of work as computingx2- y2, it is clearly the preferred form in this case. In general, however, replacing a catastrophic cancellation by a benign one is not worthwhile if the expense is large, because the input is often (but not always) an approximation. But eliminating a cancellation entirely (as in the quadratic formula) is worthwhile even if the data are not exact. Throughout this paper, it will be assumed that the floating-point inputs to an algorithm are exact and that the results are computed as accurately as possible.The expression x2-y2is more accurate when rewritten as (x-y)(x+y) because a catastrophic cancellation is replaced with a benign one. We next present more interesting examples of formulas exhibiting catastrophic cancellation that can be rewritten to exhibit only benign cancellation.The area of a triangle can be expressed directly in terms of the lengths of its sides (6)a,b, andcas (Suppose the triangle is very flat; that is, ab+c. Thensa, and the term (s-a) in formula (6) subtracts two nearby numbers, one of which may have rounding error. For example, ifa= 9.0,b=c= 4.53, the correct value ofsis 9.03 andAis 2.342.... 
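Before continuing with the triangle example, here is one way the rearrangement of the quadratic formula described above might look in C. It is only a sketch: it assumes a ≠ 0 and real, distinct roots, the helper name quadratic_roots is an invention of this example, and the sign test on b plays the role of choosing between (4) and (5) so that -b and the square root never cancel.

```c
#include <stdio.h>
#include <math.h>

/* Roots of a*x^2 + b*x + c = 0 without catastrophic cancellation. */
static void quadratic_roots(double a, double b, double c,
                            double *r1, double *r2) {
    double d = sqrt(b * b - 4.0 * a * c);
    /* q has the same sign as -b, so b and d reinforce instead of
       cancelling in the numerator. */
    double q = (b >= 0.0) ? -0.5 * (b + d) : -0.5 * (b - d);
    *r1 = q / a;          /* root from the non-cancelling numerator */
    *r2 = c / q;          /* the other root, using r1 * r2 = c / a  */
}

int main(void) {
    double r1, r2;
    /* b^2 >> 4ac: the naive (-b + d)/(2a) cancels catastrophically. */
    quadratic_roots(1.0, 1.0e8, 1.0, &r1, &r2);
    printf("r1 = %.17g\nr2 = %.17g\n", r1, r2);
    printf("naive small root = %.17g\n",
           (-1.0e8 + sqrt(1.0e16 - 4.0)) / 2.0);
    return 0;
}
```

For a = 1, b = 10^8, c = 1 the small root is about -10^-8; the naive expression has essentially no correct digits, while c/q keeps full accuracy.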
Even though the computed value ofs(9.05) is in error by only 2 ulps, the computed value ofAis 3.04, an error of 70 ulps.There is a way to rewrite formula (6) so that it will return accurate results even for flat triangles [Kahan 1986]. It is (7) If a,b,andcdo not satisfyabc, rename them before applying (7). It is straightforward to check that the right-hand sides of (6) and (7) are algebraically identical. Using the values ofa,b, andcabove gives a computed area of 2.35, which is 1 ulp in error and much more accurate than the first formula.Although formula (7) is much more accurate than (6) for this example, it would be nice to know how well (7) performs in general. ## Theorem 3 The rounding error incurred when using (7) to compute the area of a triangle is at most 11, provided that subtraction is performed with a guard digit, e .005, and that square roots are computed to within 1/2 ulp.The condition that e< .005 is met in virtually every actual floating-point system. For example when = 2,p8 ensures thate< .005, and when = 10,p3 is enough.In statements like Theorem 3 that discuss the relative error of an expression, it is understood that the expression is computed using floating-point arithmetic. In particular, the relative error is actually of the expression (8)`SQRT` ((a(bc)) (c(ab))(c(ab))(a(bc)))4 Because of the cumbersome nature of (8), in the statement of theorems we will usually say the computed value ofErather than writing outEwith circle notation.Error bounds are usually too pessimistic. In the numerical example given above, the computed value of (7) is 2.35, compared with a true value of 2.34216 for a relative error of 0.7, which is much less than 11. The main reason for computing error bounds is not to get precise bounds but rather to verify that the formula does not contain numerical problems. A final example of an expression that can be rewritten to use benign cancellation is (1 + 100x), where . This expression arises in financial calculations. Consider depositing $100 every day into a bank account that earns an annual interest rate of 6%, compounded daily. Ifnn= 365 andi= .06, the amount of money accumulated at the end of one year is dollars. If this is computed using = 2 and p= 24, the result is $37615.45 compared to the exact answer of $37614.05, a discrepancy of $1.40. The reason for the problem is easy to see. The expression 1 +i/ninvolves adding 1 to .0001643836, so the low order bits ofi/nare lost. This rounding error is amplified when 1 +i/nis raised to thenth power.The troublesome expression (1 + i/n)can be rewritten asnenln(1 +i/n), where now the problem is to compute ln(1 +x) for smallx. One approach is to use the approximation ln(1 +x)x, in which case the payment becomes $37617.26, which is off by $3.21 and even less accurate than the obvious formula. But there is a way to compute ln(1 +x) very accurately, as Theorem 4 shows [Hewlett-Packard 1982]. This formula yields $37614.07, accurate to within two cents!Theorem 4 assumes that `LN(x)` approximates ln(x) to within 1/2 ulp. The problem it solves is that whenxis small,`LN` (1x) is not close to ln(1 +x) because 1xhas lost the information in the low order bits ofx. 
That is, the computed value of ln(1 +x) is not close to its actual value when .## Theorem 4 If ln(1 + x) is computed using the formula the relative error is at most 5when 0 x < 3/4, provided subtraction is performed with a guard digit, e < 0.1, and ln is computed to within 1/2 ulp.This formula will work for any value of .xbut is only interesting for , which is where catastrophic cancellation occurs in the naive formula ln(1 +x). Although the formula may seem mysterious, there is a simple explanation for why it works. Write ln(1 +x) as The left hand factor can be computed exactly, but the right hand factor µ( x) = ln(1 +x)/xwill suffer a large rounding error when adding 1 tox. However, µ is almost constant, since ln(1 +x)x. So changingxslightly will not introduce much error. In other words, if , computing will be a good approximation toxµ(x) = ln(1 +x). Is there a value for for which and can be computed accurately? There is; namely = (1x) 1, because then 1 + is exactly equal to 1x.The results of this section can be summarized by saying that a guard digit guarantees accuracy when nearby precisely known quantities are subtracted (benign cancellation). Sometimes a formula that gives inaccurate results can be rewritten to have much higher numerical accuracy by using benign cancellation; however, the procedure only works if subtraction is performed using a guard digit. The price of a guard digit is not high, because it merely requires making the adder one bit wider. For a 54 bit double precision adder, the additional cost is less than 2%. For this price, you gain the ability to run many algorithms such as formula (6) for computing the area of a triangle and the expression ln(1 + x). Although most modern computers have a guard digit, there are a few (such as Cray systems) that do not.## Exactly Rounded Operations When floating-point operations are done with a guard digit, they are not as accurate as if they were computed exactly then rounded to the nearest floating-point number. Operations performed in this manner will be called exactly rounded.8The example immediately preceding Theorem 2 shows that a single guard digit will not always give exactly rounded results. The previous section gave several examples of algorithms that require a guard digit in order to work properly. This section gives examples of algorithms that require exact rounding.So far, the definition of rounding has not been given. Rounding is straightforward, with the exception of how to round halfway cases; for example, should 12.5 round to 12 or 13? One school of thought divides the 10 digits in half, letting {0, 1, 2, 3, 4} round down, and {5, 6, 7, 8, 9} round up; thus 12.5 would round to 13. This is how rounding works on Digital Equipment Corporation's VAX computers. Another school of thought says that since numbers ending in 5 are halfway between two possible roundings, they should round down half the time and round up the other half. One way of obtaining this 50% behavior to require that the rounded result have its least significant digit be even. Thus 12.5 rounds to 12 rather than 13 because 2 is even. Which of these methods is best, round up or round to even? Reiser and Knuth [1975] offer the following reason for preferring round to even. ## Theorem 5 Let x and y be floating-point numbers, and define x0= x, x1= (x0y)y,..., xn= (xn-1y)y. Ifandare exactly rounded using round to even, then either xn= x for all n or xn= x1for all n1. zTo clarify this result, consider = 10, p= 3 and letx= 1.00,y= -.555. 
When rounding up, the sequence becomesx0y = 1.56,x1= 1.56 .555 = 1.01,x1y = 1.01 .555 = 1.57, and each successive value of xnincreases by .01, untilxn= 9.45 (n 845)9. Under round to even,xnis always 1.00. This example suggests that when using the round up rule, computations can gradually drift upward, whereas when using round to even the theorem says this cannot happen. Throughout the rest of this paper, round to even will be used.One application of exact rounding occurs in multiple precision arithmetic. There are two basic approaches to higher precision. One approach represents floating-point numbers using a very large significand, which is stored in an array of words, and codes the routines for manipulating these numbers in assembly language. The second approach represents higher precision floating-point numbers as an array of ordinary floating-point numbers, where adding the elements of the array in infinite precision recovers the high precision floating-point number. It is this second approach that will be discussed here. The advantage of using an array of floating-point numbers is that it can be coded portably in a high level language, but it requires exactly rounded arithmetic. The key to multiplication in this system is representing a product xy as a sum, where each summand has the same precision asxandy. This can be done by splittingxandy. Writingx=xh+ xandly=yh+ y, the exact product islxy=xhyh+xhyl+xlyh+xlyl. If xandyhavepbit significands, the summands will also havepbit significands provided thatxl,xh,yh,ylcan be represented using [p/2] bits. Whenpis even, it is easy to find a splitting. The numberx0.x1...xp - 1can be written as the sum ofx0.x1...xp/2 - 1and 0.0 ... 0xp/2...xp - 1. Whenpis odd, this simple splitting method will not work. An extra bit can, however, be gained by using negative numbers. For example, if = 2,p= 5, andx= .10111,xcan be split asxh= .11 andxl= -.00001. There is more than one way to split a number. A splitting method that is easy to compute is due to Dekker [1971], but it requires more than a single guard digit.## Theorem 6 Let p be the floating-point precision, with the restriction that p is even when> 2, and assume that floating-point operations are exactly rounded. Then ifk=[p/2]is half the precision (rounded up) and m =k+1, x can be split as x = xh+ xl, wherexh= (mx)(mxx), xl= xxh, and each xiis representable using[p/2]bits of precision.To see how this theorem works in an example, let = 10, p= 4,b= 3.476,a= 3.463, andc= 3.479. Thenb2-acrounded to the nearest floating-point number is .03480, whilebb= 12.08,ac= 12.05, and so the computed value ofb2-acis .03. This is an error of 480 ulps. Using Theorem 6 to writeb= 3.5 - .024,a= 3.5 - .037, andc= 3.5 - .021,b2becomes 3.52- 2 × 3.5 × .024 + .0242. Each summand is exact, sob2= 12.25 - .168 + .000576, where the sum is left unevaluated at this point. Similarly,ac= 3.52- (3.5 × .037 + 3.5 × .021) + .037 × .021 = 12.25 - .2030 +.000777. Finally, subtracting these two series term by term gives an estimate forb2-acof 0 .0350 .000201 = .03480, which is identical to the exactly rounded result. To show that Theorem 6 really requires exact rounding, considerp= 3, = 2, andx= 7. Thenm= 5,mx= 35, andmx= 32. If subtraction is performed with a single guard digit, then (mx)x= 28. Therefore,xh= 4 andxl= 3, hencexlis not representable with [p/2] = 1 bit.As a final example of exact rounding, consider dividing mby 10. The result is a floating-point number that will in general not be equal tom/10. 
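Before following the division-by-10 example further, here is a sketch of the splitting of Theorem 6 for IEEE double precision, where p = 53, k = 27 and m = 2^27 + 1 = 134217729. The recombination of the rounded product with an exact error term follows Dekker's two-product, which the text does not spell out; the function names are illustrative, and the code assumes the compiler does not contract the expressions into fused multiply-adds (for example, compile with contraction disabled).

```c
#include <stdio.h>

/* Veltkamp/Dekker splitting from Theorem 6 for doubles: x = hi + lo,
   where hi and lo each fit in about half the significand bits. */
static void split(double x, double *hi, double *lo) {
    double m = 134217729.0;            /* 2^27 + 1 */
    double t = m * x;
    *hi = t - (t - x);                 /* x_h = (m*x) - (m*x - x) */
    *lo = x - *hi;                     /* x_l, possibly negative   */
}

/* Represent x*y exactly as the sum p + e of two doubles. */
static void two_product(double x, double y, double *p, double *e) {
    double xh, xl, yh, yl;
    split(x, &xh, &xl);
    split(y, &yh, &yl);
    *p = x * y;                        /* rounded product           */
    *e = ((xh * yh - *p) + xh * yl + xl * yh) + xl * yl;  /* exact error */
}

int main(void) {
    double p, e;
    two_product(0.1, 0.1, &p, &e);
    printf("rounded product : %.17g\n", p);
    printf("error term      : %.17g\n", e);
    return 0;
}
```

The pair (p, e) satisfies p + e = x·y exactly, which is precisely the kind of building block for multiple precision arithmetic described above.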
When = 2, multiplyingm/10 by 10 will restorem, provided exact rounding is being used. Actually, a more general fact (due to Kahan) is true. The proof is ingenious, but readers not interested in such details can skip ahead to section The IEEE Standard.## Theorem 7 When=2, ifmandnare integers with |m| <2p - 1andnhas the special form n =2i+2j, then (mn)n = m, provided floating-point operations are exactly rounded.## Proof (9) - Scaling by a power of two is harmless, since it changes only the exponent, not the significand. If q=m/n, then scalenso that 2p- 1n< 2and scalepmso that 1/2 <q< 1. Thus, 2p- 2<m< 2. Sincepmhaspsignificant bits, it has at most one bit to the right of the binary point. Changing the sign ofmis harmless, so assume thatq> 0.- If = mn, to prove the theorem requires showing that | - - That is because mhas at most 1 bit right of the binary point, sonwill round tom. To deal with the halfway case when |n-m| = 1/4, note that since the initial unscaledmhad |m| < 2p- 1, its low-order bit was 0, so the low-order bit of the scaledmis also 0. Thus, halfway cases will round tom.- Suppose that q= .q1q2..., and let = .q1q2...qp1. To estimate |n-m|, first computeq| = |N/2p+ 1-m/n|, . - where Nis an odd integer. Sincen= 2+ 2iand 2jp- 1n< 2, it must be thatpn= 2p- 1+ 2for some kkp -2, and thus | - - The numerator is an integer, and since Nis odd, it is in fact an odd integer. Thus,q| 1/(n2p+ 1 -).k | - Assume q< (the caseq> is similar).10Thenn<m, andm-n|=m-n=n(q-) =n(q-( -2))-p-1 =(2p-1+2)2k-2-p-1=-p-1+k - This establishes (9) and proves the theorem. 11zThe theorem holds true for any base , as long as 2 + 2iis replaced byj+i. As gets larger, however, denominators of the formj+iare farther and farther apart.jWe are now in a position to answer the question, Does it matter if the basic arithmetic operations introduce a little more rounding error than necessary? The answer is that it does matter, because accurate basic operations enable us to prove that formulas are "correct" in the sense they have a small relative error. The section Cancellation discussed several algorithms that require guard digits to produce correct results in this sense. If the input to those formulas are numbers representing imprecise measurements, however, the bounds of Theorems 3 and 4 become less interesting. The reason is that the benign cancellation x-ycan become catastrophic ifxandyare only approximations to some measured quantity. But accurate operations are useful even in the face of inexact data, because they enable us to establish exact relationships like those discussed in Theorems 6 and 7. These are useful even if every floating-point variable is only an approximation to some actual value.## The IEEE Standard There are two different IEEE standards for floating-point computation. IEEE 754 is a binary standard that requires = 2, p= 24 for single precision andp= 53 for double precision [IEEE 1987]. It also specifies the precise layout of bits in a single and double precision. IEEE 854 allows either = 2 or = 10 and unlike 754, does not specify how floating-point numbers are encoded into bits [Cody et al. 1984]. It does not require a particular value forp, but instead it specifies constraints on the allowable values ofpfor single and double precision. The termIEEE Standardwill be used when discussing properties common to both standards.This section provides a tour of the IEEE standard. Each subsection discusses one aspect of the standard and why it was included. 
It is not the purpose of this paper to argue that the IEEE standard is the best possible floating-point standard but rather to accept the standard as given and provide an introduction to its use. For full details consult the standards themselves [IEEE 1987; Cody et al. 1984]. ## Formats and Operations ## Base It is clear why IEEE 854 allows = 10. Base ten is how humans exchange and think about numbers. Using = 10 is especially appropriate for calculators, where the result of each operation is displayed by the calculator in decimal. There are several reasons why IEEE 854 requires that if the base is not 10, it must be 2. The section Relative Error and Ulps mentioned one reason: the results of error analyses are much tighter when is 2 because a rounding error of .5 ulp wobbles by a factor of when computed as a relative error, and error analyses are almost always simpler when based on relative error. A related reason has to do with the effective precision for large bases. Consider = 16, p= 1 compared to = 2,p= 4. Both systems have 4 bits of significand. Consider the computation of 15/8. When = 2, 15 is represented as 1.111 × 23, and 15/8 as 1.111 × 20. So 15/8 is exact. However, when = 16, 15 is represented asF× 160, whereFis the hexadecimal digit for 15. But 15/8 is represented as 1 × 160, which has only one bit correct. In general, base 16 can lose up to 3 bits, so that a precision ofphexadecimal digits can have an effective precision as low as 4p- 3 rather than 4pbinary bits. Since large values of have these problems, why did IBM choose = 16 for its system/370? Only IBM knows for sure, but there are two possible reasons. The first is increased exponent range. Single precision on the system/370 has = 16,p= 6. Hence the significand requires 24 bits. Since this must fit into 32 bits, this leaves 7 bits for the exponent and one for the sign bit. Thus the magnitude of representable numbers ranges from about to about = . To get a similar exponent range when = 2 would require 9 bits of exponent, leaving only 22 bits for the significand. However, it was just pointed out that when = 16, the effective precision can be as low as 4p- 3 = 21 bits. Even worse, when = 2 it is possible to gain an extra bit of precision (as explained later in this section), so the = 2 machine has 23 bits of precision to compare with a range of 21 - 24 bits for the = 16 machine.Another possible explanation for choosing = 16 has to do with shifting. When adding two floating-point numbers, if their exponents are different, one of the significands will have to be shifted to make the radix points line up, slowing down the operation. In the = 16, p= 1 system, all the numbers between 1 and 15 have the same exponent, and so no shifting is required when adding any of the ( ) = 105 possible pairs of distinct numbers from this set. However, in the = 2,p= 4 system, these numbers have exponents ranging from 0 to 3, and shifting is required for 70 of the 105 pairs.In most modern hardware, the performance gained by avoiding a shift for a subset of operands is negligible, and so the small wobble of = 2 makes it the preferable base. Another advantage of using = 2 is that there is a way to gain an extra bit of significance. 12Since floating-point numbers are always normalized, the most significant bit of the significand is always 1, and there is no reason to waste a bit of storage representing it. Formats that use this trick are said to have ahiddenbit. 
It was already pointed out in Floating-point Formats that this requires a special convention for 0. The method given there was that an exponent of emin - 1 and a significand of all zeros represents not 1.0 × 2^(emin - 1), but rather 0.

IEEE 754 single precision is encoded in 32 bits using 1 bit for the sign, 8 bits for the exponent, and 23 bits for the significand. However, it uses a hidden bit, so the significand is 24 bits (p = 24), even though it is encoded using only 23 bits.

## Precision

The IEEE standard defines four different precisions: single, double, single-extended, and double-extended. In IEEE 754, single and double precision correspond roughly to what most floating-point hardware provides. Single precision occupies a single 32 bit word, double precision two consecutive 32 bit words. Extended precision is a format that offers at least a little extra precision and exponent range (TABLE D-1).

TABLE D-1 IEEE 754 Format Parameters

| Parameter | Single | Single-Extended | Double | Double-Extended |
| --- | --- | --- | --- | --- |
| p | 24 | ≥ 32 | 53 | ≥ 64 |
| emax | +127 | ≥ +1023 | +1023 | ≥ +16383 |
| emin | -126 | ≤ -1022 | -1022 | ≤ -16382 |
| Exponent width in bits | 8 | ≥ 11 | 11 | ≥ 15 |
| Format width in bits | 32 | ≥ 43 | 64 | ≥ 79 |

The IEEE standard only specifies a lower bound on how many extra bits extended precision provides. The minimum allowable double-extended format is sometimes referred to as 80-bit format, even though the table shows it using 79 bits. The reason is that hardware implementations of extended precision normally do not use a hidden bit, and so would use 80 rather than 79 bits.

The standard puts the most emphasis on extended precision, making no recommendation concerning double precision, but strongly recommending that Implementations should support the extended format corresponding to the widest basic format supported, ...

One motivation for extended precision comes from calculators, which will often display 10 digits, but use 13 digits internally. By displaying only 10 of the 13 digits, the calculator appears to the user as a "black box" that computes exponentials, cosines, etc. to 10 digits of accuracy. For the calculator to compute functions like exp, log and cos to within 10 digits with reasonable efficiency, it needs a few extra digits to work with. It is not hard to find a simple rational expression that approximates log with an error of 500 units in the last place. Thus computing with 13 digits gives an answer correct to 10 digits. By keeping these extra 3 digits hidden, the calculator presents a simple model to the operator.

Extended precision in the IEEE standard serves a similar function. It enables libraries to efficiently compute quantities to within about .5 ulp in single (or double) precision, giving the user of those libraries a simple model, namely that each primitive operation, be it a simple multiply or an invocation of log, returns a value accurate to within about .5 ulp. However, when using extended precision, it is important to make sure that its use is transparent to the user. For example, on a calculator, if the internal representation of a displayed value is not rounded to the same precision as the display, then the result of further operations will depend on the hidden digits and appear unpredictable to the user.

To illustrate extended precision further, consider the problem of converting between IEEE 754 single precision and decimal. Ideally, single precision numbers will be printed with enough digits so that when the decimal number is read back in, the single precision number can be recovered.
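As a quick aside before returning to decimal conversion, the basic-format parameters of TABLE D-1 can be read off any C implementation from <float.h>, as in the sketch below; the only subtlety is that C defines FLT_MAX_EXP and FLT_MIN_EXP so that the emax and emin used here are obtained by subtracting 1.

```c
#include <stdio.h>
#include <float.h>

/* Print the single and double precision parameters of TABLE D-1 as
   reported by <float.h>.  FLT_MANT_DIG and DBL_MANT_DIG count the
   hidden bit, so they report p = 24 and p = 53 even though only 23
   and 52 significand bits are actually stored. */
int main(void) {
    printf("single: p = %2d  emax = %4d  emin = %5d\n",
           FLT_MANT_DIG, FLT_MAX_EXP - 1, FLT_MIN_EXP - 1);
    printf("double: p = %2d  emax = %4d  emin = %5d\n",
           DBL_MANT_DIG, DBL_MAX_EXP - 1, DBL_MIN_EXP - 1);
    printf("long double: p = %d (an extended format on many machines)\n",
           LDBL_MANT_DIG);
    return 0;
}
```

On an x86 machine whose long double is the 80-bit extended format, the last line typically reports p = 64, the double-extended minimum from the table.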
It turns out that 9 decimal digits are enough to recover a single precision binary number (see the section Binary to Decimal Conversion). When converting a decimal number back to its unique binary representation, a rounding error as small as 1 ulp is fatal, because it will give the wrong answer. Here is a situation where extended precision is vital for an efficient algorithm. When single-extended is available, a very straightforward method exists for converting a decimal number to a single precision binary one. First read in the 9 decimal digits as an integer N, ignoring the decimal point. From TABLE D-1,p32, and since 109< 2324.3 × 109,Ncan be represented exactly in single-extended. Next find the appropriate power 10necessary to scalePN. This will be a combination of the exponent of the decimal number, together with the position of the (up until now) ignored decimal point. Compute 10|P|. If |P| 13, then this is also represented exactly, because 1013= 213513, and 513< 232. Finally multiply (or divide ifp< 0)Nand 10|P|. If this last operation is done exactly, then the closest binary number is recovered. The section Binary to Decimal Conversion shows how to do the last multiply (or divide) exactly. Thus for |P| 13, the use of the single-extended format enables 9-digit decimal numbers to be converted to the closest binary number (i.e. exactly rounded). If |P| > 13, then single-extended is not enough for the above algorithm to always compute the exactly rounded binary equivalent, but Coonen [1984] shows that it is enough to guarantee that the conversion of binary to decimal and back will recover the original binary number.If double precision is supported, then the algorithm above would be run in double precision rather than single-extended, but to convert double precision to a 17-digit decimal number and back would require the double-extended format. ## Exponent Since the exponent can be positive or negative, some method must be chosen to represent its sign. Two common methods of representing signed numbers are sign/magnitude and two's complement. Sign/magnitude is the system used for the sign of the significand in the IEEE formats: one bit is used to hold the sign, the rest of the bits represent the magnitude of the number. The two's complement representation is often used in integer arithmetic. In this scheme, a number in the range [-2 p-1, 2p-1- 1] is represented by the smallest nonnegative number that is congruent to it modulo 2p.The IEEE binary standard does not use either of these methods to represent the exponent, but instead uses a biasedrepresentation. In the case of single precision, where the exponent is stored in 8 bits, the bias is 127 (for double precision it is 1023). What this means is that if is the value of the exponent bits interpreted as an unsigned integer, then the exponent of the floating-point number is - 127. This is often called theunbiased exponentto distinguish from the biased exponent .Referring to TABLE D-1, single precision has emax= 127 andemin= -126. The reason for having |emin| <emaxis so that the reciprocal of the smallest number will not overflow. Although it is true that the reciprocal of the largest number will underflow, underflow is usually less serious than overflow. The section Base explained thatemin- 1 is used for representing 0, and Special Quantities will introduce a use foremax+ 1. 
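Returning for a moment to the 9-digit claim, the sketch below (added here for illustration) walks a block of consecutive single precision numbers between 1000 and 1024, where the spacing of floats is finer than the spacing of 8-digit decimals, and round-trips each one through printf and strtof; the starting point and the count of 300000 are arbitrary choices, and the helper name survives is an invention of this example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Does x survive a round trip through a decimal string with the given
   number of significant digits? */
static int survives(float x, int digits) {
    char buf[64];
    snprintf(buf, sizeof buf, "%.*g", digits, x);
    return strtof(buf, NULL) == x;
}

int main(void) {
    union { float f; uint32_t u; } v;   /* step through adjacent floats */
    int lost8 = 0, lost9 = 0;
    const int n = 300000;

    v.f = 1000.0f;
    for (int i = 0; i < n; i++, v.u++) {
        if (!survives(v.f, 8)) lost8++;
        if (!survives(v.f, 9)) lost9++;
    }
    printf("%d consecutive floats from 1000.0: %d lost with 8 digits, "
           "%d lost with 9 digits\n", n, lost8, lost9);
    return 0;
}
```

Nine digits lose nothing, while eight digits lose a substantial fraction of the values in this range, because between 1000 and 1024 the floats are spaced about 6.1 × 10^-5 apart but 8-digit decimals only 10^-4 apart.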
In IEEE single precision, this means that the biased exponents range betweenemin- 1 = -127 andemax+ 1 = 128, whereas the unbiased exponents range between 0 and 255, which are exactly the nonnegative numbers that can be represented using 8 bits.## Operations The IEEE standard requires that the result of addition, subtraction, multiplication and division be exactly rounded. That is, the result must be computed exactly and then rounded to the nearest floating-point number (using round to even). The section Guard Digits pointed out that computing the exact difference or sum of two floating-point numbers can be very expensive when their exponents are substantially different. That section introduced guard digits, which provide a practical way of computing differences while guaranteeing that the relative error is small. However, computing with a single guard digit will not always give the same answer as computing the exact result and then rounding. By introducing a second guard digit and a third stickybit, differences can be computed at only a little more cost than with a single guard digit, but the result is the same as if the difference were computed exactly and then rounded [Goldberg 1990]. Thus the standard can be implemented efficiently.One reason for completely specifying the results of arithmetic operations is to improve the portability of software. When a program is moved between two machines and both support IEEE arithmetic, then if any intermediate result differs, it must be because of software bugs, not from differences in arithmetic. Another advantage of precise specification is that it makes it easier to reason about floating-point. Proofs about floating-point are hard enough, without having to deal with multiple cases arising from multiple kinds of arithmetic. Just as integer programs can be proven to be correct, so can floating-point programs, although what is proven in that case is that the rounding error of the result satisfies certain bounds. Theorem 4 is an example of such a proof. These proofs are made much easier when the operations being reasoned about are precisely specified. Once an algorithm is proven to be correct for IEEE arithmetic, it will work correctly on any machine supporting the IEEE standard. Brown [1981] has proposed axioms for floating-point that include most of the existing floating-point hardware. However, proofs in this system cannot verify the algorithms of sections Cancellation and Exactly Rounded Operations, which require features not present on all hardware. Furthermore, Brown's axioms are more complex than simply defining operations to be performed exactly and then rounded. Thus proving theorems from Brown's axioms is usually more difficult than proving them assuming operations are exactly rounded. There is not complete agreement on what operations a floating-point standard should cover. In addition to the basic operations +, -, × and /, the IEEE standard also specifies that square root, remainder, and conversion between integer and floating-point be correctly rounded. It also requires that conversion between internal formats and decimal be correctly rounded (except for very large numbers). Kulisch and Miranker [1986] have proposed adding inner product to the list of operations that are precisely specified. They note that when inner products are computed in IEEE arithmetic, the final answer can be quite wrong. 
For example sums are a special case of inner products, and the sum ((2 × 10^-30 + 10^30) - 10^30) - 10^-30 is exactly equal to 10^-30, but on a machine with IEEE arithmetic the computed result will be -10^-30. It is possible to compute inner products to within 1 ulp with less hardware than it takes to implement a fast multiplier [Kirchner and Kulisch 1987].

All the operations mentioned in the standard are required to be exactly rounded except conversion between decimal and binary. The reason is that efficient algorithms for exactly rounding all the operations are known, except conversion. For conversion, the best known efficient algorithms produce results that are slightly worse than exactly rounded ones [Coonen 1984].

The IEEE standard does not require transcendental functions to be exactly rounded because of the table maker's dilemma. To illustrate, suppose you are making a table of the exponential function to 4 places. Then exp(1.626) = 5.0835. Should this be rounded to 5.083 or 5.084? If exp(1.626) is computed more carefully, it becomes 5.08350. And then 5.083500. And then 5.0835000. Since exp is transcendental, this could go on arbitrarily long before distinguishing whether exp(1.626) is 5.083500...0ddd or 5.0834999...9ddd. Thus it is not practical to specify that the precision of transcendental functions be the same as if they were computed to infinite precision and then rounded. Another approach would be to specify transcendental functions algorithmically. But there does not appear to be a single algorithm that works well across all hardware architectures. Rational approximation, CORDIC, and large tables are three different techniques that are used for computing transcendentals on contemporary machines. Each is appropriate for a different class of hardware, and at present no single algorithm works acceptably over the wide range of current hardware.

## Special Quantities

On some floating-point hardware every bit pattern represents a valid floating-point number. The IBM System/370 is an example of this. On the other hand, the VAX™ reserves some bit patterns to represent special numbers called reserved operands. This idea goes back to the CDC 6600, which had bit patterns for the special quantities `INDEFINITE` and `INFINITY`.

The IEEE standard continues in this tradition and has NaNs (Not a Number) and infinities. Without any special quantities, there is no good way to handle exceptional situations like taking the square root of a negative number, other than aborting computation. Under IBM System/370 FORTRAN, the default action in response to computing the square root of a negative number like -4 results in the printing of an error message. Since every bit pattern represents a valid number, the return value of square root must be some floating-point number. In the case of System/370 FORTRAN, √|-4| = 2 is returned. In IEEE arithmetic, a NaN is returned in this situation.

The IEEE standard specifies the following special values (see TABLE D-2): ±0, denormalized numbers, ±∞ and NaNs (there is more than one NaN, as explained in the next section). These special values are all encoded with exponents of either emax + 1 or emin - 1 (it was already pointed out that 0 has an exponent of emin - 1).

TABLE D-2 IEEE 754 Special Values

| Exponent | Fraction | Represents |
| --- | --- | --- |
| e = emin - 1 | f = 0 | ±0 |
| e = emin - 1 | f ≠ 0 | 0.f × 2^emin |
| emin ≤ e ≤ emax | -- | 1.f × 2^e |
| e = emax + 1 | f = 0 | ±∞ |
| e = emax + 1 | f ≠ 0 | NaN |

## NaNs

Traditionally, the computation of 0/0 or √-1 has been treated as an unrecoverable error which causes a computation to halt.
However, there are examples where it makes sense for a computation to continue in such a situation. Consider a subroutine that finds the zeros of a function f, say`zero(f)` . Traditionally, zero finders require the user to input an interval [a,b] on which the function is defined and over which the zero finder will search. That is, the subroutine is called as`zero(f` ,`a` ,`b)` . A more useful zero finder would not require the user to input this extra information. This more general zero finder is especially appropriate for calculators, where it is natural to simply key in a function, and awkward to then have to specify the domain. However, it is easy to see why most zero finders require a domain. The zero finder does its work by probing the function`f` at various values. If it probed for a value outside the domain of`f` , the code for`f` might well compute 0/0 or , and the computation would halt, unnecessarily aborting the zero finding process.This problem can be avoided by introducing a special value called NaN, and specifying that the computation of expressions like 0/0 and produce NaN, rather than halting. A list of some of the situations that can cause a NaN are given in TABLE D-3. Then when `zero(f)` probes outside the domain of`f` , the code for`f` will return NaN, and the zero finder can continue. That is,`zero(f)` is not "punished" for making an incorrect guess. With this example in mind, it is easy to see what the result of combining a NaN with an ordinary floating-point number should be. Suppose that the final statement of`f` is`return(-b +` `sqrt(d))/(2*a)` . Ifd< 0, then`f` should return a NaN. Sinced< 0,`sqrt(d)` is a NaN, and`-b + sqrt(d)` will be a NaN, if the sum of a NaN and any other number is a NaN. Similarly if one operand of a division operation is a NaN, the quotient should be a NaN. In general, whenever a NaN participates in a floating-point operation, the result is another NaN. TABLE D-3Operations That Produce a NaN++ (- ) × 0 × / 0/0, / `REM` x`REM` 0,`REM` y(when x <0) Another approach to writing a zero solver that doesn't require the user to input a domain is to use signals. The zero-finder could install a signal handler for floating-point exceptions. Then if `f` was evaluated outside its domain and raised an exception, control would be returned to the zero solver. The problem with this approach is that every language has a different method of handling signals (if it has a method at all), and so it has no hope of portability.In IEEE 754, NaNs are often represented as floating-point numbers with the exponent emax+ 1 and nonzero significands. Implementations are free to put system-dependent information into the significand. Thus there is not a unique NaN, but rather a whole family of NaNs. When a NaN and an ordinary floating-point number are combined, the result should be the same as the NaN operand. Thus if the result of a long computation is a NaN, the system-dependent information in the significand will be the information that was generated when the first NaN in the computation was generated. Actually, there is a caveat to the last statement. If both operands are NaNs, then the result will be one of those NaNs, but it might not be the NaN that was generated first.## Infinity Just as NaNs provide a way to continue a computation when expressions like 0/0 or are encountered, infinities provide a way to continue when an overflow occurs. This is much safer than simply returning the largest representable number. 
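Before the overflow example, here is a small C illustration (not from the original text) of the NaN behavior just described: evaluating the final expression of `f` outside its domain produces a NaN that quietly propagates instead of halting the program, and the only reliable test for it is isnan, since every ordered comparison with a NaN is false. The particular arguments are arbitrary.

```c
#include <stdio.h>
#include <math.h>

/* The expression from the zero-finder discussion above; probing it
   where d < 0 yields a NaN rather than aborting. */
static double f(double a, double b, double d) {
    return (-b + sqrt(d)) / (2.0 * a);
}

int main(void) {
    double y = f(1.0, 3.0, -4.0);     /* sqrt(-4) is a NaN */

    printf("f            = %g\n", y);             /* prints nan        */
    printf("y == y       : %d\n", y == y);        /* 0: NaN != itself  */
    printf("y + 1 is NaN : %d\n", isnan(y + 1.0));/* 1: it propagates  */
    printf("isnan(y)     : %d\n", isnan(y));      /* the reliable test */
    return 0;
}
```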
As an example, consider computing , when = 10, p= 3, andemax= 98. Ifx= 3 × 1070andy= 4 × 1070, thenx2will overflow, and be replaced by 9.99 × 1098. Similarlyy2, andx2+y2will each overflow in turn, and be replaced by 9.99 × 1098. So the final result will be , which is drastically wrong: the correct answer is 5 × 1070. In IEEE arithmetic, the result ofx2is , as isy2,x2+y2and . So the final result is , which is safer than returning an ordinary floating-point number that is nowhere near the correct answer.17The division of 0 by 0 results in a NaN. A nonzero number divided by 0, however, returns infinity: 1/0 = , -1/0 = -. The reason for the distinction is this: if f(x) 0 andg(x) 0 asxapproaches some limit, thenf(x)/g(x) could have any value. For example, whenf(x) = sinxandg(x) =x, thenf(x)/g(x) 1 asx0. But whenf(x) = 1 - cosx,f(x)/g(x) 0. When thinking of 0/0 as the limiting situation of a quotient of two very small numbers, 0/0 could represent anything. Thus in the IEEE standard, 0/0 results in a NaN. But whenc> 0,f(x)c,and g(x)0, thenf(x)/g(x) ±, for any analytic functions f and g. Ifg(x) < 0 for smallx, thenf(x)/g(x) -, otherwise the limit is +. So the IEEE standard definesc/0 = ±, as long asc0. The sign of depends on the signs ofcand 0 in the usual way, so that -10/0 = -, and -10/-0 = +. You can distinguish between getting because of overflow and getting because of division by zero by checking the status flags (which will be discussed in detail in section Flags). The overflow flag will be set in the first case, the division by zero flag in the second.The rule for determining the result of an operation that has infinity as an operand is simple: replace infinity with a finite number .xand take the limit asx. Thus 3/ = 0, because Similarly, 4 - = -, and = . When the limit doesn't exist, the result is a NaN, so / will be a NaN (TABLE D-3 has additional examples). This agrees with the reasoning used to conclude that 0/0 should be a NaN. When a subexpression evaluates to a NaN, the value of the entire expression is also a NaN. In the case of ± however, the value of the expression might be an ordinary floating-point number because of rules like 1/ = 0. Here is a practical example that makes use of the rules for infinity arithmetic. Consider computing the function x/(x2+ 1). This is a bad formula, because not only will it overflow whenxis larger than , but infinity arithmetic will give the wrong answer because it will yield 0, rather than a number near 1/x. However,x/(x2+ 1) can be rewritten as 1/(x+x-1). This improved expression will not overflow prematurely and because of infinity arithmetic will have the correct value whenx= 0: 1/(0 + 0-1) = 1/(0 + ) = 1/ = 0. Without infinity arithmetic, the expression 1/(x+x-1) requires a test forx= 0, which not only adds extra instructions, but may also disrupt a pipeline. This example illustrates a general fact, namely that infinity arithmetic often avoids the need for special case checking; however, formulas need to be carefully inspected to make sure they do not have spurious behavior at infinity (asx/(x2+ 1) did).## Signed Zero Zero is represented by the exponent emin- 1 and a zero significand. Since the sign bit can take on two different values, there are two zeros, +0 and -0. If a distinction were made when comparing +0 and -0, simple tests like`if` `(x` `=` `0)` would have very unpredictable behavior, depending on the sign of`x` . Thus the IEEE standard defines comparison so that +0 = -0, rather than -0 < +0. 
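The comparison rule just stated can be observed directly in C, together with the way the two zeros still behave differently when divided into; the short sketch below is an added illustration, using the C99 signbit macro and the identity 1/(1/x) = x at x = -∞ that the next paragraph discusses.

```c
#include <stdio.h>
#include <math.h>

/* Signed zero in practice: +0 and -0 compare equal, yet they remain
   distinguishable and preserve 1/(1/x) = x at the infinities. */
int main(void) {
    double pz = 0.0, nz = -0.0;

    printf("+0 == -0      : %d\n", pz == nz);         /* 1    */
    printf("signbit(-0.0) : %d\n", signbit(nz) != 0); /* 1    */
    printf("1 / +0        : %g\n", 1.0 / pz);         /* inf  */
    printf("1 / -0        : %g\n", 1.0 / nz);         /* -inf */

    double x = -INFINITY;
    printf("1/(1/x) == x at x = -inf : %d\n", 1.0 / (1.0 / x) == x); /* 1 */
    return 0;
}
```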
Although it would be possible always to ignore the sign of zero, the IEEE standard does not do so. When a multiplication or division involves a signed zero, the usual sign rules apply in computing the sign of the answer. Thus 3·(+0) = +0, and +0/-3 = -0. If zero did not have a sign, then the relation 1/(1/x) =xwould fail to hold whenx= ±. The reason is that 1/- and 1/+ both result in 0, and 1/0 results in +, the sign information having been lost. One way to restore the identity 1/(1/x) =xis to only have one kind of infinity, however that would result in the disastrous consequence of losing the sign of an overflowed quantity.Another example of the use of signed zero concerns underflow and functions that have a discontinuity at 0, such as log. In IEEE arithmetic, it is natural to define log 0 = - and log xto be a NaN whenx< 0. Suppose thatxrepresents a small negative number that has underflowed to zero. Thanks to signed zero,xwill be negative, so log can return a NaN. However, if there were no signed zero, the log function could not distinguish an underflowed negative number from 0, and would therefore have to return -. Another example of a function with a discontinuity at zero is the signum function, which returns the sign of a number.Probably the most interesting use of signed zero occurs in complex arithmetic. To take a simple example, consider the equation . This is certainly true when z0. Ifz= -1, the obvious computation gives and . Thus, ! The problem can be traced to the fact that square root is multi-valued, and there is no way to select the values so that it is continuous in the entire complex plane. However, square root is continuous if abranch cutconsisting of all negative real numbers is excluded from consideration. This leaves the problem of what to do for the negative real numbers, which are of the form -x+i0, wherex> 0. Signed zero provides a perfect way to resolve this problem. Numbers of the formx+i(+0) have one sign and numbers of the formx+i(-0) on the other side of the branch cut have the other sign . In fact, the natural formulas for computing will give these results.Back to . If 1/z=1 = -1 +i0, thenz= 1/(-1 +i0) = [(-1-i0)]/[(-1 +i0)(-1 -i0)] = (-1 --i0)/((-1)2- 02) = -1 +i(-0), and so , while . Thus IEEE arithmetic preserves this identity for all z. Some more sophisticated examples are given by Kahan [1987]. Although distinguishing between +0 and -0 has advantages, it can occasionally be confusing. For example, signed zero destroys the relationx=y1/x= 1/y, which is false whenx= +0 andy= -0. However, the IEEE committee decided that the advantages of utilizing the sign of zero outweighed the disadvantages.## Denormalized Numbers Consider normalized floating-point numbers with = 10, (10)p= 3, andemin= -98. The numbersx= 6.87 × 10-97andy= 6.81 × 10-97appear to be perfectly ordinary floating-point numbers, which are more than a factor of 10 larger than the smallest floating-point number 1.00 × 10-98. They have a strange property, however:xy= 0 even thoughxy! The reason is thatx-y= .06 × 10-97= 6.0 × 10-99is too small to be represented as a normalized number, and so must be flushed to zero. How important is it to preserve the propertyx = yx - y =0 ? It's very easy to imagine writing the code fragment, `if` `(x` `y)` `then` `z` `=` `1/(x-y)` , and much later having a program fail due to a spurious division by zero. Tracking down bugs like this is frustrating and time consuming. 
On a more philosophical level, computer science textbooks often point out that even though it is currently impractical to prove large programs correct, designing programs with the idea of proving them often results in better code. For example, introducing invariants is quite useful, even if they aren't going to be used as part of a proof. Floating-point code is just like any other code: it helps to have provable facts on which to depend. For example, when analyzing formula (6), it was very helpful to know thatx/2 <y< 2xxy=x-y. Similarly, knowing that (10) is true makes writing reliable floating-point code easier. If it is only true for most numbers, it cannot be used to prove anything.The IEEE standard uses denormalized 18numbers, which guarantee (10), as well as other useful relations. They are the most controversial part of the standard and probably accounted for the long delay in getting 754 approved. Most high performance hardware that claims to be IEEE compatible does not support denormalized numbers directly, but rather traps when consuming or producing denormals, and leaves it to software to simulate the IEEE standard.19The idea behind denormalized numbers goes back to Goldberg [1967] and is very simple. When the exponent isemin, the significand does not have to be normalized, so that when = 10,p= 3 andemin= -98, 1.00 × 10-98is no longer the smallest floating-point number, because 0.98 × 10-98is also a floating-point number.There is a small snag when = 2 and a hidden bit is being used, since a number with an exponent of eminwill always have a significand greater than or equal to 1.0 because of the implicit leading bit. The solution is similar to that used to represent 0, and is summarized in TABLE D-2. The exponenteminis used to represent denormals. More formally, if the bits in the significand field areb1,b2, ...,bp -1, and the value of the exponent ise, then whene>emin- 1, the number being represented is 1.b1b2...bp - 1× 2whereas whenee=emin- 1, the number being represented is 0.b1b2...bp - 1× 2e+ 1. The +1 in the exponent is needed because denormals have an exponent ofemin, notemin- 1.Recall the example of = 10, p= 3,emin= -98,x= 6.87 × 10-97andy= 6.81 × 10-97presented at the beginning of this section. With denormals,x-ydoes not flush to zero but is instead represented by the denormalized number .6 × 10-98. This behavior is called gradualunderflow. It is easy to verify that (10) always holds when using gradual underflow. FIGURE D-2 Flush To Zero Compared With Gradual Underflow FIGURE D-2 illustrates denormalized numbers. The top number line in the figure shows normalized floating-point numbers. Notice the gap between 0 and the smallest normalized number . If the result of a floating-point calculation falls into this gulf, it is flushed to zero. The bottom number line shows what happens when denormals are added to the set of floating-point numbers. The "gulf" is filled in, and when the result of a calculation is less than , it is represented by the nearest denormal. When denormalized numbers are added to the number line, the spacing between adjacent floating-point numbers varies in a regular way: adjacent spacings are either the same length or differ by a factor of . Without denormals, the spacing abruptly changes from to , which is a factor of , rather than the orderly change by a factor of . 
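The following sketch (an addition for illustration) checks the property x ≠ y ⇒ x ⊖ y ≠ 0 near the bottom of the IEEE double range: the difference of two nearby normalized numbers lands in the subnormal range instead of flushing to zero. It assumes the program runs with gradual underflow enabled, which is the IEEE default but can be defeated by flush-to-zero modes on some hardware.

```c
#include <stdio.h>
#include <math.h>
#include <float.h>

/* Gradual underflow: x - y is a denormalized (subnormal) number rather
   than zero, so x != y still implies x - y != 0. */
int main(void) {
    double x = 6.87e-308;      /* just above DBL_MIN ~ 2.2e-308 */
    double y = 6.81e-308;
    double d = x - y;

    printf("x != y          : %d\n", x != y);
    printf("x - y           : %g\n", d);
    printf("subnormal       : %d\n", fpclassify(d) == FP_SUBNORMAL);
    printf("smallest normal : %g\n", DBL_MIN);
    return 0;
}
```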
Because the spacing changes in this orderly way, many algorithms that can have large relative error for normalized numbers close to the underflow threshold are well-behaved in this range when gradual underflow is used.

Without gradual underflow, the simple expression x - y can have a very large relative error for normalized inputs, as was seen above for x = 6.87 × 10^-97 and y = 6.81 × 10^-97. Large relative errors can happen even without cancellation, as the following example shows [Demmel 1984]. Consider dividing two complex numbers, a + ib and c + id. The obvious formula

(a + ib)/(c + id) = (ac + bd)/(c^2 + d^2) + i·(bc - ad)/(c^2 + d^2)

suffers from the problem that if either component of the denominator c + id is larger than about β^(e_max/2) (roughly the square root of the overflow threshold), computing c^2 + d^2 will overflow, even though the final result may be well within range. A better method of computing the quotient is to use Smith's formula:

(11) (a + ib)/(c + id) = (a + b(d/c))/(c + d(d/c)) + i·(b - a(d/c))/(c + d(d/c))   if |d| < |c|
     (a + ib)/(c + id) = (a(c/d) + b)/(c(c/d) + d) + i·(b(c/d) - a)/(c(c/d) + d)   if |d| ≥ |c|

Applying Smith's formula to (2·10^-98 + i·10^-98)/(4·10^-98 + i·2·10^-98) gives the correct answer of 0.5 with gradual underflow. It yields 0.4 with flush to zero, an error of 100 ulps. It is typical for denormalized numbers to guarantee error bounds for arguments all the way down to 1.0 × β^e_min.

## Exceptions, Flags and Trap Handlers

When an exceptional condition like division by zero or overflow occurs in IEEE arithmetic, the default is to deliver a result and continue. Typical of the default results are NaN for 0/0 and √-1, and ∞ for 1/0 and overflow. The preceding sections gave examples where proceeding from an exception with these default values was the reasonable thing to do. When any exception occurs, a status flag is also set. Implementations of the IEEE standard are required to provide users with a way to read and write the status flags. The flags are "sticky" in that once set, they remain set until explicitly cleared. Testing the flags is the only way to distinguish 1/0, which is a genuine infinity, from an overflow.

Sometimes continuing execution in the face of exception conditions is not appropriate. The section Infinity gave the example of x/(x^2 + 1). When x is large enough that x^2 overflows, the denominator becomes infinite, resulting in a final answer of 0, which is totally wrong. Although for this formula the problem can be solved by rewriting it as 1/(x + x^-1), rewriting may not always solve the problem. The IEEE standard strongly recommends that implementations allow trap handlers to be installed. Then when an exception occurs, the trap handler is called instead of setting the flag. The value returned by the trap handler will be used as the result of the operation. It is the responsibility of the trap handler to either clear or set the status flag; otherwise, the value of the flag is allowed to be undefined.

The IEEE standard divides exceptions into 5 classes: overflow, underflow, division by zero, invalid operation and inexact. There is a separate status flag for each class of exception. The meaning of the first three exceptions is self-evident. Invalid operation covers the situations listed in TABLE D-3, and any comparison that involves a NaN. The default result of an operation that causes an invalid exception is to return a NaN, but the converse is not true. When one of the operands to an operation is a NaN, the result is a NaN, but no invalid exception is raised unless the operation also satisfies one of the conditions in TABLE D-3.
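In a C99 environment the sticky status flags are available through <fenv.h>, so these distinctions can be observed directly. A minimal sketch, assuming IEEE 754 arithmetic (some compilers also want `#pragma STDC FENV_ACCESS ON` and restrained optimization for the flags to behave reliably):

```c
#include <stdio.h>
#include <fenv.h>
#include <math.h>
#include <float.h>

#pragma STDC FENV_ACCESS ON

/* volatile keeps these operations from being folded away at compile time. */
volatile double zero = 0.0, big = DBL_MAX, qnan = NAN, three = 3.0;

int main(void) {
    feclearexcept(FE_ALL_EXCEPT);
    volatile double a = 1.0 / zero;            /* genuine infinity */
    printf("1/0   -> %g  divbyzero=%d overflow=%d\n", a,
           !!fetestexcept(FE_DIVBYZERO), !!fetestexcept(FE_OVERFLOW));

    feclearexcept(FE_ALL_EXCEPT);
    volatile double b = big * 2.0;             /* infinity produced by overflow */
    printf("MAX*2 -> %g  divbyzero=%d overflow=%d\n", b,
           !!fetestexcept(FE_DIVBYZERO), !!fetestexcept(FE_OVERFLOW));

    feclearexcept(FE_ALL_EXCEPT);
    volatile double c = qnan + 1.0;            /* quiet NaN operand propagates */
    printf("NaN+1 -> %g  invalid=%d\n", c, !!fetestexcept(FE_INVALID));

    feclearexcept(FE_ALL_EXCEPT);
    volatile double d = zero / zero;           /* 0/0 is an invalid operation */
    printf("0/0   -> %g  invalid=%d\n", d, !!fetestexcept(FE_INVALID));

    feclearexcept(FE_ALL_EXCEPT);
    volatile double e = 1.0 / three;           /* 1/3 must be rounded */
    printf("1/3   -> %g  inexact=%d\n", e, !!fetestexcept(FE_INEXACT));
    return 0;
}
```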
TABLE D-4 Exceptions in IEEE 754

| Exception | Result when traps disabled | Argument to trap handler |
|---|---|---|
| overflow | ±∞ or ±x_max | round(x·2^-α) |
| underflow | 0, ±2^e_min or denormal | round(x·2^α) |
| divide by zero | ±∞ | operands |
| invalid | NaN | operands |
| inexact | round(x) | round(x) |

In the table, x is the exact result of the operation, α = 192 for single precision, 1536 for double, and x_max = 1.11...11 × 2^e_max.

The inexact exception is raised when the result of a floating-point operation is not exact. In the β = 10, p = 3 system, 3.5 ⊗ 4.2 = 14.7 is exact, but 3.5 ⊗ 4.3 = 15.0 is not exact (since 3.5 · 4.3 = 15.05), and raises an inexact exception. Binary to Decimal Conversion discusses an algorithm that uses the inexact exception. A summary of the behavior of all five exceptions is given in TABLE D-4.

There is an implementation issue connected with the fact that the inexact exception is raised so often. If floating-point hardware does not have flags of its own, but instead interrupts the operating system to signal a floating-point exception, the cost of inexact exceptions could be prohibitive. This cost can be avoided by having the status flags maintained by software. The first time an exception is raised, set the software flag for the appropriate class, and tell the floating-point hardware to mask off that class of exceptions. Then all further exceptions will run without interrupting the operating system. When a user resets that status flag, the hardware mask is re-enabled.

## Trap Handlers

One obvious use for trap handlers is for backward compatibility. Old codes that expect to be aborted when exceptions occur can install a trap handler that aborts the process. This is especially useful for codes with a loop like `do S until (x >= 100)`. Since comparing a NaN to a number with <, ≤, >, ≥, or = (but not ≠) always returns false, this code will go into an infinite loop if `x` ever becomes a NaN.

There is a more interesting use for trap handlers that comes up when computing products such as x_1 x_2 ⋯ x_n that could potentially overflow. One solution is to use logarithms, and compute exp(log x_1 + ⋯ + log x_n) instead. The problem with this approach is that it is less accurate, and that it costs more than the simple product, even if there is no overflow. There is another solution using trap handlers called *over/underflow counting* that avoids both of these problems [Sterbenz 1974].

The idea is as follows. There is a global counter initialized to zero. Whenever the partial product p_k = x_1 x_2 ⋯ x_k overflows for some k, the trap handler increments the counter by one and returns the overflowed quantity with the exponent wrapped around. In IEEE 754 single precision, e_max = 127, so if p_k = 1.45 × 2^130, it will overflow and cause the trap handler to be called, which will wrap the exponent back into range, changing p_k to 1.45 × 2^-62 (see below). Similarly, if p_k underflows, the counter would be decremented, and the negative exponent would get wrapped around into a positive one. When all the multiplications are done, if the counter is zero then the final product is p_n. If the counter is positive, the product overflowed; if the counter is negative, it underflowed. If none of the partial products are out of range, the trap handler is never called and the computation incurs no extra cost. Even if there are over/underflows, the calculation is more accurate than if it had been computed with logarithms, because each p_k was computed from p_{k-1} using a full precision multiply.
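Portable user-level trap handlers are rarely available today, but the bookkeeping idea can be imitated in software by keeping the running exponent in a separate integer, so that no partial product ever leaves range. A rough C sketch of that analogue (the function and variable names are invented for the illustration; this is not the IEEE trap-handler mechanism itself, and it assumes normalized inputs):

```c
#include <stdio.h>
#include <math.h>

/* Software analogue of over/underflow counting: the running product is kept
   as m * 2^e with m in [0.5, 1), so the multiplications themselves can
   neither overflow nor underflow.  Each partial product is still formed
   with one full-precision multiply, as in the trap-handler scheme. */
static double scaled_product(const double *x, int n, long *e) {
    double m = 1.0;
    *e = 0;
    for (int i = 0; i < n; i++) {
        int k;
        m = frexp(m * x[i], &k);   /* split off the exponent exactly */
        *e += k;
    }
    return m;                      /* true product is m * 2^(*e) */
}

int main(void) {
    double x[600];
    for (int i = 0; i < 600; i++) x[i] = 1.0e300;   /* naive product overflows */
    long e;
    double m = scaled_product(x, 600, &e);
    printf("product = %.17g * 2^%ld\n", m, e);      /* finite representation */
    return 0;
}
```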
Barnett [1987] discusses a formula where the full accuracy of over/underflow counting turned up an error in earlier tables of that formula.IEEE 754 specifies that when an overflow or underflow trap handler is called, it is passed the wrapped-around result as an argument. The definition of wrapped-around for overflow is that the result is computed as if to infinite precision, then divided by 2, and then rounded to the relevant precision. For underflow, the result is multiplied by 2. The exponent is 192 for single precision and 1536 for double precision. This is why 1.45 x 2 130was transformed into 1.45 × 2-62in the example above.## Rounding Modes In the IEEE standard, rounding occurs whenever an operation has a result that is not exact, since (with the exception of binary decimal conversion) each operation is computed exactly and then rounded. By default, rounding means round toward nearest. The standard requires that three other rounding modes be provided, namely round toward 0, round toward +, and round toward -. When used with the convert to integer operation, round toward - causes the convert to become the floor function, while round toward + is ceiling. The rounding mode affects overflow, because when round toward 0 or round toward - is in effect, an overflow of positive magnitude causes the default result to be the largest representable number, not +. Similarly, overflows of negative magnitude will produce the largest negative number when round toward + or round toward 0 is in effect. One application of rounding modes occurs in interval arithmetic (another is mentioned in Binary to Decimal Conversion). When using interval arithmetic, the sum of two numbers xandyis an interval , where isxyrounded toward -, and isxyrounded toward +. The exact result of the addition is contained within the interval . Without rounding modes, interval arithmetic is usually implemented by computing and , where is machine epsilon.21This results in overestimates for the size of the intervals. Since the result of an operation in interval arithmetic is an interval, in general the input to an operation will also be an interval. If two intervals , and , are added, the result is , where is with the rounding mode set to round toward -, and is with the rounding mode set to round toward +.When a floating-point calculation is performed using interval arithmetic, the final answer is an interval that contains the exact result of the calculation. This is not very helpful if the interval turns out to be large (as it often does), since the correct answer could be anywhere in that interval. Interval arithmetic makes more sense when used in conjunction with a multiple precision floating-point package. The calculation is first performed with some precision p. If interval arithmetic suggests that the final answer may be inaccurate, the computation is redone with higher and higher precisions until the final interval is a reasonable size.## Flags The IEEE standard has a number of flags and modes. As discussed above, there is one status flag for each of the five exceptions: underflow, overflow, division by zero, invalid operation and inexact. There are four rounding modes: round toward nearest, round toward +, round toward 0, and round toward -. It is strongly recommended that there be an enable mode bit for each of the five exceptions. This section gives some simple examples of how these modes and flags can be put to good use. A more sophisticated example is discussed in the section Binary to Decimal Conversion. 
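The directed rounding modes are accessible from C99 through <fenv.h>, which makes the interval idea above easy to try. A small sketch, assuming an IEEE 754 implementation that honors `fesetround` (some compilers also require `#pragma STDC FENV_ACCESS ON` and care with optimization):

```c
#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON

/* Enclose the exact sum x + y in an interval [lo, hi] via directed rounding. */
static void add_interval(double x, double y, double *lo, double *hi) {
    volatile double vx = x, vy = y;     /* keep the two additions separate */
    int save = fegetround();
    fesetround(FE_DOWNWARD);
    *lo = vx + vy;                      /* sum rounded toward -infinity */
    fesetround(FE_UPWARD);
    *hi = vx + vy;                      /* sum rounded toward +infinity */
    fesetround(save);                   /* restore the caller's rounding mode */
}

int main(void) {
    double lo, hi;
    add_interval(1.0, 1.0e-20, &lo, &hi);         /* exact sum not representable */
    printf("lo = %.17g\nhi = %.17g\n", lo, hi);   /* lo <= 1 + 1e-20 <= hi */
    return 0;
}
```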
Consider writing a subroutine to compute x^n, where n is an integer. When n > 0, a simple routine like the following will do:

```
PositivePower(x,n) {
  while (n is even) {
    x = x*x
    n = n/2
  }
  u = x
  while (true) {
    n = n/2
    if (n==0) return u
    x = x*x
    if (n is odd) u = u*x
  }
}
```

If n < 0, then a more accurate way to compute x^n is not to call `PositivePower(1/x, -n)` but rather `1/PositivePower(x, -n)`, because the first expression multiplies n quantities each of which has a rounding error from the division (i.e., 1/x). In the second expression these are exact (i.e., x), and the final division commits just one additional rounding error. Unfortunately, there is a slight snag in this strategy. If `PositivePower(x, -n)` underflows, then either the underflow trap handler will be called, or else the underflow status flag will be set. This is incorrect, because if x^-n underflows, then x^n will either overflow or be in range. But since the IEEE standard gives the user access to all the flags, the subroutine can easily correct for this. It simply turns off the overflow and underflow trap enable bits and saves the overflow and underflow status bits. It then computes `1/PositivePower(x, -n)`. If neither the overflow nor underflow status bit is set, it restores them together with the trap enable bits. If one of the status bits is set, it restores the flags and redoes the calculation using `PositivePower(1/x, -n)`, which causes the correct exceptions to occur.

Another example of the use of flags occurs when computing arccos via the formula

arccos x = 2 arctan(√((1 - x)/(1 + x))).

If arctan(∞) evaluates to π/2, then arccos(-1) will correctly evaluate to 2·arctan(∞) = π, because of infinity arithmetic. However, there is a small snag, because the computation of (1 - x)/(1 + x) will cause the divide by zero exception flag to be set, even though arccos(-1) is not exceptional. The solution to this problem is straightforward. Simply save the value of the divide by zero flag before computing arccos, and then restore its old value after the computation.

## Systems Aspects

The design of almost every aspect of a computer system requires knowledge about floating-point. Computer architectures usually have floating-point instructions, compilers must generate those floating-point instructions, and the operating system must decide what to do when exception conditions are raised for those floating-point instructions. Computer system designers rarely get guidance from numerical analysis texts, which are typically aimed at users and writers of software, not at computer designers. As an example of how plausible design decisions can lead to unexpected behavior, consider the following BASIC program.

```
q = 3.0/7.0
if q = 3.0/7.0 then print "Equal": else print "Not Equal"
```

When compiled and run using Borland's Turbo Basic on an IBM PC, the program prints `Not Equal`! This example will be analyzed in the next section.

Incidentally, some people think that the solution to such anomalies is never to compare floating-point numbers for equality, but instead to consider them equal if they are within some error bound E. This is hardly a cure-all, because it raises as many questions as it answers. What should the value of E be? If x < 0 and y > 0 are within E, should they really be considered to be equal, even though they have different signs? Furthermore, the relation defined by this rule, a ~ b ⇔ |a - b| < E, is not an equivalence relation, because a ~ b and b ~ c does not imply that a ~ c.

## Instruction Sets

It is quite common for an algorithm to require a short burst of higher precision in order to produce accurate results.
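One such burst, developed in the next few paragraphs, is evaluating the discriminant b^2 - 4ac of the quadratic formula in double precision when a, b and c are single precision. A minimal C sketch, with invented helper names and assuming float expressions really are evaluated in single precision (FLT_EVAL_METHOD == 0):

```c
#include <stdio.h>

/* Discriminant computed entirely in single precision: b*b is rounded to
   float before the subtraction, which can wipe out most of the meaningful
   digits when b*b is close to 4*a*c. */
static float disc_single(float a, float b, float c) {
    return b * b - 4.0f * a * c;
}

/* The same subcalculation carried out in double precision: each float
   converts exactly to double, the products are exact in double (24-bit
   operands give at most 48-bit products), and only the final subtraction
   is rounded. */
static double disc_double(float a, float b, float c) {
    return (double)b * b - 4.0 * (double)a * c;
}

int main(void) {
    float a = 1.0f, b = 100.0001f, c = 2500.005f;   /* b*b is close to 4*a*c */
    printf("single precision discriminant: %.9g\n",  disc_single(a, b, c));
    printf("double precision discriminant: %.17g\n", disc_double(a, b, c));
    return 0;
}
```

When b^2 is very close to 4ac, the single precision version loses most of its digits to the final cancellation, while the widened version forms both products exactly and rounds only once.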
One example occurs in the quadratic formula ( )/2 a. As discussed in the section Proof of Theorem 4, whenb24ac, rounding error can contaminate up to half the digits in the roots computed with the quadratic formula. By performing the subcalculation ofb2- 4acin double precision, half the double precision bits of the root are lost, which means that all the single precision bits are preserved.The computation of b2- 4acin double precision when each of the quantitiesa,b, andcare in single precision is easy if there is a multiplication instruction that takes two single precision numbers and produces a double precision result. In order to produce the exactly rounded product of twop-digit numbers, a multiplier needs to generate the entire 2pbits of product, although it may throw bits away as it proceeds. Thus, hardware to compute a double precision product from single precision operands will normally be only a little more expensive than a single precision multiplier, and much cheaper than a double precision multiplier. Despite this, modern instruction sets tend to provide only instructions that produce a result of the same precision as the operands.23If an instruction that combines two single precision operands to produce a double precision product was only useful for the quadratic formula, it wouldn't be worth adding to an instruction set. However, this instruction has many other uses. Consider the problem of solving a system of linear equations, a11x1+a12x2+· · · +a1nxn=b1 a21x1+a22x2+· · · +a2nxn=b2 · · · an1x1+an2x2+· · ·+annxn=bn which can be written in matrix form as Ax=b, where Suppose that a solution (12) =x(1)is computed by some method, perhaps Gaussian elimination. There is a simple way to improve the accuracy of the result callediterative improvement. First computeAx(1)-b (13)Ay= Note that if (14)x(1)is an exact solution, then is the zero vector, as isy. In general, the computation of andywill incur rounding error, soAyAx(1)-b=A(x(1)-x), wherexis the (unknown) true solution. Thenyx(1)-x, so an improved estimate for the solution isx(2)=x(1)-y The three steps (12), (13), and (14) can be repeated, replacing x(1)withx(2), andx(2)withx(3). This argument thatx(i+ 1)is more accurate than x(i)is only informal. For more information, see [Golub and Van Loan 1989].When performing iterative improvement, is a vector whose elements are the difference of nearby inexact floating-point numbers, and so can suffer from catastrophic cancellation. Thus iterative improvement is not very useful unless = Ax(1)-bis computed in double precision. Once again, this is a case of computing the product of two single precision numbers (Aandx(1)), where the full double precision result is needed.To summarize, instructions that multiply two floating-point numbers and return a product with twice the precision of the operands make a useful addition to a floating-point instruction set. Some of the implications of this for compilers are discussed in the next section. ## Languages and Compilers The interaction of compilers and floating-point is discussed in Farnum [1988], and much of the discussion in this section is taken from that paper. ## Ambiguity Ideally, a language definition should define the semantics of the language precisely enough to prove statements about programs. While this is usually true for the integer part of a language, language definitions often have a large grey area when it comes to floating-point. 
Perhaps this is due to the fact that many language designers believe that nothing can be proven about floating-point, since it entails rounding error. If so, the previous sections have demonstrated the fallacy in this reasoning. This section discusses some common grey areas in language definitions, including suggestions about how to deal with them. Remarkably enough, some languages don't clearly specify that if `x` is a floating-point variable (with say a value of`3.0/10.0` ), then every occurrence of (say)`10.0*x` must have the same value. For example Ada, which is based on Brown's model, seems to imply that floating-point arithmetic only has to satisfy Brown's axioms, and thus expressions can have one of many possible values. Thinking about floating-point in this fuzzy way stands in sharp contrast to the IEEE model, where the result of each floating-point operation is precisely defined. In the IEEE model, we can prove that`(3.0/10.0)*10.0` evaluates to`3` (Theorem 7). In Brown's model, we cannot.Another ambiguity in most language definitions concerns what happens on overflow, underflow and other exceptions. The IEEE standard precisely specifies the behavior of exceptions, and so languages that use the standard as a model can avoid any ambiguity on this point. Another grey area concerns the interpretation of parentheses. Due to roundoff errors, the associative laws of algebra do not necessarily hold for floating-point numbers. For example, the expression `(x+y)+z` has a totally different answer than`x+(y+z)` whenx= 1030,y= -1030andz= 1 (it is 1 in the former case, 0 in the latter). The importance of preserving parentheses cannot be overemphasized. The algorithms presented in theorems 3, 4 and 6 all depend on it. For example, in Theorem 6, the formulaxh=mx- (mx-x) would reduce toxh=xif it weren't for parentheses, thereby destroying the entire algorithm. A language definition that does not require parentheses to be honored is useless for floating-point calculations.Subexpression evaluation is imprecisely defined in many languages. Suppose that `ds` is double precision, but`x` and`y` are single precision. Then in the expression`ds` `+` `x*y` is the product performed in single or double precision? Another example: in`x` `+` `m/n` where`m` and`n` are integers, is the division an integer operation or a floating-point one? There are two ways to deal with this problem, neither of which is completely satisfactory. The first is to require that all variables in an expression have the same type. This is the simplest solution, but has some drawbacks. First of all, languages like Pascal that have subrange types allow mixing subrange variables with integer variables, so it is somewhat bizarre to prohibit mixing single and double precision variables. Another problem concerns constants. In the expression`0.1*x` , most languages interpret 0.1 to be a single precision constant. Now suppose the programmer decides to change the declaration of all the floating-point variables from single to double precision. If 0.1 is still treated as a single precision constant, then there will be a compile time error. The programmer will have to hunt down and change every floating-point constant.The second approach is to allow mixed expressions, in which case rules for subexpression evaluation must be provided. There are a number of guiding examples. The original definition of C required that every floating-point expression be computed in double precision [Kernighan and Ritchie 1978]. 
This leads to anomalies like the example at the beginning of this section. The expression `3.0/7.0` is computed in double precision, but if`q` is a single-precision variable, the quotient is rounded to single precision for storage. Since 3/7 is a repeating binary fraction, its computed value in double precision is different from its stored value in single precision. Thus the comparisonq= 3/7 fails. This suggests that computing every expression in the highest precision available is not a good rule.Another guiding example is inner products. If the inner product has thousands of terms, the rounding error in the sum can become substantial. One way to reduce this rounding error is to accumulate the sums in double precision (this will be discussed in more detail in the section Optimizers). If `d` is a double precision variable, and`x[]` and`y[]` are single precision arrays, then the inner product loop will look like`d` `=` `d` `+` `x[i]*y[i]` . If the multiplication is done in single precision, than much of the advantage of double precision accumulation is lost, because the product is truncated to single precision just before being added to a double precision variable.A rule that covers both of the previous two examples is to compute an expression in the highest precision of any variable that occurs in that expression. Then `q` `=` `3.0/7.0` will be computed entirely in single precision24and will have the boolean value true, whereas`d` `=` `d` `+` `x[i]*y[i]` will be computed in double precision, gaining the full advantage of double precision accumulation. However, this rule is too simplistic to cover all cases cleanly. If`dx` and`dy` are double precision variables, the expression`y` `=` `x` `+` `single(dx-dy)` contains a double precision variable, but performing the sum in double precision would be pointless, because both operands are single precision, as is the result.A more sophisticated subexpression evaluation rule is as follows. First assign each operation a tentative precision, which is the maximum of the precisions of its operands. This assignment has to be carried out from the leaves to the root of the expression tree. Then perform a second pass from the root to the leaves. In this pass, assign to each operation the maximum of the tentative precision and the precision expected by the parent. In the case of `q` `=` `3.0/7.0` , every leaf is single precision, so all the operations are done in single precision. In the case of`d` `=` `d` `+` `x[i]*y[i]` , the tentative precision of the multiply operation is single precision, but in the second pass it gets promoted to double precision, because its parent operation expects a double precision operand. And in`y` `=` `x` `+` `single(dx-dy)` , the addition is done in single precision. Farnum [1988] presents evidence that this algorithm in not difficult to implement.The disadvantage of this rule is that the evaluation of a subexpression depends on the expression in which it is embedded. This can have some annoying consequences. For example, suppose you are debugging a program and want to know the value of a subexpression. You cannot simply type the subexpression to the debugger and ask it to be evaluated, because the value of the subexpression in the program depends on the expression it is embedded in. A final comment on subexpressions: since converting decimal constants to binary is an operation, the evaluation rule also affects the interpretation of decimal constants. 
This is especially important for constants like `0.1` which are not exactly representable in binary.Another potential grey area occurs when a language includes exponentiation as one of its built-in operations. Unlike the basic arithmetic operations, the value of exponentiation is not always obvious [Kahan and Coonen 1982]. If `**` is the exponentiation operator, then`(-3)**3` certainly has the value -27. However,`(-3.0)**3.0` is problematical. If the`**` operator checks for integer powers, it would compute`(-3.0)**3.0` as -3.03= -27. On the other hand, if the formulaxy=eylogis used to definex`**` for real arguments, then depending on the log function, the result could be a NaN (using the natural definition of log(x) =`NaN` whenx< 0). If the FORTRAN`CLOG` function is used however, then the answer will be -27, because the ANSI FORTRAN standard defines`CLOG(-3.0)` to bei+ log 3 [ANSI 1978]. The programming language Ada avoids this problem by only defining exponentiation for integer powers, while ANSI FORTRAN prohibits raising a negative number to a real power.In fact, the FORTRAN standard says that - Any arithmetic operation whose result is not mathematically defined is prohibited... Unfortunately, with the introduction of ± by the IEEE standard, the meaning of not mathematically definedis no longer totally clear cut. One definition might be to use the method shown in section Infinity. For example, to determine the value ofab, consider non-constant analytic functionsfandgwith the property thatf(x)aandg(x)basx0. Iff(x)g(x)always approaches the same limit, then this should be the value ofab. This definition would set 2= which seems quite reasonable. In the case of 1.0, whenf(x) = 1 andg(x) = 1/xthe limit approaches 1, but whenf(x) = 1 -xandg(x) = 1/xthe limit ise-1. So 1.0, should be a NaN. In the case of 00,f(x)g(x)=eg(x)logf(x). Sincefandgare analytic and take on the value 0 at 0,f(x) =a1x1+a2x2+ ... andg(x) =b1x1+b2x2+ .... Thus limx0g(x) logf(x) = limx0xlog(x(a1+a2x+ ...)) = limx0xlog(a1x) = 0. Sof(x)g(x)e0= 1 for allfandg, which means that 00= 1.2526Using this definition would unambiguously define the exponential function for all arguments, and in particular would define`(-3.0)**3.0` to be -27.## The IEEE Standard The section The IEEE Standard," discussed many of the features of the IEEE standard. However, the IEEE standard says nothing about how these features are to be accessed from a programming language. Thus, there is usually a mismatch between floating-point hardware that supports the standard and programming languages like C, Pascal or FORTRAN. Some of the IEEE capabilities can be accessed through a library of subroutine calls. For example the IEEE standard requires that square root be exactly rounded, and the square root function is often implemented directly in hardware. This functionality is easily accessed via a library square root routine. However, other aspects of the standard are not so easily implemented as subroutines. For example, most computer languages specify at most two floating-point types, while the IEEE standard has four different precisions (although the recommended configurations are single plus single-extended or single, double, and double-extended). Infinity provides another example. Constants to represent ± could be supplied by a subroutine. But that might make them unusable in places that require constant expressions, such as the initializer of a constant variable. 
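How much of this mismatch a program feels depends on the language. C99, for example, closes part of the gap: <math.h> supplies INFINITY and NAN as constant expressions, so they can appear in initializers, and it provides classification macros. A small sketch, assuming an implementation with IEEE 754 support:

```c
#include <stdio.h>
#include <math.h>

/* INFINITY and NAN are constant expressions in C99 <math.h>, so they can
   initialize objects with static storage, one of the gaps noted above. */
static const double pos_inf = INFINITY;
static const double a_nan  = NAN;

int main(void) {
    printf("1/inf        = %g\n", 1.0 / pos_inf);        /* 0 */
    printf("isinf(inf)   = %d\n", isinf(pos_inf) != 0);  /* 1 */
    printf("isnan(NaN)   = %d\n", isnan(a_nan) != 0);    /* 1 */
    printf("NaN == NaN   = %d\n", a_nan == a_nan);       /* 0 */
    printf("x != x test  = %d\n", a_nan != a_nan);       /* 1: portable NaN test */
    return 0;
}
```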
A more subtle situation is manipulating the state associated with a computation, where the state consists of the rounding modes, trap enable bits, trap handlers and exception flags. One approach is to provide subroutines for reading and writing the state. In addition, a single call that can atomically set a new value and return the old value is often useful. As the examples in the section Flags show, a very common pattern of modifying IEEE state is to change it only within the scope of a block or subroutine. Thus the burden is on the programmer to find each exit from the block, and make sure the state is restored. Language support for setting the state precisely in the scope of a block would be very useful here. Modula-3 is one language that implements this idea for trap handlers [Nelson 1991]. There are a number of minor points that need to be considered when implementing the IEEE standard in a language. Since x-x= +0 for allx,27(+0) - (+0) = +0. However, -(+0) = -0, thus -xshould not be defined as 0 - x. The introduction of NaNs can be confusing, because a NaN is never equal to any other number (including another NaN), sox=xis no longer always true. In fact, the expressionxxis the simplest way to test for a NaN if the IEEE recommended function`Isnan` is not provided. Furthermore, NaNs are unordered with respect to all other numbers, soxycannot be defined asnotx>y. Since the introduction of NaNs causes floating-point numbers to become partially ordered, a`compare` function that returns one of <, =, >, orunorderedcan make it easier for the programmer to deal with comparisons.Although the IEEE standard defines the basic floating-point operations to return a NaN if any operand is a NaN, this might not always be the best definition for compound operations. For example when computing the appropriate scale factor to use in plotting a graph, the maximum of a set of values must be computed. In this case it makes sense for the max operation to simply ignore NaNs. Finally, rounding can be a problem. The IEEE standard defines rounding very precisely, and it depends on the current value of the rounding modes. This sometimes conflicts with the definition of implicit rounding in type conversions or the explicit `round` function in languages. This means that programs which wish to use IEEE rounding can't use the natural language primitives, and conversely the language primitives will be inefficient to implement on the ever increasing number of IEEE machines.## Optimizers Compiler texts tend to ignore the subject of floating-point. For example Aho et al. [1986] mentions replacing `x/2.0` with`x*0.5` , leading the reader to assume that`x/10.0` should be replaced by`0.1*x` . However, these two expressions do not have the same semantics on a binary machine, because 0.1 cannot be represented exactly in binary. This textbook also suggests replacing`x*y-x*z` by`x*(y-z)` , even though we have seen that these two expressions can have quite different values whenyz. Although it does qualify the statement that any algebraic identity can be used when optimizing code by noting that optimizers should not violate the language definition, it leaves the impression that floating-point semantics are not very important. Whether or not the language standard specifies that parenthesis must be honored,`(x+y)+z` can have a totally different answer than`x+(y+z)` , as discussed above. 
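Both rewrites are easy to catch in the act. A small C sketch (IEEE 754 double precision assumed, compiled without "fast math" style options so that the compiler itself does not perform the transformations):

```c
#include <stdio.h>

volatile double x = 3.0;                  /* volatile: evaluate at run time */
volatile double a = 1e30, b = -1e30, c = 1.0;

int main(void) {
    /* x/10.0 versus 0.1*x: 0.1 is not exactly representable in binary,
       so the two expressions can round to different results. */
    printf("x/10.0 == 0.1*x ?  %d\n", x / 10.0 == 0.1 * x);   /* 0 for x = 3.0 */
    printf("x/10.0 = %.17g\n", x / 10.0);
    printf("0.1*x  = %.17g\n", 0.1 * x);

    /* Reassociation changes the answer: (a+b)+c = 1, but a+(b+c) = 0. */
    printf("(a+b)+c = %g\n", (a + b) + c);
    printf("a+(b+c) = %g\n", a + (b + c));
    return 0;
}
```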
There is a problem closely related to preserving parentheses that is illustrated by the following code:

```
eps = 1;
do eps = 0.5*eps; while (eps + 1 > 1);
```

This is designed to give an estimate for machine epsilon. If an optimizing compiler notices that eps + 1 > 1 ⇔ eps > 0, the program will be changed completely. Instead of computing the smallest number x such that 1 ⊕ x is still greater than 1 (x ≈ ε ≈ β^-p), it will compute the largest number x for which x/2 is rounded to 0 (x ≈ β^e_min). Avoiding this kind of "optimization" is so important that it is worth presenting one more very useful algorithm that is totally ruined by it.

Many problems, such as numerical integration and the numerical solution of differential equations, involve computing sums with many terms. Because each addition can potentially introduce an error as large as .5 ulp, a sum involving thousands of terms can have quite a bit of rounding error. A simple way to correct for this is to store the partial summand in a double precision variable and to perform each addition using double precision. If the calculation is being done in single precision, performing the sum in double precision is easy on most computer systems. However, if the calculation is already being done in double precision, doubling the precision is not so simple. One method that is sometimes advocated is to sort the numbers and add them from smallest to largest. However, there is a much more efficient method which dramatically improves the accuracy of sums, namely

## Theorem 8 (Kahan Summation Formula)

Suppose that the sum x_1 + x_2 + ⋯ + x_N is computed using the following algorithm

```
S = X[1];
C = 0;
for j = 2 to N {
    Y = X[j] - C;
    T = S + Y;
    C = (T - S) - Y;
    S = T;
}
```

Then the computed sum S is equal to Σ x_j(1 + δ_j) + O(Nε^2) Σ |x_j|, where |δ_j| ≤ 2ε.

Using the naive formula Σ x_j, the computed sum is equal to Σ x_j(1 + δ_j) where |δ_j| < (n - j)ε. Comparing this with the error in the Kahan summation formula shows a dramatic improvement. Each summand is perturbed by only 2ε, instead of perturbations as large as nε in the simple formula. Details are in Errors In Summation.

An optimizer that believed floating-point arithmetic obeyed the laws of algebra would conclude that C = [T - S] - Y = [(S + Y) - S] - Y = 0, rendering the algorithm completely useless. These examples can be summarized by saying that optimizers should be extremely cautious when applying algebraic identities that hold for the mathematical real numbers to expressions involving floating-point variables.

Another way that optimizers can change the semantics of floating-point code involves constants. In the expression `1.0E-40*x`, there is an implicit decimal to binary conversion operation that converts the decimal number to a binary constant. Because this constant cannot be represented exactly in binary, the inexact exception should be raised. In addition, the underflow flag should also be set if the expression is evaluated in single precision. Since the constant is inexact, its exact conversion to binary depends on the current value of the IEEE rounding modes. Thus an optimizer that converts `1.0E-40` to binary at compile time would be changing the semantics of the program. However, constants like 27.5 which are exactly representable in the smallest available precision can be safely converted at compile time, since they are always exact, cannot raise any exception, and are unaffected by the rounding modes.
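Both kinds of constant can be inspected directly. A brief C sketch, assuming C99 with IEEE 754 single and double precision:

```c
#include <stdio.h>
#include <math.h>
#include <float.h>

int main(void) {
    /* 1.0E-40 cannot be represented exactly in binary, and in single
       precision it lies below FLT_MIN, so the converted constant is a
       denormalized number: the conversion is inexact and underflows. */
    float tiny = 1.0E-40f;
    printf("FLT_MIN             = %g\n", FLT_MIN);
    printf("1.0E-40f            = %g\n", tiny);
    printf("subnormal?            %d\n", fpclassify(tiny) == FP_SUBNORMAL);

    /* 27.5 = 11011.1 in binary is exactly representable even in single
       precision, so converting it can never raise an exception and does
       not depend on the rounding mode. */
    printf("(double)27.5f == 27.5 ? %d\n", (double)27.5f == 27.5);
    return 0;
}
```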
Constants that are intended to be converted at compile time should be done with a constant declaration, such as`const` `pi` `=` `3.14159265` .Common subexpression elimination is another example of an optimization that can change floating-point semantics, as illustrated by the following code C = A*B;RndMode = UpD = A*B; Although `A*B` can appear to be a common subexpression, it is not because the rounding mode is different at the two evaluation sites. Three final examples:x=xcannot be replaced by the boolean constant`true` , because it fails whenxis a NaN; -x= 0 -xfails forx= +0; andx<yis not the opposite ofxy, because NaNs are neither greater than nor less than ordinary floating-point numbers.Despite these examples, there are useful optimizations that can be done on floating-point code. First of all, there are algebraic identities that are valid for floating-point numbers. Some examples in IEEE arithmetic are x+y=y+x, 2 ×x=x+x, 1 ×x=x, and 0.5×x=x/2. However, even these simple identities can fail on a few machines such as CDC and Cray supercomputers. Instruction scheduling and in-line procedure substitution are two other potentially useful optimizations.28As a final example, consider the expression `dx` `=` `x*y` , where`x` and`y` are single precision variables, and`dx` is double precision. On machines that have an instruction that multiplies two single precision numbers to produce a double precision number,`dx` `=` `x*y` can get mapped to that instruction, rather than compiled to a series of instructions that convert the operands to double and then perform a double to double precision multiply.Some compiler writers view restrictions which prohibit converting ( x+y) +ztox+ (y+z) as irrelevant, of interest only to programmers who use unportable tricks. Perhaps they have in mind that floating-point numbers model real numbers and should obey the same laws that real numbers do. The problem with real number semantics is that they are extremely expensive to implement. Every time twonbit numbers are multiplied, the product will have 2nbits. Every time twonbit numbers with widely spaced exponents are added, the number of bits in the sum is n + the space between the exponents. The sum could have up to(emax-emin) + n bits, or roughly 2·emax+ n bits. An algorithm that involves thousands of operations (such as solving a linear system) will soon be operating on numbers with many significant bits, and be hopelessly slow. The implementation of library functions such as sin and cos is even more difficult, because the value of these transcendental functions aren't rational numbers. Exact integer arithmetic is often provided by lisp systems and is handy for some problems. However, exact floating-point arithmetic is rarely useful.The fact is that there are useful algorithms (like the Kahan summation formula) that exploit the fact that ( x+y) +zx+ (y+z), and work whenever the boundab= (a+b)(1 + ) holds (as well as similar bounds for -, × and /). Since these bounds hold for almost all commercial hardware, it would be foolish for numerical programmers to ignore such algorithms, and it would be irresponsible for compiler writers to destroy these algorithms by pretending that floating-point variables have real number semantics. ## Exception Handling The topics discussed up to now have primarily concerned systems implications of accuracy and precision. Trap handlers also raise some interesting systems issues. 
The IEEE standard strongly recommends that users be able to specify a trap handler for each of the five classes of exceptions, and the section Trap Handlers, gave some applications of user defined trap handlers. In the case of invalid operation and division by zero exceptions, the handler should be provided with the operands, otherwise, with the exactly rounded result. Depending on the programming language being used, the trap handler might be able to access other variables in the program as well. For all exceptions, the trap handler must be able to identify what operation was being performed and the precision of its destination. The IEEE standard assumes that operations are conceptually serial and that when an interrupt occurs, it is possible to identify the operation and its operands. On machines which have pipelining or multiple arithmetic units, when an exception occurs, it may not be enough to simply have the trap handler examine the program counter. Hardware support for identifying exactly which operation trapped may be necessary. Another problem is illustrated by the following program fragment. x = y*z;z = x*w;a = b + c;d = a/x; Suppose the second multiply raises an exception, and the trap handler wants to use the value of `a` . On hardware that can do an add and multiply in parallel, an optimizer would probably move the addition operation ahead of the second multiply, so that the add can proceed in parallel with the first multiply. Thus when the second multiply traps,`a` `=` `b` `+` `c` has already been executed, potentially changing the result of`a` . It would not be reasonable for a compiler to avoid this kind of optimization, because every floating-point operation can potentially trap, and thus virtually all instruction scheduling optimizations would be eliminated. This problem can be avoided by prohibiting trap handlers from accessing any variables of the program directly. Instead, the handler can be given the operands or result as an argument.But there are still problems. In the fragment x = y*z;z = a + b; the two instructions might well be executed in parallel. If the multiply traps, its argument `z` could already have been overwritten by the addition, especially since addition is usually faster than multiply. Computer systems that support the IEEE standard must provide some way to save the value of`z` , either in hardware or by having the compiler avoid such a situation in the first place.W. Kahan has proposed using presubstitutioninstead of trap handlers to avoid these problems. In this method, the user specifies an exception and the value he wants to be used as the result when the exception occurs. As an example, suppose that in code for computing (sinx)/x, the user decides thatx= 0 is so rare that it would improve performance to avoid a test forx= 0, and instead handle this case when a 0/0 trap occurs. Using IEEE trap handlers, the user would write a handler that returns a value of 1 and install it before computing sinx/x. Using presubstitution, the user would specify that when an invalid operation occurs, the value 1 should be used. Kahan calls this presubstitution, because the value to be used must be specified before the exception occurs. When using trap handlers, the value to be returned can be computed when the trap occurs.The advantage of presubstitution is that it has a straightforward hardware implementation. 29As soon as the type of exception has been determined, it can be used to index a table which contains the desired result of the operation. 
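Where neither presubstitution nor user trap handlers are available, a rough software imitation of the sin x/x example can be built from the sticky invalid flag. The C sketch below (C99 <fenv.h>, IEEE 754 assumed; `sinc` is an invented name, and a careful version would also save and restore the caller's flags) keeps the common path free of any test for x = 0:

```c
#include <stdio.h>
#include <fenv.h>
#include <math.h>

#pragma STDC FENV_ACCESS ON

/* sinc-like function: no explicit test for x == 0 in the main path. */
static double sinc(double x) {
    feclearexcept(FE_INVALID);
    volatile double r = sin(x) / x;      /* 0/0 gives NaN and raises invalid */
    if (fetestexcept(FE_INVALID))
        r = 1.0;                         /* the "presubstituted" value */
    return r;
}

int main(void) {
    printf("sinc(0.5) = %.17g\n", sinc(0.5));
    printf("sinc(0.0) = %.17g\n", sinc(0.0));   /* 1, not NaN */
    return 0;
}
```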
Although presubstitution has some attractive attributes, the widespread acceptance of the IEEE standard makes it unlikely to be widely implemented by hardware manufacturers.## The Details A number of claims have been made in this paper concerning properties of floating-point arithmetic. We now proceed to show that floating-point is not black magic, but rather is a straightforward subject whose claims can be verified mathematically. This section is divided into three parts. The first part presents an introduction to error analysis, and provides the details for the section Rounding Error. The second part explores binary to decimal conversion, filling in some gaps from the section The IEEE Standard. The third part discusses the Kahan summation formula, which was used as an example in the section Systems Aspects. ## Rounding Error In the discussion of rounding error, it was stated that a single guard digit is enough to guarantee that addition and subtraction will always be accurate (Theorem 2). We now proceed to verify this fact. Theorem 2 has two parts, one for subtraction and one for addition. The part for subtraction is ## Theorem 9 If x and y are positive floating-point numbers in a format with parametersand p, and if subtraction is done with p + 1 digits (i.e. one guard digit), then the relative rounding error in the result is less thane2e. ## Proof (15) y - < ( - 1)( - Interchange xandyif necessary so thatx> y. It is also harmless to scalexandyso thatxis represented byx0.x1...xp - 1×0. Ifyis represented asy0.y1...yp-1, then the difference is exact. Ifyis represented as0.y1...yp, then the guard digit ensures that the computed difference will be the exact difference rounded to a floating-point number, so the rounding error is at moste. In general, lety= 0.0 ... 0yk + 1...yk +and bepytruncated top+ 1 digits. Then-p- 1+-p- 2+ ...+-p-).k (16) || (/2) - From the definition of guard digit, the computed value of x-yisx- rounded to be a floating-point number, that is, (x- ) + , where the rounding error satisfies-.p (17) - The exact difference is x-y, so the error is (x-y) - (x- + ) = -y+ . There are three cases. If x - y 1 then the relative error is bounded by-[( - 1)(p-1+ ... +-) + /2] <k-(1 + /2) .p > (- 1)( - Secondly, if x- < 1, then = 0. Since the smallest thatx-ycan be is-1+ ... +-), where = - 1,k (18) . - in this case the relative error is bounded by - The final case is when x- y < 1 butx- 1. The only way this could happen is ifx- = 1, in which case = 0. But if = 0, then (18) applies, so that again the relative error is bounded by-<p-(1 + /2). zpWhen = 2, the bound is exactly 2 e, and this bound is achieved forx= 1 + 22 -andpy= 21 -- 2p1 - 2in the limit aspp. When adding numbers of the same sign, a guard digit is not necessary to achieve good accuracy, as the following result shows.## Theorem 10 If x0and y0, then the relative error in computing x + y is at most2, even if no guard digits are used.## Proof . - The algorithm for addition with kguard digits is similar to that for subtraction. Ifxy, shiftyright until the radix points ofxandyare aligned. Discard any digits shifted past thep+kposition. Compute the sum of these twop+kdigit numbers exactly. Then round topdigits.- We will verify the theorem when no guard digits are used; the general case is similar. There is no loss of generality in assuming that xy0 and thatxis scaled to be of the formd.dd...d×0. First, assume there is no carry out. 
Then the digits shifted off the end ofyhave a value less than-p+ 1, and the sum is at least 1, so the relative error is less than-p+1/1 = 2e. If there is a carry out, then the error from shifting must be added to the rounding error of The sum is at least , so the relative error is less than 2. z It is obvious that combining these two theorems gives Theorem 2. Theorem 2 gives the relative error for performing one operation. Comparing the rounding error of (19)x2-y2and (x+y) (x-y) requires knowing the relative error of multiple operations. The relative error ofxyis1= [(xy) - (x-y)] / (x-y), which satisfies |1| 2e. Or to write it another wayxy= (x-y) (1 +1), |1| 2e (20)xy= (x+y) (1 +2), |2| 2e Assuming that multiplication is performed by computing the exact product and then rounding, the relative error is at most .5 ulp, so (21)uv=uv(1 +3), |3|e for any floating-point numbers (22) (uandv. Putting these three equations together (lettingu=xyandv=xy) givesxy) (xy) = (x-y) (1 +1) (x+y) (1 +2) (1 +3) So the relative error incurred when computing ( (23)x-y) (x+y) is This relative error is equal to 1+2+3+12+13+23+123, which is bounded by 5 + 82. In other words, the maximum relative error is about 5 rounding errors (sinceeis a small number,e2is almost negligible).A similar analysis of ( (x x) (y y) = [xx) (yy) cannot result in a small value for the relative error, because when two nearby values ofxandyare plugged intox2- y2, the relative error will usually be quite large. Another way to see this is to try and duplicate the analysis that worked on (xy) (xy), yieldingx2(1 +1) -y2(1 +2)] (1 +3) = ((x2-y2) (1 +1) + (1-2)y2) (1 +3) When xandyare nearby, the error term (1-2)y2can be as large as the resultx2- y2. These computations formally justify our claim that (x-y) (x+y) is more accurate thanx2-y2.We next turn to an analysis of the formula for the area of a triangle. In order to estimate the maximum error that can occur when computing with (7), the following fact will be needed. ## Theorem 11 If subtraction is performed with a guard digit, and y/2x2y, then x - y is computed exactly.## Proof - Note that if xandyhave the same exponent, then certainlyxyis exact. Otherwise, from the condition of the theorem, the exponents can differ by at most 1. Scale and interchangexandyif necessary so that 0yx, andxis represented asx0.x1...xp - 1andyas 0.y1...yp. Then the algorithm for computingxywill computex-yexactly and round to a floating-point number. If the difference is of the form 0.d1...dp, the difference will already bepdigits long, and no rounding is necessary. Sincex2y,x-yy, and sinceyis of the form 0.d1...dp, so isx-y. zWhen > 2, the hypothesis of Theorem 11 cannot be replaced by y/xy; the stronger conditiony/2x2yis still necessary. The analysis of the error in (x-y) (x+y), immediately following the proof of Theorem 10, used the fact that the relative error in the basic operations of addition and subtraction is small (namely equations (19) and (20)). This is the most common kind of error analysis. However, analyzing formula (7) requires something more, namely Theorem 11, as the following proof will show.## Theorem 12 If subtraction uses a guard digit, and if a,b and c are the sides of a triangle (abc), then the relative error in computing (a + (b + c))(c - (a - b))(c + (a - b))(a +(b - c)) is at most16, providede< .005.## Proof ( - Let's examine the factors one by one. From Theorem 10, bc= (b+c) (1 +1), where1is the relative error, and |1| 2. 
Then the value of the first factor isa(bc)) = (a+ (bc)) (1 +2) = (a+ (b+ c) (1 +1))(1 +2), ( - and thus a+b+c) (1 - 2)2[a+ (b+c) (1 - 2)] · (1-2) a (bc) [a+ (b+c) (1 + 2)] (1 + 2) (a+b+c) (1 + 2)2 (24) (a (b c)) = (a + b + c) (1 + - This means that there is an 1so that1)2, |1| 2. (25) ( - The next term involves the potentially catastrophic subtraction of canda`b` , becauseabmay have rounding error. Because a, b and c are the sides of a triangle, a b+c, and combining this with the orderingcbagivesab+c2b2a. Soa-bsatisfies the conditions of Theorem 11. This means thata-b=abis exact, hencec(a- b) is a harmless subtraction which can be estimated from Theorem 9 to bec(ab)) = (c- (a-b)) (1 +2), |2| 2 (26) ( - The third term is the sum of two exact positive quantities, so c(ab)) = (c + (a-b)) (1 +3), |3| 2 (27) ( - Finally, the last term is a(bc)) = (a+ (b- c)) (1 +4)2, |4| 2, ( - using both Theorem 9 and Theorem 10. If multiplication is assumed to be exactly rounded, so that xy=xy(1 + ) with || , then combining (24), (25), (26) and (27) givesa(b c)) (c (a b)) (c (a b)) (a(bc)) (a+ (b+ c)) (c- (a-b)) (c+ (a-b)) (a + (b -c))E - where E= (1 +1)2(1 +2) (1 +3) (1 +4)2(1 +1)(1 +2) (1 +3) - An upper bound for Eis (1 + 2)6(1 + )3, which expands out to 1 + 15 + O(2). Some writers simply ignore the O(e2) term, but it is easy to account for it. Writing (1 + 2)6(1 + )3= 1 + 15 +R(),R() is a polynomial inewith positive coefficients, so it is an increasing function of . SinceR(.005) = .505,R() < 1 for all < .005, and henceE(1 + 2)6(1 + )3< 1 + 16. To get a lower bound onE, note that 1 - 15 - R() < E, and so when < .005, 1 - 16 < (1 - 2)6(1 - )3. Combining these two bounds yields 1 - 16 <E< 1 + 16. Thus the relative error is at most 16. zTheorem 12 certainly shows that there is no catastrophic cancellation in formula (7). So although it is not necessary to show formula (7) is numerically stable, it is satisfying to have a bound for the entire formula, which is what Theorem 3 of Cancellation gives. ## Proof of Theorem 3 - Let q= (a+ (b+c)) (c- (a-b)) (c+ (a-b)) (a+ (b-c)) - and Q= (a(bc)) (c (ab)) (c(ab)) (a(bc)). (28) - Then, Theorem 12 shows that Q = q(1 + ), with 16. It is easy to check that , - provided .04/(.52) 2.15, and since || 16 16(.005) = .08, does satisfy the condition. Thus - with | 1| .52|| 8.5. If square roots are computed to within .5 ulp, then the error when computing is (1 +1)(1 +2), with |2| . If = 2, then there is no further error committed when dividing by 4. Otherwise, one more factor 1 +3with |3| is necessary for the division, and using the method in the proof of Theorem 12, the final error bound of (1 +1) (1 +2) (1 +3) is dominated by 1 +4, with |4| 11. zTo make the heuristic explanation immediately following the statement of Theorem 4 precise, the next theorem describes just how closely µ( x) approximates a constant.## Theorem 13 Ifµ(x)= ln(1 +x)/x, then for0x,µ(x)1and the derivative satisfies |µ'(x)|.## Proof - Note that µ( x) = 1 -x/2 +x2/3 - ... is an alternating series with decreasing terms, so forx1, µ(x) 1 -x/2 1/2. It is even easier to see that because the series for µ is alternating, µ(x) 1. The Taylor series of µ'(x) is also alternating, and ifxhas decreasing terms, so -µ'(x) -+ 2x/3, or -µ'(x) 0, thus |µ'(x)|. z## Proof of Theorem 4 - Since the Taylor series for ln (29) (1 + - is an alternating series, 0 < x- ln(1 +x) <x2/2, the relative error incurred when approximating ln(1 +x) byxis bounded byx/2. 
If 1x= 1, then |x| < , so the relative error is bounded by /2.- When 1 x1, define via 1x= 1 + . Then since 0x< 1, (1x) 1 = . If division and logarithms are computed to withinulp, then the computed value of the expression ln(1 +x)/((1 +x) - 1) is1) (1 +2) = (1 +1) (1 +2) = µ( ) (1 +1) (1 +2) where | (30) µ( ) - µ(1| and |2| . To estimate µ( ), use the mean value theorem, which says thatx) = ( -x)µ'() - for some between xand . From the definition of , it follows that | -x| , and combining this with Theorem 13 gives |µ( ) - µ(x)| /2, or |µ( )/µ(x) - 1| /(2|µ(x)|)which means that µ( ) = µ(x)(1 +3), with |3| . Finally, multiplying byxintroduces a final4, so the computed value ofx·ln(1x)/((1x) 1) (1 + - It is easy to check that if < 0.1, then 1) (1 +2) (1 +3) (1 +4) = 1 + , An interesting example of error analysis using formulas (19), (20), and (21) occurs in the quadratic formula . The section Cancellation, explained how rewriting the equation will eliminate the potential cancellation caused by the ± operation. But there is another potential cancellation that can occur when computing d=b2- 4ac. This one cannot be eliminated by a simple rearrangement of the formula. Roughly speaking, whenb24ac, rounding error can contaminate up to half the digits in the roots computed with the quadratic formula. Here is an informal proof (another approach to estimating the error in the quadratic formula appears in Kahan [1972]). If b24ac, rounding error can contaminate up to half the digits in the roots computed with the quadratic formula.Proof: Write ( .bb) (4ac) = (b2(1 +1) - 4ac(1 +2)) (1 +3), where || .i30Usingd=b2- 4ac, this can be rewritten as (d(1 +1) - 4ac(2-1)) (1 +3). To get an estimate for the size of this error, ignore second order terms in, in which case the absolute error isid(1+3) - 4ac4, where |4| = |1-2| 2. Since , the first termd(1+3) can be ignored. To estimate the second term, use the fact thatax2+bx+c=a(x-r1) (x-r2), soar1r2=c. Sinceb24ac, thenr1r2, so the second error term is . Thus the computed value of is , , so the absolute error in ais about . Since4-, , and thus the absolute error of destroys the bottom half of the bits of the rootspr1r2. In other words, since the calculation of the roots involves computing with , and this expression does not have meaningful bits in the position corresponding to the lower order half ofri, then the lower order bits ofricannot be meaningful. zFinally, we turn to the proof of Theorem 6. It is based on the following fact, which is proven in the section Theorem 14 and Theorem 8. ## Theorem 14 Let0< k < p, and set m =k+1, and assume that floating-point operations are exactly rounded. Then (mx)(mxx) is exactly equal to x rounded to p - k significant digits. More precisely, x is rounded by taking the significand of x, imagining a radix point just left of the k least significant digits and rounding to an integer.## Proof of Theorem 6 - By Theorem 14, xhisxrounded top- k = places. If there is no carry out, then certainlyxhcan be represented with significant digits. Suppose there is a carry-out. Ifx=x0.x1...xp - 1×, then rounding adds 1 toexp -k- 1, and the only way there can be a carry-out is ifxp -k- 1= - 1, but then the low order digit ofxhis 1 +xp -k- 1= 0, and so againxhis representable in digits.- To deal with xl, scalexto be an integer satisfyingp- 1x- 1. Let where is thepp-khigh order digits ofx, and is theklow order digits. There are three cases to consider. If , then roundingxtop-kplaces is the same as chopping and , and . 
Since has at mostkdigits, if p is even, then has at most k = = digits. Otherwise, = 2 and is representable withk- 1 significant bits. The second case is when , and then computingxhinvolves rounding up, soxh= +, andkxl=x-xh=x- -k= -. Once again, has at mostkkdigits, so is representable with p/2 digits. Finally, if = (/2)k- 1, thenxh= or +depending on whether there is a round up. Sokxlis either (/2)k- 1or (/2)k- 1-= -kk/2, both of which are represented with 1 digit. zTheorem 6 gives a way to express the product of two working precision numbers exactly as a sum. There is a companion formula for expressing a sum exactly. If | x| |y| thenx+y= (xy) + (x(xy))y[Dekker 1971; Knuth 1981, Theorem C in section 4.2.2]. However, when using exactly rounded operations, this formula is only true for = 2, and not for = 10 as the examplex= .99998,y= .99997 shows.## Binary to Decimal Conversion Since single precision has p= 24, and 224< 108, you might expect that converting a binary number to 8 decimal digits would be sufficient to recover the original binary number. However, this is not the case.## Theorem 15 When a binary IEEE single precision number is converted to the closest eight digit decimal number, it is not always possible to uniquely recover the binary number from the decimal one. However, if nine decimal digits are used, then converting the decimal number to the closest binary number will recover the original floating-point number.## Proof [ - Binary single precision numbers lying in the half open interval [10 3, 210) = [1000, 1024) have 10 bits to the left of the binary point, and 14 bits to the right of the binary point. Thus there are (210- 103)214= 393,216 different binary numbers in that interval. If decimal numbers are represented with 8 digits, then there are (210- 103)104= 240,000 decimal numbers in the same interval. There is no way that 240,000 decimal numbers could represent 393,216 different binary numbers. So 8 decimal digits are not enough to uniquely represent each single precision binary number.- To show that 9 digits are sufficient, it is enough to show that the spacing between binary numbers is always greater than the spacing between decimal numbers. This will ensure that for each decimal number N, the intervalN- ulp,N+ ulp] - contains at most one binary number. Thus each binary number rounds to a unique decimal number which in turn rounds to a unique binary number. - To show that the spacing between binary numbers is always greater than the spacing between decimal numbers, consider an interval [10 , 10nn+ 1]. On this interval, the spacing between consecutive decimal numbers is 10(n+ 1) - 9. On [10, 2n], wheremmis the smallest integer so that 10n< 2, the spacing of binary numbers is 2mm- 24, and the spacing gets larger further on in the interval. Thus it is enough to check that 10(n+ 1) - 9< 2m- 24. But in fact, since 10< 2n, then 10m(n+ 1) - 9= 1010n-8< 210m-8< 22m-24. zThe same argument applied to double precision shows that 17 decimal digits are required to recover a double precision number. Binary-decimal conversion also provides another example of the use of flags. Recall from the section Precision, that to recover a binary number from its decimal expansion, the decimal to binary conversion must be computed exactly. That conversion is performed by multiplying the quantities Nand 10|P|(which are both exact ifp< 13) in single-extended precision and then rounding this to single precision (or dividing ifp< 0; both cases are similar). 
Of course the computation of N · 10^|P| cannot be exact; it is the combined operation round(N · 10^|P|) that must be exact, where the rounding is from single-extended to single precision. To see why it might fail to be exact, take the simple case of β = 10, p = 2 for single, and p = 3 for single-extended. If the product is to be 12.51, then this would be rounded to 12.5 as part of the single-extended multiply operation. Rounding to single precision would give 12. But that answer is not correct, because rounding the product to single precision should give 13. The error is due to double rounding.

By using the IEEE flags, double rounding can be avoided as follows. Save the current value of the inexact flag, and then reset it. Set the rounding mode to round-to-zero. Then perform the multiplication N · 10^|P|. Store the new value of the inexact flag in `ixflag`, and restore the rounding mode and inexact flag. If `ixflag` is 0, then N · 10^|P| is exact, so round(N · 10^|P|) will be correct down to the last bit. If `ixflag` is 1, then some digits were truncated, since round-to-zero always truncates. The significand of the product will look like 1.b₁...b₂₂b₂₃...b₃₁. A double rounding error may occur if b₂₃...b₃₁ = 10...0. A simple way to account for both cases is to perform a logical `OR` of `ixflag` with b₃₁. Then round(N · 10^|P|) will be computed correctly in all cases.

## Errors In Summation

The section Optimizers, mentioned the problem of accurately computing very long sums. The simplest approach to improving accuracy is to double the precision. To get a rough estimate of how much doubling the precision improves the accuracy of a sum, let s₁ = x₁, s₂ = s₁ ⊕ x₂, …, sᵢ = sᵢ₋₁ ⊕ xᵢ. Then sᵢ = (1 + δᵢ)(sᵢ₋₁ + xᵢ), where |δᵢ| ≤ ε, and ignoring second order terms in δᵢ gives

(31)  sₙ = Σⱼ xⱼ(1 + Σ_{k=j}^{n} δₖ) = Σⱼ xⱼ + Σⱼ xⱼ(Σ_{k=j}^{n} δₖ)   (taking δ₁ = 0, since s₁ = x₁ involves no rounding)

The first equality of (31) shows that the computed value of Σ xⱼ is the same as if an exact summation was performed on perturbed values of xⱼ. The first term x₁ is perturbed by nε, the last term xₙ by only ε. The second equality in (31) shows that the error term is bounded by nε Σ |xⱼ|. Doubling the precision has the effect of squaring ε. If the sum is being done in an IEEE double precision format, 1/ε ≈ 10¹⁶, so that nε ≪ 1 for any reasonable value of n. Thus, doubling the precision takes the maximum perturbation of nε and changes it to nε². Thus the 2ε error bound for the Kahan summation formula (Theorem 8) is not as good as using double precision, even though it is much better than single precision.

For an intuitive explanation of why the Kahan summation formula works, consider the following diagram of the procedure. Each time a summand is added, there is a correction factor C which will be applied on the next loop. So first subtract the correction C computed in the previous loop from Xⱼ, giving the corrected summand Y. Then add this summand to the running sum S. The low order bits of Y (namely Yₗ) are lost in the sum. Next compute the high order bits of Y by computing T - S. When Y is subtracted from this, the low order bits of Y will be recovered. These are the bits that were lost in the first sum in the diagram. They become the correction factor for the next loop. A formal proof of Theorem 8, taken from Knuth [1981] page 572, appears in the section Theorem 14 and Theorem 8.

## Summary

It is not uncommon for computer system designers to neglect the parts of a system related to floating-point. This is probably due to the fact that floating-point is given very little (if any) attention in the computer science curriculum.
This in turn has caused the apparently widespread belief that floating-point is not a quantifiable subject, and so there is little point in fussing over the details of hardware and software that deal with it.

This paper has demonstrated that it is possible to reason rigorously about floating-point. For example, floating-point algorithms involving cancellation can be proven to have small relative errors if the underlying hardware has a guard digit, and there is an efficient algorithm for binary-decimal conversion that can be proven to be invertible, provided that extended precision is supported. The task of constructing reliable floating-point software is made much easier when the underlying computer system is supportive of floating-point. In addition to the two examples just mentioned (guard digits and extended precision), the section Systems Aspects of this paper has examples ranging from instruction set design to compiler optimization illustrating how to better support floating-point.

The increasing acceptance of the IEEE floating-point standard means that codes that utilize features of the standard are becoming ever more portable. The section The IEEE Standard, gave numerous examples illustrating how the features of the IEEE standard can be used in writing practical floating-point codes.

## Acknowledgments

This article was inspired by a course given by W. Kahan at Sun Microsystems from May through July of 1988, which was very ably organized by David Hough of Sun. My hope is to enable others to learn about the interaction of floating-point and computer systems without having to get up in time to attend 8:00 a.m. lectures. Thanks are due to Kahan and many of my colleagues at Xerox PARC (especially John Gilbert) for reading drafts of this paper and providing many useful comments. Reviews from Paul Hilfinger and an anonymous referee also helped improve the presentation.

## References

Aho, Alfred V., Sethi, R., and Ullman J. D. 1986. Compilers: Principles, Techniques and Tools, Addison-Wesley, Reading, MA.

ANSI 1978. American National Standard Programming Language FORTRAN, ANSI Standard X3.9-1978, American National Standards Institute, New York, NY.

Barnett, David 1987. A Portable Floating-Point Environment, unpublished manuscript.

Brown, W. S. 1981. A Simple but Realistic Model of Floating-Point Computation, ACM Trans. on Math. Software 7(4), pp. 445-480.

Cody, W. J. et al. 1984. A Proposed Radix- and Word-length-independent Standard for Floating-point Arithmetic, IEEE Micro 4(4), pp. 86-100.

Cody, W. J. 1988. Floating-Point Standards -- Theory and Practice, in "Reliability in Computing: the role of interval methods in scientific computing", ed. by Ramon E. Moore, pp. 99-107, Academic Press, Boston, MA.

Coonen, Jerome 1984. Contributions to a Proposed Standard for Binary Floating-Point Arithmetic, PhD Thesis, Univ. of California, Berkeley.

Dekker, T. J. 1971. A Floating-Point Technique for Extending the Available Precision, Numer. Math. 18(3), pp. 224-242.

Demmel, James 1984. Underflow and the Reliability of Numerical Software, SIAM J. Sci. Stat. Comput. 5(4), pp. 887-919.

Farnum, Charles 1988. Compiler Support for Floating-point Computation, Software-Practice and Experience, 18(7), pp. 701-709.

Forsythe, G. E. and Moler, C. B. 1967. Computer Solution of Linear Algebraic Systems, Prentice-Hall, Englewood Cliffs, NJ.

Goldberg, I. Bennett 1967. 27 Bits Are Not Enough for 8-Digit Accuracy, Comm. of the ACM. 10(2), pp. 105-106.
Goldberg, David 1990. Computer Arithmetic, in "Computer Architecture: A Quantitative Approach", by David Patterson and John L. Hennessy, Appendix A, Morgan Kaufmann, Los Altos, CA.

Golub, Gene H. and Van Loan, Charles F. 1989. Matrix Computations, 2nd edition, The Johns Hopkins University Press, Baltimore, Maryland.

Graham, Ronald L., Knuth, Donald E. and Patashnik, Oren. 1989. Concrete Mathematics, Addison-Wesley, Reading, MA, p. 162.

Hewlett Packard 1982. HP-15C Advanced Functions Handbook.

IEEE 1987. IEEE Standard 754-1985 for Binary Floating-point Arithmetic, IEEE, (1985). Reprinted in SIGPLAN 22(2) pp. 9-25.

Kahan, W. 1972. A Survey Of Error Analysis, in Information Processing 71, Vol 2, pp. 1214-1239 (Ljubljana, Yugoslavia), North Holland, Amsterdam.

Kahan, W. 1986. Calculating Area and Angle of a Needle-like Triangle, unpublished manuscript.

Kahan, W. 1987. Branch Cuts for Complex Elementary Functions, in "The State of the Art in Numerical Analysis", ed. by M.J.D. Powell and A. Iserles (Univ of Birmingham, England), Chapter 7, Oxford University Press, New York.

Kahan, W. 1988. Unpublished lectures given at Sun Microsystems, Mountain View, CA.

Kahan, W. and Coonen, Jerome T. 1982. The Near Orthogonality of Syntax, Semantics, and Diagnostics in Numerical Programming Environments, in "The Relationship Between Numerical Computation And Programming Languages", ed. by J. K. Reid, pp. 103-115, North-Holland, Amsterdam.

Kahan, W. and LeBlanc, E. 1985. Anomalies in the IBM Acrith Package, Proc. 7th IEEE Symposium on Computer Arithmetic (Urbana, Illinois), pp. 322-331.

Kernighan, Brian W. and Ritchie, Dennis M. 1978. The C Programming Language, Prentice-Hall, Englewood Cliffs, NJ.

Kirchner, R. and Kulisch, U. 1987. Arithmetic for Vector Processors, Proc. 8th IEEE Symposium on Computer Arithmetic (Como, Italy), pp. 256-269.

Knuth, Donald E. 1981. The Art of Computer Programming, Volume II, Second Edition, Addison-Wesley, Reading, MA.

Kulisch, U. W., and Miranker, W. L. 1986. The Arithmetic of the Digital Computer: A New Approach, SIAM Review 28(1), pp. 1-36.

Matula, D. W. and Kornerup, P. 1985. Finite Precision Rational Arithmetic: Slash Number Systems, IEEE Trans. on Comput. C-34(1), pp. 3-18.

Nelson, G. 1991. Systems Programming With Modula-3, Prentice-Hall, Englewood Cliffs, NJ.

Reiser, John F. and Knuth, Donald E. 1975. Evading the Drift in Floating-point Addition, Information Processing Letters 3(3), pp. 84-87.

Sterbenz, Pat H. 1974. Floating-Point Computation, Prentice-Hall, Englewood Cliffs, NJ.

Swartzlander, Earl E. and Alexopoulos, Aristides G. 1975. The Sign/Logarithm Number System, IEEE Trans. Comput. C-24(12), pp. 1238-1242.

Walther, J. S. 1971. A unified algorithm for elementary functions, Proceedings of the AFIP Spring Joint Computer Conf. 38, pp. 379-385.

## Theorem 14 and Theorem 8

This section contains two of the more technical proofs that were omitted from the text.

## Theorem 14

Let 0 < k < p, and set m = βᵏ + 1, and assume that floating-point operations are exactly rounded. Then (m ⊗ x) ⊖ (m ⊗ x ⊖ x) is exactly equal to x rounded to p - k significant digits. More precisely, x is rounded by taking the significand of x, imagining a radix point just left of the k least significant digits, and rounding to an integer.

## Proof

The proof breaks up into two cases, depending on whether or not the computation of mx = βᵏx + x has a carry-out or not.

Assume there is no carry out. It is harmless to scale x so that it is an integer.
Then the computation of mx = βᵏx + x looks like this:

          aa...aabb...bb
    +            aa...aabb...bb
    =     zz...zzbb...bb

where x has been partitioned into two parts. The low order k digits are marked `b` and the high order p - k digits are marked `a`. To compute m ⊗ x from mx involves rounding off the low order k digits (the ones marked with `b`) so

(32)  m ⊗ x = mx - x mod(βᵏ) + rβᵏ

The value of r is 1 if `.bb...b` is greater than 1/2 and 0 otherwise. More precisely

(33)  r = 1 if `a.bb...b` rounds to a + 1, r = 0 otherwise.

Next compute m ⊗ x - x = mx - x mod(βᵏ) + rβᵏ - x = βᵏ(x + r) - x mod(βᵏ). The picture below shows the computation of m ⊗ x - x rounded, that is, (m ⊗ x) ⊖ x. The top line is βᵏ(x + r), where `B` is the digit that results from adding `r` to the lowest order digit `b`.

          aa...aabb...bB00...00
    -            bb...bb
    =     zz...zzZ00...00

If `.bb...b` < 1/2 then r = 0, subtracting causes a borrow from the digit marked `B`, but the difference is rounded up, and so the net effect is that the rounded difference equals the top line, which is βᵏx. If `.bb...b` > 1/2 then r = 1, and 1 is subtracted from `B` because of the borrow, so the result is βᵏx. Finally consider the case `.bb...b` = 1/2. If r = 0 then `B` is even, `Z` is odd, and the difference is rounded up, giving βᵏx. Similarly when r = 1, `B` is odd, `Z` is even, the difference is rounded down, so again the difference is βᵏx. To summarize

(34)  (m ⊗ x) ⊖ x = βᵏx

Combining equations (32) and (34) gives (m ⊗ x) - ((m ⊗ x) ⊖ x) = x - x mod(βᵏ) + rβᵏ. The result of performing this computation is

          r00...00
    +     aa...aabb...bb
    -            bb...bb
    =     aa...aA00...00

The rule for computing r, equation (33), is the same as the rule for rounding `a...ab...b` to p - k places. Thus computing mx - (mx - x) in floating-point arithmetic precision is exactly equal to rounding x to p - k places, in the case when x + βᵏx does not carry out.

When x + βᵏx does carry out, then mx = βᵏx + x looks like this:

          aa...aabb...bb
    +            aa...aabb...bb
    =     zz...zZbb...bb

Thus, m ⊗ x = mx - x mod(βᵏ) + wβᵏ, where w = -Z if Z < β/2, but the exact value of w is unimportant. Next, m ⊗ x - x = βᵏx - x mod(βᵏ) + wβᵏ. In a picture

          aa...aabb...bb00...00
    +      w
    -            bb...bb
    =     zz...zZbb...bb      ³¹

Rounding gives (m ⊗ x) ⊖ x = βᵏx + wβᵏ - rβᵏ, where r = 1 if `.bb...b` > 1/2 or if `.bb...b` = 1/2 and b₀ = 1.³² Finally,

(m ⊗ x) - ((m ⊗ x) ⊖ x) = mx - x mod(βᵏ) + wβᵏ - (βᵏx + wβᵏ - rβᵏ) = x - x mod(βᵏ) + rβᵏ.

And once again, r = 1 exactly when rounding `a...ab...b` to p - k places involves rounding up. Thus Theorem 14 is proven in all cases. ❚

## Theorem 8 (Kahan Summation Formula)

Suppose that Σ_{j=1}^{N} xⱼ is computed using the following algorithm

    S = X[1];
    C = 0;
    for j = 2 to N {
        Y = X[j] - C;
        T = S + Y;
        C = (T - S) - Y;
        S = T;
    }

Then the computed sum S is equal to S = Σ xⱼ(1 + δⱼ) + O(Nε²) Σ |xⱼ|, where |δⱼ| ≤ 2ε.

## Proof

First recall how the error estimate for the simple formula Σ xᵢ went. Introduce s₁ = x₁, sᵢ = (1 + δᵢ)(sᵢ₋₁ + xᵢ). Then the computed sum is sₙ, which is a sum of terms, each of which is an xᵢ multiplied by an expression involving δⱼ's. The exact coefficient of x₁ is (1 + δ₂)(1 + δ₃) ⋯ (1 + δₙ), and so by renumbering, the coefficient of x₂ must be (1 + δ₃)(1 + δ₄) ⋯ (1 + δₙ), and so on. The proof of Theorem 8 runs along exactly the same lines, only the coefficient of x₁ is more complicated. In detail s₀ = c₀ = 0 and

    yₖ = xₖ ⊖ cₖ₋₁ = (xₖ - cₖ₋₁)(1 + ηₖ)
    sₖ = sₖ₋₁ ⊕ yₖ = (sₖ₋₁ + yₖ)(1 + σₖ)
    cₖ = (sₖ ⊖ sₖ₋₁) ⊖ yₖ = [(sₖ - sₖ₋₁)(1 + γₖ) - yₖ](1 + δₖ)

where all the Greek letters are bounded by ε.
Although the coefficient of x₁ in sₖ is the ultimate expression of interest, it turns out to be easier to compute the coefficient of x₁ in sₖ - cₖ and cₖ.

When k = 1,

    c₁ = (s₁(1 + γ₁) - y₁)(1 + δ₁)
       = y₁((1 + σ₁)(1 + γ₁) - 1)(1 + δ₁)
       = x₁(σ₁ + γ₁ + σ₁γ₁)(1 + δ₁)(1 + η₁)
    s₁ - c₁ = x₁[(1 + σ₁) - (σ₁ + γ₁ + σ₁γ₁)(1 + δ₁)](1 + η₁)
            = x₁[1 - γ₁ - σ₁δ₁ - σ₁γ₁ - δ₁γ₁ - σ₁γ₁δ₁](1 + η₁)

Calling the coefficients of x₁ in these expressions Cₖ and Sₖ respectively, then

    C₁ = 2ε + O(ε²)
    S₁ = 1 + η₁ - γ₁ + 4ε² + O(ε³)

To get the general formula for Sₖ and Cₖ, expand the definitions of sₖ and cₖ, ignoring all terms involving xᵢ with i > 1 to get

    sₖ = (sₖ₋₁ + yₖ)(1 + σₖ)
       = [sₖ₋₁ + (xₖ - cₖ₋₁)(1 + ηₖ)](1 + σₖ)
       = [(sₖ₋₁ - cₖ₋₁) - ηₖcₖ₋₁](1 + σₖ)
    cₖ = [{sₖ - sₖ₋₁}(1 + γₖ) - yₖ](1 + δₖ)
       = [{((sₖ₋₁ - cₖ₋₁) - ηₖcₖ₋₁)(1 + σₖ) - sₖ₋₁}(1 + γₖ) + cₖ₋₁(1 + ηₖ)](1 + δₖ)
       = [{(sₖ₋₁ - cₖ₋₁)σₖ - ηₖcₖ₋₁(1 + σₖ) - cₖ₋₁}(1 + γₖ) + cₖ₋₁(1 + ηₖ)](1 + δₖ)
       = [(sₖ₋₁ - cₖ₋₁)σₖ(1 + γₖ) - cₖ₋₁(γₖ + ηₖ(σₖ + γₖ + σₖγₖ))](1 + δₖ),
    sₖ - cₖ = ((sₖ₋₁ - cₖ₋₁) - ηₖcₖ₋₁)(1 + σₖ)
            - [(sₖ₋₁ - cₖ₋₁)σₖ(1 + γₖ) - cₖ₋₁(γₖ + ηₖ(σₖ + γₖ + σₖγₖ))](1 + δₖ)
            = (sₖ₋₁ - cₖ₋₁)((1 + σₖ) - σₖ(1 + γₖ)(1 + δₖ))
            + cₖ₋₁(-ηₖ(1 + σₖ) + (γₖ + ηₖ(σₖ + γₖ + σₖγₖ))(1 + δₖ))
            = (sₖ₋₁ - cₖ₋₁)(1 - σₖ(γₖ + δₖ + γₖδₖ))
            + cₖ₋₁[-ηₖ + γₖ + ηₖ(γₖ + σₖγₖ) + (γₖ + ηₖ(σₖ + γₖ + σₖγₖ))δₖ]

Since Sₖ and Cₖ are only being computed up to order ε², these formulas can be simplified to

    Cₖ = (σₖ + O(ε²))Sₖ₋₁ + (-γₖ + O(ε²))Cₖ₋₁
    Sₖ = (1 + 2ε² + O(ε³))Sₖ₋₁ + (2ε + O(ε²))Cₖ₋₁

Using these formulas gives

    C₂ = σ₂ + O(ε²)
    S₂ = 1 + η₁ - γ₁ + 10ε² + O(ε³)

and in general it is easy to check by induction that

    Cₖ = σₖ + O(ε²)
    Sₖ = 1 + η₁ - γ₁ + (4k + 2)ε² + O(ε³)

Finally, what is wanted is the coefficient of x₁ in sₖ. To get this value, let xₙ₊₁ = 0, let all the Greek letters with subscripts of n + 1 equal 0, and compute sₙ₊₁. Then sₙ₊₁ = sₙ - cₙ, and the coefficient of x₁ in sₙ is less than the coefficient in sₙ₊₁, which is Sₙ = 1 + η₁ - γ₁ + (4n + 2)ε² = (1 + 2ε + O(nε²)). ❚

## Differences Among IEEE 754 Implementations

Note - This section is not part of the published paper. It has been added to clarify certain points and correct possible misconceptions about the IEEE standard that the reader might infer from the paper. This material was not written by David Goldberg, but it appears here with his permission.

The preceding paper has shown that floating-point arithmetic must be implemented carefully, since programmers may depend on its properties for the correctness and accuracy of their programs. In particular, the IEEE standard requires a careful implementation, and it is possible to write useful programs that work correctly and deliver accurate results only on systems that conform to the standard. The reader might be tempted to conclude that such programs should be portable to all IEEE systems. Indeed, portable software would be easier to write if the remark "When a program is moved between two machines and both support IEEE arithmetic, then if any intermediate result differs, it must be because of software bugs, not from differences in arithmetic," were true.

Unfortunately, the IEEE standard does not guarantee that the same program will deliver identical results on all conforming systems. Most programs will actually produce different results on different systems for a variety of reasons. For one, most programs involve the conversion of numbers between decimal and binary formats, and the IEEE standard does not completely specify the accuracy with which such conversions must be performed. For another, many programs use elementary functions supplied by a system library, and the standard doesn't specify these functions at all. Of course, most programmers know that these features lie beyond the scope of the IEEE standard.
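The conversion issue is easy to poke at directly. Below is a minimal, hedged sketch (not from the paper; the variable names and the choice of test value are mine): on a system whose decimal-binary conversions are correctly rounded, printing a double with 17 significant decimal digits and reading it back recovers the original bit pattern, exactly as the 17-digit figure quoted earlier for double precision predicts. On a system whose conversions are less accurate (which, as noted above, the standard permits), the round trip is not guaranteed.

```c
/* Hedged sketch: check the 17-digit decimal round trip for a double.
 * Assumes C99 and correctly rounded printf/strtod conversions; the
 * test value 0.1 is arbitrary. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    double x = 0.1;                 /* any double will do */
    char buf[64];

    sprintf(buf, "%.17g", x);       /* 17 significant digits suffice for double */
    double y = strtod(buf, NULL);

    printf("%s -> %s\n", buf,
           memcmp(&x, &y, sizeof x) == 0 ? "bits recovered exactly"
                                         : "bits differ");
    return 0;
}
```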
Many programmers may not realize that even a program that uses only the numeric formats and operations prescribed by the IEEE standard can compute different results on different systems. In fact, the authors of the standard intended to allow different implementations to obtain different results. Their intent is evident in the definition of the term *destination* in the IEEE 754 standard: "A destination may be either explicitly designated by the user or implicitly supplied by the system (for example, intermediate results in subexpressions or arguments for procedures). Some languages place the results of intermediate calculations in destinations beyond the user's control. Nonetheless, this standard defines the result of an operation in terms of that destination's format and the operands' values." (IEEE 754-1985, p. 7) In other words, the IEEE standard requires that each result be rounded correctly to the precision of the destination into which it will be placed, but the standard does not require that the precision of that destination be determined by a user's program. Thus, different systems may deliver their results to destinations with different precisions, causing the same program to produce different results (sometimes dramatically so), even though those systems all conform to the standard.

Several of the examples in the preceding paper depend on some knowledge of the way floating-point arithmetic is rounded. In order to rely on examples such as these, a programmer must be able to predict how a program will be interpreted, and in particular, on an IEEE system, what the precision of the destination of each arithmetic operation may be. Alas, the loophole in the IEEE standard's definition of *destination* undermines the programmer's ability to know how a program will be interpreted. Consequently, several of the examples given above, when implemented as apparently portable programs in a high-level language, may not work correctly on IEEE systems that normally deliver results to destinations with a different precision than the programmer expects. Other examples may work, but proving that they work may lie beyond the average programmer's ability.

In this section, we classify existing implementations of IEEE 754 arithmetic based on the precisions of the destination formats they normally use. We then review some examples from the paper to show that delivering results in a wider precision than a program expects can cause it to compute wrong results even though it is provably correct when the expected precision is used. We also revisit one of the proofs in the paper to illustrate the intellectual effort required to cope with unexpected precision even when it doesn't invalidate our programs. These examples show that despite all that the IEEE standard prescribes, the differences it allows among different implementations can prevent us from writing portable, efficient numerical software whose behavior we can accurately predict. To develop such software, then, we must first create programming languages and environments that limit the variability the IEEE standard permits and allow programmers to express the floating-point semantics upon which their programs depend.

## Current IEEE 754 Implementations

Current implementations of IEEE 754 arithmetic can be divided into two groups distinguished by the degree to which they support different floating-point formats in hardware.
*Extended-based* systems, exemplified by the Intel x86 family of processors, provide full support for an extended double precision format but only partial support for single and double precision: they provide instructions to load or store data in single and double precision, converting it on-the-fly to or from the extended double format, and they provide special modes (not the default) in which the results of arithmetic operations are rounded to single or double precision even though they are kept in registers in extended double format. (Motorola 68000 series processors round results to both the precision and range of the single or double formats in these modes. Intel x86 and compatible processors round results to the precision of the single or double formats but retain the same range as the extended double format.)

*Single/double* systems, including most RISC processors, provide full support for single and double precision formats but no support for an IEEE-compliant extended double precision format. (The IBM POWER architecture provides only partial support for single precision, but for the purpose of this section, we classify it as a single/double system.)

To see how a computation might behave differently on an extended-based system than on a single/double system, consider a C version of the example from the section Systems Aspects:

    #include <stdio.h>

    int main() {
        double q;

        q = 3.0/7.0;
        if (q == 3.0/7.0) printf("Equal\n");
        else printf("Not Equal\n");
        return 0;
    }

Here the constants 3.0 and 7.0 are interpreted as double precision floating-point numbers, and the expression 3.0/7.0 inherits the `double` data type. On a single/double system, the expression will be evaluated in double precision since that is the most efficient format to use. Thus, `q` will be assigned the value 3.0/7.0 rounded correctly to double precision. In the next line, the expression 3.0/7.0 will again be evaluated in double precision, and of course the result will be equal to the value just assigned to `q`, so the program will print "Equal" as expected.

On an extended-based system, even though the expression 3.0/7.0 has type `double`, the quotient will be computed in a register in extended double format, and thus in the default mode, it will be rounded to extended double precision. When the resulting value is assigned to the variable `q`, however, it may then be stored in memory, and since `q` is declared `double`, the value will be rounded to double precision. In the next line, the expression 3.0/7.0 may again be evaluated in extended precision yielding a result that differs from the double precision value stored in `q`, causing the program to print "Not Equal". Of course, other outcomes are possible, too: the compiler could decide to store and thus round the value of the expression 3.0/7.0 in the second line before comparing it with `q`, or it could keep `q` in a register in extended precision without storing it. An optimizing compiler might evaluate the expression 3.0/7.0 at compile time, perhaps in double precision or perhaps in extended double precision. (With one x86 compiler, the program prints "Equal" when compiled with optimization and "Not Equal" when compiled for debugging.) Finally, some compilers for extended-based systems automatically change the rounding precision mode to cause operations producing results in registers to round those results to single or double precision, albeit possibly with a wider range.
Thus, on these systems, we can't predict the behavior of the program simply by reading its source code and applying a basic understanding of IEEE 754 arithmetic. Neither can we accuse the hardware or the compiler of failing to provide an IEEE 754 compliant environment; the hardware has delivered a correctly rounded result to each destination, as it is required to do, and the compiler has assigned some intermediate results to destinations that are beyond the user's control, as it is allowed to do.

## Pitfalls in Computations on Extended-Based Systems

Conventional wisdom maintains that extended-based systems must produce results that are at least as accurate, if not more accurate than those delivered on single/double systems, since the former always provide at least as much precision and often more than the latter. Trivial examples such as the C program above as well as more subtle programs based on the examples discussed below show that this wisdom is naive at best: some apparently portable programs, which are indeed portable across single/double systems, deliver incorrect results on extended-based systems precisely because the compiler and hardware conspire to occasionally provide more precision than the program expects.

Current programming languages make it difficult for a program to specify the precision it expects. As the section Languages and Compilers mentions, many programming languages don't specify that each occurrence of an expression like `10.0*x` in the same context should evaluate to the same value. Some languages, such as Ada, were influenced in this respect by variations among different arithmetics prior to the IEEE standard. More recently, languages like ANSI C have been influenced by standard-conforming extended-based systems. In fact, the ANSI C standard explicitly allows a compiler to evaluate a floating-point expression to a precision wider than that normally associated with its type. As a result, the value of the expression `10.0*x` may vary in ways that depend on a variety of factors: whether the expression is immediately assigned to a variable or appears as a subexpression in a larger expression; whether the expression participates in a comparison; whether the expression is passed as an argument to a function, and if so, whether the argument is passed by value or by reference; the current precision mode; the level of optimization at which the program was compiled; the precision mode and expression evaluation method used by the compiler when the program was compiled; and so on.

Language standards are not entirely to blame for the vagaries of expression evaluation. Extended-based systems run most efficiently when expressions are evaluated in extended precision registers whenever possible, yet values that must be stored are stored in the narrowest precision required. Constraining a language to require that `10.0*x` evaluate to the same value everywhere would impose a performance penalty on those systems. Unfortunately, allowing those systems to evaluate `10.0*x` differently in syntactically equivalent contexts imposes a penalty of its own on programmers of accurate numerical software by preventing them from relying on the syntax of their programs to express their intended semantics.

Do real programs depend on the assumption that a given expression always evaluates to the same value? Recall the algorithm presented in Theorem 4 for computing ln(1 + x), written here in Fortran:
          real function log1p(x)
          real x
          if (1.0 + x .eq. 1.0) then
             log1p = x
          else
             log1p = log(1.0 + x) * x / ((1.0 + x) - 1.0)
          endif
          return
          end

On an extended-based system, a compiler may evaluate the expression `1.0 + x` in the third line in extended precision and compare the result with `1.0`. When the same expression is passed to the log function in the sixth line, however, the compiler may store its value in memory, rounding it to single precision. Thus, if `x` is not so small that `1.0 + x` rounds to `1.0` in extended precision but small enough that `1.0 + x` rounds to `1.0` in single precision, then the value returned by `log1p(x)` will be zero instead of `x`, and the relative error will be one--rather larger than 5ε. Similarly, suppose the rest of the expression in the sixth line, including the reoccurrence of the subexpression `1.0 + x`, is evaluated in extended precision. In that case, if `x` is small but not quite small enough that `1.0 + x` rounds to `1.0` in single precision, then the value returned by `log1p(x)` can exceed the correct value by nearly as much as `x`, and again the relative error can approach one. For a concrete example, take `x` to be 2⁻²⁴ + 2⁻⁴⁷, so `x` is the smallest single precision number such that `1.0 + x` rounds up to the next larger number, 1 + 2⁻²³. Then `log(1.0 + x)` is approximately 2⁻²³. Because the denominator in the expression in the sixth line is evaluated in extended precision, it is computed exactly and delivers `x`, so `log1p(x)` returns approximately 2⁻²³, which is nearly twice as large as the exact value. (This actually happens with at least one compiler. When the preceding code is compiled by the Sun WorkShop Compilers 4.2.1 Fortran 77 compiler for x86 systems using the `-O` optimization flag, the generated code computes `1.0 + x` exactly as described. As a result, the function delivers zero for `log1p(1.0e-10)` and `1.19209E-07` for `log1p(5.97e-8)`.)

For the algorithm of Theorem 4 to work correctly, the expression `1.0 + x` must be evaluated the same way each time it appears; the algorithm can fail on extended-based systems only when `1.0 + x` is evaluated to extended double precision in one instance and to single or double precision in another. Of course, since `log` is a generic intrinsic function in Fortran, a compiler could evaluate the expression `1.0 + x` in extended precision throughout, computing its logarithm in the same precision, but evidently we cannot assume that the compiler will do so. (One can also imagine a similar example involving a user-defined function. In that case, a compiler could still keep the argument in extended precision even though the function returns a single precision result, but few if any existing Fortran compilers do this, either.) We might therefore attempt to ensure that `1.0 + x` is evaluated consistently by assigning it to a variable. Unfortunately, if we declare that variable `real`, we may still be foiled by a compiler that substitutes a value kept in a register in extended precision for one appearance of the variable and a value stored in memory in single precision for another. Instead, we would need to declare the variable with a type that corresponds to the extended precision format. Standard FORTRAN 77 does not provide a way to do this, and while Fortran 95 offers the `SELECTED_REAL_KIND` mechanism for describing various formats, it does not explicitly require implementations that evaluate expressions in extended precision to allow variables to be declared with that precision.
In short, there is no portable way to write this program in standard Fortran that is guaranteed to prevent the expression `1.0 + x` from being evaluated in a way that invalidates our proof.

There are other examples that can malfunction on extended-based systems even when each subexpression is stored and thus rounded to the same precision. The cause is double-rounding. In the default precision mode, an extended-based system will initially round each result to extended double precision. If that result is then stored to double precision, it is rounded again. The combination of these two roundings can yield a value that is different than what would have been obtained by rounding the first result correctly to double precision. This can happen when the result as rounded to extended double precision is a "halfway case", i.e., it lies exactly halfway between two double precision numbers, so the second rounding is determined by the round-ties-to-even rule. If this second rounding rounds in the same direction as the first, the net rounding error will exceed half a unit in the last place. (Note, though, that double-rounding only affects double precision computations. One can prove that the sum, difference, product, or quotient of two p-bit numbers, or the square root of a p-bit number, rounded first to q bits and then to p bits gives the same value as if the result were rounded just once to p bits provided q ≥ 2p + 2. Thus, extended double precision is wide enough that single precision computations don't suffer double-rounding.)

Some algorithms that depend on correct rounding can fail with double-rounding. In fact, even some algorithms that don't require correct rounding and work correctly on a variety of machines that don't conform to IEEE 754 can fail with double-rounding. The most useful of these are the portable algorithms for performing simulated multiple precision arithmetic mentioned in the section Exactly Rounded Operations. For example, the procedure described in Theorem 6 for splitting a floating-point number into high and low parts doesn't work correctly in double-rounding arithmetic: try to split the double precision number 2⁵² + 3 × 2²⁶ - 1 into two parts each with at most 26 bits. When each operation is rounded correctly to double precision, the high order part is 2⁵² + 2²⁷ and the low order part is 2²⁶ - 1, but when each operation is rounded first to extended double precision and then to double precision, the procedure produces a high order part of 2⁵² + 2²⁸ and a low order part of -2²⁶ - 1. The latter number occupies 27 bits, so its square can't be computed exactly in double precision. Of course, it would still be possible to compute the square of this number in extended double precision, but the resulting algorithm would no longer be portable to single/double systems. Also, later steps in the multiple precision multiplication algorithm assume that all partial products have been computed in double precision. Handling a mixture of double and extended double variables correctly would make the implementation significantly more expensive.

Likewise, portable algorithms for adding multiple precision numbers represented as arrays of double precision numbers can fail in double-rounding arithmetic. These algorithms typically rely on a technique similar to Kahan's summation formula.
As the informal explanation of the summation formula given on Errors In Summation suggests, if `s` and `y` are floating-point variables with |`s`| ≥ |`y`| and we compute:

    t = s + y;
    e = (s - t) + y;

then in most arithmetics, `e` recovers exactly the roundoff error that occurred in computing `t`. This technique doesn't work in double-rounded arithmetic, however: if `s` = 2⁵² + 1 and `y` = 1/2 - 2⁻⁵⁴, then `s + y` rounds first to 2⁵² + 3/2 in extended double precision, and this value rounds to 2⁵² + 2 in double precision by the round-ties-to-even rule; thus the net rounding error in computing `t` is 1/2 + 2⁻⁵⁴, which is not representable exactly in double precision and so can't be computed exactly by the expression shown above. Here again, it would be possible to recover the roundoff error by computing the sum in extended double precision, but then a program would have to do extra work to reduce the final outputs back to double precision, and double-rounding could afflict this process, too. For this reason, although portable programs for simulating multiple precision arithmetic by these methods work correctly and efficiently on a wide variety of machines, they do not work as advertised on extended-based systems.

Finally, some algorithms that at first sight appear to depend on correct rounding may in fact work correctly with double-rounding. In these cases, the cost of coping with double-rounding lies not in the implementation but in the verification that the algorithm works as advertised. To illustrate, we prove the following variant of Theorem 7:

## Theorem 7'

If m and n are integers representable in IEEE 754 double precision with |m| < 2⁵² and n has the special form n = 2ⁱ + 2ʲ, then (m ⊘ n) ⊗ n = m, provided both floating-point operations are either rounded correctly to double precision or rounded first to extended double precision and then to double precision.

## Proof

Assume without loss that m > 0. Let q = m ⊘ n. Scaling by powers of two, we can consider an equivalent setting in which 2⁵² ≤ m < 2⁵³ and likewise for q, so that both m and q are integers whose least significant bits occupy the units place (i.e., ulp(m) = ulp(q) = 1). Before scaling, we assumed m < 2⁵², so after scaling, m is an even integer. Also, because the scaled values of m and q satisfy m/2 < q < 2m, the corresponding value of n must have one of two forms depending on which of m or q is larger: if q < m, then evidently 1 < n < 2, and since n is a sum of two powers of two, n = 1 + 2⁻ᵏ for some k; similarly, if q > m, then 1/2 < n < 1, so n = 1/2 + 2^-(k+1). (As n is the sum of two powers of two, the closest possible value of n to one is n = 1 + 2⁻⁵². Because m/(1 + 2⁻⁵²) is no larger than the next smaller double precision number less than m, we can't have q = m.)

Let e denote the rounding error in computing q, so that q = m/n + e, and the computed value q ⊗ n will be the (once or twice) rounded value of m + ne. Consider first the case in which each floating-point operation is rounded correctly to double precision. In this case, |e| < 1/2. If n has the form 1/2 + 2^-(k+1), then ne = nq - m is an integer multiple of 2^-(k+1) and |ne| < 1/4 + 2^-(k+2). This implies that |ne| ≤ 1/4. Recall that the difference between m and the next larger representable number is 1 and the difference between m and the next smaller representable number is either 1 if m > 2⁵² or 1/2 if m = 2⁵². Thus, as |ne| ≤ 1/4, m + ne will round to m. (Even if m = 2⁵² and ne = -1/4, the product will round to m by the round-ties-to-even rule.) Similarly, if n has the form 1 + 2⁻ᵏ, then ne is an integer multiple of 2⁻ᵏ and |ne| < 1/2 + 2^-(k+1); this implies |ne| ≤ 1/2.
We can't have m = 2⁵² in this case because m is strictly greater than q, so m differs from its nearest representable neighbors by ±1. Thus, as |ne| ≤ 1/2, again m + ne will round to m. (Even if |ne| = 1/2, the product will round to m by the round-ties-to-even rule because m is even.) This completes the proof for correctly rounded arithmetic.

In double-rounding arithmetic, it may still happen that q is the correctly rounded quotient (even though it was actually rounded twice), so |e| < 1/2 as above. In this case, we can appeal to the arguments of the previous paragraph provided we consider the fact that q ⊗ n will be rounded twice. To account for this, note that the IEEE standard requires that an extended double format carry at least 64 significant bits, so that the numbers m ± 1/2 and m ± 1/4 are exactly representable in extended double precision. Thus, if n has the form 1/2 + 2^-(k+1), so that |ne| ≤ 1/4, then rounding m + ne to extended double precision must produce a result that differs from m by at most 1/4, and as noted above, this value will round to m in double precision. Similarly, if n has the form 1 + 2⁻ᵏ, so that |ne| ≤ 1/2, then rounding m + ne to extended double precision must produce a result that differs from m by at most 1/2, and this value will round to m in double precision. (Recall that m > 2⁵² in this case.)

Finally, we are left to consider cases in which q is not the correctly rounded quotient due to double-rounding. In these cases, we have |e| < 1/2 + 2^-(d+1) in the worst case, where d is the number of extra bits in the extended double format. (All existing extended-based systems support an extended double format with exactly 64 significant bits; for this format, d = 64 - 53 = 11.) Because double-rounding only produces an incorrectly rounded result when the second rounding is determined by the round-ties-to-even rule, q must be an even integer. Thus if n has the form 1/2 + 2^-(k+1), then ne = nq - m is an integer multiple of 2⁻ᵏ, and

|ne| < (1/2 + 2^-(k+1))(1/2 + 2^-(d+1)) = 1/4 + 2^-(k+2) + 2^-(d+2) + 2^-(k+d+2).

If k ≤ d, this implies |ne| ≤ 1/4. If k > d, we have |ne| ≤ 1/4 + 2^-(d+2). In either case, the first rounding of the product will deliver a result that differs from m by at most 1/4, and by previous arguments, the second rounding will round to m. Similarly, if n has the form 1 + 2⁻ᵏ, then ne is an integer multiple of 2^-(k-1), and

|ne| < 1/2 + 2^-(k+1) + 2^-(d+1) + 2^-(k+d+1).

If k ≤ d, this implies |ne| ≤ 1/2. If k > d, we have |ne| ≤ 1/2 + 2^-(d+1). In either case, the first rounding of the product will deliver a result that differs from m by at most 1/2, and again by previous arguments, the second rounding will round to m. ❚

The preceding proof shows that the product can incur double-rounding only if the quotient does, and even then, it rounds to the correct result. The proof also shows that extending our reasoning to include the possibility of double-rounding can be challenging even for a program with only two floating-point operations. For a more complicated program, it may be impossible to systematically account for the effects of double-rounding, not to mention more general combinations of double and extended double precision computations.

## Programming Language Support for Extended Precision

The preceding examples should not be taken to suggest that extended precision per se is harmful. Many programs can benefit from extended precision when the programmer is able to use it selectively. Unfortunately, current programming languages do not provide sufficient means for a programmer to specify when and how extended precision should be used.
To indicate what support is needed, we consider the ways in which we might want to manage the use of extended precision.In a portable program that uses double precision as its nominal working precision, there are five ways we might want to control the use of a wider precision: - Compile to produce the fastest code, using extended precision where possible on extended-based systems. Clearly most numerical software does not require more of the arithmetic than that the relative error in each operation is bounded by the "machine epsilon". When data in memory are stored in double precision, the machine epsilon is usually taken to be the largest relative roundoff error in that precision, since the input data are (rightly or wrongly) assumed to have been rounded when they were entered and the results will likewise be rounded when they are stored. Thus, while computing some of the intermediate results in extended precision may yield a more accurate result, extended precision is not essential. In this case, we might prefer that the compiler use extended precision only when it will not appreciably slow the program and use double precision otherwise. - Use a format wider than double if it is reasonably fast and wide enough, otherwise resort to something else. Some computations can be performed more easily when extended precision is available, but they can also be carried out in double precision with only somewhat greater effort. Consider computing the Euclidean norm of a vector of double precision numbers. By computing the squares of the elements and accumulating their sum in an IEEE 754 extended double format with its wider exponent range, we can trivially avoid premature underflow or overflow for vectors of practical lengths. On extended-based systems, this is the fastest way to compute the norm. On single/double systems, an extended double format would have to be emulated in software (if one were supported at all), and such emulation would be much slower than simply using double precision, testing the exception flags to determine whether underflow or overflow occurred, and if so, repeating the computation with explicit scaling. Note that to support this use of extended precision, a language must provide both an indication of the widest available format that is reasonably fast, so that a program can choose which method to use, and environmental parameters that indicate the precision and range of each format, so that the program can verify that the widest fast format is wide enough (e.g., that it has wider range than double). - Use a format wider than double even if it has to be emulated in software. For more complicated programs than the Euclidean norm example, the programmer may simply wish to avoid the need to write two versions of the program and instead rely on extended precision even if it is slow. Again, the language must provide environmental parameters so that the program can determine the range and precision of the widest available format. - Don't use a wider precision; round results correctly to the precision of the double format, albeit possibly with extended range. For programs that are most easily written to depend on correctly rounded double precision arithmetic, including some of the examples mentioned above, a language must provide a way for the programmer to indicate that extended precision must not be used, even though intermediate results may be computed in registers with a wider exponent range than double. 
(Intermediate results computed in this way can still incur double-rounding if they underflow when stored to memory: if the result of an arithmetic operation is rounded first to 53 significant bits, then rounded again to fewer significant bits when it must be denormalized, the final result may differ from what would have been obtained by rounding just once to a denormalized number. Of course, this form of double-rounding is highly unlikely to affect any practical program adversely.) - Round results correctly to both the precision and range of the double format. This strict enforcement of double precision would be most useful for programs that test either numerical software or the arithmetic itself near the limits of both the range and precision of the double format. Such careful test programs tend to be difficult to write in a portable way; they become even more difficult (and error prone) when they must employ dummy subroutines and other tricks to force results to be rounded to a particular format. Thus, a programmer using an extended-based system to develop robust software that must be portable to all IEEE 754 implementations would quickly come to appreciate being able to emulate the arithmetic of single/double systems without extraordinary effort. No current language supports all five of these options. In fact, few languages have attempted to give the programmer the ability to control the use of extended precision at all. One notable exception is the ISO/IEC 9899:1999 Programming Languages - C standard, the latest revision to the C language, which is now in the final stages of standardization. The C99 standard allows an implementation to evaluate expressions in a format wider than that normally associated with their type, but the C99 standard recommends using one of only three expression evaluation methods. The three recommended methods are characterized by the extent to which expressions are "promoted" to wider formats, and the implementation is encouraged to identify which method it uses by defining the preprocessor macro `FLT_EVAL_METHOD` : if`FLT_EVAL_METHOD` is 0, each expression is evaluated in a format that corresponds to its type; if`FLT_EVAL_METHOD` is 1,`float` expressions are promoted to the format that corresponds to`double` ; and if`FLT_EVAL_METHOD` is 2,`float` and`double` expressions are promoted to the format that corresponds to`long double` . (An implementation is allowed to set`FLT_EVAL_METHOD` to -1 to indicate that the expression evaluation method is indeterminable.) The C99 standard also requires that the`<math.h>` header file define the types`float_t` and`double_t` , which are at least as wide as`float` and`double` , respectively, and are intended to match the types used to evaluate`float` and`double` expressions. For example, if`FLT_EVAL_METHOD` is 2, both`float_t` and`double_t` are`long double` . Finally, the C99 standard requires that the`<float.h>` header file define preprocessor macros that specify the range and precision of the formats corresponding to each floating-point type.The combination of features required or recommended by the C99 standard supports some of the five options listed above but not all. For example, if an implementation maps the `long double` type to an extended double format and defines`FLT_EVAL_METHOD` to be 2, the programmer can reasonably assume that extended precision is relatively fast, so programs like the Euclidean norm example can simply use intermediate variables of type`long double` (or`double_t` ). 
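For instance, a hedged sketch of the Euclidean norm computation just mentioned might look like the following (the function name `euclidean_norm` and the use of `sqrtl` are my choices, not code from the paper). Accumulating in `double_t` means that on an extended-based system where `FLT_EVAL_METHOD` is 2 the sum of squares automatically gets the wider exponent range, while on a single/double system it is plain double and the explicit-scaling fallback described above would still be needed for extreme inputs.

```c
/* Hedged sketch: Euclidean norm with a double_t accumulator (C99). */
#include <math.h>      /* double_t, sqrtl */
#include <stddef.h>

double euclidean_norm(const double *x, size_t n) {
    double_t sum = 0.0;                  /* accumulates in the evaluation format */
    for (size_t i = 0; i < n; i++)
        sum += (double_t)x[i] * x[i];    /* squares formed in the same format */
    /* sqrtl keeps the wide range through the square root when double_t
     * is long double; when double_t is double this is just sqrt. */
    return (double)sqrtl(sum);
}
```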
On the other hand, the same implementation must keep anonymous expressions in extended precision even when they are stored in memory (e.g., when the compiler must spill floating-point registers), and it must store the results of expressions assigned to variables declared `double` to convert them to double precision even if they could have been kept in registers. Thus, neither the `double` nor the `double_t` type can be compiled to produce the fastest code on current extended-based hardware.

Likewise, the C99 standard provides solutions to some of the problems illustrated by the examples in this section but not all. A C99 standard version of the `log1p` function is guaranteed to work correctly if the expression `1.0 + x` is assigned to a variable (of any type) and that variable used throughout. A portable, efficient C99 standard program for splitting a double precision number into high and low parts, however, is more difficult: how can we split at the correct position and avoid double-rounding if we cannot guarantee that `double` expressions are rounded correctly to double precision? One solution is to use the `double_t` type to perform the splitting in double precision on single/double systems and in extended precision on extended-based systems, so that in either case the arithmetic will be correctly rounded. Theorem 14 says that we can split at any bit position provided we know the precision of the underlying arithmetic, and the `FLT_EVAL_METHOD` and environmental parameter macros should give us this information. One possible implementation along these lines is sketched below, at the end of this discussion.

Of course, to find this solution, the programmer must know that `double` expressions may be evaluated in extended precision, that the ensuing double-rounding problem can cause the algorithm to malfunction, and that extended precision may be used instead according to Theorem 14. A more obvious solution is simply to specify that each expression be rounded correctly to double precision. On extended-based systems, this merely requires changing the rounding precision mode, but unfortunately, the C99 standard does not provide a portable way to do this. (Early drafts of the Floating-Point C Edits, the working document that specified the changes to be made to the C90 standard to support floating-point, recommended that implementations on systems with rounding precision modes provide `fegetprec` and `fesetprec` functions to get and set the rounding precision, analogous to the `fegetround` and `fesetround` functions that get and set the rounding direction. This recommendation was removed before the changes were made to the C99 standard.)

Coincidentally, the C99 standard's approach to supporting portability among systems with different integer arithmetic capabilities suggests a better way to support different floating-point architectures. Each C99 standard implementation supplies an `<stdint.h>` header file that defines those integer types the implementation supports, named according to their sizes and efficiency: for example, `int32_t` is an integer type exactly 32 bits wide, `int_fast16_t` is the implementation's fastest integer type at least 16 bits wide, and `intmax_t` is the widest integer type supported. One can imagine a similar scheme for floating-point types: for example, `float53_t` could name a floating-point type with exactly 53 bit precision but possibly wider range, `float_fast24_t` could name the implementation's fastest type with at least 24 bit precision, and `floatmax_t` could name the widest reasonably fast type supported.
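Returning to the splitting problem mentioned above, one possible shape for that fragment is sketched here. This is a hedged reconstruction, not the paper's original code: the function name `split`, the particular constants, and the fallback when `FLT_EVAL_METHOD` is neither 0, 1, nor 2 are my assumptions. Following Theorem 14, the splitting constant is 2ᵏ + 1 with k chosen so that p - k is 26 bits in whatever precision expressions are actually evaluated in.

```c
/* Hedged sketch: split a double into a 26-bit high part and a low part,
 * doing the arithmetic in double_t so it is correctly rounded in the
 * evaluation format (C99).  Ignores the FLT_EVAL_METHOD == -1 case and
 * the possibility that splitter * x overflows for huge |x|. */
#include <float.h>     /* FLT_EVAL_METHOD */
#include <math.h>      /* double_t */

void split(double x, double *hi, double *lo) {
#if FLT_EVAL_METHOD == 2
    /* 64-bit evaluation: k = 38, so p - k = 26. */
    static const double_t splitter = 274877906945.0;   /* 2^38 + 1 */
#else
    /* 53-bit evaluation: k = 27, so p - k = 26. */
    static const double_t splitter = 134217729.0;      /* 2^27 + 1 */
#endif
    double_t t = splitter * x;
    double_t h = t - (t - x);      /* x rounded to 26 significant bits */
    *hi = (double)h;
    *lo = (double)(x - h);         /* remainder fits in the low part exactly */
}
```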
The fast types could allow compilers on extended-based systems to generate the fastest possible code subject only to the constraint that the values of named variables must not appear to change as a result of register spilling. The exact width types would cause compilers on extended-based systems to set the rounding precision mode to round to the specified precision, allowing wider range subject to the same constraint. Finally,`double_t` could name a type with both the precision and range of the IEEE 754 double format, providing strict double evaluation. Together with environmental parameter macros named accordingly, such a scheme would readily support all five options described above and allow programmers to indicate easily and unambiguously the floating-point semantics their programs require.Must language support for extended precision be so complicated? On single/double systems, four of the five options listed above coincide, and there is no need to differentiate fast and exact width types. Extended-based systems, however, pose difficult choices: they support neither pure double precision nor pure extended precision computation as efficiently as a mixture of the two, and different programs call for different mixtures. Moreover, the choice of when to use extended precision should not be left to compiler writers, who are often tempted by benchmarks (and sometimes told outright by numerical analysts) to regard floating-point arithmetic as "inherently inexact" and therefore neither deserving nor capable of the predictability of integer arithmetic. Instead, the choice must be presented to programmers, and they will require languages capable of expressing their selection. ## Conclusion The foregoing remarks are not intended to disparage extended-based systems but to expose several fallacies, the first being that all IEEE 754 systems must deliver identical results for the same program. We have focused on differences between extended-based systems and single/double systems, but there are further differences among systems within each of these families. For example, some single/double systems provide a single instruction to multiply two numbers and add a third with just one final rounding. This operation, called a fused multiply-add, can cause the same program to produce different results across different single/double systems, and, like extended precision, it can even cause the same program to produce different results on the same system depending on whether and when it is used. (A fused multiply-add can also foil the splitting process of Theorem 6, although it can be used in a non-portable way to perform multiple precision multiplication without the need for splitting.) Even though the IEEE standard didn't anticipate such an operation, it nevertheless conforms: the intermediate product is delivered to a "destination" beyond the user's control that is wide enough to hold it exactly, and the final sum is rounded correctly to fit its single or double precision destination.The idea that IEEE 754 prescribes precisely the result a given program must deliver is nonetheless appealing. Many programmers like to believe that they can understand the behavior of a program and prove that it will work correctly without reference to the compiler that compiles it or the computer that runs it. In many ways, supporting this belief is a worthwhile goal for the designers of computer systems and programming languages. 
Unfortunately, when it comes to floating-point arithmetic, the goal is virtually impossible to achieve. The authors of the IEEE standards knew that, and they didn't attempt to achieve it. As a result, despite nearly universal conformance to (most of) the IEEE 754 standard throughout the computer industry, programmers of portable software must continue to cope with unpredictable floating-point arithmetic. If programmers are to exploit the features of IEEE 754, they will need programming languages that make floating-point arithmetic predictable. The C99 standard improves predictability to some degree at the expense of requiring programmers to write multiple versions of their programs, one for each `FLT_EVAL_METHOD`. Whether future languages will choose instead to allow programmers to write a single program with syntax that unambiguously expresses the extent to which it depends on IEEE 754 semantics remains to be seen. Existing extended-based systems threaten that prospect by tempting us to assume that the compiler and the hardware can know better than the programmer how a computation should be performed on a given system. That assumption is the second fallacy: the accuracy required in a computed result depends not on the machine that produces it but only on the conclusions that will be drawn from it, and of the programmer, the compiler, and the hardware, at best only the programmer can know what those conclusions may be.

1. Examples of other representations are floating slash and signed logarithm [Matula and Kornerup 1985; Swartzlander and Alexopoulos 1975].
2. This term was introduced by Forsythe and Moler [1967], and has generally replaced the older term mantissa.
3. This assumes the usual arrangement where the exponent is stored to the left of the significand.
4. Unless the number z is larger than β^(emax+1) or smaller than β^emin. Numbers which are out of range in this fashion will not be considered until further notice.
5. Let z′ be the floating-point number that approximates z. Then |d.d...d - (z/βᵉ)|β^(p-1) is equivalent to |z′ - z|/ulp(z′). A more accurate formula for measuring error is |z′ - z|/ulp(z). - Ed.
6. 700, not 70. Since .1 - .0292 = .0708, the error in terms of ulp(0.0292) is 708 ulps. - Ed.
7. Although the expression (x - y)(x + y) does not cause a catastrophic cancellation, it is slightly less accurate than x² - y² if x ≫ y or x ≪ y. In this case, (x - y)(x + y) has three rounding errors, but x² - y² has only two since the rounding error committed when computing the smaller of x² and y² does not affect the final subtraction.
8. Also commonly referred to as correctly rounded. - Ed.
9. When n = 845, xₙ = 9.45, xₙ + 0.555 = 10.0, and 10.0 - 0.555 = 9.45. Therefore, xₙ = x₈₄₅ for n > 845.
10. Notice that in binary, q cannot equal . - Ed.
11. Left as an exercise to the reader: extend the proof to bases other than 2. - Ed.
12. This appears to have first been published by Goldberg [1967], although Knuth ([1981], page 211) attributes this idea to Konrad Zuse.
13. According to Kahan, extended precision has 64 bits of significand because that was the widest precision across which carry propagation could be done on the Intel 8087 without increasing the cycle time [Kahan 1988].
14. Some arguments against including inner product as one of the basic operations are presented by Kahan and LeBlanc [1985].
15. Kirchner writes: It is possible to compute inner products to within 1 ulp in hardware in one partial product per clock cycle. The additionally needed hardware compares to the multiplier array needed anyway for that speed.
16. CORDIC is an acronym for Coordinate Rotation Digital Computer and is a method of computing transcendental functions that uses mostly shifts and adds (i.e., very few multiplications and divisions) [Walther 1971]. It is the method used on both the Intel 8087 and the Motorola 68881.
17. Fine point: Although the default in IEEE arithmetic is to round overflowed numbers to ∞, it is possible to change the default (see Rounding Modes).
18. They are called subnormal in 854, denormal in 754.
19. This is the cause of one of the most troublesome aspects of the standard. Programs that frequently underflow often run noticeably slower on hardware that uses software traps.
20. No invalid exception is raised unless a "trapping" NaN is involved in the operation. See section 6.2 of IEEE Std 754-1985. - Ed.
21. may be greater than if both x and y are negative. - Ed.
22. It can be in range because if x < 1, n < 0, and x^(−n) is just a tiny bit smaller than the underflow threshold β^emin, then x^n ≈ β^(−emin), and so may not overflow, since in all IEEE precisions, −emin < emax.
23. This is probably because designers like "orthogonal" instruction sets, where the precisions of a floating-point instruction are independent of the actual operation. Making a special case for multiplication destroys this orthogonality.
24. This assumes the common convention that `3.0` is a single-precision constant, while `3.0D0` is a double precision constant.
25. The conclusion that 0^0 = 1 depends on the restriction that f be nonconstant. If this restriction is removed, then letting f be the identically 0 function gives 0 as a possible value for lim_(x→0) f(x)^g(x), and so 0^0 would have to be defined to be a NaN.
26. In the case of 0^0, plausibility arguments can be made, but the convincing argument is found in "Concrete Mathematics" by Graham, Knuth and Patashnik, and argues that 0^0 = 1 for the binomial theorem to work. - Ed.
27. Unless the rounding mode is round toward −∞, in which case x − x = −0.
28. The VMS math libraries on the VAX use a weak form of in-line procedure substitution, in that they use the inexpensive jump to subroutine call rather than the slower `CALLS` and `CALLG` instructions.
29. The difficulty with presubstitution is that it requires either direct hardware implementation, or continuable floating-point traps if implemented in software. - Ed.
30. In this informal proof, assume that β = 2 so that multiplication by 4 is exact and doesn't require a δ_i.
31. This is the sum if adding w does not generate carry out. Additional argument is needed for the special case where adding w does generate carry out. - Ed.
32. Rounding gives kx + w − kr only if k(kx + wk) keeps the form of kx. - Ed.
true
true
true
null
2024-10-13 00:00:00
1999-01-01 00:00:00
null
null
null
null
null
null
24,353,472
https://www.componentdriven.org
Component Driven User Interfaces
null
The development and design practice of building user interfaces with modular components. UIs are built from the “bottom up” starting with basic components then progressively combined to assemble screens.

Modern user interfaces are more complicated than ever. People expect compelling, personalized experiences delivered across devices. That means frontend developers and designers have to embed more logic into the UI. But UIs become unwieldy as applications grow. Large UIs are brittle, painful to debug, and time consuming to ship. Breaking them down in a modular way makes it easy to build robust yet flexible UIs.

Components enable interchangeability by isolating state from application business logic. That way, you can decompose complex screens into simple components. Each component has a well-defined API and fixed series of states that are mocked. This allows components to be taken apart and recomposed to build different UIs.

History: Software engineer Tom Coleman introduced Component Driven in 2017 to describe the shift in UI development toward component architectures and processes. The idea of modular UI has many parallels in software movements such as microservices and containerization. Historical precedents also include lean manufacturing and mass manufacturing circa early 20th century.

Components are standardized, interchangeable building blocks of UIs. They encapsulate the appearance and function of UI pieces. Think LEGO bricks. LEGOs can be used to build everything from castles to spaceships; components can be taken apart and used to create new features.

1. Build each component in isolation and define its relevant states. Start small.
2. Compose small components together to unlock new features while gradually increasing complexity.
3. Build pages by combining composite components. Use mock data to simulate pages in hard-to-reach states and edge cases.
4. Add pages to your app by connecting data and hooking up business logic. This is when your UI meets your backend APIs and services.

- **Design systems:** A holistic approach to user interface design that documents all UI patterns in a centralized system that includes assets (Sketch, Figma, etc.), design principles, governance, and a component library.
- **JAMStack:** A methodology for building websites that pre-renders static files and serves them directly from a CDN (as opposed to a server). The UIs of JAMStack sites rely on componentized JavaScript frameworks.
- **Agile:** A method of software development that promotes short feedback loops and rapid iteration. Components help teams ship faster by reusing readymade building blocks. That allows agile teams to focus more on adapting to user requirements.

The Component Story Format is an open standard for component examples based on JavaScript ES6 modules. This enables interoperation between development, testing, and design tools.
true
true
true
How modularity is transforming design and frontend development
2024-10-13 00:00:00
2017-01-01 00:00:00
https://componentdriven.…onent-driven.jpg
website
componentdriven.org
componentdriven.org
null
null
35,748,484
https://en.wikipedia.org/wiki/Zaum
Zaum - Wikipedia
null
# Zaum

**Zaum** (Russian: за́умь, lit. 'transrational') are the linguistic experiments in sound symbolism and language creation of Russian Cubo-Futurist poets such as Velimir Khlebnikov and Aleksei Kruchenykh. Zaum is a non-referential phonetic entity with its own ontology. The language consists of neologisms that mean nothing. Zaum is a language organized through phonetic analogy and rhythm.[1] Zaum literature cannot contain any onomatopoeia or psychopathological states.[2]

## Usage

Aleksei Kruchenykh created Zaum in order to show that language was indefinite and indeterminate.[2] Kruchenykh stated that when creating Zaum, he decided to forgo grammar and syntax rules. He wanted to convey the disorder of life by introducing disorder into the language. Kruchenykh considered Zaum to be the manifestation of a spontaneous non-codified language.[1]

Khlebnikov believed that the purpose of Zaum was to find the essential meaning of word roots in consonantal sounds. He believed such knowledge could help create a new universal language based on reason.[1]

Examples of zaum include Kruchenykh's poem "Dyr bul shchyl",[3] Kruchenykh's libretto for the Futurist opera *Victory over the Sun* with music by Mikhail Matyushin and stage design by Kazimir Malevich,[4] and Khlebnikov's so-called "language of the birds", "language of the gods" and "language of the stars".[5] The poetic output is perhaps comparable to that of the contemporary Dadaism, but the linguistic theory or metaphysics behind zaum was entirely devoid of the gentle reflexive irony of that movement and in all seriousness intended to recover the sound symbolism of a lost aboriginal tongue.[6] Exhibiting traits of a Slavic national mysticism, Kruchenykh aimed at recovering the primeval Slavic mother-tongue in particular.

Kruchenykh would author many poems and mimeographed pamphlets written in Zaum. These pamphlets combine poetry, illustrations, and theory.[1]

In modern times, Serge Segay created zaum poetry from 1962 onward.[7] Rea Nikonova started creating zaum verses probably a bit later, around 1964.[8] Their zaum poetry can be seen e.g. in issues of the famous "Transponans" samizdat magazine.[9] In 1990, contemporary avant-garde poet Sergei Biriukov founded an association of poets called the "Academy of Zaum" in Tambov.

The use of Zaum peaked from 1916 to 1920 during World War I. At this time, Zaumism took root as a movement primarily involved in visual arts, literature, poetry, art manifestoes, art theory, theatre, and graphic design,[10] and concentrated its anti-war politics in a rejection of the prevailing standards in art through anti-art cultural works. Zaum activities included public gatherings, demonstrations, and publications.
The movement influenced later styles, Avant-garde and downtown music movements, and groups including surrealism, nouveau réalisme, Pop Art and Fluxus.[11]

## Etymology and meaning

Coined by Kruchenykh in 1913,[12] the word *zaum* is made up of the Russian prefix за "beyond, behind" and noun ум "the mind, *nous*" and has been translated as "transreason", "transration" or "beyonsense."[13] According to scholar Gerald Janecek, *zaum* can be defined as experimental poetic language characterized by indeterminacy in meaning.[13]

Kruchenykh, in "Declaration of the Word as Such (1913)", declares zaum "a language which does not have any definite meaning, a transrational language" that "allows for fuller expression" whereas, he maintains, the common language of everyday speech "binds".[14] He further maintained, in "Declaration of Transrational Language (1921)", that zaum "can provide a universal poetic language, born organically, and not artificially, like Esperanto."[15]

## Major zaumniks

- Velimir Khlebnikov[2]
- Aleksei Kruchenykh[2]
- Ilia Zdanevich[2]
- Igor Terentev[2]
- Aleksandr Tufanov[2]
- Kazimir Malevich[2]
- Olga Rozanova[2]
- Varvara Stepanova[2]

## Notes

1. Terras, Victor (1985). *Handbook of Russian Literature*. London: Yale University Press. p. 530. ISBN 978-030-004-868-1.
2. Kostelanetz, Richard (2013). *A Dictionary of the Avant-Gardes*. New York: Taylor & Francis. ISBN 978-113-680-619-3.
3. Janecek 1996, p. 49.
4. Janecek 1996, p. 111.
5. Janecek 1996, pp. 137–138.
6. Janecek 1996, p. 79.
7. Kuzminsky, K.; Kovalev, G. *The Blue Lagoon Anthology of Modern Russian Poetry* [Антология новейшей русской поэзии у Голубой Лагуны], Vol. 5B.
8. Zhumati, T. P. (1999). ""The Uktus School" (1965–1974): Toward a History of the Ural Underground". *Izvestiya of the Ural State University*. **13**: 125–127.
9. ""Transponans", a Journal of Theory and Practice: An Annotated Electronic Edition, edited by I. Kukuy. A Work in Progress | Project for the Study of Dissidence and Samizdat". *samizdatcollections.library.utoronto.ca*.
10. Janecek 1984, pp. 149–206.
11. Knowlson 1996, p. 217.
12. Janecek 1996, p. 2.
13. Janecek 1996, p. 1.
14. Janecek 1996, p. 78.
15. Kruchenykh 2005, p. 183.

## References

- Janecek, Gerald (1984), *The Look of Russian Literature: Avant-Garde Visual Experiments 1900-1930*, Princeton: Princeton University Press, ISBN 978-0691014579
- Janecek, Gerald (1996), *Zaum: The Transrational Poetry of Russian Futurism*, San Diego: San Diego State University Press, ISBN 978-1879691414
- Kruchenykh, Aleksei (2005), Anna Lawton; Herbert Eagle (eds.), "Declaration of Transrational Language", *Words in Revolution: Russian Futurist Manifestoes 1912-1928*, Washington: New Academia Publishing, ISBN 978-0974493473
- Knowlson, J. (1996), *The Continuing Influence of Zaum*, London: Bloomsbury

## External links

- Chapter Nine of G. Janecek, *Zaum: The Transrational Poetry of Russian Futurism*
- Janecek's *Zaum*, published by San Diego State University Press
- Lecture by Z. Laskewicz: "Zaum: Words Without Meaning or Meaning Without Words? Towards a Musical Understanding of Language"
- 'Locating Zaum: Mnatsakanova on Khlebnikov', an essay by Brian Reed
- Article by A. Purin: "Meaning and Zaum" (in Russian)
- Tambov Academy of Zaum, Cyrillic KOI8-R encoding (in Russian)
- Samizdat books and artist's books by Serge Segay, some with zaum and visual poetry
- Samizdat books and artist's books by Ry Nikonova, some with zaum and visual poetry
true
true
true
null
2024-10-13 00:00:00
2004-08-20 00:00:00
https://upload.wikimedia…7/71/Zangezi.jpg
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
21,268,879
https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside
How An Algorithm Feels From Inside — LessWrong
Eliezer Yudkowsky
For what it's worth, I've always responded to questions such as "Is Pluto a planet?" in a manner more similar to Network 1 than Network 2. The debate strikes me as borderline nonsensical. While "reifying the internal nodes" must indeed be counted as one of the great design flaws of the human brain, I think the *recognition* of this flaw and the attempt to fight it are as old as history. How many jokes, folk sayings, literary quotations, etc. are based around this one flaw? "in name only," "looks like a duck, quacks like a duck," "by their fruits shall ye know them," "a rose by any other name"... Of course, there wouldn't *be* all these sayings if people didn't keep confusing labels with observable attributes in the first place -- but don't the sayings suggest that recognizing this bug in oneself or others doesn't require any neural-level understanding of cognition? I think it goes beyond words. Reality does not consist of concepts, reality is simply reality. Concepts are how we describe reality. They are like words squared, and have all the same problems as words. Looking back from a year later, I should have said, "Words are not the experiences they represent." As for "reality," well it's just a name I give to a certain set of sensations I experience. I don't even know what "concepts" are anymore - probably just a general name for a bunch of different things, so not that useful at this level of analysis. *Don't the sayings suggest that recognizing this bug in oneself or others doesn't require any neural-level understanding of cognition?* Clearly, bug-recognition at the level described in this blog post does not so require, because I have no idea what the biological circuitry that *actually* recognizes a tiger looks like, though I know it happens in the temporal lobe. Given that this bug relates to neural structure on an abstract, rather than biological level, I wonder if it's a cognitive universal beyond just humans? Would any pragmatic AGI built out of neurons necessarily have the same bias? Again, very interesting. A mind composed of type 1 neural networks looks as though it wouldn't in fact be able to do any categorising, so wouldn't be able to do any predicting, so would in fact be pretty dumb and lead a very Hobbesian life.... I've always been vaguely aware of this, but never seen it laid out this clearly - good post. The more you think about it, the more ridiculous it seems. "No, we *can* know whether it's a planet or not! We just have to *know more about it!*" Scott, you forgot 'I yam what I yam and that's all what I yam'. At risk of sounding ignorant, it's not clear to me how Network 1, or the networks in the prerequisite blog post, actually work. I know I'm supposed to already have superficial understanding of neural networks, and I do, but it wasn't immediately obvious to me what happens in Network 1, what the algorithm is. Before you roll your eyes, yes, I looked at the Artificial Neural Network Wikipedia page, but it still doesn't help in determining what yours means. Silas, the diagrams are not neural networks, and don't represent them. They are graphs of the connections between observable characteristics of bleggs and rubes. Once again, great post. Eliezer: "We know where Pluto is, and where it's going; we know Pluto's shape, and Pluto's mass - but is it a planet? And yes, there were people who said this was a fight over definitions..." It was a fight over definitions. 
Astronomers were trying to update their nomenclature to better handle new data (large bodies in the Kuiper belt). Pluto wasn't quite like the other planets but it wasn't like the other asteroids either. So they called it a dwarf-planet. Seems pretty reasonable to me. http://en.wikipedia.org/wiki/Dwarf_planet

billswift: Okay, if they're not neural networks, then there's no explanation of how they work, so I don't understand how to compare them all. How was I supposed to know from the posts how they work?

Silas, billswift, Eliezer does say, introducing his diagrams in the Neural Categories post: "Then I might design a neural network that looks something like this:"

Silas, The keywords you need are "Hopfield network" and "Hebbian learning". MacKay's book has a section on them, starting on page 505.

Silas, see Naive Bayes classifier for how an "observable characteristics graph" similar to Network 2 should work in theory. It's not clear whether Hopfield or Hebbian learning can implement this, though. To put it simply, Network 2 makes the strong assumption that *the only influence on features such as color or shape* is whether the object is a rube or a blegg. This is an extremely strong assumption which is often inaccurate; despite this, naive Bayes classifiers work extremely well in practice.

I was wondering if anyone would notice that Network 2 with logistic units was exactly equivalent to Naive Bayes. To be precise, Naive Bayes assumes that within the blegg cluster, or within the rube cluster, all remaining variance in the characteristics is independent; or to put it another way, once we know whether an object is a blegg or a rube, this screens off any other information that its shape could tell us about its color. This isn't the same as assuming that the only causal influence on a blegg's shape is its blegg-ness - in fact, there may not *be* anything that corresponds to blegg-ness. But one reason that Naive Bayes does work pretty well in practice, is that a lot of objects in the real world *do* have causal essences, like the way that cat DNA (which doesn't mix with dog DNA) is the causal essence that gives rise to all the surface characteristics that distinguish cats from dogs. The other reason Naive Bayes works pretty well in practice is that it often successfully chops up a probability distribution into clusters even when the real causal structure looks nothing like a central influence.

Silas, The essential idea is that network 1 can be trained on a target pattern, and after training, it will converge to the target when initialized with a partial or distorted version of the target. Wikipedia's article on Hopfield networks has more. Both types of networks can be used to predict observables given other observables. Network 1, being totally connected, is slower than network 2. But network 2 has a node which corresponds to *no observable thing*. It can leave one with the feeling that some question has not been completely answered even though all the observables have known states.

Silas, let me try to give you a little more explicit answer. This is how I think it is meant to work, although I agree that the description is rather unclear. Each dot in the diagram is an "artificial neuron". This is a little machine that has N inputs and one output, all of which are numbers. It also has an internal "threshold" value, which is also a number. The way it works is it computes a "weighted sum" of its N inputs. That means that each input has a "weight", another number.
It multiplies weight 1 times input 1,...

I think the standard analysis is essentially correct. So let's accept that as a premise, and ask: Why do people get into such an argument? What's the underlying psychology? I think that people historically got into this argument because they didn't know what sound was. It is a philosophical appendix, a vestigial argument that no longer has any interest.

The extra node in network 2 corresponds to assigning a label, an abstract term to the thing being reasoned about. I wonder if a being with a network-1 mind would have ever evolved intelligence. Assigning names to things, creating categories, allows us to reason about much more complex things. If the price we pay for that is occasionally getting into a confusing or pointless argument about "is it a rube or a blegg?" or "does a tree falling in a deserted forest make a sound?" or "is Pluto a planet?", that seems like a fair price to pay.

I tend to resolve this sort of "is it really an X?" issue with the question "what's it for?" This is similar to making a belief pay rent: why do you care if it's really an X?

I'm a little bit lazy and already clicked here from the reductionism article, is the philosophical claim that of a non-eliminative reductionism? Or does Eliezer render a more eliminativist variant of reductionism? (I'm not implying that there is a contradiction between quoted sources, only some amount of "tension".)

Most of this is about word-association, multiple definitions of words, or not enough words to describe the situation. In this case, a far more complicated Network setup would be required to describe the neural activity. Not only would you need the Network you have, but you would also need a second (or intermediate) network connecting sensory perceptions with certain words, and then yet another (or extended) network connecting those words with memory and cognitive associations with those words in the past. You could go on and on, by then also including the ...

So.. is this pretty much a result of our human brains wanting to classify something? Like, if something doesn't necessarily fit into a box that we can neatly file away, our brains puzzle where to classify it, when actually it is its own classification... if that makes sense?

If a tree falls in a forest, but there's nobody there to hear it, does it make a sound? Yes, but if there's nobody there to hear it, it goes "AAAAAAh."

...Except that around 2% of blue egg-shaped objects contain palladium instead. So if you find a blue egg-shaped thing that contains palladium, should you call it a "rube" instead? You're going to put it in the rube bin—why not call it a "rube"? But when you switch off the light, nearly all bleggs glow faintly in the dark. And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object. So if you find a blue egg-shaped object that contains palladium, and you ask "Is it a b

There is a good quote by Alan Watts relating to the first paragraphs. Problems that remain persistently insoluble should always be suspected as questions asked in the wrong way.

I personally prefer names to be self-explanatory. Therefore, in this example I would consider a "blegg" to be a blue egg, regardless of its other qualities, and a "rube" to be a red cube, regardless of its other qualities. I suspect many other people would have a similar intuition.
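To make the naive Bayes reading of Network 2 from the comments above concrete, here is a rough C sketch of my own (not from the thread); the feature set and probability tables are invented for illustration. The "central unit" is just the posterior probability of blegg given the observed features, computed under the conditional-independence assumption described above:

```
#include <math.h>
#include <stdio.h>

#define N_FEATURES 5

/* Invented numbers: P(feature_i = 1 | category). Rows: blegg, rube.
 * Features: blue, egg-shaped, furred, glows in the dark, contains vanadium. */
static const double likelihood[2][N_FEATURES] = {
    {0.97, 0.95, 0.90, 0.95, 0.98},   /* blegg */
    {0.03, 0.05, 0.10, 0.05, 0.02},   /* rube  */
};
static const double prior[2] = {0.5, 0.5};

/* Naive Bayes posterior P(blegg | observed features): each feature is
 * treated as independent once the category is known. */
static double p_blegg(const int obs[N_FEATURES])
{
    double logpost[2];
    for (int c = 0; c < 2; c++) {
        logpost[c] = log(prior[c]);
        for (int i = 0; i < N_FEATURES; i++) {
            double p = likelihood[c][i];
            logpost[c] += log(obs[i] ? p : 1.0 - p);
        }
    }
    /* Logistic of the log-odds: the analogue of Network 2's central unit. */
    return 1.0 / (1.0 + exp(logpost[1] - logpost[0]));
}

int main(void)
{
    /* Blue, egg-shaped, furred, glows in the dark, but contains palladium. */
    int odd_object[N_FEATURES] = {1, 1, 1, 1, 0};
    printf("P(blegg | all observations) = %.3f\n", p_blegg(odd_object));
    /* Every observable is clamped; all that is left is this number. */
    return 0;
}
```

Once every observable has been clamped, the function simply returns a number; on this reading there is no further fact for the leftover "But is it really a blegg?" question to be about.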
This article argues to the effect that the node categorising an unnamed category over 'Blegg' and 'Rube' ought to be got rid of, in favour of a thought-system with only the other five nodes. This brings up the following questions. Firstly, how are we to know which categorisations are the ones we ought to get rid of, and which are the ones we ought to keep? Secondly, why is it that some categorisations ought to be got rid of, and others ought not be? So far as I can see, the article does not attempt to directly answer the first question (correct me if I am m... I doubt I'd be able to fully grasp this if I had not first read hpmor, so thanks for that. Also, eggs vs ovals. Another example: Yeah, you could tell about your gender, sex, sexual orientation and gender role... but are you a boy or are you a girl??? Of course, the latter question isn't asking about something observable. On one notable occasion I had a similar discussion about sound with somebody and it turned out that she didn't simply have a different definition to me-- she was, (somewhat curiously) a solipsist, and genuinely believed that there wasn't *anything* if there wasn't somebody there to hear it-- no experience, no soundwaves, no anything. I see no significant difference between your 2 models. Sure, the first one feels more refined.. but at the end, each node of it is still a "dangling unit".. and for example the units should still try to answer.. "Is it blue? Or red?" So for me, I'd still say that the answers depend on the questioner's definition. Each definition is again an abstract dangling unit though.. "Given that we know Pluto's orbit and shape and mass, there is no question left to ask." I'm sure it's completely missing the point, but there was at least one question left to ask, which turned out to be critical in this debate, i.e. “has it cleared its neighboring region of other objects?" More broadly I feel the post just demonstrates that sometimes we argue, not necessarily in a very productive way, over the definition, the defining characteristics, the exact borders, of a concept. I am reminded of the famous quip "The job of philosophers is first to create words and then argue with each other about their meaning." But again - surely missing something... The audio reading of this post [1] mistakenly uses the word hexagon instead of pentagon; e.g. "Network 1 is a hexagon. Enclosed in the hexagon is a five-pointed star". [1] [RSS feed](https://intelligence.org/podcasts/raz); various podcast sources and audiobooks can be found [here](https://intelligence.org/rationality-ai-zombies/) "If a tree falls in the forest, and no one hears it, does it make a sound?" I remember seeing an actual argument get started on this subject—a fully naive argument that went nowhere near Berkeleyan subjectivism. Just: The standard rationalist view would be that the first person is speaking as if "sound" means acoustic vibrations in the air; the second person is speaking as if "sound" means an auditory experience in a brain. If you ask "Are there acoustic vibrations?" or "Are there auditory experiences?", the answer is at once obvious. And so the argument is really about the definition of the word "sound". I think the standard analysis is essentially correct. So let's accept that as a premise, and ask: Why do people get into such an argument? What's the underlying psychology? A key idea of the heuristics and biases program is that mistakes are often more revealing of cognition than correct answers. 
Getting into a heated dispute about whether, if a tree falls in a deserted forest, it makes a sound, is traditionally considered a mistake. So what kind of mind design corresponds to that error?

In Disguised Queries I introduced the blegg/rube classification task, in which Susan the Senior Sorter explains that your job is to sort objects coming off a conveyor belt, putting the blue eggs or "bleggs" into one bin, and the red cubes or "rubes" into the rube bin. This, it turns out, is because bleggs contain small nuggets of vanadium ore, and rubes contain small shreds of palladium, both of which are useful industrially.

Except that around 2% of blue egg-shaped objects contain palladium instead. So if you find a blue egg-shaped thing that contains palladium, should you call it a "rube" instead? You're going to put it in the rube bin—why not call it a "rube"?

But when you switch off the light, nearly all bleggs glow faintly in the dark. And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.

So if you find a blue egg-shaped object that contains palladium, and you ask "Is it a blegg?", the answer depends on what you have to do with the answer: If you ask "Which bin does the object go in?", then you choose as if the object is a rube. But if you ask "If I turn off the light, will it glow?", you predict as if the object is a blegg. In one case, the question "Is it a blegg?" stands in for the disguised query, "Which bin does it go in?". In the other case, the question "Is it a blegg?" stands in for the disguised query, "Will it glow in the dark?"

Now suppose that you have an object that is blue and egg-shaped and contains palladium; and you have already observed that it is furred, flexible, opaque, and glows in the dark. This answers every query, observes every observable introduced. There's nothing left for a disguised query to stand for.

So why might someone feel an impulse to go on arguing whether the object is really a blegg?

This diagram from Neural Categories shows two different neural networks that might be used to answer questions about bleggs and rubes. Network 1 has a number of disadvantages—such as potentially oscillating/chaotic behavior, or requiring O(N²) connections—but Network 1's structure does have one major advantage over Network 2: Every unit in the network corresponds to a testable query. If you observe every observable, clamping every value, there are no units in the network left over.

Network 2, however, is a far better candidate for being something vaguely like how the human brain works: It's fast, cheap, scalable—and has an extra dangling unit in the center, whose activation can still vary, even after we've observed every single one of the surrounding nodes.

Which is to say that even after you know whether an object is blue or red, egg or cube, furred or smooth, bright or dark, and whether it contains vanadium or palladium, it feels like there's a leftover, unanswered question: But is it really a blegg?

Usually, in our daily experience, acoustic vibrations and auditory experience go together. But a tree falling in a deserted forest unbundles this common association. And even after you know that the falling tree creates acoustic vibrations but not auditory experience, it feels like there's a leftover question: Did it make a sound?

We know where Pluto is, and where it's going; we know Pluto's shape, and Pluto's mass—but is it a planet?
Now remember: When you look at Network 2, as I've laid it out here, you're seeing the algorithm from the outside. People don't think to themselves, "Should the central unit fire, or not?" any more than you think "Should neuron #12,234,320,242 in my visual cortex fire, or not?"

It takes a deliberate effort to visualize your brain from the outside—and then you still don't see your actual brain; you imagine what you think is there, hopefully based on science, but regardless, you don't have any direct access to neural network structures from introspection. That's why the ancient Greeks didn't invent computational neuroscience.

When you look at Network 2, you are seeing from the outside; but the way that neural network structure feels from the inside, if you yourself are a brain running that algorithm, is that even after you know every characteristic of the object, you still find yourself wondering: "But is it a blegg, or not?"

This is a great gap to cross, and I've seen it stop people in their tracks. Because we don't instinctively see our intuitions as "intuitions", we just see them as the world. When you look at a green cup, you don't think of yourself as seeing a picture reconstructed in your visual cortex—although that is what you are seeing—you just see a green cup. You think, "Why, look, this cup is green," not, "The picture in my visual cortex of this cup is green."

And in the same way, when people argue over whether the falling tree makes a sound, or whether Pluto is a planet, they don't see themselves as arguing over whether a categorization should be active in their neural networks. It seems like either the tree makes a sound, or not.

We know where Pluto is, and where it's going; we know Pluto's shape, and Pluto's mass—but is it a planet? And yes, there were people who said this was a fight over definitions—but even that is a Network 2 sort of perspective, because you're arguing about how the central unit ought to be wired up. If you were a mind constructed along the lines of Network 1, you wouldn't say "It depends on how you define 'planet'," you would just say, "Given that we know Pluto's orbit and shape and mass, there is no question left to ask." Or, rather, that's how it would feel—it would feel like there was no question left—if you were a mind constructed along the lines of Network 1.

Before you can question your intuitions, you have to realize that what your mind's eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can't see their intuitions as the way their cognitive algorithms happen to look from the inside.

And so everything you try to say about how the native cognitive algorithm goes astray, ends up being contrasted to their direct perception of the Way Things Really Are—and discarded as obviously wrong.
true
true
true
"If a tree falls in the forest, and no one hears it, does it make a sound?" I remember seeing an actual argument get started on this subject—a fully…
2024-10-13 00:00:00
2008-02-11 00:00:00
https://res.cloudinary.c…river_fjdmww.jpg
article
lesswrong.com
lesswrong.com
null
null
6,187,747
https://github.com/blog/1580-keep-your-email-private
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,477,353
http://www.quora.com/Stop-Online-Piracy-Act-SOPA-1/If-you-had-direct-access-to-a-policymaker-in-the-White-House-what-is-the-best-argument-you-could-make-against-SOPA/answer/Jimmy-Wales
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,357,681
https://kentcdodds.com/blog/making-your-ui-tests-resilient-to-change
Making your UI tests resilient to change
null
You're a developer and you want to avoid shipping a broken login experience, so you're writing some tests to make sure you don't. Let's get a quick look at an example of such a form:

```
const form = (
  <form onSubmit={handleSubmit}>
    <div>
      <label htmlFor="username">Username</label>
      <input id="username" className="username-field" />
    </div>
    <div>
      <label htmlFor="password">Password</label>
      <input id="password" type="password" className="password-field" />
    </div>
    <div>
      <button type="submit" className="btn">
        Login
      </button>
    </div>
  </form>
)
```

Now, if we were to test this form, we'd want to fill in the username, password, and submit the form. To do that properly, we'd need to render the form and query the document to find and operate on those nodes. Here's what you might try to do to make that happen:

```
const usernameField = rootNode.querySelector('.username-field')
const passwordField = rootNode.querySelector('.password-field')
const submitButton = rootNode.querySelector('.btn')
```

And here's where the problem comes in. What happens when we add another button? What if we added a "Sign up" button before the "Login" button?

```
const form = (
  <form onSubmit={handleSubmit}>
    <div>
      <label htmlFor="username">Username</label>
      <input id="username" className="username-field" />
    </div>
    <div>
      <label htmlFor="password">Password</label>
      <input id="password" type="password" className="password-field" />
    </div>
    <div>
      <button type="submit" className="btn">
        Sign up
      </button>
      <button type="submit" className="btn">
        Login
      </button>
    </div>
  </form>
)
```

Whelp, that's going to break our tests. But that'd be pretty easy to fix right?

```
// change this:
const submitButton = rootNode.querySelector('.btn')
// to this:
const submitButton = rootNode.querySelectorAll('.btn')[1]
```

And we're good to go! Well, if we start using CSS-in-JS to style our form and no longer need the `username-field` and `password-field` class names, should we remove those? Or do we keep them because our tests use them? Hmmmmmmm..... 🤔

## So how do we write resilient selectors?

Given that "the more your tests resemble the way your software is used, the more confidence they can give you", it would be wise of us to consider the fact that our users don't care what our class names are. So, let's imagine that you have a manual tester on your team and you're writing instructions for them to test the page for you. What would those instructions say?

- get the element with the class name `username-field`
- ...

"Wait," they say. "How am I going to find the element with the class name `username-field`?"

"Oh, just open your devtools and..."

"But our users won't do that. Why don't I just find the field that has a label that says `username`?"

"Oh, yeah, good idea."

This is why Testing Library has the queries that it does. The queries help you to find elements in the same way that users will find them. These queries allow you to find elements by their role, label, placeholder, text contents, display value, alt text, title, test ID. That's actually in the order of recommendation.
There certainly are trade-offs with these approaches, but if you wrote out instructions for a manual tester using these queries, it would look something like this:

- Type a fake username in the input labeled `username`
- Type a fake password in the input labeled `password`
- Click on the button that has text `sign in`

```
const usernameField = rootNode.getByRole('textbox', { name: /username/i })
const passwordField = rootNode.getByLabelText('password')
const submitButton = rootNode.getByRole('button', { name: /sign in/i })
```

And that would help to ensure that you are testing your software as closely to how it's used as possible. Giving you more value from your test.

## What's with the `data-testid` query?

Sometimes you can't reliably select an element by any of the other queries. For those, it's recommended to use `data-testid` (though you'll want to make sure that you're not forgetting to use a proper `role` attribute or something first).

Many people who hit this situation wonder why we don't include a `getByClassName` query. What I don't like about using class names for my selectors is that normally we think of class names as a way to style things. So when we start adding a bunch of class names that are not for that purpose it makes it even **harder** to know what those class names are for and when we can remove class names.

And if we simply try to reuse class names that we're already just using for styling then we run into issues like the button up above. And *any time you have to change your tests when you refactor or add a feature, that's an indication of a brittle test*. The core issue is that the relationship between the test and the source code is too implicit. We can overcome this issue if we **make that relationship more explicit.**

If we could add some metadata to the element we're trying to select that would solve the problem. Well guess what! There's actually an existing API for this! It's `data-` attributes! For example:

```
function UsernameDisplay({ user }) {
  return <strong data-testid="username">{user.username}</strong>
}
```

And then our test can say:

```
const usernameEl = getByTestId('username')
```

This is great for end to end tests as well. So I suggest that you use it for that too! However, some folks have expressed to me concern about shipping these attributes to production. If that's you, please really consider whether it's actually a problem for you (because honestly it's probably not as big a deal as you think it is). If you really want to, you can compile those attributes away with `babel-plugin-react-remove-properties`.

## Conclusion

You'll find that testing your applications in a way that's similar to how your software is used makes your tests not only more resilient to changes, but also provide more value to you. If you want to learn more about this, then I suggest you read more in my blog post Testing Implementation Details.

I hope this is helpful to you. Good luck!
true
true
true
User interface tests are famously finicky and prone to breakage. Let's talk about how to improve this.
2024-10-13 00:00:00
2019-10-07 00:00:00
null
null
kentcdodds.com
Kentcdodds
null
null
25,504,068
https://regalify.me
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,590,531
https://wiki.mozilla.org/Stylo
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,103,120
http://www.networkworld.com/news/2011/101211-peer-to-peer-update-to-zeus-trojan-251884.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
840,758
http://www.google.com/hostednews/ap/article/ALeqM5jlMpJGn28kqCcgU-aGcYE_ZHW-ywD9AT460O0
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
11,752,542
https://github.com/ptresearch/me-disablement/blob/master/How%20to%20become%20the%20sole%20owner%20of%20your%20PC.pdf
me-disablement/How to become the sole owner of your PC.pdf at master · ptresearch/me-disablement
Ptresearch
This repository has been archived by the owner on Sep 22, 2022. It is now read-only.
true
true
true
Contribute to ptresearch/me-disablement development by creating an account on GitHub.
2024-10-13 00:00:00
2016-05-21 00:00:00
https://opengraph.githubassets.com/80ee6fbeca3d0215b89cd3d8e81ce93336cfd42786225eb4cce1acc7c2e1a4ea/ptresearch/me-disablement
object
github.com
GitHub
null
null
26,822,171
https://csrenown.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,448,844
https://becausecurious.me/remove_shotcut_gaps
becausecurious.me
null
You need to enable JavaScript to run this app.
true
true
true
A site of a very curious person.
2024-10-13 00:00:00
null
null
null
null
null
null
null
11,101,891
https://www.technologyreview.com/s/600762/apprentice-work/#/set/id/600803/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,429,359
https://play.google.com/store/apps/details?id=com.aihashtags
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
660,850
http://technologizer.com/2009/06/16/still-needed-for-the-iphone-a-great-office-suite/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,812,763
https://www.statecraft.pub/p/how-to-speedrun-a-new-drug-application
How to Speedrun a New Drug Application
Santi Ruiz
# How to Speedrun a New Drug Application

### "it took 48 hours to write the application, instead of four months"

**Alvea** launched in 2022, and announced on its website that it was “building an engine for warp speed drug development.” In the middle of COVID-19, the company planned to show it was possible to dramatically reduce the time from molecule to trials for new medicines, starting with a new vaccine candidate tailored to BA.2, a particular COVID variant.

A year and a half later, Alvea **shut down** (for reasons we explore here). But before that, it became the first company to vaccinate an animal against Omicron, and **proved** that pharma companies can move far faster through the FDA approval process than they currently do.

**Today we talk to Grigory Khimulya, the former CEO of Alvea, about how we can get effective new drugs more quickly.**

**How’d this thing start, Grigory?**

Initially, **Kyle Fish** and I started an AI antibody design company to do early-stage design of drug candidates against new pandemics in weeks, instead of the usual years. Alvea grew out of that, in part. It was a four-person co-founding team: myself, Kyle, **Ethan Alley**, and **Cate Hall**. This is really how we got our first experience of the pharma industry and began to see bottlenecks. In our experience, these bottlenecks are very common downstream of the initial discovery and design stages, and through the highly regulated steps, all the way to clinical trials.

**Alvea started** in response to the public health emergency from the Omicron variant. It’s been what, two years now? At the time, it was the most transmissible and immunoevasive COVID variant to date. There was lots of uncertainty about how dangerous it was going to be. There was a huge public health need, especially in low- and middle-income countries that were very unlikely, in our view, to receive updated COVID vaccines in time.

This was also an opportunity for me to see what the later stages of the process looked like and whether there was a way to get to clinical trials and to do drug development much faster than it's normally done. This is where we've had some success, even though we ultimately made the **decision to wind down**. That's how it came together. Ethan and Cate came in that first week, as we were getting concerned about Omicron, and it all started from there.

*If memory serves, we hired our first 20 or 30 teammates in the next two weeks.*

#### As you were putting the team together, what did you imagine the roadmap would be? What was the initial plan for getting the thing off the ground?

What I've been excited about for a long time, the problem we’ve been solving, is taking a new medicine from design to human trials to results as quickly as humanly possible. This is of course particularly critical in pandemic emergencies. But it's really important in peacetime drug development as well. *Every wasted month and year of development time off a treatment for a disease that doesn't have one is costing lives.*

What we wanted to do is design a very simple and scalable COVID vaccine focused on Omicron. Initially we focused on the BA1 variant, and then we pivoted to the BA2 variant that emerged a bit later. It’s a transition from this very minimally regulated world of basic and applied science, where drug designs come from, to a suddenly very regulated world of human clinical trials.
You go through all the steps required, including animal experiments that test the safety and efficacy of the drug, and manufacturing at sufficient scale to provide for a clinical trial. Then of course you go through the clinical trial itself. We wanted to try and streamline every part of this process to get this new medicine to patients in the pandemic and see how well it worked.

#### Obviously, as a small, lean startup, you were able to hire quicker. You were able to do some things much faster. I think that's clear.

#### What was your original theory about being able to streamline the process? Why wouldn't the pharma industry be able to move at the pace that you were moving?

This is a really good question. I think my initial motivation was something like, “this is a really important problem.” I think this holds true for most people on our team. We didn't necessarily see all the ways in which it would be easy or difficult. We just had a sense that this is something that could be done a lot faster by a dedicated team in a situation where the emergency is extremely clear and salient.

We did an informal poll and many of our advisors were setting numbers like five years to go from an idea to submitting the clinical trial application to the regulator. Not even to running the clinical trial. It just didn't seem right.

Of course, we were not the only ones who were thinking about that. Some pharmaceutical companies achieved tremendous results during COVID, including Moderna and Pfizer. This was an inspiration for us, but we wanted to see how this could be done as a small team from design onward.

The way the industry is normally structured, there are drug design people, and there are early-stage biotech people who don't really think about the regulated stuff. They think about the science, they generate the candidates, and then there is a very inefficient, high transaction cost process where some of these candidates get picked up by big pharma and then developed further.

Just to provide more light on what we discovered, I'm going to tell a bit of a stylized story.

#### I love stylized stories.

Obviously, it works differently in every company. But every step: animal testing, clinical trial design and preparations, interaction with regulators, and manufacturing, among others, is usually done by a siloed department, and many of them are done in series. One step gets done in one department and then it passes the gate and proceeds to the next department, but only after the first one has done its job. This can lead to the steps not being under a tight schedule, and a really slow and annoying process of actually assembling the regulatory package.

#### Is that package a new drug application?

Yeah, in the US and some other countries it’s called the clinical trial application (CTA). It's all the documents that go to the regulators for them to eventually make a decision.

[*Astute readers may remember* ***Statecraft*** *interviewee Dr. Gilbert Honigfeld posing with all 327 volumes of the clinical trial application for clozapine.*]

It’s a summary, including hundreds of pages of supporting data for all different aspects. The FDA benchmark, the average time it takes to write this package, is four months. You've got to get all the departments together and they’ve got to talk about it. They’ve got to plan it out and then actually execute the writing.
For what its worth, this timeline is based on my recollection of what a few FDA experts anecdotally said was the expectation, but I so far haven't found solid stats which are this fine-grained, and I would love to see them if they exist. From the start, we decided to do things a bit differently. Instead of doing this in sequence, in departments, we did it with a small, integrated team. And we did it in parallel as much as scientifically possible from a risk management perspective. Because it's a small team and everything was designed from the start to proceed in parallel to the final result, it took 48 hours for us to write the final application, instead of four months. #### Is the reason that a big pharma company doesn’t operate on a more integrated scale like this just math? Is it that J&J is a massive operation, so it’s not feasible to do all the work in parallel? I think in normal time, scale has a lot to do with it. In particular, it's the difference between working on many drug candidates in parallel, vs focusing on a single drug and running all the stages of the process in parallel. I think that’s the best way to think about it. The latter thing is what in our experience gives you a really big speed boost. The approach that we've taken, and what’s really been enabled by this integration, parallelization, and a small team, is that we would always be looking across all the stages and all the activities, and asking, “What is the current bottleneck? What is the thing, right now, that is pushing back our delivery date of the clinical trial application and ultimately the clinical trial?” Of course, a lot of these things just can't physically be done faster. Sometimes cells just need time to grow and there isn't a whole lot you can do about it. But in our experience, a lot of the slow steps are slow because of people, not physics or biology. We would basically focus the attention of everybody we could on the bottlenecking step and invest a somewhat unusual amount of effort into really going after it. So that's the second piece of it. #### If Alvea was still around today, how much of that initial pace do you think you could maintain over time? How much of it was the consequence of an emergency environment and a tightly scoped project? #### If you were still doing this a year from now, could you keep that pace in perpetuity? My guess is yes, absolutely we would have. I think one thing that didn’t lend itself to this approach was the early stages of drug discovery themselves. Our initial candidate was chosen and optimized to be quite simple and well-validated from a scientific standpoint. When, after our first clinical trial, we expanded into more speculative and challenging scientific areas and applied a similar kind of thinking to speeding it up, it turned out to be much, much harder. Early-stage, risky science is very difficult to do on schedule, even with all the best conceptual speed-up approaches applied: parallelization, contingency and bottleneck optimization, etc. The focus of our pivots from that point on was coming in on cases where the science is reasonably clear and there was a strong proof of concept: for example, in animals, which is still comparatively unregulated. That’s one place where I think our initial assumption that we could speed everything up did not pan out. But as far as running through a new clinical trial goes, I think the approach generalizes. We saw that, in preparation for future clinical trials as we continued to operate. 
#### What did you learn about the regulatory environment from that first clinical trial? I think that was the most important learning. There’s a meme in Silicon Valley circles about people who are really into moving fast and breaking things. The caricature is that the FDA is the enemy of progress, medical regulators are the enemy of progress, and they're slowing everything down. On reflection, I don’t agree with that take, and our experience doesn’t really support it. #### When you say on reflection, do you mean that you would have agreed with that take before this process? No, I just would see that conversation going on in a lot of private and public places, and I was agnostic about it because I'd never interacted with regulators myself. I could see reasonable people pointing to lots of inefficiencies in how the FDA and any medical regulator operates. No doubt things can be improved, but there are a few things that I've come to appreciate. *What regulators are trying to do is actually really hard.* People look at the drug development processes, and they notice that like 90% of the effort is focused on testing, verifying, and quality control, all these boring things after the creative, exciting part is done. There is a big difference between the hypothetical drug that is a formula or a design in the computer, and the actual drug that ends up in your vial. In drug development especially, making a thing that is plausibly good is much, much easier than making something that is actually, reliably, *very good*. Deploying drugs to scale requires that reliability. It’s a very hard socio-technical problem. All the different kinds of regulatory requirements, quality management, quality control, etc., that could be naively identified as red tape or boring paperwork that slow down the innovators are actually there to achieve that reliability. Of course, when you get into the details, there are tons of ways this could be done more efficiently. But the fact that validation, testing, and ensuring that things are as they seem is 90% of the process is just the way the world works, not any fault of the regulators. My second conclusion is that regulators are fundamentally reasonable, scientifically minded people who want to do good things for patients and get good drugs to them quickly. We’ve interacted the most with regulators in South Africa who have been, in a word, fantastic. In every case where we had a scientifically based opinion that some guideline was inappropriate in light of new evidence, they have been very receptive. People who have this critical take about regulators have often seen these guidelines that seem very harsh and uncompromising in how they state some process needs to work. But it's actually always a conversation with highly scientifically competent people who are capable of analyzing the evidence, hearing arguments and reconsidering what might be required in any given case. That’s been our experience. Something that folks who are coming into it fresh may not think to do is don’t take anything as gospel. This is advice I only really started hearing after getting to people who are some of the most experienced in the world at dealing with this kind of regulatory process. It's always a conversation. It's always a conversation about science and safety for the patient, which is very much in everybody's interest: patients and regulators and companies. It feels to me like regulators don't set their own risk tolerances. 
Regulators get blamed when something goes wrong, but don't get a whole lot of credit when they make things safe and good. Of course there’s legislation and Congress to set standards, but it’s really society, through the court of public opinion, that sets a risk tolerance threshold. I’m really sympathetic to society potentially shifting the accepted risk tolerance toward getting more innovative medicines to patients faster. But that’s just not a decision that the FDA makes. It is a decision that happens at a much higher and more diffuse level in society. It feels like an aspect that I failed to appreciate in my naïve mindset before I went through the process, and that can be ignored by folks who are looking for a policy solution. There’s an incentive set up by how the news works. There’s much more reporting on very rare issues compared to achievements in safety in the vast majority of the drug supply. #### Can you give me some more context for the conversation between the drug sponsor and the regulators? #### You have, on paper, some set of requirements. But those requirements may actually be negotiable? What's the process by which you negotiate that? The high level frame on this process is it is up to the drug developer to propose what kinds of evidence would be sufficient. Given the scientific literature and everything that is known about this kind of drug, what would be sufficient to establish safety and efficacy? What would be sufficient regarding a significant unmet medical need to approve the first human clinical trial? These guidelines are compilations of high level best practices and suggestions for a thousand specific decisions that you might make in the course of developing a drug. For example, you might need to run your safety testing in a certain number of different animal species, which of course is expensive and takes a long time. #### Those are specifically guidelines and best practices rather than regulatory requirements, right? Correct. There are regulatory requirements, which are good manufacturing practice, good laboratory practice, good clinical practice, so called GXP, and those are much more serious. They're also much higher level and have more to do with ensuring that every bit of quality control the regulators have agreed on for these kinds of cases is there. But, in the case of guidelines, they speak much more precisely to the specific kind of drug that you might develop. For example, for us, this was a **DNA plasmid**-based vaccine, which is very different from many other kinds of drugs that FDA considers and has much more specific requirements for. In some ways, it does not need to be tested as extensively as a completely new, unprecedented drug because of its very long history of use in humans and many clinical trials that have come before. This process happens concretely, in your conversations with regulators before your clinical trial application is submitted. There are different kinds of preliminary meetings with the FDA, and the drug developer is in a position to propose what they think is sufficient to establish safety, consistent with what’s normally required for a phase one clinical trial. Again, in our experience regulators are super reasonable and responsive to hard scientific arguments about why this or that thing might optimally be done differently. #### Looking back, is there something you would have done differently in your Interactions with regulators? Honestly it feels like the answer is no. 
I'm sure there were small things we could have changed in hindsight about this or that part of the application to preempt a question or something like that. But not in a big way. The problem in our experience was really not on the regulatory side. The problem was with operational inefficiencies that run deep and fractally for every part of this process, and I think that suggests a focus for other efforts to speed it up. Getting operations parallelized, and speeding up the process of generating the data and arguments that would convince regulators, give you a much bigger bang for your buck than over-optimizing the regulatory process. There are common-sense things that we did that I would recommend everybody do, which are to work with experienced consultants who have seen a lot of applications and to proactively address ways in which you might be doing things differently from how they would be normally done. I think there are ways to implement the same regulatory requirements that lead to a very slow process or a very fast process. It's really in the implementation and prioritization, making all the pieces fit together very tightly on schedule, where you get the speed up. #### Tell me a little bit more about the “fractally distributed” organizational problems that you encountered. For sure. Again, the larger context of this is a normally serial process through siloed departments. The big changes are integrating the process, running it in parallel. On top of that, there is a huge family of bottlenecks, some of them funny, some of them just frustrating, that we ran into and resolved. In terms of solutions, there are maybe three different clusters of things. One is just using relatively straightforward software and automation tools. This is usually the easiest solution. *For example, when you work with any vendor for a pharmaceutical company, almost everybody requires an NDA to be signed. This by itself can eat up to two weeks of time on both ends of this transaction.* We had automated this NDA signing process so that it would usually happen in hours. Many of our vendors would follow up and tell us how insanely fast this was and how it was the smoothest and fastest contracting experience that they had ever had. I really don't have an answer for why this is not how it normally works, but we've encountered a lot of places where a simple – for folks in Silicon Valley simple – software and automation infrastructure was extremely helpful. #### I need more of an answer than that. You can't just tell me you don't know why you guys were so effective. Okay, I think part of it is the siloing between departments. The value of this kind of solution is much clearer if you are looking at the whole process and wanting to optimize the bottleneck at every step for a single drug you really care about getting to the patients as quickly as humanly possible. If you are running tons of these drugs in parallel through this sequential gated process and the whole thing takes five years, and that's the industry standard, then saving a week on the NDA negotiation side might not feel like such a big deal. I think that’s part of it, but I will be honest with you. I don't have a satisfying answer to this for myself, and I'm very frustrated by it. I expected there would be a better way but I'm still at a bit of a loss. This is not to say that we are the only folks who are moving in this direction. **Vial**, for example, is one company that is working in this direction, not quite as fast as us yet. 
But this is not widespread by any means. Another big pattern is that, for some reason, for a lot of these key processes that really move the needle on speed, the standard operating procedure for the industry is to talk to maybe three to five different vendors, compare them across a bunch of categories, and then pick one and go forward with them. That never seemed to work for us. We would approach it by finding every single vendor in the world who does the thing that we need done, finding the best people, and then going in and very closely redesigning and managing their process for maximum speed. Practically, this involves parallelization and then bottleneck hunting in the vendor’s process to identify ways to make it faster. A good example of that was the manufacturing of the drug itself, of the DNA plasmid that was our vaccine’s main active component. Our initial quotes from the first few vendors were like two years. “It takes two years. There is no way around that. This is just how long it takes.” Then we found some folks who said, “It’s going to be hard, but we can do it in a year.” *Then, once we have come in and looked at it deeply and redesigned it in collaboration with these folks, we ended up doing it in just over two months if memory serves.* This is the kind of speed up that's possible. Part of that is really taking advantage of platform and modular processes. A lot of vendors, folks who ostensibly do the same thing over and over again for many different pharma companies, won’t do exactly the same thing: they develop a new, custom process for every customer. It's a little bit complicated, but to simplify it a lot, there are often intellectual property reasons for it. If you have your own patented process, which was developed specifically for you to manufacture your drug, you have some increased protection in future intellectual property disputes. But it just ends up eating a lot of time. Often, these vendors have a platform modular process with reasonable defaults on lots of different parameters that they can use to produce a very high-quality product. Taking advantage of that wherever possible is a really big thing. You see it across other fields as well. In the formal academic work on mega-projects, modularity has emerged as a really big factor for projects that don't go over budget and over time. This was a big thing for us. One last thing that I think is unavoidable if you want to move this quickly is strategic in-housing. If everything else fails, if you can't find a vendor who can do things quickly enough, you can do it yourselves. This usually happens for relatively small steps of the process that are nevertheless critical and end up bottlenecking. This is where the culture and the people are crucial, because, of course, it’s expensive and really difficult. For the small number of really important bottlenecks, you need to have an emergency mindset. You end up having to do things like building out an entire tissue culture facility in-house in a matter of weeks, faster than any vendor quote. *To summarize, automate as much as possible, look at every single vendor and redesign and manage their processes for speed, and then strategically in-house the most important bottlenecking step for which everything else fails.* I guess it's more of a family of responses to a whole zoo of specific small inefficiencies, rather than just one thing that makes a huge difference. Perhaps that's the other reason why this is relatively rare and why we've been able to achieve these results. 
It’s a big leap to a better place, at least in terms of speed, that’s difficult to do piecemeal; we developed it in response to a pandemic emergency. You need to change a hundred things and you need to apply this approach a lot to really see the kind of timelines that we've been able to see. #### Is there anything else I should have asked you? Perhaps one thing to add is what to take away from the ultimate decision to wind down. There is never a single explanation for a decision like that. Of course, changes in the funding environment have been factors, but there are also factors connected to the specific risk profile of the science that we decided to pursue. I really think there is a huge path forward to substantially speeding up drug development and clinical trials and that there has been awfully little innovation on that front, all things considered. I'm excited for folks to keep working on this problem. I'm optimistic about large language models automating a lot of this work in ways that we got started on. This feels like a profound opportunity right now, and the wind down of any one company really doesn't invalidate this concept for me. I think our results – getting from idea to first in-human trials in less than six months – really speak for themselves in terms of what's possible. *Thanks to Chloe Holland for her judicious edits to this transcript.* ### Further reading: **A tweet thread from then-co-CEO on some of Alvea’s early progress** This is great! I'm dying of recurrent / metastatic squamous cell carcinoma, originally of the tongue, and the FDA's pokey slowness is baffling and infuriating: https://jakeseliger.com/2024/01/29/the-dead-and-dying-at-the-gates-of-oncology-clinical-trials
true
true
true
"it took 48 hours to write the application, instead of four months"
2024-10-13 00:00:00
2024-02-22 00:00:00
https://substackcdn.com/…b_2495x1404.webp
article
statecraft.pub
Statecraft
null
null
23,466,442
https://exploringjs.com/tackling-ts/
Tackling TypeScript: Upgrading from JavaScript
null
# Tackling TypeScript: Upgrading from JavaScript

## About this book

This book consists of two parts:

- Part 1 is a quick start for TypeScript that teaches you the essentials quickly.
- Part 2 digs deeper into the language and covers many important topics in detail.

This book is not a reference; it is meant to complement the official TypeScript handbook.

**Required knowledge:** You must know JavaScript. If you want to refresh your knowledge, my book “JavaScript for impatient programmers” is free to read online.

## Read all essential chapters

## Buy the book

If you buy the digital package, you get the book in four DRM-free versions:

- PDF file
- ZIP archive with ad-free HTML
- EPUB file
- MOBI file

### Discounts and bulk purchases

- Discounts: If a digital package is beyond your means, you can get a discount via this form.
- Bulk purchases: If you intend to buy more than 10 digital copies, please contact me via email at `dr_axel AT icloud.com` and I’ll help you make the purchase (Payhip doesn’t currently support bulk purchases directly).

## About the author

**Dr. Axel Rauschmayer** specializes in JavaScript and web development. He blogs, writes books and teaches classes. Axel has been writing about JavaScript since 2009.
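The page above stays at the level of an overview, so here is a minimal, illustrative sketch of what “upgrading from JavaScript” tends to look like in practice. It is not taken from the book, and the function name is invented for the example: an untyped JavaScript function is kept as-is and then annotated so the TypeScript compiler can check its callers.

```ts
// Plain JavaScript: this also compiles under TypeScript's default settings,
// but `amount` and `currency` are implicitly `any`, so nothing is checked.
//
//   function formatPrice(amount, currency) {
//     return currency + amount.toFixed(2);
//   }

// The same function with type annotations added. Calls such as
// formatPrice("12", "EUR") are now rejected at compile time.
function formatPrice(amount: number, currency: string): string {
  return currency + amount.toFixed(2);
}

console.log(formatPrice(12.5, "€")); // "€12.50"
```

Under TypeScript’s `strict` compiler option, the untyped version would itself be flagged, because implicitly `any` parameters become compile-time errors.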
true
true
true
null
2024-10-13 00:00:00
2009-02-01 00:00:00
null
null
null
null
null
null
29,217,994
https://twitter.com/rainmaker1973/status/1459891368621400069
x.com
null
null
true
true
false
null
2024-10-13 00:00:00
null
null
null
null
X (formerly Twitter)
null
null
9,520,389
http://davidsimon.com/zero-tolerance-is-exactly-what-it-sounds-like/
Zero tolerance is exactly what it sounds like:
Name
Intolerance. And a broken-windows policy of policing is exactly what it means: The property matters. The people can stay broken until hell freezes over. And the ejection of these ill-bought philosophies of class and racial control from our political mainstream — this is now the real prize, not only in Baltimore, but nationally. Overpolicing and a malignant drug prohibition have systemically repressed and isolated the poor, created an American gulag, and transformed law enforcement into a militarized and brutalizing force utterly disconnected from communities in which thousands are arrested but crime itself — real crime — is scarcely addressed. To be sure, there are a great many savage inequalities in our society — no doubt we could widen this discussion at a dozen points — but now, right now, overpolicing of the poor by a militarized police-state is actually on the table for the first time in decades. And don’t for a second think that stabbing a fork through the heart of zero tolerance isn’t job one. Nothing else changes, nothing else grows in the no-man’s lands of a war zone, and our inner cities have been transformed into free-fire battlegrounds by this drug war and all of the brutalities and dishonesties done in its name. Yes, the charges came for the Baltimore officers and the city is now relatively quiet. But step back for a moment from the immediacy of each individual outrage — from Ferguson, from Staten Island, from North Charleston, from West Baltimore — and realize that while this systemic overlay of oppression will offer a moral exemption or two when the facts or the digital video demands it, charging an officer here or implementing a new training course for police there, the game itself grinds on. Even as they acknowledge an atrocity or two, the same voices of seeming reason continue to suggest that we needn’t abandon all the good that zero-tolerance enforcement has done for us. Why look at New York, can’t you? Safest big city in America. Zero-tolerance works, goddammit. It makes us all safer, and our cities governable. Fix the broken windows, write up all the small infractions, punish every minor offender and soon, you’ll see, the city becomes liveable again. If you have money, quite liveable indeed. Meanwhile, in Baltimore — as in every other city that doesn’t happen to be the recapitalized, respeculated, rebuilt center of world finance — zero tolerance has been a disaster. And the levels of police violence and incarceration that spring from this policing philosophy are proving more lethal to the American spirit and experiment than even race fear and race hatred, as ugly and enduring as that pathology is. No, this is now about class. This is those who have more using the levers of governance to terrorize those who have less, and doing so by using damn near *nothing *to keep the poor at the margins of American life. Four men in four separate cities are dead over a shoplifted cigar, a single sold cigarette, a legal pocket knife and a domestic order for child support. Do any of us feel appreciably safer for the cost? Do any of us still want to talk about breaking a few eggs to make that omelet? Do any of us still want to defend the absurd and brutalizing notion that by using our police officers to stalk our ghettoes heaving criminal charge upon criminal charge at every standing human being, we are fixing, or helping, or even intelligently challenging the other America to find a different future for itself? Why yes, yes we do. Incredibly, we do. 
* * * As much as the best slogans and the purest ideology wishes it otherwise, this astonishing edifice of American repression, built carefully, brick by brick, over decades and sustained by a paper-thin, 24-hour-a-day media culture that traffics only in fear and shock value, is not going to fall with a riot. Exactly the opposite is going to happen if rightful civil disobedience gives way to civil unrest. When the very demand is an end to wanton and brutalizing overpolicing, a riot and all the imagery that a riot conjures is in fact the most useless thing in the great arsenal of civil disobedience and rebellion. Yes, if you want to argue anyone’s right to a burn and loot, to declare that America’s dispossessed have been violently targeted, that they are desperate, that they deserve all the violence that these state-sponsored murders elicit, then you can present yourself as a fairly sublime fascimile of Patrick Henry or Malcolm X for our time. Death or glory. Liberty or death. Your rhetoric will no doubt inspire those who are like-minded, and maybe even the folks risking all in the street, as well. And then you — and they — will lose. Me, I’m fucking tired of losing. For decades now, American governance has carefully leached the overt racialist sentiment from its calls for law and order. Just as carefully, with the rise of a black and Latino middle class, that governance has secured some healthy measure of minority participation in a crackdown that now targets the underclass overall. No? Look at the faces of those charged with failing to travel Freddie Gray from street to lockup without severing his spine; ebony and ivory, beating down the poor in perfect harmony. And finally, to fully insulate and institutionalize the brutality, our government has deployed it against us in post-racial fashion. If you don’t think so — if you believe that this is still merely about race — you need to spend some time in places such as Baltimore’s Pigtown or O’Donnell Heights, watching white people of little means getting their asses kicked and their rights violated with as much gusto as in West Baltimore. This war is on the poor. And they are good at this. They understand the optics. And they believe that in these moments when the systemic nightmare that is now American policing reveals itself in a choked-to-death arrestee or a hellish wagon ride, that they can wait out the outrage, that the small bone of a singular indictment or even conviction can be thrown, that eventually the indignation of the oppressed will slip in either its intensity or its discipline, that the street theater will dissipate, or even better for their purposes, lurch into open, CNN-engorged violence. In the end, they expect any uprising to underplay or overplay even the strongest hand. You don’t think so? One word: Occupy. Yes, the street is essential, and more than that, the hands-up ballet that exposed the militarized police response in Ferguson was brilliant, honest disobedience. Those images — far more than anything burned or anything thrown in Missouri — moved this cause forward. Just as fundamentally, the Baltimore imagery of young men standing their ground and claiming North and Pennsie for their own, or marching peacefully in anger toward City Hall against a line of helmets and riot shields, has profound power. Stakes are high and now, with one lethal encounter after another lined up to prove the rule and not the exception to Americans who have little clue about police violence, some moral high ground is there for the taking. 
But Occupy proved that the street is only the opening act, and the second act of this drama — of any popular movement — has to be political. And for a second act to even begin to happen, the optics don’t merely matter — they are everything. The demand here is not merely to punish some police, much as some police need to be held to account. The substantive victory — the one for which there is now actually a window– is for our governance and law enforcement to take its hand from the throat of the other America, to finally and forever abandon the cruelty of an unrestrained drug war, of zero-tolerance policies, of mass incarceration. The demand has to be systemic reform: Governance must allow the dispossessed of this country to stand up and venture unmolested into the same shared future with the rest of us: This is wrong. Let them be. They are Americans. They are *us.* Shame is some powerful shit, and there is so much for all of us to be ashamed about after buying into this repressive dynamic for so long. And for as long as the optics and the discipline of the uprising allow, shame and the grievous sacrifices of Brown and Garner, Scott and Gray are doing hard and essential labor here. Those who live only by the slogan, who want to assert categorically that power only yields to force, that no one ever achieved a real measure of freedom without violence — they talk as if the imagery of violent civil unrest has ever done anything in this country other than push middle Americans into the arms of fearful, authoritarian repression, or even more naively, as if the political middle is somehow unnecessary to political victory in a republic that when it governs itself at all, governs by rough consensus. By any means necessary? As fine a phrase in the cause of liberty as has ever been uttered, but in actual application, it will have to be employed as if the urban poor are not already at the margins of American life, as if their numbers are such that they can find political consensus in this country once rioting becomes the predominant visual. By any means necessary sounds great until you realize that there aren’t actually a lot of means available to the underclass, that bricks and fire will have to suffice against a policing and civil defense apparatus that is already militarized and weaponized beyond anything seen in 1968. To embrace a riot when circumstances offer a real prize for the first time in decades — this would be a triumph of self-defeating anger, however justified or worthy of empathy, on the part of the underclass themselves. Or worse, in the case of those claiming to support the aspirations of the popular risings in the streets of Ferguson or Baltimore, it is armchair revolution, a celebration of perfect ideology ready to street-fight tyranny at the cost of someone else’s blood, someone else’s skull. To argue such desperate extremity, you have to scrub clean every lesson of the last half century that argues for organization and discipline, for mass non-violent civil disobedience and the victories won at the hands of that ideal. Selma, Gdansk, Robben Island — the transformational moments come not when the popular will indulges in violence, but after the state itself indulges in shameless violence and repression against its own people, when the tactics of brutality are overplayed and when the threat or actuality of violence reveals as hollow the moral standing of a bad government. 
You think the presumption is mine, that I’m speaking for the poor from a position of affluence, or white entitlement? Perhaps. Or perhaps the presumption is yours in declaring that many, or even most of our urban poor are not themselves fully aware of the stakes, that they are too battered and enraged by years of authoritarian violence to achieve anything bigger or more lasting than a riot. Perhaps when a Baltimorean of any stripe argues against other Baltimoreans giving in to the rage of a riot when still other Baltimoreans are risking so much to actually reform something — maybe this isn’t actually as much a function of race as you think. And perhaps, too, infantilizing those participating in this uprising by rationalizing the rioting, by implying that the poor and dispossessed can’t instead organize and maintain a disciplined and unrelenting mass protest for real results — perhaps this is an ugly condescension all its own. Real results? Not here, you say. Not now. Are you sure? * * * A few weeks ago, I swallowed hard, put on a tie, and drove down to Washington D.C. to eat rubber chicken and directly engage with people who are said to have some hand in pretending to govern this country. For an old reporter, and one well-versed in certain time-tested newsroom cynicisms, this was a close call. I told the organizers of the event that I didn’t want to be on some damn panel discussing everything from deindustrialization to educational equality to family values. I didn’t want to waste my time sitting in meeting rooms over five-year plans and new slogans for programs that never come. I didn’t want to be used to validate more inertia and failure. “We can’t promise an outcome,” an organizer conceded, “but this time, it’s not just the liberals. Gingrich is a cosponsor along with Donna Brazile, and some of the funding comes from Koch Industries.” Huh. Different. “There’s honestly a chance that some movement on this stuff can happen, actually.” Maybe I’m a chump, but I signed up. No panels, no back-and-forth on all of the global issues in which an actual attempt at reform can be lost, but yeah, I agreed to vent ten minutes on an aspect I guessed probably wouldn’t be covered by people on either the left or the right: The drug war had fucked up policing. It was brutality without purpose, save for the mass incarceration of people who don’t really need to be in prison. It was, to be exact, the same set-piece rant I’ve been giving for more than a decade, but I reheated it again because I thought for once I was talking to a group that all had some feathered piece of the same agenda: The libertarians don’t care about any sense of a shared future, but hey, they see the drug war clearly for what it is, and the lefties know the smell of brutality and repression when it’s in the room. The conservatives? Hell, they can see that the costs of locking up this many human beings for all manner of infraction are more than the country can bear economically. After I signed on, the White House called. Rather than tape his own remarks to the gathering, the President wanted to talk with me (yeah I know, WTF) and send the bipartisan symposium a 10-12 minute video arguing further the disaster that mass incarceration and an unwinnable drug war had brought the country. Huh. So that too. 
And a few days later, I’m sitting with my chicken plate between Newt fucking Gingrich and some vice president for Koch Industries listening to the sitting Republican governor of Georgia — that’s right, good old red-state Georgia — explaining how this essential reform *is already happening*, that in his state, for fiscal and humanistic reasons both, they are closing prisons and dramatically reducing the prison population by walking away from the notion of zero tolerance, and making a very sensible, very human distinction between “those things that we wish people wouldn’t do and those things that we can’t allow you to do.” Still think that there isn’t a window here? This is the actual, on-the-ground statewide abandonment of zero-tolerance by a conservative Republican governor of Georgia; not a proposed change, not an argument undertaken at the fringe of a political campaign or by some gadfly critic or academician. Georgia, of all places, has just abandoned mass incarceration, broken windows and zero tolerance. The governor’s keynote received standing applause, and why not from a bipartisan coalition that had been brought together to pursue a goal of reducing the national prison population by 50 percent? Georgia is doing it, on her own. Go figure. I used my ten minutes as planned, arguing that even if you value public safety above all things, you needed to abandon zero tolerance. Then I sat down again, only to have Mr. Gingrich follow me and declare that while Mr. Simon makes good fictional television dramas, zero-tolerance and broken-windows policing had a real future in our country, that they have in fact claimed a great victory in making New York one of the safest big cities in America. Yeah, this shit will not die easy. Once a myth becomes the truth, it stays true. Mr. Gingrich came back to the table and, God help me, I can never resist a good piss in the wind: “I’d agree with you,” I assured him, “but then we’d both be wrong.” He laughed, and I proceeded to argue to his increasing irritation that comparing New York, or London, or Los Angeles or any other world city to half-hollow, second-tier post-industrial cities was incredibly specious, that what had worked in New York had not worked because Giuliani filled Rikers, or because the civil rights of every black or brown citizen walking the streets had been made to disappear in the name of public safety. He smiled, but he wasn’t listening. Dessert had arrived. * * * This window is eighteen months. After that, the Obama administration ends and whatever follows it — Democratic, Republican — will not likely have the standing or fortitude to argue on behalf of the underclass, to risk the Willie-Horton baiting that can come when a prison is emptied, to expend limited political capital on the most demonized, feared and politically disenfranchised element of our society. The poor, and largely the urban poor at that, will be reconsigned to oblivion when the new administration transitions to power and the affluent who have paid for large chunks of the election victory will have their own notions about how the new president ought to use his political capital. 
If a Republican wins the White House, he will have done so by yet again promising the party base that he will be tough on crime, that small-town values are an elemental truth despite the fact that America is forever more a big-city society, that he is a law-and-order kind of guy, that drugs are bad and that whoever the Democrats send at him is weak and vacillating when it comes to keeping our streets safe. If a Democrat wins, it will at worst be because he maneuvered to the American center and abandoned any primary-season talk about the poor, about urban policies, or emptying prisons or getting soft on crime. At best, even a Democratic president who stays true to a moral course on this issue is likely going to be denied the necessary legislative victories by a Republican congress maneuvering for the next mid-term and presidential election cycles. At the earliest, with either party, nothing happens to help the poor or mitigate the violence directed at the poor by our government until a second term, as it was with this administration. The next window after this one will be, at best, another eight years away. But right now, this president — as a matter of conscience, perhaps, and with no more political worlds to conquer — is speaking words that have not been heard in decades. And willing, perhaps, to grant a legacy of reform to an administration that is of no further threat electorally, his opposition is actually joining the chorus, or — as in the case of Georgia — acting unilaterally to bipartisan applause. Now, in the last years of the last term of this presidency, there is a chance to undo decades of warfare on the poor. Now, right now, the pendulum very much is in swing. * * * There are a lot of people who misread “The Wire” as being cynical about the possibilities of populism or political change; that’s an easy read, in my opinion. Superficial, too. Yes, the drama is a dystopic vision of an ungovernable American city trapped in a rigged game. That’s not accidental: It seems important, I think, to first call a rigged game by its true name, and for the other America, as represented in “The Wire” by certain quadrants of Baltimore, the game is truly and prohibitively rigged. But so was pre-civil rights America a rigged game. And the economic landscape of the country in the industrial age, prior to the Haymarket and Teddy Roosevelt and the rise of collective bargaining, was also a mug’s game for many. The Communist satellites of Eastern Europe were rigged for decades before Solidarity sat down in that shipyard, just as apartheid was its own circular argument until a growing international isolation and economic stagnation forced an illegitimate, authoritarian government to see the man on Robben Island not as their prisoner but as their only possible chance for non-violent transformation. Every era of bad or illegitimate governance is rigged and rigged tight. Until it isn’t. The last time Baltimore — and the rest of urban America — burned for the television cameras, it brought nods of understanding and empathy from the left, and it left the urban poor and their communities even more isolated and vulnerable than before. It is tempting to argue otherwise — to point to community block-grants and UDAGs and say, look what progress did follow the riots in 1968. Or to read the Kerner Commission report and think that what happened in Detroit a year earlier brought the country to some new understanding of the fire next time and how to avoid it. But no. 
The greater wisdom of the Kerner report lies there on the pages still, untouched by anything resembling comprehensive political action. And as for whatever money was tossed into American cities that were leaching population and tax-base after 1968, well, the government has always been okay at regilding ghettoes. Bricks and mortar is one thing, and hey, wherever you go, a developer is always a developer. But people? Where was the grand initiative to reconnect the isolated, urban poor with an economy that was already on the move, that was increasingly rendering them irrelevant to the American future? The hard truth is the only comprehensive and lasting urban agenda that followed the rioting of the 1960s is law and order. A healthy chunk of the DNA of our current militarized policing dynamic and unrestrained use of arrest and incarceration is there, latent, in the fear that those long summers of civil unrest produced in middle America. And Detroit is still Detroit. And the parts of Baltimore that burned on Monday have never quite made it back from what happened in 1968. A riot in London or Los Angeles — and such events were actually used for comparison this week, often by dilettantes from London or Los Angeles — is not going to implode those cities. Damage to a world metropolis can be papered over within a year or two by virtue of the incredible economic engines that guarantee the health of such extraordinary places. What does East London or Crown Heights or South Central mean to vast, monied landscapes that are now the fixed centers of the accumulated financial health of their entire societies? But Baltimore? Gauging what can happen to a Baltimore or a Detroit or a St. Louis in the wake of serious, prolonged riot by referencing a world city is as specious an endeavor as, say, explaining all the good that zero-tolerance policing did in a city that was soaking luxuriously in the quarter-century run-up in the financial markets. New York has busied itself for three decades completely rebuilding itself and recalibrating the wealth of its population to an extent that the poor were not only priced out of Manhattan, but much of the outer boroughs as well. The only thing that is going to mug someone in Alphabet City or Astoria nowadays is the bill from a two-star restaurant. That’s why zero-tolerance worked in New York — because one of the richest cities that humankind has ever built *soon enough had many more rich people and much less poor people overall.* Put Wall Street where North Avenue is and drop West Baltimore where the financial district now sits in Manhattan and see the magic happen. If the financial markets were in Baltimore or St. Louis, and decades of Wall Street bonus money was scarfing up and restoring those towns block by block, why yes, what shining new Jerusalems would result in Maryland and Missouri. And if New York were an old manufacturing center without its bedrock of financial and artistic primacy? That civic and political leaders in second-tier cities — without the mass capital to reconstitute themselves as centers of capitalist affluence — actually followed Giuliani and Bratton into this hellhole is testament to the simplicity and easy sloganeering under which our political culture operates. To its credit, the police department in nearby Washington D.C. tried zero-tolerance policing on the poorer quadrants in Northeast and Southeast and quickly backed away. They were destroying all semblance of community-police relations and police work itself was becoming brutish and ineffective. 
But Baltimore kept going. Incredibly. And “The Wire” was a show made in that time of astonishing and stubborn indifference to the facts on the ground. Our leaders here were willing to fight the theory of zero tolerance to the tune of more than 100,000 annual arrests in a city of 600,000. And while they did, the arrest and conviction rates for every single category of felony crime fell because what is the use of actual police work and crime deterrence when you can sweep the streets of the poor instead? Crime also fell, too, during that time, or so the cooked paperwork says. But more on that later; the complexity of that lie requires a separate essay, perhaps. Again, suffice to say that this shit dies hard, and if the mass protests in Baltimore and other cities achieve only a handful of indictments or convictions, then it probably won’t end at all. But a riot? Christ, what could be better for arguing the need for more shields and helmets, more militarized police, more prisons, more omnibus crime bills. And, of course, more unending drug war. At every level, from the federal to the municipal, American government emerged from the maelstrom of the late-Sixties rioting with a mainstream-voter mandate for law-and-order policing, for establishing layers of social control over the poor, and especially the minority poor, that no longer relied on direct racial discrimination, but on a more coded and nominally color-blind drug prohibition. The blame is bipartisan. Democratic and Republican presidents and governors and mayors competed with each other to spike the new construct with ever greater weaponry and militarization, to make the penalties on even the most minor, non-violent offenses ever more marginalizing and draconian, to demonize and isolate the poor beyond what our bifurcated version of America had already done, to make middle-class and working-class Americans viscerally afraid of and even vengeful against those without. Some of our most populist Democratic leaders traded in this shit for maximum political advantage. I’m looking at you, Bill Clinton. You are one masterful politician, and, well, a self-preserving sonofabitch. As much as anyone, the American gulag, millions of non-violent offenders strong, belongs to you. But hey, that was then. Right now, in this rare window, Mr. Clinton, along with others — including many Americans who occupy the political center and are necessary ballast for consensus — are today as wary of the police and the overreach of zero tolerance, of the drug war, of the mass incarceration of fellow citizens, as they are scared of the poor. It’s been a long time coming, and but for the brutal overreach of the law enforcement community itself and, perhaps, too, the small wonder of digital camera-phones, we would not be here now. But again, we have at best a year and a half before this political window closes. Hell, it may snap shut before then if the leaders of the mass civil dissent in Baltimore and elsewhere can’t sustain the civil disobedience and mass protest, if a mere indictment or conviction sends everyone to a warm coda of self-congratulation. And the window will certainly close if those leaders don’t stay organized and in control of the agenda, if they lose the optics to burning and looting. The American center stared at that shit once before and replied with Nixon and Reagan and three decades of omnibus crime bills, mandatory sentencing, and rampant prison construction. A good, robust riot now brings at least a decade more of the same misery. 
* * * The morning after the day when I apparently engaged in the unpardonable effrontery of urging, on this site, fellow Baltimoreans not to diminish and betray the moral authority and power of the ongoing protests by indulging in violence, I drove to North and Pennsie to spend the morning, along with many other city residents, picking up trash. It’s too much to claim I was at that point motivated by any hope of communal affirmation — though seeing hundreds of us — most black, but some white — walking the streets and alleys of Penn-North doing the same thing was pretty damn affirming. Mostly, though, I just woke up sick to my stomach at the thought of CNN and Fox reporters doing their Tuesday stand-ups with burnt trash and broken glass as their backdrop. I wanted the images of the previous day and night overtaken by something else. After the trash was gone, even from many of the rear alleys, I joined the renewed occupation of North and Pennsie for a time. The intersection was closed for the day and the police line in riot gear seemed to have little appetite to push anyone off the real estate. The young protestors stood their ground, some bantering with the police and others glaring implacably. It stayed that way for a good while until some asshole threw a bottle at police and then, some other asshole, safely ensconced behind helmet, Kevlar and shield, replied by firing mace into the eyes of the front row of protestors. The throng in the intersection broke in a spray of shouting humanity. The worst kind of shit seemed to be starting again, until a cadre of young men came off the corners, arms raised, shouting for peace, telling everyone to calm down. Stand your ground, but keep calm. Peace, they chanted repeatedly, and the moment held, with the protesters and police settling in against each other in tempered hostility for the rest of the afternoon. I am honestly not sure that I have ever been more proud to be a Baltimorean than at that precise moment. And I am certain that I have never had more belief that right now, for the first time in decades, something real can actually be won.

So… what can we do in this time frame? I live in Chicago, and while I don’t think we have a zero tolerance policy (I could very well be mistaken), the authorities do play the stats game. And then of course we have the whole Homan Square thing. It’s hard to get people out of their apathy here. Well, it’s hard anywhere, but especially here because corruption has always been such an integral part of the city, it takes a real scandal to piss people off. 
I mean, I’ve been canvassed outside of polling stations twice just before voting, and at the time I didn’t even think about it, since it’s Chicago. Vote early and often, don’t you know. But seriously, what can we do in this window of time? Even though my neighborhood has a fair amount of gang activity (and worse, I’m set to move to cut my commute), being white and middle class, I feel pretty removed from the effects of the national discussion.

I realize I’m late to this party by about a month, but beautiful essay Mr. Simon. But I really wonder whether we’re asking the right questions when it comes to violence vs. nonviolence for political agitation: http://badprophet.weebly.com/home/violence-nonviolence-or-should-that-be-the-question

You hit the nail on the head with why the zero tolerance policy is in jeopardy … because all the political parties have something to benefit from abolishing it. Now, how do you build that same coalition for something like getting a body camera on every police officer? If that happens I guarantee you that will make one of the biggest differences in urban policing that we have ever seen. Several months ago, Obama proposed something like $263 million in addition to state funding to help make this happen. It needed congressional approval so naturally it went nowhere. What if one of the main messages people got out of the riots was that the people protesting were demanding that in exchange for the funding for body cameras, they were willing to cut an equal amount of spending on food stamps? Whatever it took to get a deal done. Sure, that’s a shitty and unfair deal for the poor, but Republican House and Senate members would be able to go back to their constituents and say that this funding would make government more transparent and they’re helping people get off government dependence. Bam, you got the body cameras, and these types of injustices go down very rapidly. Hopefully the solution would not have to be as extreme as the food stamp example, but it’s something that shocks people and gets the national conversation on how to pragmatically get a deal done, instead of every group going into their usual talking points and nothing changes.

In the mid 1600’s Maryland and Virginia specifically codified “whiteness” into existence, for the first time. With the express purpose of this codifying being the dehumanizing and reducing of all non-white peoples to the status of non-citizens. And conversely elevating the status of all whites to “God-ordained” preeminence. As very clearly evidenced by the primary focuses and penalties of those original codes. Since those humble beginnings, there has literally not been one second of American history that has avoided the astronomically putrid stain of those original acts of racialization! EVER! So, as long as the original distinction that is whiteness (that ultimately started the ball rolling on all this garbage in the first place) continues to exist, EVERY discussion like these that presumes to discuss the plight of any “non-white” is ultimately nothing more than smoke and mirrors. 
Because, without first discussing the real underlying disease that is the fundamental illegitimacy of the continuing existence of this arbitrary thing called “whiteness”, it’s all just a huge and self-perpetuating shell-game of egotistically pontificating about the seemingly endless correlating symptoms, without ever actually addressing the causation of the disease itself. And the disease is: That there continues to be a thing called “whiteness”, more than 350 years after it was created out of thin air. With utterly no scientific basis for such a distinction. And along with that, beyond a shadow of a doubt, created strictly for the purpose of justifying “RACE” discrimination, in the first place! So the discussion we need to all be endlessly having is: “How do we FINALLY get rid of the fundamental, scientifically illegitimate distinction called “whiteness” that precipitated (and continues to justify) all this mess in the first place???” As opposed to pointless and distracting BS debates about “the n-word” and/or “redskins”, that are in fact, very direct outgrowths of the creation of “whiteness”, in the first place. From indentured servitude to full on enslavement, miscegenation laws, “Indian” Wars, death marches and reservations, non-citizenship for Asian railroad workers, 3/5th Compromise, Emancipation Proclamation, Slave catchers, KKK, lynchings, sharecropping, convict leasing, Oregon state constitution, Reconstruction abandoned, strict racial quotas on immigration, “Birth Of A Nation”, northern white race riots, Tuskegee experiments, forced sterilizations, segregated military, internment camps, Dixiecrats, FHA set-a-sides for whites, welfare creation for whites, southern strategy, George Wallace, Richard Nixon, Spiro Agnew, Ronald Reagan, David Duke, Bill O’Reilly and Fox News, national highway system, northern school desegregation riots, white flight, redlining, prison industrial complex, war of drugs, Tea Partiers, police brutality…and on and on… What ties ALL these things together? They are an unbroken chain of events that have specifically served to endlessly dehumanize minorities, since shortly after the first settlers on these lands. And all discussed ad nauseam. But, much, much more importantly, they have been instruments specifically designed to perpetuate the concept of “whiteness” in America for over 350 years. A concept that, nonetheless, scientifically has never existed. SO… How do we start a conversation on getting rid of this “thing that doesn’t actually exist”??? Yet figuratively, has been creating “hell on Earth” for people of color since 1493? Instead of another endless stream of periodic masturbatory dissertations on “the apparent problems of black culture”. Followed shortly thereafter by another highly predictable round of “our” recommended solutions to “their’ problems. Because the fundamental issue has always been the creating of (and allowing to continue) a lie called “whiteness” for far, far too long one. ** **Note: Not “white people” (that’s a completely different waste of time). Georgie, I doubt many here fundamentally disagree with you. And your point is an important one, but its relevance to the discussion at hand is overstated. Sure, it’s a root cause of the “disease”, but it’s not the disease itself. It may once have been, but the disease has mutated. The “anti racism” based meds proved just effective enough to expand the scope of the disease past race, into class. Both the perpetrators and victims of these injustices are now multiethnic. 
More tangibly, while (again) your point is not wrong, its fervor overshadows real solutions, policy solutions that can have legitimate results: ending zero-tolerance. Why deal exclusively in philosophic exclamations when legislative action is available? And conversely, why undermine an important philosophical argument – that our society needs finally to move past racial distinctions – by centering a policy debate on race when it doesn’t need to be? I’d argue that moving past “whiteness”, or more generally, capital-R “Race”, foremost requires us to have these conversations in a post-racial manner. If we can do that, parallel but not conjoined with the continued true assertion that race is bullshit, our society will – hopefully – eventually give up on racism.

Being white in a nation that has collectively been saturated in white supremacy for over 350 years greatly distorts one’s ability to process moral reality. It’s like some sort of very powerful and seductive drug that generally can’t be perceived by those partaking endlessly in its subtle, conscience-destroying high. White arguments surrounding “race” seem quite logical to them. But to non-whites living with its endlessly exhausting consequences daily, they are about as distorted as Hell itself! Once again, what was codified into the very fabric of this nation starting in the mid 1600’s was, and is, white supremacy. Based on the foundational concept of the unending preeminence of white skin and white culture. And likewise, on the concept of the “God-ordained” — this part is massively important — inhumanity of those with “non-white skin”. Thereby, justifying all manner of unspeakable crimes against them. In addition, it is a policy that, if we are willing to be fundamentally honest with ourselves, has been predicated on endless amounts of kidnapping, raping, torturing and murdering of “non-white” peoples, without legal repercussions for such crimes, since long before we became an actual nation. Because that lack of repercussions was, again, codified into the very fabric of our legal system from its very earliest beginnings. And as a result, due to the specific unsubtle racial content of those very first legal distinctions being fundamentally predicated on “whiteness” in the 1650’s, the message has been clear and unending, from that day to this…”non-white” lives don’t matter! (Sound familiar?) NOT “lower class” people, but people of color, very, very specifically! Our history is littered with millions upon millions of examples of the differences between the two. Slavery, Indian wars, death marches, forced sterilizations, our basic immigration policy until the 1960’s, post-slavery race riots, Jim Crow laws, secret STD experiments, racialized restrictions in the original FHA program, redlining, overflowing prisons, the evolution of welfare, internment camps, non-citizenship for Asian railroad builders, the War on Drugs… All acts of fundamental inhumanity, very, very specifically targeting “non-whites”! Race-based atrocities! Not in any way, shape or form capable of being confused with class warfare!!! And while these things (and a thousand others like them) have, indeed, sometimes affected poor whites, real honesty dictates admitting that they fundamentally targeted (and most assuredly continue to target) people with non-white skin extremely disproportionately. And not just 60 years ago…right now! The Justice Department didn’t exactly “close up shop” with the passage of the Civil Rights Act of 1964. 
And 100 years of Jim Crow laws did not, by any means, just coincidentally appear, in the name of making life hell for poor people, after the abolition of race-based slavery. Need more proof? Let’s look somewhat in the opposite direction… Ala the late, great James Baldwin: From coast to coast, what do the vast majority of those seated in our highest institutions of wealth and power continue to look like physically, now 350 years later? And what does that continuously unchanging demographic (granted, with scattered very token and very vetted exceptions) continue to say to the “non-whites” in this country? I contend this: Those at the very top remain almost exclusively white. While those specifically suffering the absolute WORST acts of abject inhumanity at the bottom STILL continue to be almost exclusively “non-white”. Not just “poor”…”non-white”. So therefore, attempting to make an absolutely unbroken 350-year old chain of legalized white supremacist kidnapping, rape, torture and murder suddenly about class is, at best, wishful thinking. And at worst a subtle and underhanded attempt to whitewash the sordid and poisonous truth of our collective racial history (and present) into a much more convenient and palatable lie about “class”. A more simplistic way to describe it is this: As long as the lowest class of white person can, nonetheless, STILL look down on even the most successful “non-whites” as JUST “niggers”, “spics” and/or “chinks”…we still aren’t even close to it being about “class”!!! So, until we definitively get to a point at which the LEGAL foundations of institutionalized white supremacy have been LEGALLY replaced with considerations of “class”…those foundations still stand…PRECISELY as originally intended! And the theory that the disease has recently changed is nothing more than a lie created to minimize the ugliness of a STILL very distorted reality. PS Your proposal regarding a colorblind society is itself an example of a very distorted reality rooted in a pathological need to minimize the truth. “It will NEVER come even remotely close to working that way!!!” And yet tomorrow the actual extant world of practical politics will nonetheless offer the opportunity to make things incrementally worse, or incrementally better. And it will do so regardless of the historical themes and truths that you hope will resonate. And I think the question that was posed to you, boiled down, is this: What can be done now, by a consensus of Americans of all races, to produce some actual, specific reform in this particular moment, involving these deaths from police violence. If you say that a continued and protracted discussion of American racial pathologies, however accurate, gets you to anything transformational, I would argue that you are delusional. After ten minutes, you’ll be talking to yourself and those who are already like-minded. The rest of the political spectrum will have walked away. It is, I argue, a moment that you will have squandered in order to enjoy the sound of your own rhetoric. I’d rather see, I don’t know, a mandatory national data base on police violence created, or the worst aspects of the Law Enforcement Officers’ Bill of Rights repealed by various states, including Maryland. A plurality or majority of Americans can be mobilized against evidence of fundamental unfairness in very specific ways. Wrapping that unfairness in comprehensive lectures on America’s tortured racial history may be legitimate academics, but it will not pull its weight in the political arena. 
It will take the air out of the fundamental struggle to actually change specific things for the better. Also, your belief that the war on the poor doesn’t extend to poor whites is belied by my experience in Baltimore. I’ve covered callous indifference and brutality by police, white and black, in O’Donnell Heights, Armistead Gardens and Pigtown going back generations. In truth, I once covered the wagon death of a white woman hauled off of S. Stricker street that certainly showed enough callous indifference to warrant the negligence charges brought in the death of Mr. Gray. She was not left to suffocate in the wagon back because she was a racial minority, and her case was not ignored by investigators because of that status. She was just poor white trash from Pigtown and treated as abysmally as anyone living ten blocks north of her. It’s important not to shape the reality to fit only your concise and preordained ideologies. Two things… First, I didn’t at all say it didn’t affect poor whites. I said it wasn’t specifically aimed AT them. Particularly from a historical perspective. That should not imply in the least that they don’t sometimes get caught in the disgusting net that is white supremacy. Because the institutions of white supremacy have a long and clear history of sacrificing “a few of their own” in order to “keep the overall machinery in tip-top running condition”. In addition, I specifically emphasized the most heinous, blatant and clearly unjustifiable (if not outright in clear view of the general public) violence. And again, not just right now, but over an unbroken and uninterrupted 350-plus year period. So when you start talking about the “absolutely inconceivably evil and out in the open” shit, non-whites have had the market practically cornered on that for a VERY, VERY long time. Now, second… Practical politics my ass!!! The institutions of white supremacy are unequivocally a crime against humanity! By any rational measure! They’re a fucking blight on humanity! For 422 years straight! And the various “non-white” peoples around the world who have been endlessly suffering under the weight of such an unmitigated 400 year atrocity are under no constraint whatsoever to negotiate politically for it’s utter and unconditional demise! None! Now here’s the catch… Despite white supremacy creating and endlessly peddling the term “minority” for the last several decades, “non-whites” are now , in fact, over 90 percent of the world’s total population. So, one way or another, the 400 year old lie that is white supremacy is going to end very soon. The only question is whether it’s going to end violently or peacefully — for whites! Again, contrary to your oft repeated personal belief in the moral and political power of non-violent resistance, people who are perpetually conscious of having endured literal torture for over 400 years may or may not have any real capacity left for committing themselves to the principles of non-violent resistance. And every day that people like you continue to tell them to wait and trust the very political process that created their predicament in the first place, you’re unknowingly (at least I hope) playing a game of Russian roulette. Because, as I said, whites actually make up less than 10 percent of the world’s overall population. And the other 90 percent is slowly awakening to the realization that those proportions are virtually the opposite when it comes to money, resources, power and most of all PAIN and SUFFERING. For exhibit A I give you South Africa. 
Practical politics had almost nothing to do with the final fall of apartheid. It was the stark realization that continuing on the path of violent white supremacy in a country that was 12 percent white was very, very soon going to result in an absolute bloodbath — for whites. It was strictly Nelson Mandela’s personal commitment to avoiding bloodshed (on both sides) that kept whites from being utterly annihilated. Again, the rest of the black population in SA was nowhere close to being collectively constrained to the principle of “turning the other cheek”. And even the most staunch supporters of apartheid knew it emphatically. That’s the ONLY real reason that they surrendered power! So, believe it or not, as much as I personally believe in the principles of non-violent resistance — just like you!!! — I nonetheless believe that, like SA, politics will not solve our seemingly endless racial problems in this country. I believe it will ONLY be when whites begin to realize what a literally suicidal course they are ultimately on by continuing to try and sustain, by incremental political manipulations, such a morally soulless system, until they have utterly no other choice but to either surrender it or deal with millions of people who are mindlessly angry after 400 years of literal torture. And who have no further use for either slow-moving, manipulative politics or protests (peaceful or otherwise). It is one of the most primal instincts that exists in living beings: For 350 years they have been starved for plain, simple self-determination. Because white supremacy does not allow such a thing. But one way or another, they will get it! And eventually, it will no longer matter to them whether it comes through peace or violence. And that day is fast approaching… …far faster than you might think! “First, I didn’t at all say it didn’t affect poor whites. I said it wasn’t specifically aimed AT them. Particularly from a historical perspective.” Again, your passion for a historical perspective has academic merit. But no one arguing over political solutions cares much about a historical perspective. Whatever its origins, the drug war targets all of the poor, regardless of color, right now…Secondly, with regard to practical politics being shown your ass: It doesn’t work that way. It’s the opposite. In the end, when all is said and done, your ass will be shown practical politics. And all of the slogans and speeches and pontificating will not move anyone but those who are already singing in your choir. Your belief that the moment of a racial-based revolution is right around the corner works well as unsupported optimism, but as you note we are more than 400 years into this mess and thus far — even with black life held much cheaper and the situation more dire in any number of earlier eras — a full-blooded revolutionary moment has not appeared. And now, with a black and Latino middle class firmly established, and even partially co-opted against the underclass, there is some additional longevity in certain repressive policies. Witness the faces of the officers in Baltimore, or the racial dynamics of leadership and policing in the city. Your argument that this is singularly about race has been carefully covered and countered by authoritarian forces that understand that the racial rhetoric of the past can no longer engender support. They’ve cooled and expanded the original racial fearmongering that fueled draconian law-and-order policies with something more insidious and comprehensive. 
In practical terms, they’ve successfully moved the battlefield and lowered their vulnerability to your racialist critique, much as you enjoy making it. It’s why they’ve been able to carry this to the extremity they have, for as long as they have. But that’s me again, being practical, and hoping we can change what is in our power now to change, and knowing with certainty, what will surely not amount to more than sound and fury, achieving very, very little in the end. “…knowing with certainty, what will surely not amount to more than sound and fury, achieving very, very little.” Apparently, we will have to agree to disagree. Because as I tried to convey with the South Africa example above, I am likewise entrenched in believing that the institutions of white supremacy are not just wrongheaded, they’re full-blown crimes against humanity. Not to mention that they’re fundamentally predicated on a 400-plus year old lie. And as such, their demise is, most assuredly, inevitable. Likewise, based on worldwide population demographics (and by extension, the percentage of humanity that feels the same), that inevitability will be sooner rather than later. (Apparently, our level of disagreement centers around whether non-whites “in America” are somewhat angry and looking for change or whether non-whites “worldwide” are getting completely fed-up and considering full-blown revolution. And likewise, where you see incremental progress “in America”, I contend they see 400 years of unbroken and dehumanizing “worldwide” racial violence.) So I end with this: Based on my South Africa example above: If I’M wrong, how many more non-whites eventually die of white supremacist violence “in America”? But, if YOU”RE wrong, how many whites potentially die “worldwide” when white supremacy finally results in a non-white breaking point? Again, based on worldwide demographics, money, resources and power are astronomically disproportionate in white hands. And with it, the power to inflict more and more race-based violence. Therefore, I say, “the clock’s ticking”… If there’s been no racial progress in America since the 17th Century, then yes, it’s time for a violent rebellion against the United States by its non-white citizens. But I can’t take seriously the argument that you see merely “incremental progress” for African-Americans from slavery through Jim Crow to the present. You have made an argumentative predicate of the notion that white respondents here are blind to the full context of the reality that Americans of color experience. No doubt in many respects this can be the case. Do you have any sense that perhaps your assessments of where we are, and where we are headed as a society, might be similarly shaped by where you stand, and that perhaps, a little more proportion in your verbiage is called for. Everyone is somewhere, and everyone has his or her own prism, apparently. I respond to your question with another that is not my own. It’s a paraphrase of the great James Baldwin. What message did our highest institutions convey to non-whites about race 350 years ago, and what do those same institutions convey about race to non-whites now? If by “our” you mean American institutions, then the answer for 350 years ago would be that non-whites, and those of African descent are of value as chattel only, and are inferior in all respects. If 200 years ago, pretty much the same, although now they had value for purposes of a national census as being three-fifths of a white citizen. 
By 150 years ago, those same institutions allowed them the rights of citizens by Constitutional Amendment but failed to ensure the practice of those rights under the actual application of law. By 100 years ago, in fits and starts, and subject to geographic variance, our institutions began to incorporate more participation by African-American citizens while at the same time maintaining a legal basis by which segregation could still be practiced. By 60 years ago, those institutions had terminated the legal grounds for such segregation, and ten years after that they ended the overt practice of such segregation in public accommodations, securing not merely the right to vote, but the right to do so unimpeded by locally established measures to effectively deny the vote. The ensuing half-century has seen voluntary and mandated efforts both to promote the actual integration of a variety of national institutions, from the civil service to private corporations to media outlets, law enforcement, and politics, including, notably, national politics at the highest level. The opening of American society in that time has in fact created a substantial black middle class and notable pockets of black affluence.

None of the above obviates the fact that an entrenched underclass is still poorly treated, disdained and subject to draconian policies, that pronounced racism is still endemic in a meaningful but statistically smaller subset of the white majority, that racial insensitivities or indifferences are endemic among many whites of goodwill, or that even the most affluent or accomplished African-Americans still routinely encounter and endure racism or racial insensitivity. But it does directly answer your question about what our institutions conveyed to non-whites 350 years ago and what those same institutions now convey. Wild hyperbole is a motherfucker until you take a moment to run it down.

Once again I asked one very specific question and you answered a completely different one. "…our highest…" As in Wall Street, Congress, CEOs, Governors, Federal Reserve Board, etc. Where the REAL decisions that affect our children and grandchildren are truly made. The TRUE power-brokers. The "Captains of Industry". How different are the occupants of THOSE seats from 350 years ago? And especially…why? Great answer otherwise…just ignores that one specific word… "highest". Where lives are truly made and broken is still an extremely lily-white club, pretty much regardless of what happens in the rest of America. So the basic question was, "What does THAT fundamental constant at those levels of power continue to say to non-white America?" But good answer otherwise.

I'm truly lost. Are you suggesting that people of color have not penetrated those strata, that those institutions are still as white-dominated as they were a hundred or two hundred years ago? Really? Again, your hyperbole is crushing. Let's just go with government. Given the African-American percentage of the population, is the racial makeup of the Cabinet, in the last fifty years, inconsistent with a pattern of growing integration? The number of national candidates for President? Mr. Obama, notably, as the first man of color to be president? Has the number of African-American general officers risen since Vietnam, including a Chairman of the Joint Chiefs? Are the numbers of African-American legislators and state legislators rising to reflect a growing political dynamic, or are they the same as they were in 1950?
Bear in mind the actual census figures for African-Americans as part of the general population. Is there more to be accomplished? Truly. Does the trend justify your claim that nothing has changed in 350 years? No. Unequivocally.

It's interesting that you should bring up South Africa, because to me, one of the things that made changes in the country so authentic and admirable was the Truth and Reconciliation Commission. It's incredible to me that they created a space where both victims and perpetrators could come and speak their truths in safety. Here we all talk past each other. I think there would be great value in a TRC for the United States. Victims and perpetrators alike. It's what I think Elizabeth Alexander meant when she read her poem "Praise Song for the Day" at Barack Obama's first inauguration: Love beyond marital, filial, national, love that casts a widening pool of light, love with no need to pre-empt grievance.

What was interesting is how deeply ambivalent the ANC was about the few instances in which white civilian South Africans were killed by an ANC action, even though that violence was clearly undertaken against an illegitimate regime. Mr. Mandela and others insisted that the Reconciliation Commission was an appropriate venue for those excesses as well, which many in the ANC regarded as a blemish on a movement that, while it originally and formally took the right to resist the state violently, had — at the time of the Rivonia trial of Mr. Mandela — largely confined itself to blowing up radio towers and such. When at the height of the struggle there were instances in which civilians were harmed, there was, within the ANC, considerable self-criticism and recrimination. Given that the international sanctions and hard bargaining between Mr. Mandela and the Afrikaners produced, in the end, a remarkably peaceful transition to democracy, the ANC was certainly willing to throw its own relatively limited history of violence out of the boat and include it among the crimes that required reconciliation and apology.

Again, I am as strong a proponent of non-violent resistance as you — if not stronger. But why do you keep insisting that people who've been endlessly tortured, for literally hundreds of years, nonetheless MUST exhibit an absolutely superhuman ability to instantly forgive and "be the better men" and not respond with violence of their own against those who have done absolutely unfathomable torture? Philosophically I agree with it. But it's a very simple fact of life that actual torture often changes people drastically…and often in hideous ways.

I'm not insisting that anyone must do anything. I'm simply affirming my belief that non-violent mass disobedience has achieved more politically and socially over the last 75 years than violent resistance. I am not moralizing so much as I am speaking of actual efficacy.

There is a single ingredient that was present in South Africa at that time that we are nowhere close to: desperation. South Africa was a hair's breadth away from unfathomable bloodshed against its white citizens (if not out-and-out annihilation). Had that happened it would have absolutely destroyed their economy. Which in turn would have probably led to all-out civil war. Not just Nelson Mandela, but every major player on the world stage knew it. So they had to do something EXTREMELY drastic to save their country from the virtually inevitable genocide of whites, total economic collapse AND THEN civil war.
So chances are very high that we won’t come even remotely close to doing anything similar unless we are facing circumstances that are just about as dire…for quite a few of the same reasons…and with just about all those reasons connected either directly or indirectly to the same kind of institutional white supremacy. Plus our demographics are drastically different from theirs. Georgie, I’m having trouble with your logic. Just so we’re all on the same page, and so you don’t need to risk finger damage with more furious atrocity listing: NO ONE with their head squarely on their shoulders is arguing the significant validity of your points regarding the depth and breadth of the white supremacy problem in this country. It’s real, and the history is well documented. Perhaps the magnitude of your hyperbole is overdone, but whatever: arguing just HOW BAD worldwide racism is or has been will yield nothing. Where I – and I think DS (correct me if I’m being presumptuous here) – are struggling is with your proposals or, more loosely, your implications. First (lets call this point [A]), you made a valuable – if intangible – plea to free our minds from flawed conceptions of race. They are not supported scientifically, and yet they lend themselves to such awful effects. Agreed. You follow that important and nuanced point up with feverish proclamations [B] that the issue of police violence cannot ever be disassociated from racial lines. Perhaps too vague to be explicitly contradictory to point [A], but you surely understand my hesitation. Further still [C] you claim that in order for the issue to free itself from class distinctions “the LEGAL foundations of institutionalized white supremacy [will have to be] LEGALLY replaced with considerations of “class””. Continuing, [D] you claim that no good can come of practical politics, that a full-scale revolution is imminent. you may notice this one feels more explicitly contrary to point [C] And finally – of course ignoring a handful of “points” lost in the last 10k words – you assert [E] that America (or the world at large) is nowhere near as desperate as S Africa 30 years ago, and so drastic solutions are impossible. E and D appear contradictory, as do D and C, and B and A. And of course B through E all undermine the nuanced quality of A – an important initiative that should and can co-exist with concerted efforts to address the practical ramifications of both white supremacy and – more generally – authoritarian tendencies (as DS has so eloquently put it). So what’s the root of your “argument”? What are you saying other than “the world is super racist” and “racism is super bad”? If you are saying that revolution is imminent, then who are you trying to convince? Surely it doesn’t matter what any of the, likely white (or at least educated and ‘privileged’), opinionated blowhards on a wonkblog like this might think. If you think, rather, that there is a practical way to handle our society’s very real problems, what are they? Is there a way to undo the “legal foundations of institutionalized white supremacy”? How about 3 solid examples? If “400 years of white supremacy” around the world really is (and continues to be) a “crime against humanity”, should we try the perpetrators? If yes, who? If no, what good does addressing it do? These are serious questions. And if you can’t answer them, what good can your pontification do? PS. Sorry for perpetuating this DS – I know it feels futile, but I think we can both admit that the futility is fun. 
Since you agree with [A], I'll start with [B]. The key word in all this is "institutional". We tend to want to see shocking acts of racism as random, unforeseeable aberrations. They are not. Every single one of them is a direct descendant of 1650s legal precedents that created the concept of the preeminence of "whiteness" ("in the eyes of God"). Followed by creating complex financial, legal and political structures that ensure the preeminence of "whiteness" in perpetuity. And lastly, the institutional underpinnings that established (and ordained) those "institutions of whiteness" in the 1650s are still virtually entirely intact today. Quietly, methodically, and most of all, highly discreetly slanting the tables of justice almost as heavily in the direction of "whiteness" at this very minute as then. Now where that directly intersects with today's blazing headlines of "extra-judicial killings" is that since the end of the Civil War, the police have literally been (and continue to be) the single biggest instruments for preserving that fundamental status quo.

Now to [C]. You highly misinterpreted my response to DS. I believe he highly overestimates the role of class in these types of debates. To the point that I see it as virtually a non-starter. And worse yet, simply an instrument for watering down the overwhelming role that the 350-year-old, deeply rooted institutional structures, built very specifically around race, continue to play in this current society. So, in fact, I wasn't talking at all about legally replacing one with the other. I was actually trying to express how immensely disproportionate race is compared to class when it comes to these types of discussions. More so, I personally believe that such attempts to somehow equate the two are fundamentally rooted in attempts to minimize the issue of highly institutionalized white supremacy. And instead, try to frame it all as random acts of individually twisted outliers (and of course, class). And as much as possible, disconnect these current acts from their immensely institutionalized and directly connected historical context.

Now to [D]. The attitude you see there is founded in the simple belief that "you can't fix a problem that you won't admit is a problem." À la alcoholism. It is also rooted in a very personal belief that "race" is a truly massive problem in this nation at an institutional level. To the point that people of color STILL perceive themselves as being literally violently tortured, because of the color of their skin. NOT mistreated somewhat, tortured! And that is almost exactly the same type of highly internalized perception that non-whites had over 350 years ago. Seemingly endless institutional violence. And as I somewhat expressed to DS earlier, people rarely respond to long-term violence either rationally or logically. Most often it's with violence for violence. Thus the not-so-subtle references to revolution.

Now to [E]. Except the answer is a repetition of my [D] answer. I believe as a nation we are suicidally determined to believe that because we've cut down from a massive amount of alcohol (racism) to a slightly less massive amount, we're now doing just fine. Um…no. The people that are being pissed on in our alcoholic stupors have some very, very, very different opinions as to what is really going on here!

It's ironic that Freddie Gray is being held up as an example of the "intolerance" of "zero tolerance".
The same man that by age 25 has nearly 20 arrests for possession and/or distribution of narcotics represents nothing remotely close to being a victim of a zero-tolerance police culture. If the policy were true to its name, Mr. Gray never should have been on the street that day. He'd have been behind bars. The libertarians have been banging on about ending the drug war a good while now, and it's become fashionable for others to get on board as well (Mr. Simon, who's made this a steady cause for a while now, excepted). But, if the nation were to evolve toward a policy of diminishing the "drug war", then what? Tolerate the crimes the addicts will inevitably commit in order to satisfy their addiction? Offer, and pay for, drug treatment instead of jail/prison sentences? What to do with those that refuse? What about the poor populations that have to live with addicts and sellers in their midst? It's my experience that those poor populations prefer a police presence to that of drug dealers and junkies; do they get to voice their preference, or do they just get to live with the fashionable decision of decriminalizing drug use made by "their betters"? What do we say about the statistical proof that overall criminality has diminished over the last few decades while the militarization of police has increased? These questions shouldn't imply my support of either the drug war or the current harshness of the police state, but they are questions that need to be tackled head-on with practical policy ideas by those advocating changes to current policy. Mr. Simon, while I've not yet watched any of your stuff, I understand these topics are all themes in your work, and you deal with them here in your blog — what are your policy ideas as replacements for the status quo?

Here's the lie of zero tolerance: Maryland has about 22,000 prison cells for all crimes, all counties. In Baltimore city alone — which is only one of two dozen Maryland jurisdictions, including the populous Baltimore County that surrounds it and the counties of Prince George's and Montgomery in the D.C. suburbs — there are said by state health officials to be between 20,000 and 35,000 chronic users of heroin and cocaine. You can keep locking them up for loitering, for failure to obey, for drug possession, for drug paraphernalia, for a dozen different charges, and you can do it over and over. But where are you gonna put them? By the numbers, the drugs have won this war and there is no way you can arrest your way out of the problem. Not unless you spend billions to house a mass population of non-violent offenders by building prisons from here to Hagerstown. And unlike the federal government, Maryland can't operate at a deficit. Every prison cell we build and fill sucks money out of the general fund.

Crime is down, but so is population in places like Baltimore. Indeed, the key cohort of 18-to-26-year-olds is a smaller entity now. So that by raw numbers, crime is down, but by rate, not so much, and violent crime is again rising. Why? Because no one is solving crime. They are busy hunting the Freddie Grays of the city, rather than making cases against those who are actually making the city less safe, fundamentally. Medicalize this problem and pour money into treatment beds and job training. It can't make things worse. And yes, if it were up to me, I would practice harm reduction.
Baltimore has lost more than a third of its population since 1960; we have significant non-residential real estate to which we could push the illegal drug trade to marginalize its effect on neighborhoods, schools, commercial areas. Such practical policing, rather than this ridiculous stat game, would be greeted by most neighborhood leaders as a tactical victory, if only because they could reclaim their own blocks. It won't solve the long-term dilemma of mass addiction and mass unemployment — that's an epic fight to turn back decades of neglect and deindustrialization, for sure. But it acknowledges a fundamental reality in a way that zero tolerance does not.

In your last paragraph, where you actually begin dealing with policy, I detect some level of hesitation or maybe lack of enthusiasm. Perhaps I'm misreading, but, if so, it need not be. I think anyone that's looked at the numbers/statistics for non-violent drug offenders can't escape the notion that this is unsustainable, at best. At worst, we're ruining the lives of non-violent offenders, and, in some cases, making violent criminals of them. I think the more difficult reality to confront is that the prescription you've mentioned, and many of us endorse, won't keep criminals out of the poor areas of cities (or the rural trailer parks). The meth-heads, coke and heroin addicts will continue to have a disproportionate negative impact that will require some type of police reaction.

We will be doing triage, yes. And we will be practicing harm reduction in various ways, by pouring resources into treatment, job training and, god willing, some version of a modern CCC or New Deal-jobs facsimile. Too costly? We're spending far more than that to practice triage that simply brutalizes — those addicts and petty criminals aren't exactly going to prison for good, either. They're on the streets just the same because we can't arrest our way through such numbers. I'd rather practice triage, though, with better tactics and without the bullshit that credits the arrest of people like Freddie Gray with accomplishing anything. Fresno went the opposite way amid a gang problem and considerable violence. They stopped harassing their underclass and instead turned communities against the violence. By distinguishing between non-violent offenses and real crime, they reengaged with the community in fundamental ways and actually began to make the worst parts of the town more liveable. And of course Spain and Portugal were warned that legalizing drugs would create a swarm of unrepentant and uncontrollable users who could never be reasoned with. It didn't happen. We wanted a war and we got a war. It's time to try the opposite of war when it comes to drugs.

Mr. Gray, at one point, was found to have huge amounts of long-term lead poisoning in his system…Lived in a city with an unemployment rate two to three times normal for people like him…And, for all intents and purposes, died of Government torture. While we collectively hold ourselves up as the greatest nation on Earth. So in light of all that (and LITERALLY millions more similar cases throughout our relatively short national history)… Do you even remotely believe his arrest record is the most important thing going on here???

You blame all the anti-crime laws and zero tolerance policing on the riots (rebellions).
Using that logic, segregation and the rise of the KKK were caused by the reforms carried out under Reconstruction, the populist movements in the South in the 19th century, the black migration to the cities and the civil rights movement in the 20th century. I could go on, but I won't….

Be careful to be a bit nuanced here. I said the DNA of the 1966-68 riots was contained in the national resolve to implement a new Jim Crow under the semantically color-blind premise of the drug war, zero tolerance and mass incarceration. Middle-American fears of urban disorder were a fertile field in which these seeds could be planted. Have you read Ms. Alexander's "The New Jim Crow"? I am saying nothing outside of her own compelling case. The civil rights gains made the overt racial targeting of people of color more problematic, but middle America could be carried to the same effective result through the war on drugs and mass incarceration. There are other contributing factors to be sure, but the riots and their aftereffects are in the DNA. That is what I said.

Am I reading you correctly here? Are you really implying that the war on drugs was primarily a subversive continuation of Jim Crow by proxy carried out for the benefit of "middle America"?! Are you that ready to shit on all of the people who, in good faith, press for equality in America while desiring to simultaneously avoid the abyss of drug addiction's effects on communities? You and I are both old enough to remember the rise of cocaine use in the late '70s and early '80s along with the subsequent crack epidemic and inner-city gang problems that followed in the late '80s. Those were real issues that the wealthy and the poor of all races felt compelled to fight with the same enthusiasm. That same enthusiasm to fight the stark violence of inner-city gangs, in no small part, drove the polity's acceptance of the militarization of the police. I think many good people were under the misapprehension that this war was being prosecuted fairly and judiciously as a war on dangerous drugs.

I think many good people believed that in earnest. Many still do. It became a war on the poor, and it provides a perfect overlay to consign the American poor, through mass arrest, to a status of non-voting, unemployable, heavily incarcerated and socially and economically controlled second-class citizens. That is its effect. Was it everyone's plan? No. Did it dovetail with enough white fear so that it became a plan for a generation of political leadership who were willing to marginalize the poor to secure middle-class votes and personal advancement? Yes, I believe this.

You missed something in your description of what you call the "hands up ballet." Hands up might be dramatic, but it is also a form of capitulation "that ironically accommodates the very goal of police brutality — to intimidate and immobilize black citizens, forcing them into a defenseless posture if they hope to survive." ILYASAH SHABAZZ, in an Op-Ed called "What Would Malcolm X Think?" in the NY Times, FEB. 20, 2015.

Disagree. I think it shamed the shit out of the police and became a cogent reply to the death of an unarmed man.

I really like what Malcolm X's daughter had to say about that on the anniversary of his assassination. Shamed??? Is that all it takes to change things? Just make the authorities feel ashamed of themselves??? Are you kidding me?
Your eye is off the real goal here with the hands-up, non-violent optics: You are trying to swing the mass of middle Americans — not the authorities, but the entire political consensus of the country — away from these policies. Do you wish to suggest that the drug war, mass incarceration and zero tolerance policing haven’t lost traction over the past several years, and now, with these senseless deaths, continue to do so? No, I am not kidding you. Explain what is happening in Georgia. Under a Republican governor. Actually happening; not being discussed, but happening. Do you think that Georgia’s governor and legislature continues to deemphasize non-violent drug arrests and empty prisons if tomorrow, Atlanta burns in a riot? Or does law and order return to its place as the certain political mantra in that state? The riots helped give Nixon and Reagan and years of counter-revolution when it came to urban policies. Law and order became a go-to political plank to stoke middle American fear. Shit, Agnew would not have had a political career had it not been for his use of the Baltimore riots as a backdrop to vocalize white fear and anger. Two years after the American cities burned, a drug war was declared. It didn’t slow down for the next four decades. Shaming middle Americans is not all that is required to change things, no. That’s your hyperbole. But without some new consideration of the drug war, of zero tolerance, of militarized policing by a consensus of Americans larger than urban America and its finite number of lefty allies, nothing is going to change. And a bad riot or two guarantees it. The government had confronted a mass movement through the 1960s. When the economic crisis hit in the early 1970s, unemployment skyrocketed. How does a government keep control under those circumstances? It first targets the “dangerous classes.” That’s what they used the drug war to do. To put any blame on the earlier riots is not correct. For one way or another, there was a mass movement that expressed itself in many different ways. One way or another, the government, and the capitalist overlords who it defends, were going to use repression. And they would have invented any excuse to do it. And yet the excuse that they have used to such draconian effect has been the drug war, mass incarceration and zero tolerance policing, coupled with the militarization of the police response. They have transformed the poor into a profitable commodity — not for society as a whole — but for certain sectors of law enforcement and the prison-industrial complex. Do you really think this is possible without middle America — voting America, the plurality of the country that views the inner-cities and burned-out dystopias as ever too close to their own world — being even more alienated and distanced from the poor by what the riots wrought. Sure racism is the original sin here, but didn’t the riots exacerbate white fears and accelerate the the flight of capital and tax base? Have you been to Detroit? To Gay Street in Baltimore? To Camden? To the cities that burned? Their core areas of poverty were systematically isolated and marginalized, so that the economic future of the country — tech centers, service industry jobs — rushed to Dearborn or Ann Arbor, Owings Mills or Bel Air. 
I agree that the riots are not the only thing in the DNA that has segregated the poor from the rest of us and made possible this open warfare on them, but by what logic do you claim that "to put any blame on the earlier riots is not correct"? You talk about the job losses of the 1970s in a vacuum, as if white flight post-1968 wasn't a part of that. Sure, deindustrialization accounts for a good, healthy chunk of the inner city's high unemployment. But so does the fact that until relatively recently, most of the job creation of the last thirty years — the run-up in the service economy especially — departed the inner cities for the suburbs at an accelerated rate once the riots held sway. Detroit literally packed up its economic base and moved it to Dearborn, where it lives today. Baltimore's Gay Street corridor, which burned on the Monday, never came back from 1968. The money and the jobs ran elsewhere, and the industry that remained, the drug trade, brought its own miseries and then was followed by a war that compounded those miseries. And to bring the drug war to its current levels of punitive brutality — this could only happen in a place utterly divorced from the rest of America, a world apart economically, socially, racially, politically. The riots did that magnificently, creating an unseen America where different and brutal methodologies of policing could be practiced out of the middle American eyeline. At least until all the personal cell phones had cameras, anyway.

Are you suggesting that the riots in 1968 were not merely inevitable, or justifiable — which is one thing — but were, in fact, a good thing for the cities that burned? Then why is Detroit still shit? And why is Camden gone? And why is Baltimore only now digging out of that deep hole? And why is Pittsburgh, which did not burn, so much better economically? After all, we're not making steel in this country either.

Detroit is still shit because we are in a depression, David. In a very real sense, this depression didn't just start in 2007… but in 1973. Just look at what has happened to wages and benefits since then, not to speak of government spending on social programs, social services and infrastructure. You say that we're not making steel anymore in this country. That is a myth. Industrial production has more than doubled in the last 30 years. But employment has plunged because of high productivity and outsourcing in this country. In a way, industry is following what happened in agriculture. As opposed to a century ago, when half the population still made their living making food, today, one or two percent of the workforce produces food — and not just for this country, given the high level of exports. Of course, given the productivity increases, people should be working less and getting more. Instead, all the benefits are going to the capitalist class, the bankers and all the rest. So, I think you have your cause and effect all mixed up.

I cringe at the term "middle America." You seem to love it. But what is that supposed to mean? The Middle America that I am familiar with is catching hell. And it is looking for solutions. And if it doesn't find them on the left, it will be attracted to the right. What you seem to run away from is that this country has a long, radical tradition. Hard to believe, but it was actually created by a revolution… two, as a matter of fact. But you want to cover all that over, smother it in piety and religiosity. In your scenario, people say, Hands Up! I submit!
And the authorities are shamed and change their ways. Most of the characters in The Wire wouldn't believe such nonsense.

We've been in a depression for 48 years, really? Not even a recession, but a depression. Wow. And how, amid the last twenty-five years of GDP growth, have other American cities — not those that burned in the riots, but others that did not see massive white flight and capital deinvestment — managed to have better outcomes than Detroit? You're speaking in generalizations that are not only hyperbolic economically, but that don't answer the question about why Pittsburgh or Atlanta aren't Detroit, or even Baltimore. Those are very different civic outcomes.

As to the "Hands Up! I submit!" characterization, that's your language only. Resistance as armed struggle can claim far fewer victories than resistance as non-violent civil disobedience. Selma was Selma. Gdansk was Gdansk. And apartheid fell not because the ANC actually embraced armed struggle; it blew up a few radio towers and provoked a single massacre that led to real reflection within the organization. They began to win when Mandela stood up at Rivonia, and they triumphed when the international community so isolated their economy in support of Mandela and the ANC that the Afrikaners began to see the man on Robben Island as their only way out. Does the arc of non-violent resistance require time and pain and sacrifice? It does, sometimes grievously. Can it claim more of the last century's victories over authoritarian rule than violent resistance? I believe it can. Please cite for me, in terms of modern history, the victories that have come through the actual use of mass violence against either an authoritarian regime or the misuse of state power by that regime's own people. I'll stack my list against yours.

Lastly, the term middle America. Cringe though you will, we need a large plurality of the American voting public — the middle class, the working class — to support some agenda other than law and order and all that the law-and-order mantra has come to entail over the last thirty years. You need the national legislature to reform the sentencing guidelines, to restore federal parole, to accede to an emptying of the prisons. You need them to support an executive administration that ends the militarized giveaways from the Pentagon to local law enforcement. You need allies because, regardless of what you believe is right, you are subject to the realities of, at best, a politically centrist electorate that has proven itself responsive in the past to authoritarian calls for overpolicing and mass incarceration.

This is now happening — and more so than at any other point in the last thirty years. The drug war is being openly questioned by politicians, even nationally. States have started to rethink mass incarceration and cities are moving away from zero tolerance. It was happening before Baltimore or even Ferguson — though these incidents of police violence are also now essential evidence for that reconsideration. Do you want to win some of these battles? Do you want to see the pendulum swing back from the worst excesses of the last thirty years? Do you want to pull the necessary political ballast to turn the ship closer to keel? Or do you just want to raise your fist in the air and adopt a better and more affirming slogan? I want to win some things back that were given away over the last three decades because of this war on the poor. I am thinking of specifics.
I am thinking, frankly, of the Grants and Scotts to come if there aren't actually policy changes that are achieved. Convicting some cops? Certainly appropriate in most of these cases. But if you think the dynamic actually changes by making an example of a select number of foot soldiers in this war, well, good luck with that. The rules of the game have to change. For that, you are going to need to join those people victimized by this brutality with a significant plurality of those who have not been victimized, who have for thirty years had this war fought in their name. You can cringe at that reality, which is called politics, and you can pretend that the optics of rioting have no effect on the process, but hey, that's just delusional. You're asking the American electorate, as liable to fear-mongering, racism and classism as it has proven, to walk away from a militarized police response amid the imagery of actual civil unrest. Cringe away and keep your fist raised in absolute solidarity with armed insurrection, but honestly, your failure is assured.

I am sorry, you just want to pick and choose the parts of social movements, the kinds that you like, led by liberal organizations. Selma wasn't just Selma. And it certainly wasn't just Martin Luther King. You can't just isolate it. It was part of a broader struggle that included a lot of different things, like the riot in Birmingham and the riots in Harlem and Bed-Stuy — which were harbingers of what was to come. By the time of Selma, already, the younger generation was radicalizing, and trying to find their way out of the political straitjacket of the preachers at the head of the movement. And that's not to speak of Malcolm X, who was a reflection of what was going on in a big part of the black population. "Malcolm didn't create black anger with his speeches — he organized and gave direction to it," wrote his daughter, ILYASAH SHABAZZ. The same goes for South Africa — which was characterized by a massive revolt of the black working class — something you didn't mention, but which, more than anything, forced the issue. The tragedy in all these movements is that the people who did the actual fighting and dying were not the ones who gained power. Instead, the middle and upper classes did, in their name. And that has left the masses of people in basically the same shit as before. Just look at the Marikana massacre, carried out by the ANC and Cyril Ramaphosa, who used his position at the head of the trade unions to become a billionaire. As for the black movement in this country — well, most of the gains have been going down the toilet, haven't they? Except for maybe a small sliver of those who climbed their way to the middle and upper class.

A couple of other points: I believe it is a depression when people's standard of living goes down. And by almost every measure, that has been true for the working class. And not just in relative terms, which would be bad enough. And in that sense, Detroit is just a harbinger of things to come for ordinary people everywhere in this country. Finally, as you may tell, I don't hold much stock in elections and what they can gain people. As Helen Keller said — if elections actually could change anything, they would be made illegal.

So, Ellis. Are you claiming Selma as a victory for "by any means necessary," or more specifically, for the Harlem or Bed-Stuy riots? I'm not sure that was what America saw when the networks went live to the Pettus Bridge.

Selma was one battle in a much larger war.
For every Martin Luther King, there was a Malcolm X. For every Selma, there was a Watts. But they are all part of the same war. You pick and choose the battles and leaders that you want to talk about, to fit your liberal political viewpoint — which I don't share, at all. I am not, as you say, holding my fist clenched in the air. I am simply trying to describe a historical process, in which you have two very, very opposed social forces, which can never, ever be reconciled. This country was built on slavery. It was a slave society for more years than it has been so-called free. And it is still based on the super-exploitation of black people. They have always had a "special" or "peculiar" place in this society — more unemployment, poverty and repression have always been reserved for them. It is the way this ruling class has ruled since its inception, and it won't change until the entire society is changed.

Which battles were victories and which battles, less so? I keep asking.

Watts was a victory. It forced employers to hire black people in the aerospace industry for the first time. Lockheed rushed to open up a factory in Watts. The country built a big hospital near Watts… and named it after Martin Luther King. Detroit was a victory. The auto industry hired black workers for the first time in big numbers on the assembly line. And if you want to keep on insisting that the riots led to Detroit's decline — and not an actual economic crisis — then I will say the same holds true for Selma — which is suffering from more unemployment and poverty than Detroit.

Detroit was a victory. I get it. We had to destroy the village in order to save it. And Selma, for the love of all that is honest rhetoric, was not about Selma, Alabama. It was about the enfranchisement of the African-American vote under federal law throughout the entire country. Amazing that you can be so grandiose about the great achievement of the 1967 riot that began the flight of capital from Detroit and marked that city's rapid decline, but you can be so ridiculously narrow about what Selma and the Voting Rights Act were. Ellis, we are just not going to agree on the fundamentals here. It's cool, but it's so.

I wasn't being grandiose about what happened in Detroit or LA. I was giving you facts — which you dismiss with the back of your hand. And I wasn't dismissing what happened at Selma. I never did. I was just trying to demonstrate how cause and effect can be twisted.

Yes, you did indeed affirm how cause and effect can be twisted.

I generally respect you, even if we disagree. But you are being disingenuous as fuck to describe Mike Brown's death as "over a stolen cigar". It was a robbery by threat of force by someone who then punched a cop in the face and tried to rob him of his gun. Be critical all you want about the cop or the DA or the grand jury. But don't fucking lie about it.

Yeah, I have to second that — the police admitted at the time that Brown was not stopped in conjunction with any crime. If you want to get technical about it, he was stopped for jaywalking. That's a capital crime now, you know.

It certainly escalated from the shoplifted item. First to an apparent common misdemeanor assault by Brown, then to the worst, laziest and most provocative stop of a pedestrian a law officer might undertake, then to an assault on an officer and an apparent struggle for his weapon, then to flight, then to a police-involved shooting. I have no problem agreeing that a grand jury might find an indictment in the Brown shooting to be problematic.
I do too. But it began over a stolen cigar, or following up on that origin, a cigar and a misdemeanor assault. Are you suggesting that there is a level of policing in which such things should have a body count? Once you put your hands on a store employee, a misdemeanor shoplifting charge turns into a felony robbery. Even if, say, an employee tries to grab you while you're walking out and you just pull away from them. That's all it takes for it to be considered "theft by force." Indeed, in Maryland the charge is "strongarm robbery," which combines the theft with a common assault. It is not armed robbery without a weapon, of course. But KT notes correctly that he wasn't stopped for any of this. The confrontation with the officer was actually over jaywalking, as the officer had no knowledge of the incident in the store.

Law Enforcement Against Prohibition http://www.leap.cc Send 'em a few bucks.

http://www.stltoday.com/news/national/records-thousands-too-injured-to-enter-baltimore-jail/article_75a5aa31-d75c-580f-a62a-9cd82906c429.html

You said that this is primarily about class and not race. I agree. But would you agree that the primary reason one race tends to suffer more in this economic class system is that the exoskeleton upon which this class system is based is in fact the same one that supports a racist system?

I certainly would. Even with the rise of a multi-racial middle class over the last half century, the economic fault lines in our society still skew racially.

I applaud you for this, David. Not to play the sycophant or anything of the sort, as I feel I am rational enough to see reasons for the tone, but I swear this is the most savage I have ever read of you. And goddamn justly so. So, tip of the hat from here in Bardstown, KY.

While I know this is out of your wheelhouse, Mr. Simon, I wanted to drop this link here for readers. I think one of the great overlooked issues is the effects of lead paint. Here's an article from 538 about the history of Baltimore (and other cities) with regard to lead paint. http://fivethirtyeight.com/features/baltimores-toxic-legacy-of-lead-paint/

What a fucking sad state of affairs. Bad times make great art though, right? At least, that's what I'm clinging on to. By the way, I was playing a little Fats Domino at my mum's birthday. Her eyes lit up in memory, and even my dad woke up (he can only take four beers these days before he's out) to do a little jig with her. Sometimes, the good things aren't good enough to ride out the bad. But at least, they're there. Hope things get better soon. Graham

Fats can salve a lot, can't he? I heard "Walking To New Orleans" on the night after the levees collapsed and the city flooded like a bowl. Even then, it seemed to help some.

While I'm thinking about it, Newt Gingrich has officially changed his name to Newt Fucking Gingrich, so from now on, in the interests of orthographic correctness, the "f" should be capitalized.

I probably could have left his middle name out. Hey, he chose it in '96, and now he gets to live with it.

Mr. Simon, it seems like you're the only left-leaning journalist – maybe the only journalist, period – with an insightful, nuanced view of the issue of police brutality and how, in the cities at least, it's directly related to the war on drugs.
The rest, out of intellectual laziness or bias or ideology or I don’t know what, want to oversimplify it as strictly a racial issue, even to the point of cherry picking stories for national attention and mindlessly parroting the idea that it’s only blacks that are affected by it, and just generally pushing this narrative that our entire society is built for the sole purpose of crushing and belittling minorities; that white people are all just walking around with sacks of money slung over our shoulders like the fucking monopoly man that we all got working in cahoots with one another at the expense of blacks, knowingly winking at the police that drive by us on their way to sodomize those same blacks with night sticks while we head back to our mansions in our gated communities. Having grown up in Gloucester City, New Jersey, the epitome of a working class town, and having spent close to a decade battling opiate addiction in one form or another – percocets, oxys, dope, methadone maintenance, suboxone etc., – I am intimately familiar with the drug war on the streets of Camden, New Jersey, and North Philly, aka “the Badlands.” and I can personally attest to the fact that it don’t matter what fucking color your skin is when it comes to the judicial system, the only color that matters is green. I remember one time walking down the street in Camden when a cop car pulled up beside me, one of the officers jumped out, ran at me full speed, tackled me like fucking James Harrison, mushed my face in the pavement, handcuffed me, then picked me up on my feet by my hair – all before saying even one word to me. Granted, I was going to cop, but still… the nerve of this guy, he didn’t even ask me if I had my white privilege card on me first. Of course, after I showed it to him he apologized to me, didn’t even give me a five hundred dollar ticket for “loitering in a drug zone,” he even went as far as to drive me to the set so I could cop my dope, let me shoot a bag in the back of his patrol car, then drove me home. When we got to my house I thanked him, and he said no problem, anything to help out a fellow white man. Gave me an FOP sticker for my car, too. No, but seriously, the first part of that story, and getting the ticket for “walking while white,” actually did happen. I’ve gotten more “loitering in a drug zone” tickets than I can count, which – the entire fucking city of Camden is a drug zone according to the police. And if you’re white, it stands to reason that the only reason you’re there is to buy them. I’ve gotten loitering tickets while on my way to the methadone clinic, a perfectly legitimate reason to be there, but the cops don’t give a fuck. If you’re a white junkie in Camden, they are gonna shake you down for every penny they can. And if you can’t pay then you take a vacation in the County. And let me tell you, for a city that’s 99 percent black and Puerto Rican, there are a shit load of white boys in there, a disproportionate amount. I’ve kicked dope in a two man cell with five people stuffed in it numerous times, shelled out thousands of dollars for loitering and possesion of hypo fines. Surprisingly, the only time I actually got caught with dope on me it was by a black officer who took the bags and my works, told me to “go the fuck back to Gloucester and don’t come back.” Didn’t charge me for any of it. 
I’ve spent eight months in that same county on charges stemming from my addiction, I’ve lost close to twenty friends friends, neighbors, classmates, etc, over the years to overdoses, and I’m only 32. My buddy got robbed and shot in the head up in North Philly, dead. Another buddy of mine started robbing banks to support his addiction, he’s doing like 15 years in federal prison. One of my ex-girlfriends just overdosed a few weeks before Christmas, left a two year-old daughter behind, the father is in Rahway State. For what? Guess. Another ex overdosed two years ago and left two kids behind. I’ve had service pistols pointed at my face by Philly cops, and a few years back, an unarmed white kid, Billy Panas, was shot and killed by the PPD, but you didn’t see that shit making the national news because it doesn’t fit the narrative. Apparently only black lives matter when it comes to police brutality. I turn on the TV and see some geeky Ivy League, ivory tower motherfucker like Chris Hayes, who, the only time he’s been around black people is when he was getting pelted with rocks by em reporting in Ferguson, self-flagellating on behalf of all us privileged whites and I wanna throw the fucking TV out the window. What kind of fucking bubble do these people exist in? I’m not looking for sympathy or fishing for pity or nothing like that, ultimately we make our own choices in life and suffer the consequences. The point I’m trying to make is: nobody gives a fuck about white trash, either, black people, so don’t think you’re fucking special in that regard. Anyway, yeah… thanks for bringing some balance to the discussion. On a related side note, quick question: I noticed while watching The Wire the way the sets, i.e., drug blocks were set up one guy would collect money from the customers while another distributed the drugs. I was wondering if that’s the way they work it in Baltimore, or if that system was of your own creation? Because in Camden/Philly the cardinal rule of copping in the hood is never ever ever, under any circumstances, hand anyone your money until you’ve got drugs in your hand – that’s just asking to get beat. And this rule is tacitly adhered to by both the hustlers and the fiends. No serious drug dealer would expect you to give them your money before they give you drugs. If someone says, “give me your money, I got you,” you’re about to get beat or robbed. Or arrested. I remember watching one of those scenes when the show first aired and thinking, oh no, he fucked that part up. That would never happen in real life. That’s the way they did it. General belief being that if you’re seen handing off and getting paid, you’ve given an observing officer all he needs to make an arrest, but by dividing the work among the crew, and giving the drug transaction to a minor or to a low-ranking member of the crew, you provide some insulation. Crews do that all the time in Baltimore. Or at least the moderately organized ones do. Single dealers working small packages don’t have that option, but the organized crews very much do. And yes, you can be burnt that way. But it’s a sellers market. I’d be outraged that Donald fucking Rumsfeld is saying rioting in certain circumstances is understandable (not desirable, but understandable) and you’re still not getting it, but between that and Newt Gingrich arguing for doubling the NIH budget, apparently we are through the looking glass and I don’t know what to think about anyone anymore. 
Also, the BPD acquaintance I mentioned who was telling me all week there’d be no charges (and is now insisting the charges are all political and will be dropped) has been illegally surveilling me — I shit you not, I guess the BPD gives zero fucks about “optics” either, who’da thunk it — so now I have to get a fucking civil rights lawyer once they’re all done trying to get the teenage kids the BPD has rounded up out on their $500,000 bail and such, and that’s taking up most of the energy I’ve got left for any of this. So anyway. The 300 Men group is looking for volunteers (black, white, men, women) if you’re interested; I bet you’d be good at it. Let me know if you want more info. They were great at North and Pennsie. I do understand that you are being unequivocal and saying that no one white has the right to ask anyone not-white, who is oppressed, not to riot. I hear you loud and clear. I don’t agree. I’m a Baltimorean and a human being talking to other Baltimoreans and human beings. Even if I wasn’t seeking the same political and moral outcomes, even if my agenda was somewhere else on the political continuum, I have an innate right to ask others not to employ random violence in their cause. Follow your notion of standing and entitlement to its final result, and no one can ask various nationalities or sects not to randomly bomb markets and bring down airliners and kill civilians whenever they so feel the need. It is possible to see violence as human failure regardless of the cause or creeds involved. I did not say that. I never said that. Throughout this debate you have been making up words, placing them in my mouth and then arguing against them. I forget what that’s called — straw men? — but it’s a weak tactic for debate and an ineffective one for discussion. I can make up shit you didn’t say either — “you shouldn’t have said all rioters and protestors are thugs who should just pull their pants up and get a job, David!” — but I don’t find that particularly productive. No one has been criticizing you on this because you are white (at least not solely, but I’ll come back to that). This is about your timing, your tone, and your choice of focus. I will break it down for you again as simply as I can: We did not hear from you the day after Freddie Gray died, when the horrible and suspicious nature of his death was already hitting local news and was all over social media. Lot of Nina Simone’s “Baltimore” being posted that day. We did not hear from you during the first week of peaceful protest. We did not hear from you on the Saturday before the riots, at Camden Yards, when (this has been documented) drunk white bar patrons decided to throw bottles and at least one bar stool at protestors, thus provoking them. A stunning echo of the first skirmish of the Civil War on Pratt Street, btw, for you history majors looking to write your theses, just a hot tip. And yes, a couple of cop car windows got busted that day. Boo fucking hoo. As Peter Staley (long-term AIDS/HIV/gay rights activist) just said in his talks at the Embassy of Berlin, we — including our own President — now celebrate Stonewall and to some extent the White Night Riots, when a dozen cop cars went up in flames, as important events in moving the gay rights movement forward. But not when young black men are expressing their fury and desperation… We did not hear from you that Sunday when the media narrative was already “oh, the burden on the taxpayers, the protestors don’t even pay taxes!”. 
As though a couple of cop car windows are more expensive than the roughly $1.5 million we’re paying out in police brutality lawsuits every year. We did not hear from you until the city was on fire and then your first and only comment was “rioters you’re being selfish, go home”. We did not hear any acknowledgement from you that many of the “rioters” were children that were jacked off their schoolbuses when they were trying to go home, and roughed up by police in the Mondawmin parking lot. We still do not hear any acknowledgement from you that black children and young men in Baltimore are fucking TERRIFIED by the police — as well they should be — and may feel no fucking choice in certain moments other than to pick up a piece of *gravel* or a goddamn bottle and toss it at their potential murderers, who, may I point out, have armor protection and semi-automatic weapons. This is not about you being white. THIS IS ABOUT YOUR TIMING. This is about you adding to and reinforcing the unconscionable narrative of “thugs” and “looters” that dominated the media and would have continued to do so if activists didn’t make a point of getting out there and documenting the bullshit that was really happening (and has been going on for a long, long time in Baltimore).

As for your concern about ongoing riots: there has not been a single riot since Bloody Monday. I am really not concerned about the bottle being tossed at the police that you witnessed. Was it a glass bottle, btw? Doubt it, and even if it was, there’s much worse behavior going on from white kids in Powerplant any fucking night of the weekend and they don’t end up getting maced. (In fact I once saw a brawl of 30-40 white kids outside Powerplant that pulled in a half dozen squad cars, two police on horses, any number of foot patrol and two helicopters. DIDN’T EVEN MAKE THE LOCAL NEWS.) No one is sitting around planning the next riot as a political tactic (unless they’re plants from the government a la COINTELPRO, it happens in every activist movement so why not this one). In fact, from what I hear the goddamn Bloods, Crips and Black Guerrilla Family are banding together to keep peace in the neighborhoods. I just hope they set up a security hotline b/c at this point I would much rather call them for help than 911. Riots are not political tactics and nobody thinks they are. They are acts of collective insanity when living in an insane environment, fueled by emotion and desperation, nothing more, nothing less. There is no sense or logic to them, and standing around writing thinkpieces about them is pointless. Nobody’s stopping in the middle of busting up a CVS out of pure rage and hopelessness at being trapped economically and physically in Abandoned America where you can be executed by the police for your skin color, to check the David Simon blog and see what he has to say.

And before you fall back on that “I never said I was a spokesman, this is a part-time endeavor, etc. etc.” please note that you said “YOU RISK LOSING THIS MOMENT FOR *US*”. Those were your words. It is not anyone else’s fault that you claimed ownership of Baltimore and then subsequently refused responsibility to it. So, keep on hand-wringing I guess, but FYI, the only danger of further riots will come if a) BPD savagely murders another dude in the worst possible way for absolutely no reason on a bad arrest in the near future (sadly not unlikely) or b) all six of these dudes get acquitted.
That one seems much less likely, but hey, the fucking Fraternal Order of Police managed to get Gahiji Tshamba’s charge reduced to manslaughter when he shot a guy 14 times, off-duty, with his hands in the air begging “please brother don’t shoot” in front of like 50 eyewitnesses, simply b/c his victim was bigger than him and took a single step towards him. So YOU NEVER KNOW IN THIS FUCKING TOWN. But you can save your worrying for then. Don’t let it eat up your nervous system now, when you’re on a deadline. Trial’s gonna take months anyway. Hopefully long enough that the DOJ indictment of the BPD comes through first 🙂 p.s. Regarding you being a white upper-middle class male, it does not help your “optics” on this stance either, but that’s between you and your brand name. It’s certainly not the main point.

“You risk losing this moment for all of us.” Indeed, I included myself in the mass of Baltimoreans — and Americans — who want to see the opposition to this litany of police violence accomplish something substantial and transformative for our society. Because this bifurcated system that marginalizes and brutalizes the poor has a cost not merely to the immediate victims, but to the society as a whole. Saying “us” does not claim ownership for Baltimore. Not in the fucking slightest. The pronoun serves for the collective of people who want change, who want the protests to achieve not merely a criminal indictment or conviction, but a reconsideration of the very policies that underlie these deaths. How did you get from that simple, communal assertion to anyone claiming ownership of any real estate anywhere, other than your own presuppositions about who is entitled to speak when riots are popping off? If I have misunderstood your words here, I apologize. I took your opening comment about Rumsfeld to say: I would be upset even if the most right-wing fellow fell short of unequivocal support for the riots, or, conversely, even a Rumsfeld commenting sympathetically on the street theater in Baltimore would upset you because white viewpoints are unwelcome in any discussion about black political action. Either way, it seemed to me a comment about standing or privilege, and well, as we have already discussed, I don’t think that critique means very much in a discussion about ongoing civil unrest between citizens of the city that is burning at that precise moment. I think that what is left of the phrase white privilege after it is invoked in such circumstances isn’t worth very much.

I said what I said when the optics of rioting and looting overtook the heroism and commitment of protest. My heart sank to see what the country was now seeing of Baltimore’s affirming civil disobedience and demand for justice, and how that might in fact excuse a continued reliance on militarized law enforcement. And I said so, under the heading, “first things first,” then followed it with considerably more verbiage about law enforcement policy, the drug war, and the paradigm for political change. And you’re upset about the admonition and plea against rioting? Really? Okay, but I don’t disown my desire to see the protests stay non-violent and Baltimore stay unlooted and unburnt. I want those things. And I spoke to those things at the moment that they were in the balance. And now — and before — I write about a good many other issues and circumstances. Christ, there’s some click-baiting fellow over at The Nation explaining why “The Wire” doesn’t have enough police brutality or community activism in it.
The entire police-community storyline of “Treme” — rolling over 35 hours of television — was only about police brutality and corruption and community activism in the wake of that brutality. Not every film is about everything. Not every story is about everything. Certainly, not every blogpost is about everything. If they were, they would be long, useless and shitty. You have narrowed your focus to the post you dislike and disagree with. Okay. You can do that. You are entitled to feel as you do. I just disagree.

“Chuffed” means pleased, proud, delighted, etc. I think you mean chafed. Not to be a dick b/c I suffer all the time from the lack of edit ability on these blog comments but, hey, I’m an editor and you’ve dealt with that before. LOL And I’m still chafed b/c you’re still quadrupling down on it and preaching against riots that aren’t happening and haven’t been happening. No riots are scheduled! Bulletin alert, I repeat: no riots are scheduled!!! Everyone can come out of their Panic Rooms and spend a little money in the city, that would certainly be useful. I haven’t read THE NATION article but it sounds like bullshit. I prefer not to click on clickbait out of principle, even just to see how stupid it is (clicks, after all, are what they are going for) but I’ll take your word on it. As for the rest of it, I’ll say like I said before, I am sorry it takes the city being on fire for you to get off your ass and react to something like Freddie Gray’s death. But apparently, that is true of most of America. I expected better. Still do.

I knew it as soon as I read it back, but thanks. I already changed it even before I saw your post. I am glad that you have the memo on when shit is going to pop off, Kt. I am not on that distro list. Seems to me, though, that there is a pregnant moment waiting for all of us if a Circuit Court judge grants a motion to send the trial of those six officers out of town for change-of-venue, or if too many of the original charges are dismissed, or if a jury doesn’t find sufficient guilt on enough counts. Or, of course, if another Baltimorean is caught up not merely in unjustifiable police violence, but even in a police-shooting that is proven by the facts to be unequivocal or justified. The city is on edge and the stakes remain high for all of us. And it may be a long summer.

Let me ask you something, because I have to say, there is a simplistic dishonesty in you claiming some perceived breach of faith because I didn’t leap to comment on or condemn the death of Freddie Gray post haste. Am I under obligation to respond to every policing affront in Baltimore, or nationally? What about other issues? On which ones do I prove myself callow when I am in the next county over, working or writing on the wrong thing in your eyes? I spent the last four years laying down a storyline based explicitly on the police violence in New Orleans that followed Katrina, the coverups that ensued, and the community activism, civil rights lawyering and journalism that were required to show that people of color had been targeted on the Danziger Bridge, in Algiers and elsewhere. On the blog, I went all in on Trayvon Martin and the horror of stand-your-ground laws. I was muted on subsequent shootings of African-Americans in Florida.
Meanwhile, I’ve been editing film and writing a new foreword for Belkin’s “Show Me A Hero” on desegregation and public housing, and giving speeches and taping interviews against militarized policing and mass incarceration, but I wasn’t typing in the days after Billy Murphy brought the Gray case into the media and the protests first swirled into the streets. I was watching, and listening, and yes, waiting for both an autopsy report and a statement from city prosecutors. And a riot popped off. If there is more police violence in New Orleans, how fast do I have to make myself into a human microphone before you’re disappointed? Or if something slips at one of those East Yonkers townhouses, how quickly do I need to assert for that which you believe before you expect more of me? And then how quick do I have to rush back to Baltimore if they change the venue on this police trial to Kent County and shit goes bad again? I’m writing in the margins of ordinary life and real time like everyone else. I get to some things early, some things late, and most things — never. Like most everyone does. Do you see how silly this is? And how indifferent I have to be to your personal expectations?

“Seems to me, though, that there is a pregnant moment waiting for all of us if a Circuit Court judge grants a motion to send the trial of those six officers out of town for change-of-venue, or if too many of the original charges are dismissed, or if a jury doesn’t find sufficient guilt on enough counts.” I am not saying this is not a concern. Noted this above. But we have quite a while to wait on that one, and personally, I prefer to hope for the best. Given the lack of decent excuses the police have been able to come up with so far for even ARRESTING Freddie Gray, let alone failing to call for medical attention repeatedly (and the mysterious “second stop”), the other witness in the van, and a State’s Attorney who happens to be a black woman from Baltimore with a family steeped in policing (don’t it hurt, FOP! Mosby for Mayor!!), I prefer to be optimistic, I dunno. Shit, none of us thought there were going to be charges in the first place, not after the shit the BPD has gotten away with in the past, so it’s a celebration even to get this far.

Also, maybe I need to clarify that I am an avid follower of your work, you really don’t need to explain the plot of TREME to me (I watched it repeatedly and thought the police storyline was incredibly well-researched and sensitively acted by David Morse — perhaps the difference there is that you waited until you had DONE the research to speak on the matter), nor what you are working on now. I watch and read all your stuff, pretty much. We watched HOMICIDE in my house coming up, b/c my parents are hip (Jon Seda was an early crush, so I was pleased to see him return to the David Simon Theater Company in TREME). I read the CORNER, watched the miniseries. Also watched THE WIRE repeatedly. I know as much information about SHOW ME A HERO as is available out there. I read your interview lamenting that no one will watch it. I watched your panel with Barack Obama. I’ve read numerous of your essays on the incarceration state. The only reason I didn’t watch GENERATION KILL is I don’t like war movies. (Is this getting creepy yet?) I know what you have been up to. I know you’re busy. Nonetheless, the impression of your principles that I have taken away from your work and from your previous posts on this blog, I’m sorry, simply does not jibe with the reaction you had in this situation.
I understand people say what first comes to them in the heat of the moment, but you are still insisting on being right about this and clutching your pearls about the riots about to bust out any minute in your mind, and honestly, it is fearmongering — fearmongering about a city that already has enough of that from the mainstream media to begin with. As many people have pointed out, when white kids in Kansas burn a couple of cop cars b/c their sports teams lost, it’s “just high young spirits”! When young black men in Baltimore do it b/c they’re running from the goddamn riot cops that pulled them off their schoolbuses, it’s fucking Armageddon? I honestly don’t see what purpose it serves to have you adding to the perception that young black men are savages prone to pop off at a moment’s notice. This was not an unprovoked riot sparked by nothing. It was sparked by an obviously savage police murder that went unresponded to for a week before riots, two weeks before charges, and which the police union is still trying to defend! Please stop judging Baltimore by its behavior during the most extreme and egregious circumstances.

As for your questions:

“Am I under obligation to respond to every policing affront in Baltimore, or nationally?” Not every one, but this was kind of an important one, don’t you think? And to be honest, yes, I think if you are going to make your art about disenfranchised people and make a paycheck off that (please don’t put me on Marty O’Malley’s side on this, Jesus) then you do have something of an obligation to continue to speak on behalf of those communities when it is necessary. Sorry if that is holding you to too high of a standard, but I assure you I hold everyone else to it (including myself) too.

“I was watching, and listening, and yes, waiting for both an autopsy report and a statement from city prosecutors” Oh honey, waiting for full reports from the BPD to come through? You have been off the beat too long. I know you’ve asked Justin Fenton how long he has to wait for this shit sometimes.

“If there is more police violence in New Orleans, how fast do I have to make myself into a human microphone before you’re disappointed? Or if something slips at one of those East Yonkers townhouses, how quickly do I need to assert for that which you believe before you expect more of me?” Timing really depends on the situation going on in the world. During Katrina, for example, I would say serious writers and thinkers who cared about New Orleans needed to be there the first week (how about you?). Ditto Baltimore after Freddie Gray died. I’m not so familiar with the culture or backdrop of what might pop off in Yonkers. Now I understand not everyone can get to a place or do the research in time, but in that situation — and this is just me, grain of salt — if you can’t get there before the riot, maybe don’t write about it DURING the riot, at least not in a short, glib way. Wait until afterwards when reflection sets in (as you did with TREME, THE WIRE, and much of your best work). The above essay would have been perfectly fine and entirely appreciated by me, except you’re still having to defend yourself for making an unhelpful comment.

“Do you see how silly this is?” I don’t see that much about what’s going down in Baltimore is silly, but okay.
“And how indifferent I have to be to your personal expectations?” And yet…you certainly, in your original blog post on it, expected a whole fucking lot of the beleaguered people of Baltimore — more than, as you have stated on this very blog in your “Trayvon” post, you would expect from yourself — and you still expect us to care. Expectations are a two-way street. If you possess them of others, they will possess them of you. And nobody’s free from criticism. Ok. I tried.

Ok, I caved and read that NATION article. It’s not as much of a hatchet piece as you’re making it out to be. It’s a thoughtful critique from a young person and you don’t have to agree with it but you could just listen to it. Pessimism CAN be a luxury. Assuming and anticipating the worst of your fellow man can be unhelpful (particularly when FOX News is already there to do that for you). A little optimism wouldn’t kill you. “BELIEVE”, as the bumper stickers used to say.

1984 was such a downer. The pessimism of Brave New World was such a luxury. Why couldn’t those guys imagine a happier future for everyone? I’m not comparing “The Wire” to those classics. I’m just pointing out the empty critical stance that says a reader or viewer is owed either a happy ending, or a roadmap of how a conjured, dystopic world can end well. But again, if you prefer such an outcome, maybe avoid “The Wire.” Maybe “Treme” is more the ticket.

Just a point but 1984 was wrong. Certainly some things in it have metaphorical resonance. But here it is 2015 and the state has not yet come up with ways to make all undesirable information disappear (or to outlaw sexuality or split the world into three superpowers or whatever else is in that book, I read it 20 years ago). Never got around to reading BRAVE NEW WORLD (shoot me) but from what I gather it didn’t exactly come true either.

Disagree. Aspects of both books are still in play.

I have never been able to finish 1984; I know how it ends and I can never get past the feeling of hope that occurs halfway through. I also feel both books are in play. I have often seen people argue over which book is closer to reality; perhaps it’s human nature that makes us try to choose one narrative over another without seeing the similarities between both? I would also suggest adding Bradbury’s Fahrenheit 451 to the discussion. It is another book that feels more like a prediction than fiction. I have to point out, though, your thought on “why couldn’t both authors imagine a happier future…” I don’t believe that either book was written with the intent to depress. They were more intended to be used as a warning. It is our own misfortune that our reality follows fiction. And perhaps our own shortcomings that we have fallen into the very traps we should have been aware of; the Cassandra effect at work perhaps?

Just so.

I think Zirin was onto something. Along similar lines I’ve become concerned that “The Wire” provides an inadequate representation of Greek dramatic tropes. I now believe that there is too much “Antigone,” too much “Oedipus the King” and not enough of the “Bacchae.” Also, someone who thinks it is a meaningful discussion whether “The Wire” is “greater” than “Breaking Bad” directed me to a tweet in which the author was complaining about “The Wire”’s depiction of Black America (I wasn’t aware that was what you were doing but …) and, in particular, having a small Black child commit a murder in season 5.
Apparently, the tweeter believed this dramatic episode could be refuted by an absence (or claimed absence) of any record of a child that age having committed a murder with a gun. I suggested that the internal logic of the show and the mythology of the murdered character required that only a child could commit the murder (nemesis had to take the form of a child). After Zirin, I realize that if the murdered character had been a realistic representation of his type that would not have been necessary. Obviously, you made a mistake. And where are the crack dealers, Mr. Simon?

“This is about you adding to and reinforcing the unconscionable narrative of ‘thugs’ and ‘looters’ that dominated the media…” For what it’s worth, I’ve been watching and reading and I just don’t see this. In this post and earlier ones there has been no lack of empathy or understanding; no criticism of people, only of tactics. “Riots are not political tactics…” Not employing tactics where tactics could help seems worthy of comment, if not outright criticism. “Nobody’s stopping in the middle of busting up a CVS… to check the David Simon blog and see what he has to say.” So what would have been the value of him saying what you wished he said, or when? Is it really your argument that anyone was checking the David Simon blog before leaving home to go bust up a CVS? What’s the value of anyone ever saying anything? David Simon has a fairly well-established axe to grind on this subject. David Simon (among others) has long been employing tactics in this arena for an achievable end. It seems unfair – if nothing worse – to expect him to abandon his tactics – even momentarily – for someone else’s “acts of collective insanity,” however sympathetic.

You don’t think “you’re being selfish and risking losing this moment for us” is a criticism of people who are reacting in panic and chaos to years of unfathomable police brutality? I thought it was pretty personally aimed. There was a lot of the word “you” in that post. And I don’t expect him to abandon anything, just the opposite — I expect him to stay true to what he’s said previously. He doesn’t have to agree riots are great — nobody does, that’s ridiculous! — he simply needs to recognize where such events are coming from. As he ALWAYS did before, and somehow decided to abandon overnight (I refer you again to his “Trayvon” blog).

Just out of curiosity, Mr. Simon, what did you think of the riots after the Rodney King verdict?

Aimed not at people protesting, but at people who want to go with the brick? I thought it was accurate, and far more restrained than the words of, say, the African-American mayor, congressman and president who actually categorized the rioters personally, rather than characterized their actions. I chose my words to say what I believe. There is more at stake than defeating a line of riot police or emptying a CVS or a liquor store, more even than punishing the individual officers responsible for the death of Freddie Gray and doing so in a manner that displays dispassionate justice rather than the fear of civil unrest. There is a chance to change some of the structural firmament of repression. Anyone who wants to trade that chance for some liquor, toilet paper and a burnt construction site — and a moment of gratifying anger — is, I would argue, not thinking on behalf of what can be gained for all of us. Do you not see that rationalizing the rioting as the best that you believe the underclass can do for itself is, in its own way, condescending?
I think if you don’t understand why people in Baltimore might be stealing toilet paper you might be way more out of touch than you realize. Is there anything you can’t rationalize? The difference between “you’re being selfish etc” and “you are selfish etc” may arguably be semantic or pedantic or something else, but the fact of a difference is fairly undeniable. The former does put focus on actions where the latter is more concerned with character. He criticized actions, not people. Further, these words, taken within the context of everything else the guy says and writes on the reg, just don’t sound to me like what they apparently sound like to you. “…he simply needs to recognize where such events are coming from. As he ALWAYS did before…” That’s ultimately exactly my point above. He’s said it so much in the past… do we really need to hear it one more time? I take it for granted that this blog post doesn’t expunge years of other writing and speaking, but takes off from the foundation it established. That Trayvon Martin post you refer to – that post that also ultimately praised the patience and patriotism of a community that refused to be moved to violence – caused a stir for allegedly condoning or trying to incite the opposite. What do you think – maybe more importantly, what do you think David Simon thought – when he considered what to write now in the midst of actual rioting? What fresh ammunition would he himself supply to proponents of the militarized police state and the drug war by saying anything other than what he said smack dab in the middle of shit popping off? I don’t believe reform is possible, given how rigged the game has become. It would not surprise me at all if the riot started because some FBI paid person was inflaming the crowd. It’s happened before. I’ve seen so many videos of cops hurting women, people in wheelchairs, old and or homeless people that I now believe cops are amped up on steroids as well as being way over armed. Happens all of the time all over the country. Protect and serve is over. I also have come to believe that militarizing the police is part of the plan. Climate change is going to wreak such havoc that civil unrest will become the norm. I think we are seeing some grooming going on here. It’s not like cop shops are buying arms on the streets, now is it? The people who really run this country are psychopaths. Is that audacious despair or what? You qualify, yes. Another thought-provoking and thoughtful essay from Mr. Simon. Thank you very much. I don’t know if I’m too cynical to hope you’re right, but if the window is only 18 months, then we’d better start seeing some movement, from everyone, pretty damn soon. An election year sure doesn’t make it any easier… I just wanted to make the point that while Malcolm X has sadly been reduced by some (not Mr. Simon) to a few out-of-context slogans that falsely (and with depressing regularity) associate him with violence, Malcolm X was never involved with any riots or violent acts himself. There are, in reality, no examples of him urging violence in any speech or interview. No really, not one. There are, of course, lots of examples of him urging self-defence against someone coming at you with a rope or a gun or a noose with ill-intent. While he understood the anger of black America better than almost everyone else, it never manifested itself simply as a thuggish call to violence as some would like to pretend. 
Malcolm X spent the last year of his life urging people to register to vote and get involved in the political process. He met with heads of state in Africa and the Middle East to build bridges between America and the rest of the world. He had a plan to bring formal charges against the United States in the UN for its treatment of black people the way others had done against Portugal and South Africa. On a personal level, he probably got more black people off drugs and alcohol than any other single individual in America between 1955 and 1965. Violence? Where? Ironically, his actual message of economic self-determination, political engagement and cultural rejuvenation is exactly the sort of thing we need now, but that message takes a backseat to the hackneyed lie about a violent revolutionary that would make a better TV segment. But why study history when you can throw a brick and say you were there? Also… “The only thing that is going to mug someone in Alphabet City or Astoria nowadays is the bill from a two-star restaurant.” Good one!

Malcolm was a hero. So was Patrick Henry. But they are the authors of a couple phrases that define unapologetic rebellion.

No doubt. And I think that in their respective cases, they were justified. They were clearly talking about, and involved in, life-and-death circumstances. Baltimore is obviously life-and-death but hardly on the verge of a violent revolution or civil war. That’s one of the things I find interesting about Malcolm X’s speeches. Because he was careful about what he said, he could get away with some pretty incendiary rhetoric, but he always had an “out” for the oppressor. The ballot OR the bullet. If we don’t get full and complete human rights as American citizens right here, right now….who knows how bad things could get? “Whoever heard of a sociological explosion that was done intelligently and politely? And this is what you’re trying to make the black man do. You’re trying to drive him into a ghetto and make him the victim of every kind of unjust condition imaginable, then when he explodes, you want him to explode politely. You want him to explode according to somebody’s ground rules. Why, you’re dealing with the wrong man, and you’re dealing with him at the wrong time in the wrong way.” He knew the score.

Just so. And yet this drives me back to the drier plain of efficacy, as boring and unpassionate as that is. Non-violence and mass disobedience works. A riot, not so much.

Indeed. I think you made an excellent point about repression igniting reform. I personally think the 16th Street Baptist Church bombing was the defining moment for the Civil Rights movement. Once that happened, America decided. “No. This is too far.” How was that reform achieved? Many, many, many people in the streets saying, in effect, “If we don’t get full and complete human rights as American citizens right here, right now….who knows how bad things could get?” To me, it’s astonishing the role the digital cameras are playing in all this. Anyway, thanks again for all your hard work!

This is a good examination of such a complex issue. However, I believe the first steps to fixing inequality amongst the myriad of other issues facing America are simple. We need to reform the manner in which the political system operates before we can seriously address these problems. The selection process for our political leaders has always been skewed, but it seems to be getting worse.
The fact that we now have billionaire kingmakers openly deciding which candidates get attention will make true reform much harder. Take gun control for example. The vast majority of Americans supported background checks and other common sense measures, but nothing happened because the representatives were worried about the NRA throwing mud. I’d say let’s take care of gerrymandering and campaign finance first, and then the will of the people will be able to truly confront inequality. Do you agree, Mr. Simon?

There are many things that require reform, and the political process is certainly broken. But I don’t think the underclass, as targeted as it is by law enforcement right now, can wait for as long as such vast reform will require. People are being killed now. I think we’re going to have to provide some immediate reform within the construct of the flawed system we have now, regardless.

David, your position on Baltimore kind of escapes the core of the issues, & the reason why I think that it takes on this position is because you are not from Baltimore. More specifically, you are not from Black Baltimore. As a 37 year old Black man who was born & raised in East Baltimore, all that you gripped from writing about Baltimore from the Sun newspapers or using consultants for The Wire does not justify a clear conviction that permits your insight to be accurate for the fate of what goes on in Black Baltimore. You spoke of Oppression, zero tolerance & how these merging points were/are linked to some political or drug war agenda. Your points are based on a surface scale that does not go into the veins of being Black in Baltimore or any Black city for that matter. Let us face it, your most successful show (in terms of ratings & acclaim) was based off of Black plights in East & West Baltimore. Your literary work that guided you into an HBO office with intentions to transfer your literature into a mini-series was from a book based on a Black teenager growing into a young man in West Baltimore. Just because you have this surety that allows you the luxury to write about us, does not turn your grumble into a philosophical platform that can possess a proliferation in expounding on societal ills as a common goal that undoes our Oppression. As much as I believe in equality, let us face it, being Black in Baltimore, Harlem, or Chicago is not the same as being white in Baltimore, Harlem or Chicago. Yes, you may know your way thru the streets of Baltimore, but so do Barry Levinson & John Waters. These two filmmakers explained a visual Baltimore that does not try to pull over a gregarious set of political platforms that justify hammering out Baltimore into a gem that they only understand. The Baltimore that Barry & John write about is not a part of the Baltimore cultures that I am from; however, it is one that I know as a different aspect of Baltimore—white, middle-class (Barry) & white, gay, quirky (John) stories that as a Baltimore native, we can laugh & say, “yeah I saw this or that in Fells Point or in Liberty Heights.” Yes, you may show the grit that European explorers utilized while looking for a new country (& I say this metaphorically) to shower the natives with your expansive thinking that can solve or offer insight to our problems, but the point is that your problem of traveling & being there has not ever been expressed.
I bet that when you do have this self-awakening (which is explaining why you present our Black problems with so much thinking & answers that tend to go back to your own rescue mission) you will gravitate towards expressing to the world that you are doing us a favor or two. Just as I am sure that you will apply the rioting that happened in Baltimore as some form of fuel to drench over your writing. Perhaps a mini-series or evening drama show that has four Black families in West Baltimore, a week before the riots. Your expertise on Baltimore is why Black Baltimore writers, producers, & directors are overlooked when they have the direct stories linked to losing Black brothers, sisters, mothers, fathers, uncles, cousins, & best-friends to years & years of the drug-culture, police brutality, the prison & education systems. CNN will interview you, the major media with their publications in magazines & newspapers will quote you profusely, because you are the white man who was the explorer into Black Baltimore. Do not get me wrong, I do not base good, or great entertainment or art on a racial qualification. However, when I happen to see your prolific work-rate dedicated to Baltimore from the Corner to the Wire to your partnership in the Civil Rights project for HBO, & then I see you going into these elaborated disdain objectives about society with a cover that goes into the issues that Blacks suffer world-wide—it just makes me think about telling you to step back & let us speak for ourselves. There is a reason why Martin Scorsese can depict Italian films about the underworld, mafia or New York living to a precision that you can smell the pretzels near Central Park. Steven Spielberg, who has a history of doing films that are popcorn friendly, took on Schindler’s List, because it was necessary for him to visually articulate a religious history connected to his existence. I am sure that you will dismiss what I am saying, & if you respond, it will be with the philosophical mettle that overrides with digressions that you have the specific right to write about what you know, because Baltimore has a quiet voice due to your loudness on leading the cavalry to your own self-awareness. Peace & keep winning those awards. Shaun.

P.S. Here is my piece on growing up in Black Baltimore. No consultants needed, no studies needed. Just living & seeing: http://www.baltimoresun.com/news/opinion/oped/bs-ed-gray-la-20150427-story.html

Shaun, Schindler’s List, as you mention it, was written by a Gentile. America in the King Years, by a white guy. Madame Bovary, by a man. I’m not merely going to suggest they had a right to pen whatever was in their heads and hearts, but that I’m glad those works exist. If anything I’ve written doesn’t pass muster with you, then that is sufficient for you to disregard it or criticize it, or me, or my abilities. But let’s be honest and direct here. This issue has been joined because of what I have written about what happened in Baltimore in the wake of Freddie Gray’s death, and further, what is happening with regard to law enforcement, the drug war, and the growth of the prison-industrial complex in America. And to the extent there is any controversy over my standing it stems not from me writing about what I think it means to be an African-American in Baltimore. That’s not how my comments are actually framed. That’s your language. What I have written, more specifically, is about policing, law enforcement and what has become of probable cause in Baltimore.
On this issue, that’s the meat of what I have offered up. In addition, because I want to see certain outcomes, for the city in which I live and on national issues such as mass incarceration and drug prohibition, I did not want the images of civil unrest to thwart or malign the pattern of ongoing protest, not merely in Baltimore but nationwide. Yes, I reject the notion that white writers should only write white characters or narratives, or that black writers are similarly confined, or that racial or religious or geographic categories are non-permeable barriers to any journalism, or art, or empathy in any form. The work is the tell. We are going to disagree on that. But it seems your stance is even more radical: Are you suggesting that everything I learned from covering the police department for 12 years, or from the year spent in the homicide unit, or the year spent at Monroe and Fayette Street is insufficient for me to have an opinion on how the city is policed, or what has gone wrong over the last decades with how the city is policed? Or, because I am white, I have no standing to express dismay at the images of rioting two weeks ago that threatened to define the protests in Baltimore? If it’s white privilege for a Baltimorean to praise the ongoing protests but plead against burning and looting, then I would argue that you’ve defined the term out of all meaning. Do you see that you have drawn the line telling me to shut up not merely at the boundary of me supposedly writing about the black experience, but more liberally, at speaking to issues of law and justice in my own city, or further, in urging fellow citizens, regardless of race, not to riot? Not that what I say isn’t open to all due criticism on the merits — it is, and, it is — but are you simply saying only black folk can say stuff? And what black writer or filmmaker is being stifled because I speak my piece? I’m writing what I think on this blog; you’ve been published in the Baltimore Sun. It isn’t as if you had to fight your way through my verbiage to claim that space.

David, let me start with the artistic area. Freedom of expression is afforded to every artist, but a profound artist will understand that originality does not start off with tracing a masterpiece—I use the term masterpiece as a reflection of a civilized culture, be it a sub-culture or a chief culture. When you marry politics & artistic expression, if you are not from an area as deep as race or religion that you are taking vows with (in this case, Blacks dealing with Oppression), you are asking to be invited into a place where those that live it will see it as exploitation. “Images are fast replacing words as our primary language,” is a quote from Kathryn Walker, the narrator for the documentary, “Richard Avedon-Darkness and Light.” Thomas Keneally penned the novel, but Steven Spielberg shaped the visual images for people to see for themselves. Could this be the reason why, when Steven asked Martin Scorsese to direct Schindler’s List, Marty declined the offer—instead he became an encourager who told Steven that he should produce & direct the picture? This is the same Martin Scorsese who turned down directing Clockers (which, as you already know, the Jewish writer Richard Price penned) because he felt that Spike Lee would be a better fit for visually writing out the story of Blacks in Brooklyn dealing with slanging drugs & living amongst Black Brooklyn. Was it Spike Lee who had an issue with Steven directing the Color Purple?
Remember, “Images are fast replacing words as our primary language.” If Eminem produced & recorded a gangster rap album, does the significance of him being a white M.C. play a part in people finding his music unbelievable? Even if he had a gangster rap album with hardcore Detroit, Chicago to L.A. street-gang members, would he still be considered gangster enough to make a hard-core gangster album that uses the n-word, describing how to hustle crack-cocaine, heroin? Would he be able to say, “hey, this is what I see, I am an urban journalist, a writer within my city who is only rhyming about what I see”? My point of contention would be this: the differences in racial relations are injustices within a systematic oppression that exploits the Black culture, worldwide (from African Americans, to Afro-Latinos to the natives in Africa). There are things in Black Baltimore that you would be unaware of, nor able to visually explain, because you have not been oppressed to the point that the system is stacked up against you. Just look at the Blaxploitation films in the 1970’s that took off with Melvin Van Peebles with Sweet Sweetback’s Baadasssss Song & Gordon Parks’ Shaft. Films that would serve as a method of tracing for white filmmakers to flood the market with stereotypes about Blacks being pimps, dealers or hookers. So, do you still think that a white man or woman stepping into the narration, visually or literary wise, standing up & speaking for Blacks, is completely a blueprint for figuring out Oppression—even though, you speak about Baltimore, the politics & the drug war issues in a fullness that construes us as a society; however, you pinpoint Blacks & their plights as a storyline that pulls out this interest from the world. I am just waiting for the chance for a white man or woman artist to say, “I do not know why Blacks are going thru oppression, but I will be a quiet optimist & say this, I believe that some great Black minds will figure it out.” Even in your response, you dance around this conception with putting a foot down on all of us being equal under a system of injustices, only to trace Blacks & their experiences while throwing them into your pot of writings.

Of course, I am expressive in my language, as a Black child who grew up in Baltimore, within a Black community in East Baltimore—who happened to see childhood friends become drug-dealers, drug addicts, killers & lost in the system of correctional institutions, I think that my language on this topic does carry a bit of a personal approach. Yes, you will hear my voice, clearly & convincingly. Now, does this make me radical because I believe in being Black & that we have the power to write, produce & express our own stories? Just as in the past, you called those who were involved in the commotion going on in Baltimore rioters; however for the ones who experienced growing up Black in Baltimore—some of us viewed it as an uprising. Do not get me wrong, I do not condone burning down a community, but the issues of police brutality go back generations; especially since I was raised in a home that had a family grocery store near Johns Hopkins hospital. I have heard the stories as a child growing up in Black Baltimore from the older teenagers, my uncles & my elders. When I grew into my teenage years, I would see it for myself. I am 37 years old, so the 1990’s were a mural of zero-tolerance, false arrests, being handcuffed for 20 or 30 minutes because some cops had the power to tell you to sit down while they figure out what to do with you.
You were paid to report on Black Baltimore. I was born & raised in Black Baltimore. You came to see. I was born to see. Your Baltimore is Fells Point, Patterson Park in the 1980’s or Canton, maybe those big houses in Roland Park. You traveled & explored Black Baltimore, because you had to write about it, but you cannot carry that as a vocative baton to hand off as an indication of fixing Baltimore, because your political agenda is philosophically sound. There are still parts of Baltimore that are segregated, from the neighborhoods to the prisons to the churches. Your political points are fantastic, maybe there is a situation ethics appeal to them, but the continuation of Blacks in Baltimore to Harlem to D.C. to Philadelphia is that we handle a different form of political mindfulness. You can stop writing about Blacks or African-Americans tomorrow. You could go write a Broadway musical & title it, “The Glory in Optimism, how Love met Freedom,” cast all white actors & actresses while aligning all sorts of political ramifications inside of your musical, with a great band playing original musical pieces. Not one person will say, “He is from Black Baltimore, he needs to tell our stories again, & he has sold out.” Please understand, I am talking about the serious topics addressed in your television & literary works. If you were behind a sit-comedy or some soap-opera with imaginative plots, I would just say that you were in a lane to entertain as you feel, without offering a disposition that is about race, oppression or societal ills.

I am based in New York. Do you know how many times I have met people who would ask me where I am from? When I respond Baltimore, they base their entire premise on Baltimore from what you pushed with your words in The Wire. Yet, when I ask them about Barry Levinson or John Waters, their faces tend to go blank with wrinkles in their foreheads that is a body language speaking to me that retorts, “Huh?” When I ask them about the 1970’s film “Amazing Grace,” they are unaware. There are parts of Baltimore in these films that speak beyond the preconceived notions that Baltimore is some 3rd World country in the United States. So to name specific Black filmmakers & writers who cannot escape into the mainstream flow would be a study that would take years & years to compile. Just as when we hear the term that Eminem is the greatest M.C. in Hip-Hop, do you know how many Black or Latino (Hip-Hop came from the Bronx & from the Black American & Latino experiences to put poetry & music into a music culture that had the elements to make it significant) M.C.’s & rappers are left in the cold because the white writer or lover of our culture, plights & issues runs with the opportunity to be our writer, director, producer, & the controller of our freedom to express ourselves? How can I name a specific, when we are talking about cultural oppression in putting our stories into a realm where the editors (your newspaper days) & producers (your HBO days) can relate to you, Mr. Simon, because you look like them, you live in their neighborhood. HBO built your Niña, Pinta & Santa Maria. You are a safe journey for them. Now, you are an exceptional writer, so my position is not one to layer out your writing—I am talking about the content, the core of the ocean. I would love to see a Baltimore or a world that Dr. King mentioned in his “I Have A Dream” speech. But, as the great Malcolm X once said, “Whites can help us, but they can’t join us.
There can be no black, white unity until there is first some black unity. We cannot think of being acceptable to others until we have first proven acceptable to ourselves.” However, I expect you not to see it my way & this is fine. But as I always say to the people who ask me about Blacks, Baltimore & the Wire, “Baltimore is bigger than the Wire.”

Baltimore is bigger than “The Wire” to be sure. Bigger than any single narrative. I’m certainly glad, for example, that the Sun was a vehicle for you as the NYT was a vehicle for D. Watkins. It’s certainly my hope that more black filmmakers will find their way to HBO and elsewhere. I’m having trouble with The Sun website, as an aside. What day did your piece run? I can find it that way, as your link isn’t working for me. Bottom line: There are stories that I do not know and can’t tell, and stories by a multitude of others that need to be told and heard. And telling whatever stories I do doesn’t change who I am or allow me to claim membership or even alliance with any ethnic or religious group other than my own. Perhaps it bears noting that I didn’t wake up at any moment with the express plan of writing some stories that involved African-Americans. I got hired out of college by the Baltimore Sun and assigned to cover crime on the city desk in a town that was majority black. I did the best I could and then instead of getting sensibly promoted to another beat, I got more and more interested in systemic issues such as the drug war. To this moment, I don’t think I have ever thought of myself as writing about the black experience. I don’t think I have. I’ve written about dynamics and systems that matter to me. We will not agree on the overall, but we do agree in points. And we are probably close enough in certain respects to have a good, circular argument — which I enjoy. I am headed back to NY for film editing, but I would probably enjoy sharing a cup of coffee and debating this further, if you don’t mind me contacting you by email at some future point. Assuming you are presently in Baltimore.

David, I believe that the world listening to you about Black Baltimore goes deeper than just your progress & fate that allowed you the passageway to be successful with HBO or with your literary work. If Spike Lee had made Schindler’s List (Steven Spielberg) or Charles Burnett had made Sophie’s Choice (Alan J. Pakula), would the trajectory of two Black filmmakers stepping outside of explaining their typical Black experiences be uninteresting to the masses, because these two men do not know about the Jewish experience? One has to think about this, if we summarize this whole concept of how white filmmakers, writers, & directors can genre jump with the machine behind them to push their work to the masses. People (the industry) expect a Spike Lee Joint or for Charles Burnett to make some small film with a narrative that is conscience to the Black man, woman & child, which is not that of a pimp or hooker. I am in Baltimore this weekend. Of course, we can meet for coffee. My e-mail: shaunthinking77@gmail.com Feel free to write to me there & I will give you my contact number. The date that my article was published: 27th of April, 2:45 p.m., the title is Police and Black Baltimore, written by Shaun La. If this is not effective, just let me know in an e-mail & I will try sending the direct link in a reply.

Will be in touch. As I said, I am on the way back to NY next week to finish some editing. But soon. Coffee on me.

Safe travels & enjoy your weekend.
Shaun, I’m wondering if Schindler’s List is particularly the right film to bring up in this debate. I wouldn’t say it’s really about the Jewish Experience – the themes that come up in that film seem to be more about the moral truths of an individual overcoming his sense of self-preservation in a society that has chosen a horrifically immoral path, and the “list” is the living testament to his heroism. In this sense, perhaps what we see is Steven Spielberg telling what is more aptly viewed as a German story. Sophie’s Choice is more directly related to the Jewish experience, and was actually directed by a Jewish director, Alan J. Pakula. What’s interesting about that is that Mr. Pakula also directed “To Kill a Mockingbird”, which is more or less the story of a white person’s awakening to the concept of Civil Rights. Now, that film is pretty well regarded too, but its director was a White Jewish man from the Bronx. What right did he have to tell Scout’s story, her being a young Christian girl from the South? Furthermore, extending this logic to the plot itself, does Atticus Finch not have a right to tell Tom Robinson’s story by defending him in court, because he cannot come from a place of identical understanding? Should Harper Lee have not written her story about a broken America in the first place? And what about Quentin Tarantino telling his revisionist revenge tales with “Django”, a story about a black slave, and “Inglourious Basterds”, a story about Jewish men trying to kill Hitler? To be honest, I dislike both these films, but that puts me in a very small group. An even smaller group is the one which questions whether or not he has earned the right to tell those stories.

And why are you focusing on Spike Lee? The man has done his share of genre jumping. Yes, he was responsible for films specifically about race and violence, such as “Do the Right Thing,” one of my favorite films, and Malcolm X. But he also directed “Old Boy,” originally a Korean film based on a manga (whose only fault was being a pretty unnecessary and watered down remake) and “Inside Man”, which in my opinion was a pretty great and underrated heist film. My point is, why are you holding David Simon to a standard that no artist, writer, or storyteller working today actually holds themselves to? (Alfonso Cuaron, first film “Y Tu Mama Tambien”, writes/directs blockbusters Gravity and Children of Men; Guillermo Del Toro bounces from personal Spanish projects “Pan’s Labyrinth” and “The Devil’s Backbone” to writing and directing “Hellboy” and “Pacific Rim”; Stanley Kubrick isn’t Russian and didn’t fight in Vietnam, but he gives us the films Lolita and Full Metal Jacket.) These are all fine films, made with the power of empathy, not with personal experience. All this too, particularly when it’s clear that David’s goals aren’t even aligned with what you imagine them to be. Earlier in these comments, David remarks (I paraphrase) that he found many people’s reading of the Wire to be more cynical than it is meant to be. I agree; I’ve read Homicide as well, and watched both the Wire and Treme a few times through (I suppose by your logic, the Wire by David Simon should have lasted one season, and have been a buddy cop show, in which McNulty and Prez investigate the Sobotkas, and Treme could have ended with Creighton’s suicide). What I admire most about your writing, Mr.
Simon, (And I’m afraid I’m in the drooling variety of fanboy) is how well you identify and explain problems, which is the most important step to finding the solutions. It is both methodical without feeling forced, as eye-opening as it is entertaining. Be that as it may, I would think it highly inappropriate to introduce you to others by saying, “Oh, he writes excellent stories about black people.” Come to think of it, I can’t think of anyone who has defined your work in that manner, until seeing this conversation. So, Shaun, what I’m wondering is how far exactly does this logic extend. This is important to me because I’m currently writing a novel that centers around the theme of violence, which contains references to and characters involved with these riots and protests. Some of these characters are black, and I have wondered about the rights that I have, as a white kid who grew up miles away from this explosive form of racial tension, to include those stories in my work. Is this something I should avoid, in your view? Once, a professor in a film class outright asked me what my opinion on abortion was. I told her that I felt uncomfortable giving an opinion on the subject, because I lacked the critical knowledge of being a woman, with a woman’s pregnant body. She asked me, “That’s interesting. But aren’t you a human being?” I haven’t written about abortion yet, either, but I haven’t really come up with a good answer for her either. Look into Spike Lee & Charles Burnett. How can you not mention Black filmmaking, & not mention Spike Lee? Yes, the man does digress & can be a bit irrational, but he has written & directed more films that has advanced Black actors, actresses, & cinematographers more than any other Black filmmaker/writer in the history of cinema. Besides Oscar Micheaux, the man is the most prolific Black filmmaker ever, & in the last 30 years, he is the most prolific after Woody Allen. Speaking of Wood Allen, was it not Woody who answered the question about not having enough Black people in his films with the following reason & this is not a direct quote, but he stated something along the lines that he could not write about Blacks, because he feels as though he does not know enough about Blacks? His life is about Upper Manhattan, stories of growing up Jewish with a witty sense of humor or when he does the drama pieces, White women dealing with life. All that you explained points right back to my basis that BLACKS do not have a say in their films. Was it not Ava who said that Selma was criticized because it did not have enough White people in it—or that it left out the Jewish contributions to Dr. King’s movement? Spielberg is a genius, & I am sure that part of the reason why Schindler’s List has so much success, mainstream wise to the scholarly appreciation that guides it into being played in some high-schools curriculum for study & discussion is because the director is Jewish. We still live in a society where a White actress can wear makeup & play a Black woman, because she has a name. Get out of here with those diversions of trying to explain a logic that can not be measured. Blacks in cinema & in the Arts (overall) has been having their stories stolen & cultures recycled for centuries. Your answer to your professor’s question about abortion was an answer, it might not have been an answer that sits on contemplation, but it was an answer. Some topics, you can touch with your fingertips, & other topics you can pick up with your hands. 
Blacks going thru Oppression is not a fingertip topic. There is no way in the world that you could tell me that if Charles Burnett wanted to make a cable series about the riots in L.A. or the gang life in L.A., it would be approved by HBO or Showtime. Shaun, This is neither here nor there. It doesn’t validate “The Wire” or me. It is an ad hominem endorsement and therefore as subjective as any other critique that one might offer. But, still, I can’t resist. You bring up Spike. He, um, loved “The Wire.” David, some one mentioned the Wire. I did not say that it validated you as a writer. As I stated earlier, you are an exceptional writer. But, if Picasso had painted the Black Chicago that Gordon Parks lived in—I would still say that Picasso was an exceptional painter, but….. I respect Spike Lee & I called out his comments of liking The Wire as well. This is the same man who called out Tyler Perry for his modern minstrel shows, but he applauded two White men who wrote about Black Baltimore, won money, fame & more production deals. Yet, he had an issue with Spielberg doing the Color Purple or Mann directing Ali. Then he got on Clint for not having Blacks in his two films covering World War II. However, he made this rant when his film Miracle of St. Anna was about to premier, a year or so after those set of films from Clint. I never said that Spike was perfect, but I can not knock his accomplishments as a Black man who has went against the mainstream Hollywood system—the man is the higher level of success that the LA Film School of Black filmmakers & writers were aiming for. He is another mind that I would like to sit across from & have coffee with. I am willing to bet that after he spoke with me, he would have a different viewpoint of accepting The Wire as a complete whole portion of Baltimore—at the very least, he would have a Black Baltimore man point of view to merge into what he saw on television from you & what he would be hearing from me. I’m honestly not tracking. When Spike calls out what you want him to call out, he’s insightful? When he doesn’t, he’s in error? Look, I don’t care if you like anything I write or not. And I don’t buy into any straight-out-of-1974 declarations about who can tell what story. I hold that stuff in low regard, to be sure. But I don’t expect you to be convinced. But the intellectual dishonesty of evaluating art based on a compartmentalizing of the artist is just fundamental. If Picasso went to the south or westside of Chicago and painted the Great Migration scenes that inspired Jacob Lawrence, those might well be some amazing works of art. They wouldn’t be the work of Lawrence, but they’d be Picasso and they could well be of considerable merit. And if Lawrence found himself in Guernica and painted the fascist torment of that place, it wouldn’t be Picasso but I think it would probably be quite resonant as well. The human heart can travel, Shaun. But do you know the only thing that would prove or disprove my above assertion? Yup. The paintings. David, that would not be Picasso if he painted the South side of Chicago. Those odd shapes (dare not to call it abstract) would fall flat if this European stood in the same locations where Black Chicago in the 1940’s rivaled Harlem with Blacks being independently cultured with a society within a community. Then again, who knows, maybe Picasso would have won a bunch of awards & respect with being an Eye who saw the sufferings or the successes of Blacks in an inner city America. Again, another explorer. 
Picasso did utilize African masks for some of his art-work, so I guess he had some form of understanding that the resources leading up to being called Art can be powerful if it is guided by a White European. An Artist has a choice. Oh, I did not state that you being an exceptional writer as a praise. Do not confuse my acknowledgement as a measurement of approving your work or not. I stated this, to let people who are fans of The Wire or any of your other literary or television works know that I am not talking about the surface context of your show (s) or bookwork. As in, I do not like the cast, the narrative or any other surface critique that fans of the show argue with people who are not fans. Of course you are going to think or favor the conception that anybody can write about anything, without having to address the serious nature of it–if they decide to dive into the depths of racism, oppression & a system that favors one race over the other. If you did sit-comedy work, I would not say one word to your witty approach to making races laugh at punchlines. This is the mere reason why Black writers & filmmakers are asked to edit their “too Black-ness” or “Pro-Black” stances in their works. Perhaps, this is why some of their works do not get the green-light. We can play the gangster, mystical drug-dealer, but that producer’s, network executive’s chair or that head writer hat is not made for us. I do not even complain about it anymore. As you stated, you are not buying into that 1974 declaration—I did not know that Black history or the cultural ramifications of the past, present & the future was on sale. I guess if there is script mindset to it all, I would be judging on what to buy or not to buy as well. Wow, you know Picasso inside and out. Pretty cool. You don’t even need to look at the paintings. I get it. I always found Picasso presence as an Artist to be polar opposites of success battling the need to be original. Leah Dickerman from MoMa did a terrific interview about Abstract painting on Charlie Rose, some months ago. In was in this interview where she declared that Picasso got close to the edges of Abstract & then stepped away without ever returning to such a movement. Not to digress from the topic at hand, I just wanted to explain some strings tied to my point. In the simplest of English: Mr. Simon you’re not getting IT. At one of the most fundamental levels, it appears extremely important for you personally to see these struggles as being primarily about class. Conversely, for almost 100 million people of color experiencing everyday life in this country under a system founded (and partly, first codified in none other than 1650’s Maryland, I might add) inarguably on the concept of white supremacy, that perspective is literally insulting to both their intelligences and the realities of their accumulated life experiences. Why? Because if we will be brutally honest, every one of those 100 million people continue to carry the constant weight of still instantly being capable of being reduced to non-human stereotypes (and by extension tragedies), at a moments notice, in a society still predicated on white supremacy. No matter what their level of achievement. President, talk-show billionairess, world-renowned university professor, or driver with broken tail-light. And as such, living in at least some degree of perpetual fear of the self-perpetuating institutions of that white supremacy at all times. 
The recognition of that fundamental fact has been a literal matter of life and death for them for over 350 years. And they have never had the luxury of viewing it through the lens of it being both a “race” and “class” struggle. Since the first shiploads arrived on the eastern shores of the Atlantic, they’ve had to learn in unfathomably microscopic detail what “CRAZY” looks, smells, tastes, sounds and feels like. Or else! And as such, your adamance in the rightness of your analysis of what THEY should consider “crazy” in America is unintentionally (one would hope) insulting to 100 million people. Because they have been forced to live it every second of every day. Or else. While you have the privileged option of taking time off from such considerations whenever you feel the need, without consequence. And as “Shaun” somewhat mentioned, even potentially going into a completely different realm of the entertainment field, if you so desire to escape the craziness. Likewise, you have the option of being wrong in your analysis of what’s going on here, with potentially only “embarrassing” consequences. Thus rendering endless debates about the intersection between race and class a very “fascinating” exercise. They, on the other hand, literally DIE when they guess wrong about how to define “CRAZY” in a white supremacist society. It’s fascinating and passionate stuff for you “most of the time”. But for them it is a perpetual matter of life and death at ALL times. With the fundamental implication of that being that they have a multitude of reasons why they HAVE TO know much, much more about this stuff than you. Now back to the IT part that you keep missing… Not one word of these debates should even remotely begin to imply that white people are therefore unqualified to write and/or speak about things related to the realities of the lives of minorities!!! That’s clearly where your oversized hot button is that’s causing you to not hear the rest of what’s being said!!! In fact, in a white supremacist society, for better or worse, for the foreseeable future it’s still predominantly the white writers and speakers like yourself that are going to most often be in a position to get the ear of those in power. So, unquestionably, people like yourself play an astronomically huge role in relaying the things you’ve observed. But understand emphatically, that no matter how much you’ve invested to this point you’re still strictly an observer. And as such, there is a very large percentage of the more subtle realities of life as a minority in a white supremacist nation that you’re never going to have the slightest hint about. Even if you’re the most dedicated observer that has ever lived! So the IT (of all this mostly wasteful noise) is suppose to simply be a not-so-subtle reminder to be enormously respectful, at all times, of how much you CAN’T know. Regardless of how much time you’ve spent working there. And likewise, regardless of how passionate you may get about this stuff sometimes. So therefore, this is a reminder to periodically remember to acknowledge to those in power (and more importantly, to those you are reporting on) that you are, in fact, strictly a limited observer and reporter. And as such, there are much more deeply invested and passionate voices than yours, being systemically stifled right now, that truly must to be heard eventually…or else!!! 
Because if this incredibly brutal system of white supremacy is to finally be dismantled (peacefully, as you insist), then it is an absolute imperative that those very people at the bottom must eventually have a full seat (and full voice) at the table of real power. And not just have their messages relayed by very, very passionate, honest, thoughtful, kind, sympathetic (and a host of other adjectives), observers. That’s the IT.

In the simplest of terms, I’m not buying it. I believe what I believe and I write what I write. If it has no merit for you, or if you disagree with either the premise or the execution, then surely, you should avoid my work. Which is fine.

Holy Shit Batman!!! I’m not sure what you were trying to convey with that, but on the surface of it it sounds like… Leni Riefenstahl and D.W. Griffith eat your hearts out!!!

I agree. I don’t think you are sure what I conveyed either.

I doubt if they will ever have us at the table, & if they do, we won’t be able to sit down, because there are not enough chairs in the room for us to enjoy the invite. Just as we were maids & butlers, awaiting orders—we still await orders on how we should tell our stories. Then, people wonder why we are frustrated when we see some White corporation make a visit to our communities, get the content, win the awards, profit with huge recoups, cast some Black actors & actresses, sell the show, the soundtrack & the story with interviews & fame. No, we won’t ever be seated at the power table. What kind of power would want to give up fame, fortune, & fun?

“The arc of the moral universe is long, but it bends towards justice.” -MLK I greatly disagree!!!

Georgie, disagreeing is your right. I am an optimist, but as a Black man, I do not see Blacks sitting at the power table while being respected enough to have a voice in the meeting about their issues. If it does happen, I doubt that it will evolve in my lifetime. Oppression is foundation. Even if we get rid of racism, another “Ism” will come into its place & the next level will be built on Oppression. Dr. King was a remarkable man & thinker, but so were Malcolm X & The Black Panthers. There are so many aspects to being Black worldwide that the global powers that spin do not want equality. Slavery was the biggest business ever. You have years & years of not paying someone to tend to the oppressor’s land or product. Then you take that structure & apply it to a philosophical balance towards saying that a Black person is property—free them as your property & teach them a lesson in wanting “freedom” by segregating when you want control or desegregating when you want to use something from our culture. I have heard political commentators & talk-show hosts mention that Baltimore has a Black mayor, a lot of Black officers & a Black City D.A. We have a Black President as well. Just because you put the power of position into an accomplishment as the 1st Black or that many Blacks are in the leadership seat (I say standing up just like butlers & maids awaiting orders) it does not justify that they have the control. Checks & Balances is not just a doctrine for democracy, it is a controlling method that is unseen as well.

White supremacy is morally wrong. And as such it is not a sustainable concept. Period! People will not continue to fight for something that they ultimately know is morally wrong. At best its adherents will eventually become indifferent. But sooner or later the immorality of it will cause it to collapse under its own weight. It’s strictly a question of when.
Georgie, I agree with you, it is morally wrong, but power is a selfish ambition & I doubt if equality will become humble enough to say, “hey, let us all share the power.” We are so high-tech today, we can build space shuttles, send out e-mails, hold video calls from a mobile device. Yet, race is still a topic that meets, “Oh, that was in the past, Blacks, Latinos & the Jews are this or that,” let us talk about right now. Historical wise, World War II, the Civil Rights movement & immigration issues that date back can be as close as Mexico to the U.S. are not that old. Those issues were only a century ago. So, that question of when can end up being a perpetual question. Just for grins and giggles, you might want to peruse the first couple comments under the “Maryland Festival” posting. I think you’ll find the similarities to this particular exchange rather “interesting”. “Years ago when “The Wire” was on the air, people use to question me about why I would decline to watch it; I would answer, “two white men are the creators of this show, and one is a former Baltimore City cop.” I knew that I would lose them if I went into the details like I did in the paragraphs above this one, so I would stop my reasoning right there.” Woah…Did you really never even watch it? I mean that’s fine, watch whatever you want, but criticizing the author without even becoming familiar with the work puts me in mind of all those parent’s preventing their kids from reading Harry Potter because “It promoted Satanism”. I was wondering why your accusations sounded so general; I would be interested to know precisely what you think the Wire got wrong. TORGO, no I did not watch the Wire. However, I did watch some episodes of Homicide, & I did watch The Corner mini-series. You said that I generalized, yet you are doing the same thing. A child does not need to know what a tree looks like in order to put the puzzle piece together to recognize the shape of a tree. He or she is will pick up the pieces while being raised in their household, whenever they hear the description of a tree from their parents. This is why parents smile when a child point at a tree & retort out aloud, “Tr-e-e.” I was born & raised in East Baltimore. In one year during the 1990’s, I lost 30 something family or friends to murder. There have been times when I would be hanging out on a social level, & I would be with 5 of my friends, sitting around on the steps, Summer time Baltimore & it would hit me that all of us have been shot—myself included. I came up in the Baltimore school system. There are friends of mine doing hundreds of years in prison. Not to mention, the real life drug-dealers whose lives were mirrored in some of those Wire episodes, I knew them, either their family or friends. Even though I did not watch the show, I would hear people talk about it. So, it was not hard for me to figure out that the same two White men who was painting this visual picture of Baltimore with mainstream success were not from the same Black Baltimore that I was born & raised in. My parents died in Baltimore, they are buried in Baltimore. I learned how to read & write in Baltimore, picked up my 1st camera in Baltimore, my 1st girlfriend was from Baltimore. If I was some man from Iowa talking about, “I do not watch the Wire” & I had these views, perhaps your generalizing question & assumption might hold a balancing act. However, Baltimore is on my mind, every day. Even when I am in New York or lost in some other city, Baltimore makes me think. 
I can not explain to you what The Wire got wrong. You see, people tend to defend their favorite show(s). Which is fine. I did not ever say that The Wire sucks or that it had bad acting with an incomplete storyline. I am talking about the core. You are asking me to explain the Soul, which is more about feeling than being poetically real with words.

Perfect!!!

Shaun77, I’m a white woman whose life is far removed from West Baltimore. Injustice and inequality have always been of interest to me. I loved The Wire – it showed both the systemic root causes of the problems and a look at the real human toll those problems take. I felt the systemic piece made it unique. I didn’t feel it was some sort of voyeuristic exploitation, or some sort of presentation of “The Story of Black America.” It was one story among many, although those stories are pathetically underrepresented in our media and in our awareness. A lot of your points resonate with me. I have many experiences of not being heard, especially when the same words spoken or points made by me and by a male are received differently. Men are often conferred a greater authority which edges out and eventually drowns out the rest of us. I know that frustration. When that happens, though, is it fair to blame the proverbial messenger? Especially when that messenger is well-intended and open to helping to change the system? When the messenger is using his/her standing in society to bring about change? You speak of the movie Schindler’s List, but what about Schindler himself? He was part of the power structure that committed genocide; as an insider he even benefited from it. He used that privilege to do good work from the inside, despite personal failings and his inability to save everyone or to save perfectly. I guess my rambling point is this — in your estimation, what is the appropriate role for someone like me? Sometimes I feel like I’m damned if I do and damned if I don’t. It seems like you are saying here that Simon should stop writing about West Baltimore and we should all stick to our own visceral experiences. Is that what you are saying and how would that ever change the world? Katie

Good day Katie. I think that you took on the surface points of The Wire & Schindler’s List & applied them into a commonality that has the “reality” cover, but with The Wire, such a reality cover is not necessarily the core. People can say whatever they want about the film “Schindler’s List,” but when the debate turns to who made the film (even though the bookwork was from a man who is not Jewish, the visual language from a Jewish filmmaker made the book alive again), what was the intention of such a film? The intention points back to a Jewish filmmaker taking on an important historical event under horrible & idiotic conditions directly & indirectly linked to an ignorant world leader (Hitler) who had so many issues. I find it revealing that some white people who I have met or had some correspondence with tend to think that I am ordering David Simon to stop writing about West Baltimore or Black Baltimore in general. When in fact, I have not said or written that in any of my statements or responses. I guess that these same people take my questioning or communication as a challenge to stop. Which is baffling to me. My point is this, how can a white man speak or explain Black Baltimore & its issues to a core reason?
Then you see these political answers where as, Baltimore has this problem & that problem, if we fix this, the Blacks in Baltimore will have a better chance at being a success story. I just think some white people should just be quiet when it comes to addressing the issues that are heavy in the Black community. Yes, I understand that some white people may have the urge to help, from running into an African nation & adopting a child, or marching along some protest in the inner city while reflecting enough to remember to get some book or television material to make your “artist” voice louder than the protestors. I do not want to deconstruct someone’s compassion, because the World certainly requires a lot more good energy. But being quiet is a powerful energy as well. There were Jewish leaders who helped Dr. King, could you imagine if their names overshadowed Dr. King? Of course, back then, the nature of being opportunistic did not have the wheels of becoming famous thru social media & people knew their place a lot more keenly than today. However, even the so-called equality banter that the outsiders bring forth can overshadow those who suffer from the racism & inequality. The major networks who were camped out in West Baltimore knew who to interview—they knew how to script a “news story” to be swallowed up by some political base that these major news networks market towards. How is that philosophically different than what HBO put forth with The Wire? I remember when I heard that Charles Dutton pitched a show to HBO about Blacks being given back half of the United States & from there, they would govern themselves. Did it receive a green-light from HBO? To my understanding, it did not. That was the first & the last time that I heard of such a show pitched. Charles is from Black Baltimore. Yeah, people are quick to toss up the logic that famous Blacks, filmmakers or actors & even politicians love The Wire. But that is all surface. You can go on You Tube & find someone doing something silly & that video footage could have 120,00,000,000 views—does that constitute it being real? We confuse fame or followers as a form of success that dignifies the surface. Nobody said that the show sucks—even though I have not watched the show, from the adverts, the cinematography, editing & the actors looked professional. Yet, I am talking about the core. I wonder if the mighty writer & poet Gil Scott Heron had some prediction laced insight when he said, “The Revolution Will Not Be Televised.” Could it be that he understood that the producers, writers & directors would not be of the same struggles—the opportunity to cash out while writing about us when it was not from us would become a level beyond the realm of propaganda? Oh, Gil had to know about the effects of Blaxploitation being bigger than the silver-screen. So big, that I bet that some outsider is reading my words right now, going to use it & not even credit me. This is how we live today. See it, use it, distant yourself from it & call it what you want. It is like European colonialism towards Africa—but on a level of using Black actors, actresses, an urban city narrative, white producers, a white mainstream cable network & then when the ratings goes thru the roof, who win the awards? When these Black actors & actresses excel in their career, who do they thank for allowing them their big break into the industry? That show about being Black or Black issues, but was not produced, written or directed by Blacks. 
This tradition of Blacks not telling our own stories runs thru a century or more of neglect; it expands beyond just America. Just as in Africa during the 20th Century, specific nations would shut down African filmmakers who made content that was not kind to the European governments that were subsidizing the government & its economy. So step back & look at how Martin Scorsese can always say that he is Italian & his badge of visually showing his culture in various lights from the mafia to Jake LaMotta without exploiting his culture, because he has the right to see his people his way. The same can be said about Francis Ford Coppola & the Godfather films. People can say what they wish about Francis & Mario Puzo, but when the critique stops, they have a cultural voice that expresses how they see the lives & customs of their people. There is a core to their stories. I am sure that if some Black child located anywhere in this world happened to see an all Black cast & crew working on a mainstream film or television show, being broadcast on a major network owned by Black executives, it would confidently change their world view that they could be a part of that Black World.

Thanks for the follow up. Much food for thought. Peace.

A much more simplistic response to your original question… Do absolutely everything you can conceivably do. But understand emphatically it’s about raising up their history, their lives and their voices to full equality. So if we turn on the TV and see: “CNN Live reporting on race from Baltimore, starring Katie.” Followed shortly by, “NBC News special report, on race, with special guest Katie.” And then, “HBO premieres its newest show on race, presented by Katie.” And lastly, “join us for an all-white panel discussion on ‘Meet The Press’ about our race problems, starring Katie.” You quickly start to give rise to the question that it’s STILL probably not about them, it’s about you. That’s fundamentally what the perpetuation of white supremacy is: Always being in a position of preeminence even when it’s situations that very clearly are not supposed to be. And in many cases, even looks downright idiotic to even the casual person of color. So does this imply that whites should be excluded? Not one tiny bit. Just simply not perpetually THE preeminent voice when it comes to race. That’s it.

Good day & you are welcome, Katie. Peace to you & your new week as well.

It is becoming quite fascinating to see the similarities, at a “core” level, of what you and I are trying to convey, using nonetheless very, very different wording. In no uncertain terms, I agree emphatically with what you are fundamentally trying to say here (ESPECIALLY when you bring Gil Scott-Heron into the equation -LOL)! Alas, I think you are soon going to be going on without me here. I’ve expressed the sentiment repeatedly here that there is one single thread that ties that first tribesman standing on the banks of the Niger river, who ended up in the bowels of a slave-ship, with every AA since that time: The need to quickly and decisively identify “CRAZY”… …and very, very quickly get the living hell away from it!!! …and I think I’m starting to smell something crazy off in the distance. So, I think I’ll be here a bit longer… but, I suspect, not very much longer. Best wishes. And one final thought: Based on current and projected WORLD population demographics, I don’t think the end of this 500-year-old lie that is white supremacy is anywhere near as far off as you tend to think.
George, I know that I may be late with my response, I have been busy. Even if the near is closer, the concept of something being perpetual will recycle itself into other forms. People barely want to talk about race on a worldly platform, outside of the major news networks or the mainstream elements that want to publish a book or green-light a television show. You actually have people who believe that racism does not even exist, because we have our 1st Black U.S. President or that Oprah is a Black woman who is a billionaire. So, I go right back to the core, where Black families have to deal with a cheap inner city educational system, a prison system that could be connected to racial profiling (some cases, not all, I am not saying empty the prisons but to look at the cycle of Blacks feeding that prison food chain) in the justice system & then our stories end up being budgeted for films, television shows, or a album full of music—only to be played over a mobile phone commercial where we see white actors or models dance to a radio-friendly hit that edited out the N-Word. Of course, we are closer than you think if you are coming down on racism because it is trendy. Meanwhile, my Black race is still marching, different causes but against the same Oppression, which is not trendy but traditional, & by the way, those 500 years of consistency made it a tradition. Peace. Thanks for this and the panel tonight. I believe that transformative change is necessary, and I don’t like the easy policy prescriptions that can often flow off the tongue in our op-ed culture. But the concept of transformational change can be a different sort of con. I don’t see how institutions transform other than by relentlessly pushing a slew of discrete, sometimes seemingly boring individual reforms. Though we need the motivation and depth it provides, we have enough searching, moving commentary about American poverty to last ten lifetimes. I think sometimes that by reading it and viewing it I am tricking myself into thinking that I am helping. Of course, those who make it are actually helping–it’s just that the audience is often comprised of people who don’t lack for money or awareness to pass a bill, rebuild a block, save a life. We see moments of opportunity for transformational change, but often simply wait to be transformed. Perhaps it’s time to take policymaking back from academic journals, ALEC, and the developers. What would you like your readers to do tomorrow, next week, next month? How about your top three discrete changes nationally, and your top three locally? Good questions. I’m on the spot. I’ll try to post something explicit this weekend in reply. “Until it isn’t.” Until it’s taken more like. I love what you have to say, but I can’t help thinking it’s simplistic hippy shit most of the time. The world has never changed through peaceful protest. It’s just not the case. That’s the ideal, sure, but it rejects human-nature, let’s not forget the horizontal violence perpetuated by the aggressors. Every significant change throughout history has come through bloodshed on the streets. Maybe I don’t see the same movies as you do, or watch the same documentaries, or read the same books. I just don’t buy into “if we can just love each other, everything will be alright”. It won’t. Thanks David. I don’t begrudge Ta-Nehisi his comments from the last couple weeks given all the deep diving he’s done the past few years into the history of slavery and white supremacy and all the violence upon black bodies that history entails. 
Still, I agree with you about the politics of the moment, the window of reform they offer, and how easily friends of zero tolerance could ride a riot’s coattails. Moreover, as a small-potatoes journalist, I was glad to see some comment on why it was important to you to wade into the field of White House optics and mostly staged conversation. Thanks also for the blog. I’ve enjoyed it a lot. I don’t begrudge Ta-Nehisi anything. I think he’s one of the most valuable voices on this stuff we have. When he pulls me up on something, chances are I need to at least think twice. Were you at the panel tonight? I wish I’d been at that damn panel! Eagerly awaiting the you tube post so I can catch it from out here in southern Colorado. Hope all involved got their money’s worth. It was mostly our talking about trying to get our heads around the substance of Taylor Branch’s amazing trilogy. And because of the drug war (along with insidious, unyielding woman-hatred), we find ourselves at a similar moment of reckoning when it comes to rape. New York and L.A., and then Baltimore, NOLA, Cleveland, Memphis, Cook County, Detroit, St. Louis, Houston, and other places that don’t have a savvy reporter or bold advocates making noise about it yet–they all have UCR rape numbers that are speciously close to their homicide numbers. And they tossed tens of thousands of rape kits on warehouse shelves for decades, especially when the victims were poor, dark-skinned, young, or disabled. It seems with police brutality that the nation is having a long-overdue wake-up moment, and as you described in this post, there are calls from all sides for accountability. But not so with law enforcement’s gargantuan failure to investigate rape. Rape is in the spotlight lately, and all that awareness is an accomplishment in itself. But by and large, it’s seen as a law enforcement training issue (which it certainly is to an extent) and a forensics issue (which it also is, but not nearly to the extent Law & Order has convinced people it is). That rape has essentially not been regarded as a crime by police and prosecutors in so many major American cities, especially over the past 30 years as rape awareness has grown, is also a direct result of the drug war and its subsumption of money, resources, and attention. Everyone wants to fix the rape problem these days. That sure is nice, but it’s really time to recognize that fixing the rape problem means ending the drug war, too. Yeah, Baltimore’s been known to toss some spoiled rape kits (b/c of a bad evidence refridgerator) in the dumpster before. Whoops. There was also a case some years ago where the BPD refused to take a report from a prostitute who wanted to file a rape claim, and she subsequently went to the hospital for an independent kit which indicated she had in fact been raped. I think that woman sued, but I never heard any follow-up on whether she settled. Mr. Simon, I agree with your message but do you think those policemen would have been charged but for the riots? Maybe, maybe not. At this point, I feel obliged to give the state’s attorney the benefit of the doubt. She charged as soon as the autopsy and police reports were available to her. But let me turn that around on you. Do you think that Ms. Mosby was right to charge the officers, and to do as promptly and as aggressively as she did? If you do, then concede that you diminish her integrity and performance by saying that the riots helped to decide the matter. And further, acknowledge the truth that had Ms. 
Mosby been able to act without the trauma of rioting as her preamble, that she and her action would not be subject to the cynical critique now in evidence in the mainstream media and among the right-leaning commentators that says she charged on something other than the evidence. The riots were no friend to Ms. Mosby. Her actions look better amid a backdrop of non-violent protest, to be sure. I agree that they were charged for the right reasons, as should have the officers in Ferguson, Staten Island and every other place this kind of atrocity happens… and I am ignorant of the timelines of when things were available to her (or not). Not to mention that I am in no way qualified or equipped to judge a Civic employee’s actions, or lack thereof because I have no clue what they have to deal with on a daily basis…. I just don’t know (and want to believe) that the riots didn’t have anything to do with it. That’s what you’ve been preaching (and I am in agreement) on this forum. I cant say that for a fact though…because to me (and much of America) they charged as soon as the riots went out of hand. True. They also charged after the OCME autopsy findings and the police investigative file were handed over. They couldn’t have done it sooner. I have concerns about the second-degree murder charge against the one officer, but the case for involuntary manslaughter in a negligent death seems viable. Staten Island, I don’t understand how they avoided involuntary manslaughter. North Charleston was firmly handled. Ferguson? That was awful police work, if not rank incompetence. But once they found Michael Brown’s blood inside the vehicle and the officer’s claim of a fight for the gun was at least partially corroborated, I couldn’t see an indictment from that grand jury. “Do you think that Ms. Mosby was right to charge the officers, and to do as promptly and as aggressively as she did?” I wish the police had turned their reports over to her a week prior and the charges been made then, how about that? Then all this would have been avoided to begin with. But as I said above — waiting for reports of highly suspicious civilian-in-custody deaths released in timely fashion from the BPD these days? LOLOLOLOLOLOLOLOLOL I don’t know who you’re talking to that thinks Mosby looks bad. I haven’t met or talked to one Baltimorean (and many outside the city, really) in the past week who isn’t in love with her and rooting for her to be, like, the first black female President. The guys were putting on fresh shirts and doing their hair talking about wanting to go holler at her but I had to tell them their shot was slim. Oh, wait, maybe you were talking about Fraternal Order of Police. Yeah, they’re not happy with her. BIG SURPRISE. KT, death investigation isn’t done in an hour or two. It takes time to work up the investigative reports. A post-mortem alone from the OCME, which is a state agency and over which neither the BPD nor the SAO has control, requires some time-eating testing procedures. I’m not making this up to thwart your scattershot criticisms of everything that doesn’t please you with its promptness. I watched about two hundred prosecutions close up. In the cases where cause and manner of death were complex — i.e. not a gunshot wound with witnesses, for example — the investigation could go on for quite a while. Unless, you prefer charging people on half-assed and incomplete information. That’s prompt, alright, but it offers other problems. 
To paraphrase Forrest Gump, I am not a smart man, but I know nobody severs their own spine, and I know regardless of any other circumstances there is no excuse for police to repeatedly refuse to call for medical attention. Those facts have been known from day one.

A post-mortem determines cause and manner of death to a legal precision that allows authorities to charge murder or manslaughter. A pathologist doesn’t merely determine the most obvious or apparent fatal injury, he determines whether that is the only explanation for death. He eliminates all other possible causes, or preceding or contributing trauma. For example, a tox screen makes it clear that a victim such as Mr. Gray hadn’t overdosed in the wagon contemporaneous to the neck injury. Did he go flying into a wagon bulkhead because the driver was rough-riding a handcuffed unrestrained man? Or did it happen to a man who had already lost consciousness? A tox screen is therefore part of every autopsy. This may seem like a matter of little consequence or weak speculation to you. But imagine the mayhem that results when a prosecutor charges second-degree murder claiming that the driver showed reckless disregard for the manner in which the victim was transported, and then a defense attorney produces a tox report with a lethal level of opiates and tells the jury that there was no way for the police to know that the man in the back of the wagon was going to do an unguarded header into the bulkhead. And further, all that could be known about the narrative chronology of Mr. Gray’s time in the wagon — all of the stops, and all of the repetitive failures to render him medical assistance — could only be known after the police officers had made statements, which the LEO bill of rights that is Maryland law, regrettably, delays for a notable interim. The detectives, the prosecutors all had to wait days to obtain those full statements. Yet you want a prompt criminal charge within a day or so? No one is fast enough for you there, KT. It’s a theme. You want the world a la carte, on your schedule only. But honestly, what you think you know in your layman’s first understanding of a case is actually insufficient and incomplete for professionals who are responsible for the decision.

I’m just saying the suspicious nature of the death was immediately apparent to any citizen. Anyway, this is all a moot point now that it’s been ruled a homicide.

Actually, you said you wanted to see criminal charges promptly filed before the Monday unrest.

I can’t really argue with this wonderful essay, or the containment of (understandable) rage at the injustice of it all, but… good lord, we ask so much of those that we’ve given so little.

After the last spirited discussion here about the merits of rioting, I heard a story on NPR’s Morning Edition about the health care consequences of the rioting. Namely, elderly and other people in fragile situations suffering from the effects of the violence, like no access to medication because the neighborhood pharmacies were destroyed. This is infuriating. http://www.npr.org/blogs/health/2015/05/04/404164549/triage-and-treatment-untold-health-stories-from-baltimores-unrest I agree that sustained attention to this is critical – but I’m afraid our nation’s attention span gets shorter every day as we jump from crisis to crisis.

Can this get printed in the NYT or the WP? Probably not with all the profanity. Why don’t you run for office and try to effect change that way? You really don’t think you couldn’t unseat an incumbent mayor in Baltimore, white or black?

Too fucking profane, for one thing.

Take out the swearing?

This is an epic! Swearing is fucking cathartic and visceral. I love it.

It is, but if you want to get a wider audience for an important message in the press? Maybe replace the words fucking, etc, with something more appropriate to a wider audience. The message would still retain the key elements.

Derek’s right. I’m fucked for mainstream media.

Nothing is fucked here, dude. You just don’t want to fucking play the game anymore.

At my son’s middle school, some unknown girl scribbled a threat on the wall of a bathroom stall. In response, they evacuated the school, called in the bomb squad, and swept the school for an hour. Afterward, we got an email from the principal, essentially congratulating himself on his ‘zero-tolerance’ of the situation and promising to prosecute the child offender to the fullest extent of the law. My reply to the email was: “Zero-Tolerance = Zero Judgement”

Preach.
true
true
true
Intolerance. And a broken-windows policy of policing is exactly what it means: The property matters. The people can stay broken until hell freezes over. And the ejection of these ill-bought philosophies of class and racial control from our political mainstream -- this is now the real prize, not only in Baltimore, but nationally. Overpolicing and a malignant drug prohibition have systemically repressed and isolated the poor, created an American gulag, and transformed law enforcement into a militarized and brutalizing force utterly disconnected from communities in which thousands are arrested but crime itself -- real crime -- is scarcely addressed.
2024-10-13 00:00:00
2015-05-08 00:00:00
null
article
davidsimon.com
The Audacity of Despair
null
null
12,994,761
http://reallifemag.com/what-was-the-nerd/
What Was the Nerd? — Real Life
Vicky Osterweil
Fascism is back. Nazi propaganda is appearing on college campuses and in city centers, a Mussolini-quoting paramilitary group briefly formed to “protect” Trump rallies, the KKK is reforming, and all the while, the media glibly participates in a fascist rebrand, popularizing figures like Milo Yiannopoulos and the “alt-right.” With the appointment of Stephen Bannon to the Trump administration, this rebranded alt-right now sits with the head of state. Of course, the fascists never really left: They’ve just tended to wear blue instead of brown the past 40-odd years. But an openly agitating and theorizing hard-right movement, growing slowly over the past few years, has blossomed in 2016 into a recognizable phenomenon in the U.S. Today’s American fascist youth is neither the strapping Aryan jock-patriot nor the skinheaded, jackbooted punk: The fascist millennial is a pasty nerd watching shitty meme videos on YouTube, listening to EDM, and harassing black women on Twitter. Self-styled “nerds” are the core youth vanguard of crypto-populist fascist movements. And they are the ones most likely to seize the opportunities presented by the Trump presidency.

Before their emergence as goose-stepping shit-posting scum, however, nerds — those “losers” into video games and comics and coding — had already been increasingly attached to a stereotypical set of political and philosophical beliefs. The nerd probably read Ayn Rand or, at the very least, bought into pseudo-meritocracy and libertarianist “freedom.” From his vantage, social problems are technical ones, merely one “disruption” away from being solved. The sea-steading, millennial-blood-drinking, corporate-sovereignty-advocating tech magnates are their heroes — the quintessential nerd overlords.

When it was reported in September that Oculus Rift founder Palmer Luckey was spending some of his fortune on racist, misogynist “meme magic” and shit-posting in support of Donald Trump, it sent nervous ripples through the video-game community. Many developers, to their credit, distanced themselves from the Oculus, pulling games and ceasing development. But many in the games-journalism world were more cowardly, either not covering the story at all or focusing their condemnation on the fact that Luckey made denials and seemed to have lied to try to cover his ass, rather than the spreading of racism and misogyny.

The myth of nerd oppression let every slightly socially awkward white boy who likes sci-fi lay his *ressentiment* at the feet of the nearest women and people of color

These were the same sorts of gaming journalists who rolled over in the face of Gamergate, the first online fascist movement to achieve mainstream attention in 21st-century America. The Gamergate movement, which pretended it was concerned about “ethics in games journalism,” saw self-identifying gamers engage in widespread coordinated harassment of women and queer people in the gaming world in a direct attempt to purge non-white-male and non-right-wing voices, all the while claiming they were the actual victims of corruption. The majority of professional games journalists, themselves mostly white men, in effect feebly mumbled “you gotta hear both sides” while internet trolls drove some of the most interesting voices in game writing and creation out of the field. The movement was a success for the fuckboys of 4Chan and the Reddit fascists, exhausting minority and feminist gaming communities while reinforcing the idea that the prototypical gamer is an aggrieved white-boy nerd.
It has meant that — despite the queer, female, and nonwhite contingent that makes up the majority of gamers — gaming’s most vocal segment is fashoid white boys who look and think a lot like Luckey. Surely, those communities of marginalized gamers have just as much claim to the subject position of the “nerd,” as do queer shippers and comic-book geeks, to say nothing of people who identify as a nerd to indicate their enthusiasm for an esoteric subject (e.g. “policy nerds”). But the reason a tech-enabled swarm of fascists have emerged in the nerd’s image today and claimed it as territory necessary to defend is because of the archetype’s specific cultural origin in the late 20th century, and the political purpose for which it was consolidated. The nerd appeared in pop culture in the form of a smart but awkward, always well-meaning white boy irrationally persecuted by his implacable jock antagonists in order to subsume and mystify true social conflict — the ones around race, gender, class, and sexuality that shook the country in the 1960s and ’70s — into a spectacle of white male suffering. This was an effective strategy to sell tickets to white-flight middle-class suburbanites, as it described and mirrored their mostly white communities. With the hollowing out of urban centers, and the drastic poverty in nonwhite communities of the ’80s and ’90s, these suburban whites were virtually the only consumers with enough consistent spending money to garner Hollywood attention. In the 1980s and ’90s, an obsession with comics, games, and anime might have made this suburban “nerd” a bit of a weirdo. But today, with comic-book franchises keeping Hollywood afloat and video games a $100 billion global industry whose major launches are cultural events, nerd culture *is* culture. But the nerd myth — outcast, bullied, oppressed and lonely — persists, nowhere more insistently than in the embittered hearts of the little Mussolinis defending nerd-dom. Of course, there are outcasts who really are intimidated, silenced, and oppressed. They tend to be nonwhite, queer, fat, or disabled — the four groups that are the most consistently and widely bullied in American schools. In other words, the “nerds” who are bullied are *being bullied for other things than being a nerd.* Straight, able-bodied white boys may also have been bullied for their perceived nerdiness — although the epithets thrown often reveal a perceived lack of masculinity or heterosexuality — but the statistics on bullying do not report “nerdiness” as a common factor in bullying incidents. Nevertheless, the myth of nerd oppression and its associated jock/nerd dichotomy let every slightly socially awkward white boy who likes sci-fi explain away his privilege and lay his *ressentiment* at the feet of the nearest women and people of color. The myth of the bullied nerd begins, perhaps, with college fraternities. Fraternities began in America in the mid-19th century, as exclusive social clubs designed to proffer status and provide activity to certain members of the student body. In practice these clubs worked primarily to reproduce masculinity and rape culture and to keep the ruling class tight and friendly. But by the ’60s, fraternities were dying: membership and interest were collapsing nationwide. Campus agitation for peace, Black Power, and feminism had radicalized student populations and diminished the popularity and image of these rich boys’ clubs. 
Frats sometimes even did battle with campus strikers and protesters, and by 1970, though absolute numbers were up, per capita frat participation was at an all-time low. Across the ’70s, right-wing graduates and former brothers began a concerted campaign to fund and strengthen fraternities at their alma maters to push back against campus radicalism and growing sexual and racial liberation. Decrepit frat houses were rebuilt, their images rebranded, and frat membership began growing again. As the wave of social upheaval receded in the late ’70s, these well-funded frats were left as a dominant social force on campus, and the hard-partying frat boy became a central object of culture.

In *Stranger Things*, the nerdy interests of the protagonists prove crucial to their ability to understand the monster from another dimension. The nerds are heroes

This manifested in movies like the 1978 mega-hit *National Lampoon’s Animal House*, where scrappy, slightly less attractive white freshmen aren’t let into their college’s most prestigious frat, and so join the rowdy, less rich one. Steering clear of frats altogether is not presented as plausible, and the movie stages campus conflict not as a question of social movements or broader societal tensions but as a battle between uptight social climbers and cool pranksters. The massive success of *Animal House* immediately inspired a number of network sitcoms and a dozen or so b-movie and Hollywood rip-offs. The threatened, slightly less attractive white male oppressed and opposed by a more mainstream, uptight, wealthy white man became a constant theme in the canonical youth films of ’80s Hollywood.

This quickly evolved into the nerd-jock dichotomy, which is central to all of John Hughes’s films, from *Sixteen Candles*’ geeky uncool Ted who gets in trouble with the jocks at the senior party to *The Breakfast Club*’s rapey “rebel” John and gun-toting “nerd” Brian, to *Weird Science*, whose nerd protagonists use their computer skills to build a female sex slave. Both *Sixteen Candles* and *Weird Science* are also shockingly racist, with the former’s horrifically stereotyped exchange student Long Duk Dong and the latter’s protagonist winning over the black denizens of a blues club by talking in pseudo-ebonic patois — a blackface accent he keeps up for an unbearable length of screen time. In these films the sympathetic nerd is simultaneously aligned with these racialized subjects while performing a comic racism that reproduces the real social exclusions structuring American society. This move attempts to racialize the nerd, by introducing his position as a new point on the racial hierarchy, one below confident white masculinity but still well above nonwhite people.

The picked-on nerds are central in films across the decade, from *Meatballs* to *The Goonies* to *Stand by Me* to the perennially bullied Marty McFly in the *Back to the Future* series. The outcast bullied white boy is *The Karate Kid* and his is *A Christmas Story*. This uncool kid, whose putative uncoolness never puts into question the audience’s sympathy, is the diegetic object of derision and disgust until, of course, he proves himself to be smarter/funnier/kinder/scrappier etc., at which point he gets the girl — to whom, of course, he was always entitled.

New Hollywood, the “American new wave” movement of the ’60s and ’70s, remains to many film historians the last golden age of serious Hollywood filmmaking.
Though often reactionary and appropriative, the films of the period were frequently dealing with real social problems: race, class, gender violence. Though our memories tend to collapse all of the social unrest and revolutionary fervor of “the ’60s” into the actual decade ending in 1969, the films of the ’70s remained exciting and socially conscious partly because social movements were still tearing shit up well into the ’70s. The Stonewall riots kicked off the gay rights movement in the last months of 1969, Kent State and the associated massive student strike were in 1970, while the Weather Underground, Black Liberation Army, George Jackson Brigade and other assorted guerrilla groups were at their height of activity in the first half of the ’70s. At the same time, the financial crises of 1972–73 led to deep recession and poverty across the country: The future was uncertain, mired in conflict and internal strife. This turmoil, as much as anything else, produced the innovative Hollywood cinema of the period, and films like *A Woman Under the Influence*, *Serpico*, *One Flew Over the Cuckoo’s Nest* and *Network* attempted to address that social conflict. People often lament how these sorts of films gave way to the miserable schlock output of the 1980s. This transformation tends to be traced in film history, not unreasonably, to the rise of the blockbuster — the historic profitability of *Jaws* (1975) and *Star Wars* (1977) pivoted studio attention toward big-budget spectacles with lowest-common-denominator subject matter. Now, of course, these films are subjects of much high-profile nostalgia. Netflix’s retro miniseries *Stranger Things*, for instance, looks back wistfully to the ’80s, re-enchanting the image of nerds as winning underdogs (rather than tyrannical bigots). *Stranger Things* does so in the face of reinvigorated political movements that advocate for actually oppressed people, including Black Lives Matter, the migrant justice movement, and growing trans and queer advocacy communities. So in *Stranger Things*, the nerdy interests of the protagonists prove crucial to their ability to recognize the sinister happenings of their world. Their openness to magic and their gee-whiz attitude toward scientific possibility allow them to understand the monster from another dimension and the psychic supergirl more readily than the adults around them. The boys play Dungeons & Dragons in the series’s opening scene and get crucial advice from a beloved A/V club adviser. They are mercilessly bullied for their nerdiness, but the bullies are barely even discussed: They are so naturalized that they are merely a minor plot point among others. What comes across more directly is that the nerds are heroes. This is then mirrored by the faux nerdiness of viewers, who can relate to these boys by tallying up all the nostalgic references. The films celebrated in *Stranger Things* as fun 1980s camp at the time were functioning as reactionary cultural retrenchment: They reflected Hollywood’s collusion in the Reaganite project of rationalizing and justifying a host of initiatives: privatization, deregulation, the offloading of risk to individuals by cutting safety nets and smashing labor unions.
These were explained as “decreasing the tax burden,” and “increasing individual responsibility,” while the nuclear family and “culture” were re-centered as the solution to and/or cause of all social problems. As Hollywood attention swung toward the white suburbs, its ideology followed in lockstep. Reagan’s main political move was to sweep social conflict under the rug and “unify” the population in a new “Morning in America” through an appeal to a coalition of whites concerned about “crime” and taxation. This was matched by a cultural move to replace Hollywood representation of social struggle (as idiosyncratic, individualistic, and bourgeois as these filmic depictions were) with narratives of intra-race, intra-gender interpersonal oppression. Hollywood in the 1980s worked hard to render social tensions invisible and project a safe and stable white suburban America (as opposed to urban hellscapes) whose travails were largely due to bureaucratic interference, whether through meddling high school principals like in *Ferris Bueller’s Day Off* or the tyrannical EPA agents in *Ghostbusters*. Meanwhile, social movements had largely lost their fight against state repression and internal exhaustion, with most militant activists in prison, in graves, or in hiding. Local and federal governments rolled back the victories made over decades of struggle, the Cold War was stoked to enforce ideological allegiance, AIDS decimated the queer movement and black communities faced intensified police persecution tied to drugs, which were suddenly flowing at greater and greater rates into the ghetto. Central to this program of making social conflict disappear, oddly enough, is the nerd. And, as Christopher T. Fan writes, no film shows this as clearly as the fraternity comedy that inaugurated the nerd as hero: *Revenge of the Nerds*. The plot of this 1984 film follows two computer-science freshmen at fictional Adams College. After they are kicked out of their dorms and forced to live in the gym by a group of displaced frat boys, they assemble a gang of assorted oddballs and rent a big house off-campus, living in a happy imitation of campus frat life. The frat guys hate this, so they prank and bully the nerds relentlessly. The nerds discover that the only way they can have the frat boys disciplined by an official university body is to be in a frat themselves and appeal to a fraternal council. Looking around for a national frat that doesn’t yet have a chapter at Adams, they find Lamda Lamda Lamda, an all-black fraternity. When they visit the president of the fraternity, he refuses to give them accreditation. Surveying the room of (mostly) white boys, he says, “I must tell you gentlemen, you have very little chance of becoming Tri-Lambs. I’m in a difficult situation here. I mean after all, you’re nerds.” The joke is that he didn’t say “white.” In the imaginary of the film, being a nerd replaces race as the key deciding factor for social inclusion, while black fraternities are situated as the purveyors of exclusion and bias — despite the fact that black fraternities (though often participating in the same patriarchal gender politics as white frats) have historically been a force of solidarity and safety at otherwise hostile universities.
Nonetheless, one of the nerds looks over the bylaws and sees that Lamda Lamda Lamda has to accept all new chapters on a trial basis. So the nerds now have a frat. On Adams’s campus, this sparks a prank war between the nerd frat and the prestigious frat that includes a panty raid on a sorority, the distribution of nude photos of a woman (made fair game by her association with one of the jock frat brothers), and a straight-up rape (played as comic), in which one of the nerds uses a costume to impersonate a sorority sister’s boyfriend and sleeps with her while wearing it. All these horrific acts toward women are “justified” by the bullying the nerds have ostensibly received for being nerds, and by the fact that the women aren’t interested in them — or at least, at first. Eventually the nerds’ rapey insouciance and smarts win their hearts, and they steal the jocks’ girlfriends. In the film’s final climactic scene, at a college-wide pep rally, the main nerd tries to speak about the bullying he faces but gets beaten down by the jocks. Just as all looks lost, black Tri-Lamb brothers from other colleges march in and line up in formation, arms crossed in front of the speaker platform in a clear echo of images of Black Panther rallies. The white college jocks thus held back, the national president of Lamda Lamda Lamda hands the nerd back the microphone, who in what amounts to an awful parody of Black Power speeches, announces, “I just wanted to say that I’m a nerd. And I’m here tonight to stand up for the rights of other nerds. All our lives we’ve been laughed at and made to feel inferior … Why? Because we’re smart? Because we look different. Well, we’re not. I’m a nerd, and I’m pretty proud of it.” Then, with the black fraternity president over his shoulder and the militant black frat brothers bordering the frame, the other nerd protagonist declares, “We have news for the beautiful people: There’s a lot more of us than there are of you.” It is the film’s emotional climax. And thus these rapists appropriate the accouterments of black power in the name of nerd liberation. This epitomizes the key ideological gesture in all the films named here: the replacement of actual categories of social struggle and oppression with the concept of the jock-nerd struggle. The jock is forever cool, the nerd perennially oppressed. And revenge is always on the table and always justified. In the nerd’s very DNA is a mystification of black, queer, and feminist struggle: As a social character, the nerd exists to deny the significance (if not the existence) of race, class, and gender oppression. The rise of the internet economy and the rise of nerdy cultural obsessiveness, collecting, and comics —not to mention the rise to power of the kids raised on *Revenge of the Nerds *and its 1980s ilk — means that the nerd is now fully ascendant. But perpetually aggrieved, these “nerds” believe other oppressed people should shut the fuck up and stop complaining, because they themselves didn’t complain! They got jobs! They got engineering degrees! They earned what they have and deserve what they take. As liberals sneer at the “ignorant” middle American white Trump voters, Trump’s most vocal young advocates — and the youthful base of American fascist movements going forward — are not the anti-intellectual culture warriors or megachurch moralists of the flyover states. 
Though the old cultural right still makes up much of Trump’s voting base, the intelligence-fetishizing “rationalists” of the new far right, keyboard warriors who love pedantic argument and rhetorical fallacies are the shock troops of the new fascism. These disgruntled nerds feel victimized by a thwarted meritocracy that has supposedly been torn down by SJWs and affirmative action. Rather than shoot-from-the-hip Christians oppressed by book-loving coastal elites, these nerds see themselves silenced by anti-intellectual politically correct censors, cool kids, and hipsters who fear true rational debate. Though sports culture continues to be a domain of intense patriarchal production and violence — rape jokes are just locker room talk, after all — these days jocks in the news are just as likely to be taking a knee against American racism in the image of Colin Kaepernick. The nerds, on the other hand, are shit-posting for a new American Reich. The nerd/jock distinction has always been a myth designed to hide social conflict and culturally re-center white male subjectivity. Now that the nerds have fully arrived, their revenge looks uglier than anything the jocks ever dreamed.
true
true
true
The myth of the bullied white outcast loner is helping fuel a fascist resurgence
2024-10-13 00:00:00
2016-11-16 00:00:00
https://reallifemag.com/…N-1-1024x683.jpg
article
reallifemag.com
Real Life
null
null
4,908,957
http://blog.fiestah.com/2012/12/11/fiestah-hearts-balanced-the-best-payment-solution-for-marketplaces/#
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
20,890,096
https://gist.github.com/seisvelas/952185983a625cd16e1ed4d9019cf0e5
Solving n queens in SQL
Seisvelas
8 queens (wiki explanation), as well as its variant n queens, was one of the first really 'hard' toy problems I did at Hola<Code /> (Hi people who look down at bootcamp alumni. I understand you. But still go fuck yourself). First I did it in Python with backtracking and a global store of solutions. Then I did it statelessly in Racket, then in Javascript, then back around to Python. Here is my Node.js version (stateless):

```
var {range} = require('range')

// A column is threatened if it matches an existing queen's column,
// or sits on a diagonal with one (row distance equals column distance).
function isThreatened(board, pos) {
  return board.includes(pos) || board.some((e, i, a) =>
    a.length - i === Math.abs(e - pos)
  )
}

function nqueens(n, board=[]) {
  // Columns that are still safe for the next row.
  let moves = range(n).filter(e=>
    !isThreatened(board, e)
  )
  return board.length === n - 1
    ? moves
    : moves.map(move=>
        nqueens(n, board.concat([move])).map(sol=>
          [move].concat(sol)
        )
      ).reduce((acc, cur) => acc.concat(cur), []);
}
```

All great fun. But now working as a data engineer, I find myself doing ridiculous amounts of SQL (and my fair share of Elixir of all things) even though I'm not that great at it. I think there are different tiers of SQL skill:

- *Web Developer*: SQL 'devs' at this tier can do basic CRUD operations and JOINs.
- *Data Analyst*: Analysts usually get involved with a wide range of SQL functions, create their own, use CTEs and Window Functions, and intimately understand different types of JOINs.
- *Database Developer*: Understands how different features' implementation affects performance, can write complex algorithms in SQL (assuming they understand the algorithm itself), uses recursive CTEs and functions, and masters the database's admin tooling and environment.

I am at level two, clawing my way to level three. I understand recursive CTEs well enough to make a trivial toy problem, for example:

```
-- CTE to ~~r3cVrs1v3ly~~ find the nth Fibonacci number
WITH RECURSIVE fib (a, b, n) AS (
    SELECT 0, 1, 1
  UNION ALL
    SELECT b, a+b, n+1 FROM fib
    WHERE n < 8  -- bound the recursion; without this the CTE never stops generating rows
)
SELECT a FROM fib WHERE n<=8;
```

But not well enough to competently use them in real life, or do any cool recursive backtracking of the type required for something like n queens. Admit it - n queens in SQL makes you go 'omg how would I even'. So I'm going to try to figure it out! I'm thinking maybe some kind of CROSS JOIN somehow? Oh, and if it's n queens I'll have to dynamically create the table as I go using something like crosstab, no? Okay I've got it! I am going to use an array, dynamically generated to have N items. Then each of those will go into a subarray for all the nonthreatened follow-up options and so on and so on :) Okay, so at this point I've got all the attempts, some of which made it to depth N. So my next step is to filter out all the non-depth-N attempts (this is CTE #2) THEN CTE #3 is me unnesting the arrays :) So the first CTE is the real work of the algorithm, and the subsequent 2 are just cleaning the data. Now, the array will be unnested so I can SELECT from all of it, cross joining with my new unnest (of the same old N array!) and finding nonthreatened ones, and keep going (?) like that until I get to depth N. Okay, I kind of have an idea of this architecture. I'm going to implement it tomorrow! Okay, so you *can* apply a function to the elements of an array, a basic necessity of my approach (although I could have written a function to do this myself of course).
It works like this: First, turn the array into a set using unnest:

```
> SELECT n FROM unnest(ARRAY[1.53224,0.23411234]) AS n;
     n
------------
    1.53224
 0.23411234
(2 rows)
```

Then, apply an expression to the column:

```
> SELECT ROUND(n, 2) FROM unnest(ARRAY[1.53224,0.23411234]) AS n;
 round
-------
  1.53
  0.23
(2 rows)
```

Finally, use array_agg to turn the set back into an array:

```
> SELECT array_agg(ROUND(n, 2)) FROM unnest(ARRAY[1.53224,0.23411234]) AS n;
  array_agg
-------------
 {1.53,0.23}
(1 row)
```

From How to apply a function to each element of an array column in Postgres? on StackOverflow. I can generate an array of the numbers 1..n with generate_series, also from StackOverflow:

```
select array_agg(gs.val order by gs.val)
from generate_series(1, 10) gs(val);
```

With that, I've got all the tools I need to generate permutations. Perhaps I should verify that by trying a simpler permute before jumping straight into generating queen positions. The above is more a set of building blocks that I intuit could be combinable into a solution. But I got a bit stuck. Now I'm thinking that an easier way to generate permutations in PostgreSQL might be to first do a CTE that just generates the numbers 1-10, then just keep cross joining on that CTE to get my combos. *NB: I am aware that there is a difference between combinations and permutations, but I never remember which is which. So I googled it and permutations care about order, combinations don't. Like the difference between a tuple and a set in Python. 123 is the same combination as 321, but not the same permutation.* Anyway, let's see how that goes! A simple combinatory CROSS JOIN (which is all cross joins I suppose) looks like this:

```
SELECT *
FROM unnest(ARRAY[1, 2, 3]) a
CROSS JOIN unnest(ARRAY['a', 'b', 'c']) b

 1 a
 1 b
 1 c
 2 a
 ...
```

Woo! Now it's just a matter of taking `*` and making it into an array. So that first line, `1 a`, becomes `{1, 'a'}`. Then, by this logic, instead of starting with an array, I should start with an array of arrays and just keep appending them. That way, next time, I do the same cross join and the result `{1, 'a'} 1` becomes `{1, 'a', 1}`, and so on until the array gets to length N. Also, a CROSS JOIN will give me all of those yummy results in less-yummy reverse. So for every `{1, 'a'} 1` I will also get a `1 {1, 'a'}`. I'm not sure how to filter out such scenarios. In any case, I tried to do this recursively like so:

```
WITH RECURSIVE permute (n) AS (
    SELECT ARRAY[a.*, b.*]
    FROM unnest(ARRAY[ARRAY[1], ARRAY[2], ARRAY[3]]) a
    CROSS JOIN unnest(ARRAY[1, 2, 3]) b
  UNION ALL
    SELECT ARRAY[a.n, ARRAY[b.*]]
    FROM unnest(ARRAY[1, 2, 3]) b
    CROSS JOIN permute a
)
SELECT * FROM permute
```

Which ran the initial select and ignored the part after UNION ALL :( So there goes that. But I didn't really understand why, so I asked on StackOverflow. Someone answered with this:

```
WITH RECURSIVE permute AS (
    SELECT ARRAY[v.n] as ar, 1 as lev
    FROM (VALUES (1), (2), (3)) v(n)
  UNION ALL
    SELECT p.ar || v.n, lev + 1
    FROM permute p
    CROSS JOIN (VALUES (1), (2), (3)) v(n)
    WHERE lev < 5
)
SELECT p.ar
FROM (SELECT p.*, MAX(lev) OVER () as max_lev
      FROM permute p
     ) p
WHERE lev = max_lev;
```

Which, to me, seems like it should be roughly equivalent. I still don't fully understand what about mine didn't work, but oh well.
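As a quick sanity check on that answer (just a sketch that wraps the same CTE in a count, nothing more), the recursive step keeps extending each array until `lev` hits 5, so the final level should contain every length-5 sequence over {1, 2, 3}, i.e. 3^5 = 243 rows:

```
-- Count what the StackOverflow CTE generates at the final level: expect 3^5 = 243.
WITH RECURSIVE permute AS (
    SELECT ARRAY[v.n] as ar, 1 as lev
    FROM (VALUES (1), (2), (3)) v(n)
  UNION ALL
    SELECT p.ar || v.n, lev + 1
    FROM permute p
    CROSS JOIN (VALUES (1), (2), (3)) v(n)
    WHERE lev < 5
)
SELECT count(*) FROM permute WHERE lev = 5;  -- 243
```

Filtering on `lev = 5` directly does the same job as the `max_lev` window trick when the depth is fixed.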
I have a version that does make sense, and hopefully the rest will click as I continue to master (and by 'master' in this case I mean 'roughly grasp the basics of') recursive CTEs. Okay, so now I can generate permutations! Woohoo! That's great because generating permutations is a basic part of the normal approach to solving nqueens, which goes like this: Generate a sequence of columns on a row. Put a queen on each. Now for each of those placements, try placing a queen on all of the next spots (permutation alert!) BUT - filter out the ones that aren't valid moves given the state of the board. Then pass that filtered set of possible solutions along to the next permute and permute on it, so on and so forth, passing only the filtered set along. I know I explained that badly, this is why I'm still a junior okay? Now, in SQL I'm thinking something like:

```
CREATE OR REPLACE FUNCTION nothreat(board)
-- check to make sure that given p.ar || v.n, v.n is a valid placement on p.ar
-- (I don't know how to do this but I feel like it will actually be great in SQL by rank
-- to get how far back we are hohooo)

WITH RECURSIVE 8 queens AS (
    SELECT board starts, 0
    FROM VALUES(1, 2, 3, 4, 5, 6, 7, 8)
  UNION ALL
    SELECT p.ar || v.n, lev + 1
    FROM permute p
    CROSS JOIN VALUES(1, 2, 3, 4, 5, 6, 7, 8) v(n)
    WHERE lev < 8 && nothreat(p.ar || v.n)
```

Okay that works but now I'm battling against nothreat!! But here's my solution. Have it take the board and the new pos as separate arguments, then use `row_number()` to get the row number of each pos, THEN given each of those check if the row number - my current row equals the absolute value of the value minus my current value. FUCK YEAH checked for diagonals. BUDA BOOM. Aannnd that approach totally works! Check it out:

```
-- Returns TRUE if placing `place` on the next row is attacked by any queen
-- already on `board` (same column, or same diagonal: row distance = column distance).
CREATE OR REPLACE FUNCTION threat(board INT[], place INT) RETURNS BOOLEAN AS $$
  SELECT TRUE IN (SELECT abs(new_row - row) = abs(new_col - col) OR new_col = col as threat
                  FROM (SELECT ARRAY_LENGTH(board, 1) + 1 as new_row,
                               place as new_col,
                               unnest as col,
                               ROW_NUMBER() OVER () as row
                        FROM unnest(board)) as b)
$$ LANGUAGE SQL;
```

Okay, so the final step is to replace 8 with n so we have n queens. I spent a good amount of time writing about how to do this (involving one attempt with crosstab to dynamically generate my VALUES statement) but I deleted my cookies, was logged out of GitHub, and lost everything. In the end I learned that VALUES is generating a separate row for everything that appears in its own parentheses? I think? Anyway I lost interest when I realized that generate_series can do the same thing but to an arbitrary value, so my end code goes like:

```
CREATE OR REPLACE FUNCTION threat(board INT[], place INT) RETURNS BOOLEAN AS $$
  SELECT TRUE IN (SELECT abs(new_row - row) = abs(new_col - col) OR new_col = col as threat
                  FROM (SELECT ARRAY_LENGTH(board, 1) + 1 as new_row,
                               place as new_col,
                               unnest as col,
                               ROW_NUMBER() OVER () as row
                        FROM unnest(board)) as b)
$$ LANGUAGE SQL;

-- Build boards row by row, only ever extending a board with a nonthreatened column,
-- and keep the boards that reached the full depth n.
CREATE OR REPLACE FUNCTION nqueens(n INT) RETURNS SETOF RECORD AS $$
  WITH RECURSIVE eight_queens AS (
      SELECT ARRAY[v] as ar, 1 as lev
      FROM generate_series(1, n) v
    UNION ALL
      SELECT p.ar || v, lev + 1
      FROM eight_queens p
      CROSS JOIN generate_series(1, n) v
      WHERE lev < n AND NOT threat(p.ar, v)
  )
  SELECT p.ar
  FROM (SELECT p.*, MAX(lev) OVER () as max_lev
        FROM eight_queens p
       ) p
  WHERE lev = max_lev
$$ LANGUAGE SQL;

SELECT nqueens(8);
```

WOOHOO! Yeah. NQUEENS!!! The crazy thing is, it can find all 9-queens solutions in less than 1 second.
That takes Python, Javascript, or Racket almost a minute using my backtracking algorithm. I think it has to do with how good Postgres is at combining tables (compared to, say, Python permuting arrays). Much in the same way that so many problems can be broken down well into a combination of maps, filters and reduces, I think the same is true with SQL: so many problems are just generating and relationally querying datasets.
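And for anyone who wants a single statement to paste straight into psql, here is a rough inline sketch of the same idea (a sketch, assuming Postgres 9.4+ for `WITH ORDINALITY`): the `threat()` function is folded into a `NOT EXISTS`, with `WITH ORDINALITY` supplying the row numbers that `row_number()` computed before. Counting the finished boards for n = 8 should give the classic 92 solutions:

```
-- Inline n-queens sketch for n = 8: board[i] holds the column of the queen on row i.
WITH RECURSIVE q AS (
    SELECT ARRAY[v] AS board, 1 AS lev
    FROM generate_series(1, 8) v
  UNION ALL
    SELECT q.board || v, q.lev + 1
    FROM q
    CROSS JOIN generate_series(1, 8) v
    WHERE q.lev < 8
      AND NOT EXISTS (
            SELECT 1
            FROM unnest(q.board) WITH ORDINALITY AS b(qcol, qrow)
            WHERE b.qcol = v                              -- same column as an existing queen
               OR abs(b.qcol - v) = (q.lev + 1) - b.qrow  -- same diagonal
          )
)
SELECT count(*) FROM q WHERE lev = 8;  -- 92 solutions on the classic 8x8 board
```

It should agree with `nqueens(8)` above, just without having to create any functions first.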
true
true
true
Solving n queens in SQL. GitHub Gist: instantly share code, notes, and snippets.
2024-10-13 00:00:00
2023-10-15 00:00:00
https://github.githubass…54fd7dc0713e.png
article
github.com
Gist
null
null
10,205,843
http://www.passitdown.tv/watch/travis-kalanick-tells-stephen-colbert-self-driving-cars-are-the-future/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,373,099
http://www.betalize.com
betalize.com
null
# betalize.com This premium domain name is available for purchase! Memorable, brandable and ready for your business Establish instant trust and credibility with customers Premium domain names appreciate in value over time Secure and immediate transfer via Escrow.com Find more betting domains for sale on GamblingInvest.com #### Buy safely and securely with **Escrow.com** When buying betalize.com, your transaction is securely processed by Escrow.com, a licensed escrow company.
true
true
true
betalize.com is available for purchase. Get in touch to discuss the possibilities!
2024-10-13 00:00:00
null
null
null
null
null
null
null
19,025,833
https://www.foehub.com/learn-to-top-5-self-learning-websites
null
null
null
true
true
false
null
2024-10-13 00:00:00
null
null
null
null
null
null
null
15,057,609
https://github.com/alexellis/docker-arm/issues/17
How can I help test Docker for RPi? · Issue #17 · alexellis/docker-arm
Alexellis
# How can I help test Docker for RPi? #17

## Comments

With rebranding to Docker CE, a new repository was introduced with a new docker-ce package (instead of docker-engine). Docker CE 17.06 works fine on Raspbian using the official instructions for Debian: https://docs.docker.com/engine/installation/linux/docker-ce/debian/#install-using-the-repository

| I have successfully tested these on Raspberry Pi 3 - some details below. Device: Raspberry Pi 3. ## High Level Instructions ## Prep Device ## Install Dependencies ## Checkout all necessary code and build ## Swap pre-installed docker version for built version ## Lets test docker itself first ## Test docker swarm with faas from alexellis. Start a docker swarm (single node is fine). Lets get alexisellis's faas code (to test docker swarm). Find your ip address. Then open up browser and hit http://:8080 to see the faas menu. ## Success!

| Thanks for compiling all the instructions and comments into one 👍

| @praseodym it really doesn't work fine which is the point of these issues. Please work through the issues and you'll see what's going wrong both on ARMv6 and with Swarm.

| alexellis commented: The last official Docker binaries for Raspberry Pi (Raspbian) were released in May at version 17.05. 17.05 is fully working including Docker Swarm and is available via `curl -sSL get.docker.com | sh`. Support was going to be dropped for Raspbian (and ARMv6) from 17.05 onwards, but fortunately the decision was re-considered. We need to test Docker 17.07 RC on Raspbian Jessie and Stretch on the ARMv6 (Pi Zero/B/B+) and ARMv7 (RPi 2/3) platforms. Unfortunately this may mean building from source which can take some time and can be tricky on a small device. Please setup an environment with instructions in #16 Then pick one or all of the following issues:
true
true
true
The last official Docker binaries for Raspberry Pi (Raspbian) were released in May at version 17.05. 17.05 is fully working including Docker Swarm and is available via curl -sSL get.docker.com | sh...
2024-10-13 00:00:00
2017-08-20 00:00:00
https://opengraph.githubassets.com/ca6a7be3928146babd2f6aa108f7ec5662d1d384d47d622f34b6b2c253e10d4c/alexellis/docker-arm/issues/17
object
github.com
GitHub
null
null
2,791,871
http://www.readwriteweb.com/start/2011/07/strategy-roundtable-for-entrep-6.php#.TijBbjKCeT4.hackernews
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
17,536,863
https://financialservices.house.gov/uploadedfiles/071818_mpt_memo.pdf
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
22,174,685
https://lowlevelbits.org/type-equality-in-llvm/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
8,420,089
http://delivery.acm.org/10.1145/360000/358561/p75-hoare.pdf
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,646,145
https://livecodestream.dev/post/2020-06-25-intro-to-docker-for-web-developers/
Juan Cruz Martinez - YouTuber, Blogger, Software Engineer & Author
null
Ever feel like you’re falling behind in this crazy, dynamic world of programming? Did you ever wonder, “How can I stay ahead when my job barely gives me any time to breathe, let alone learn new things?” Reflecting on my 20-year journey as a software developer, I realize some things I wish I had started doing sooner. These practices, habits, and mindsets would have accelerated my growth, expanded my knowledge, and enhanced my overall experience as a developer.
true
true
true
I’m Juan. I stream, blog, and make youtube videos about tech stuff. I love coding, I love React, and I love building shit!
2024-10-13 00:00:00
2023-11-09 00:00:00
null
website
jcmartinez.dev
Goto Lcs Ug
null
null
23,933,476
https://arstechnica.com/science/2020/07/meet-the-4-frontrunners-in-the-covid-19-vaccine-race/
Meet the 4 frontrunners in the COVID-19 vaccine race
Beth Mole
Researchers have now reported data from early (and small) clinical trials of four candidate COVID-19 vaccines. So far, the data is positive. The vaccines appear to be generally safe, and they spur immune responses against the novel coronavirus, SARS-CoV-2. But whether these immune responses are enough to protect people from infection and disease remains an important unknown. The four candidates are now headed to larger trials—phase III trials—that will put them to the ultimate test: can they protect people from COVID-19 and end this pandemic? ## The challenge While early trials looking at safety and immune response required dozens or hundreds of volunteers, researchers will now have to recruit tens of thousands. Ideally, volunteers will be in places that still have high levels of SARS-CoV-2 circulating. The more likely it is that volunteers will encounter the virus in their communities, the easier it is to extrapolate if a vaccine is protective. As such, researchers are planning to do a significant amount of testing in the US and other parts of the Americas, which have largely failed at controlling the pandemic. There has been much debate about the use of “human challenge trials,” in which researchers would give young, healthy volunteers at low risk from COVID-19 an experimental vaccine and then intentionally expose them to SARS-CoV-2 in controlled settings. This could *potentially* provide a clearer, faster answer on vaccine efficacy. It’s certainly an appealing idea given the catastrophic pandemic—and it's an idea that has gained traction in recent weeks. An advocacy group called 1Day Sooner has collected the names of more than 30,000 people willing to participate in such a trial, for instance. But experts remain divided on the idea. The main concern is that there is no “rescue” treatment for COVID-19 that can fully protect a trial volunteer from severe disease and death if an experimental vaccine fails. Though young, healthy people have *less* risk than older people and those with underlying health conditions, some still suffer severe disease and death from COVID-19—and it’s unclear why. Opponents also note that challenge trials may not be faster or necessary, given the high levels of disease spread in the US and elsewhere.
true
true
true
Safety and immune responses look good, but do these vaccines work?
2024-10-13 00:00:00
2020-07-23 00:00:00
https://cdn.arstechnica.…16514-scaled.jpg
article
arstechnica.com
Ars Technica
null
null
15,369,586
https://github.com/mptre/yank
GitHub - mptre/yank: Yank terminal output to clipboard
Mptre
Yank terminal output to clipboard. The yank(1) utility reads input from `stdin` and displays a selection interface that allows a field to be selected and copied to the clipboard. Fields are either recognized by a regular expression using the `-g` option or by splitting the input on a delimiter sequence using the `-d` option. Using the arrow keys will move the selected field. The interface supports several Emacs- and Vi-like key bindings; consult the man page for further reference. Pressing the return key will invoke the yank command and write the selected field to its `stdin`. The yank command defaults to xsel(1) but could be anything that accepts input on `stdin`. When invoking yank, everything supplied after the `--` option will be used as the yank command, see examples below. Others including myself consider it a cache miss when resorting to using the mouse. Copying output from the terminal is still one of the few cases where I still use the mouse. Several terminal multiplexers solve this issue, however I don't want to be required to use a multiplexer but instead use a terminal-agnostic solution.

- Yank an environment variable key or value: `$ env | yank -d =`
- Yank a field from a CSV file: `$ yank -d \", <file.csv`
- Yank a whole line using the `-l` option: `$ make 2>&1 | yank -l`
- If `stdout` is not a terminal the selected field will be written to `stdout` and exit without invoking the yank command. Kill the selected PID: `$ ps ux | yank -g [0-9]+ | xargs kill`
- Yank the selected field to the clipboard as opposed to the default primary clipboard: `$ yank -- xsel -b`

- Arch Linux: `$ pacman -S yank`
- Debian/Ubuntu: `$ sudo apt-get install yank` (the binary is installed at `/usr/bin/yank-cli` due to a naming conflict)
- Fedora, versions 24/25/26/Rawhide: `$ sudo dnf install yank` (the binary is installed at `/usr/bin/yank-cli` due to a naming conflict; man-pages are available as both `yank` and `yank-cli`)
- Nix: `$ nix-env -i yank`
- openSUSE: `$ zypper install yank`
- Homebrew: `$ brew install yank`
- MacPorts: `$ sudo port install yank`
- FreeBSD: `$ pkg install yank`
- OpenBSD: `$ pkg_add yank`

The install directory defaults to `/usr/local`: `$ make install` Change the install directory using the `PREFIX` variable: `$ make PREFIX=DIR install` The default yank command can be defined using the `YANKCMD` variable. For instance, macOS users would prefer `pbcopy(1)`: `$ make YANKCMD=pbcopy` Copyright (c) 2015-2022 Anton Lindqvist. Distributed under the MIT license.
true
true
true
Yank terminal output to clipboard. Contribute to mptre/yank development by creating an account on GitHub.
2024-10-13 00:00:00
2015-08-24 00:00:00
https://opengraph.githubassets.com/82abfed498b9b31ddfdbacc1913c13f699f2dec0f189ae704073249c85132848/mptre/yank
object
github.com
GitHub
null
null
3,375,252
http://www.thedominoproject.com/2011/12/how-much-should-an-ebook-cost.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,197,194
http://arstechnica.com/information-technology/2016/06/here-comes-flatpak-a-competitor-to-ubuntus-cross-platform-linux-apps/
Linux’s RPM/deb split could be replaced by Flatpak vs. snap
Jon Brodkin
Linux developers are going to have more than one choice for building secure, cross-distribution applications. Ubuntu's "snap" applications recently went cross-platform, having been ported to other Linux distros including Debian, Arch, Fedora, and Gentoo. The goal is to simplify packaging of applications. Instead of building a deb package for Ubuntu and an RPM for Fedora, a developer could package the application as a snap and have it installed on just about any Linux distribution. But Linux is always about choice, and snap isn't the only contender to replace traditional packaging systems. Today, the developers of Flatpak (previously called xdg-app) announced general availability for several major Linux distributions, with a pointer to instructions for installing on Arch, Debian, Fedora, Mageia, and Ubuntu. Though Flatpak has multiple developers from the GNOME community, "Flatpak is the brainchild of Alexander Larsson, Principal Software Engineer at Red Hat," the announcement said. The technology "allows application developers to build against a series of stable platforms (known as runtimes), as well as to bundle libraries directly within their applications. Flatpak is also standards compliant, offering support for the Open Container Initiative specification." Like Ubuntu's snaps, Flatpak developers are promising that apps packaged in the new format will be isolated from each other and from critical parts of the operating system, improving security. "Flatpak apps are sandboxed. From within the sandbox, the only things the app can 'see' are itself and a limited set of libraries and operating system interfaces. This effectively isolates apps from each other as well as from the host system and makes it much harder for applications to steal user data or exploit one another," the announcement said.
true
true
true
Red Hat developer’s Flatpak installs apps on Fedora, Ubuntu, and other distros.
2024-10-13 00:00:00
2016-06-21 00:00:00
https://cdn.arstechnica.…06/standards.png
article
arstechnica.com
Ars Technica
null
null
11,547,492
http://www.startcon.com/blog/2016/5-aussie-healthtech-startups-that-are-doing-something-cool
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,568,606
http://marcusblankenship.com/blog/2014/4/10/client-porn
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,631,511
https://www.smithsonianmag.com/innovation/founding-fathers-moey-problems-bitcoin-180968393/?no-ist
What the Founding Fathers' Money Problems Can Teach Us About Bitcoin
Smithsonian Magazine; Clive Thompson; Kotryna Zukauskaite
# What the Founding Fathers’ Money Problems Can Teach Us About Bitcoin The challenges faced by the likes of Ben Franklin have a number of parallels to today’s cryptocurrency boom If you walk into the Ketchup Premium Burger Bar in Las Vegas, tucked inside you’ll find a strange icon of today’s economy: a Coinsource ATM. Put in a few American dollars, and the ATM will quickly exchange them for Bitcoin, the newfangled digital currency, which it will place in your “digital wallet.” Want to do the reverse transaction? No problem: you can sell Bitcoin and withdraw U.S. greenbacks. Bitcoin, as you may have heard, is poised to overturn the world of currency. That’s because it’s a form of digital cash that adherents regard as unusually robust. Bitcoin is managed by a community of thousands of “miners” and "nodes" worldwide that are running the Bitcoin software, each of them recording every single transaction that takes place. This makes Bitcoin transactions extremely hard to fake: If I send you a Bitcoin, all those Bitcoin nodes record that transaction, so you cannot later claim you *didn’t* receive it. Similarly, I can prove I own 100 Bitcoin because the Bitcoin network affirms this. It’s the first global currency, in other words, that people feel secure enough to own—yet isn’t controlled by any government. And it’s making some Bitcoin holders massively wealthy—at least on paper. “We got in early, jumped with both feet,” says Cameron Winklevoss, a high-tech entrepreneur who, with his twin brother, Tyler, bought millions of dollars of Bitcoin when a single digital coin was worth under $10. By the end of 2017, Bitcoin had soared to almost $20,000 per coin, making the Winklevosses worth $1.3 billion in the virtual dough. But Bitcoin is also wildly volatile: Mere weeks later, its value plunged in half—shaving hundreds of millions off their fortune. It hasn’t fazed them. The Winklevoss twins, who won $65 million from Facebook in a lawsuit claiming the business was their idea, believe Bitcoin is nothing less than the next incarnation of global money. “This was something that was previously not thought possible,” Cameron says. “They thought that we need central banks, we need Visa, to validate transactions.” But Bitcoin shows that a community of people can set up a currency system themselves. It’s why Bitcoin’s earliest and most ardent fans were libertarians and anarchists who deeply distrusted government control of money. Now they had their own, under no single person or entity’s control! Nor is Bitcoin alone. Its rise has set up an explosion of similar “cryptocurrencies”—companies and individuals who take open-source blockchain code freely available online and use it to issue their own “alt-coin.” There’s Litecoin and Ether; there are start-ups that raised tens of millions in just a few hours by issuing a coin avidly bought by fans who hope it, too, will pop like Bitcoin, making them all instant cryptomillionaires. Though it’s hard to fix a total, according to CoinMarketCap there appear to be more than 1,500 alt-coins in existence, a global ocean of digital cash worth likely hundreds of billions. Indeed, the pace of coin-issuing is so frantic that alarmed critics argue they’re nothing but Ponzi schemes—you create a coin, talk it up and when it’s worth a bunch, sell it, leaving the value to crash for the Johnny-come-lately suckers. So which is it? Are Bitcoin and the other alt-coins serious currencies? Can you trust something that’s summoned into being, without a government backing it? 
As it turns out, this is precisely the conundrum that early Americans faced. They, too, needed to create their own currencies—and find some way to get people to trust in the scheme. ********** Currencies are thousands of years old. For nearly as long as we’ve been trading goods, we’ve wanted some totem we can use to represent value. Ancient Mesopotamians used ingots of silver as far back as 3,000 B.C. Later Europe, too, adopted metal coins because they satisfied three things that money can do: They’re a “store of value,” a “medium of exchange” and a way of establishing a price for something. Without a currency, an economy can’t easily function, because it’s too hard to get everything you need via barter. The first American Colonists faced a problem: They didn’t have enough currency. At first, the Colonists bought far more from Britain than they sold to it, so pretty soon the Colonists had no liquidity at all. “The mind-set was, wealth should flow from the Colonies to Britain,” says Jack Weatherford, author of *The History of Money*. So the Colonists fashioned their own. They used tobacco, rice or Native American wampum—lavish belts of beaded shells—as a temporary currency. They also used the Spanish dollar, a silver coin that was, at the time, the most widely used currency worldwide. (The terminology stuck: This is why the government later decided to call its currency the “dollar” rather than the “pound.”) A young Ben Franklin decided that the United States needed more. He’d noticed that whenever a town got an infusion of foreign currency, business activity suddenly boomed—because merchants had a trustworthy, liquid way to do business. Money had a magical quality: “It is Cloth to him that wants Cloth, and Corn to those that want Corn,” he wrote, in a pamphlet urging the Colonies to print their own paper money. War is what first pushed the Colonies to print en masse. Massachusetts sold notes to the public to fund its battles in Canada in 1690, promising that citizens could later use that money to pay their taxes. Congress followed suit by printing fully $200 million in “Continental” dollars to fund its expensive revolution against Britain. Soon, though, disaster loomed: As Congress printed more and more bills, it triggered catastrophic inflation. By the end of the war, the market drove the value of a single Continental to less than a penny. All those citizens who’d traded their goods for dollars had, in effect, just transferred that wealth to the government—which had spent it on a war. “That’s where they got the phrase, ‘not worth a Continental,’” says Sharon Ann Murphy, professor of history at Providence College and author of *Other People’s Money*. Some thought it was a clever and defensible use of money-printing. “We are rich by a contrivance of our own,” as Thomas Paine wrote in 1778. Government had discovered that printing dough could get them through a rough patch. But many Americans felt burned and deeply distrustful of government-issued bucks. Farmers and merchants were less happy with fiat currency—not backed up by silver or gold—because of how the often-inevitable inflation wreaked havoc with their trade. This tension went all the way to the drafting of the Constitution. James Madison argued “nothing but evil” could come from “imaginary money.” If they were going to have currency, it should only be silver and gold coins—things that had real, inherent value. 
John Adams hotly declared that every dollar of printed, fiat money was “a cheat upon somebody.” As a result, the Constitution struck a compromise: Officially, it let the federal government mint only coins, forcing it to tether its currency to real-world value. As for the states? Well, it was OK for financial institutions in the states to issue “bank notes.” Those were essentially IOU’s: a bill that you could later redeem for real money. As it turns out, that loophole produced an avalanche of paper money. In the years after the Revolution, banks and governments across the U.S. began avidly issuing bank notes, which were used more or less as everyday money. Visually, the bills tried to create a sense of trustworthiness—and Americanness. The iconography commonly used eagles, including one Pennsylvania bill that showed an eagle eating the liver of Prometheus, which stood in for old Britain. They showed scenes of farming and households. The goal was to look soothing and familiar. “You had depictions of agricultural life, of domestic life. You get portraits literally of everyday people. You got depictions of women, which you don’t have today on federal bills!” says Ellen Feingold, curator of the national numismatic collection at the Smithsonian’s National Museum of American History. “You got pictures of someone’s dog.” All told, there were probably 9,000 different bills issued by 1,600 different banks. But figuring out which bill to trust was hard—a daily calculation for the average American. If you lived in New Hampshire and someone handed you a $5 bill issued by a Pennsylvania bank, should you trust it? Maybe you’d only give someone $4 worth of New Hampshire money for it, because, well, to truly redeem that bill for gold or coins you’d need to travel to Pennsylvania. The farther the provenance of the bill, the less it might be worth. “As crazy as this sounds, this was normal for Americans,” says Steven Mihm, associate professor of history at the University of Georgia and author of *A Nation of Counterfeiters*. In a very real way, Americans thought daily about the philosophy of currency—what makes a bill worth something?—in a way that few modern Americans do. It makes them far more similar to those digital pioneers today, pondering the possible value of their obscure alt-coins. ********** One thing that made it even harder to trust currency was the rampant counterfeiting. Creating fake money was so easy—and so profitable—that all the best engravers worked for the criminals. Newspapers would print columns warning readers about the latest forgeries. Yet Americans mostly shrugged and used the counterfeit bills. After all, so long as the person you were doing business with was willing to take the bill—well, why not? Fakes might be the only currency available. Keeping business moving along briskly was more important. “Using counterfeits was a typical thing in merchants, and bars. Especially in a bar! You get a counterfeit bill and you put it back in circulation with the next inebriated customer,” says Mihm. Rather than copy existing bills, some counterfeiters would simply create their own, from an imaginary bank in a far-off U.S. state, and put it into circulation. Because how was anyone going to know that bank didn’t exist? Banks themselves caused trouble. 
A nefarious banker would print bills of credit, sell them, then close up shop and steal all the wealth: “wildcatting.” A rumor that a healthy bank was in trouble would produce a “bank run”—where customers rushed to withdraw all their money in hard, real, metal coins, so many at once that the bank wouldn’t actually have the coins on hand. A bank run could destroy a local economy by making the local currency worthless. Banks, and bankers, thus became hated loci of power. Yet the biggest currency crisis was still to come: the Civil War. To pay for the war, each side printed fantastic amounts of dough. Up North, the Union minted “greenbacks.” One cartoon mocked politicians of the time, with a printer cranking out bills while complaining: “These are the greediest fellows I ever saw...With all my exertions I cant [sic] satisfy their pocket, though I keep the Mill going day and night.” When the North won the war, the greenback retained a decent amount of value. But the South under Jefferson Davis had printed a ton of its own currency—the “grayback”—and when it lost the war, the bills became instantly worthless. White Southerners were thus economically ruined not only by the freeing of their previously unpaid-for source of labor—the slaves—but by the collapse of their currency. In the 1860s, the federal government passed laws establishing a national banking system. They also established the Secret Service—not to protect the president, but to fight counterfeiters. And by the late 19th century, you could wander the nation spending the American dollar more or less confidently in any state. ********** Bitcoin—and today’s other cryptocurrencies—solve old problems of currency and create new limits on how it’s used. They cannot easily be counterfeited. The “blockchain”—that accounting of every transaction, copied over and over again in thousands of computers worldwide—makes falsifying a transaction unbelievably impractical. Many cryptocurrencies are also created to have a finite number of coins, so they can’t be devalued, producing runaway inflation. (The code for Bitcoin allows for only 21 million to be made.) So no government could pay for its military ventures by arbitrarily minting more Bitcoin. This is precisely what the libertarian fans of the coin intended: to create a currency outside government control. When Satoshi Nakamoto, the secretive, pseudonymous creator of Bitcoin released it in 2009, he wrote an essay savagely critiquing the way politicians print money: “The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust.” Still, observers aren’t sure a currency can work when it’s backed only by the faith of people participating in it. “Historically, currencies require either that it’s based in something real, like gold, or it’s based in power, the power of the state,” as Weatherford says. If for some reason the community of people who believe in Bitcoin were to falter, its value could dissolve overnight. Some cryptocurrency pioneers think alt-coins are thus more like penny stocks—ones that get talked up by shysters to lure in naive investors, who get fleeced. “I want a worse word than ‘speculation,’” says Billy Markus, a programmer who created a joke alt-coin called “Dogecoin,” only to watch in horror as hucksters began actively bidding it up. 
“It’s like gambling, but gambling with a very standard kind of predictable human emotions.” Mihm thinks the rush toward Bitcoin illustrates that the mainstream ultimately agrees, in some way, with the libertarians and anarchists of alt-coins. People don’t trust banks and governments. “The cryptocurrencies are an interesting canary in the coal mine, showing a deeper anxiety about the future of government-issued currencies,” he says. On the other hand, it’s possible that mainstream finance may domesticate the various alt-coins—by adopting them, and turning them into instruments of regular government-controlled economies. As Cameron Winklevoss points out, major banks and investment houses are creating their own cryptocurrencies, or setting up “exchanges” that let people trade cryptocurrencies. (He and his twin set up one such exchange themselves, Gemini.) “It’s playing out, it’s happening,” he notes. “All the major financial institutions have working groups looking at the tech.” He likens blockchain technology to the early days of the internet. “People thought, why do I need this? Then a few years later they’re like, I can’t live without my iPhone, without my Google, without my Netflix.” Or, one day soon, without your Bitcoin ATM. *Editor’s note: an earlier version of this story conflated Bitcoin mining and nodes. Mining validates Bitcoin transactions; nodes record Bitcoin transactions.* **A Note to our Readers** Smithsonian magazine participates in affiliate link advertising programs. If you purchase an item through these links, we receive a commission.
true
true
true
The challenges faced by the likes of Ben Franklin have a number of parallels to today’s cryptocurrency boom
2024-10-13 00:00:00
2018-03-20 00:00:00
https://th-thumbnailer.c…_prologue-wr.jpg
article
smithsonianmag.com
Smithsonian Magazine
null
null
4,051,631
http://www.apivoice.com/2012/05/31/judge-alsup-is-a-do-er/index.php
apivoice.com
null
apivoice.com Buy this domain The owner of apivoice.com is offering it for sale for an asking price of 1299 USD!
true
true
true
This website is for sale! apivoice.com is your first and best source for all of the information you’re looking for. From general topics to more of what you would expect to find here, apivoice.com has it all. We hope you find what you are searching for!
2024-10-13 00:00:00
null
null
null
null
apivoice.com - This website is for sale!
null
null
7,783,286
http://mybiasedcoin.blogspot.com/2014/05/whats-important-in-algoirthms.html
What's Important in Algoirthms
Michael Mitzenmacher
**Thus I think the present state of research in algorithm design misunderstands the true nature of efficiency. The literature exhibits a dangerous trend in contemporary views of what deserves to be published.** But here's the whole question/answer for context. **15. Robert Tarjan, Princeton:** What do you see as the most promising directions for future work in algorithm design and analysis? What interesting and important open problems do you see? **Don Knuth:** My current draft about satisfiability already mentions 25 research problems, most of which are not yet well known to the theory community. Hence many of them might well be answered before Volume 4B is ready. Open problems pop up everywhere and often. But your question is, of course, really intended to be much more general. In general I'm looking for more focus on algorithms that work fast with respect to problems whose size, *n*, is *feasible*. Most of today's literature is devoted to algorithms that are asymptotically great, but they are helpful only when *n* exceeds the size of the universe. In one sense such literature makes my life easier, because I don't have to discuss those methods in TAOCP. I'm emphatically *not* against pure research, which significantly sharpens our abilities to deal with practical problems and which is interesting in its own right. So I sometimes play asymptotic games. But I sure wouldn't mind seeing a lot more algorithms that I could also use. For instance, I've been reading about algorithms that decide whether or not a given graph *G* belongs to a certain class. Is *G*, say, chordal? You and others discovered some great algorithms for the chordality and minimum fillin problems, early on, and an enormous number of extremely ingenious procedures have subsequently been developed for characterizing the graphs of other classes. But I've been surprised to discover that very few of these newer algorithms have actually been implemented. They exist only on paper, and often with details only sketched. Two years ago I needed an algorithm to decide whether *G* is a so-called comparability graph, and was disappointed by what had been published. I believe that all of the supposedly "most efficient" algorithms for that problem are too complicated to be trustworthy, even if I had a year to implement one of them. Thus I think the present state of research in algorithm design misunderstands the true nature of efficiency. The literature exhibits a dangerous trend in contemporary views of what deserves to be published. Another issue, when we come down to earth, is the efficiency of algorithms on real computers. As part of the Stanford GraphBase project I implemented four algorithms to compute minimum spanning trees of graphs, one of which was the very pretty method that you developed with Cheriton and Karp. Although I was expecting your method to be the winner, because it examines much of the data only half as often as the others, it actually came out two to three times *worse* than Kruskal's venerable method. Part of the reason was poor cache interaction, but the main cause was a large constant factor hidden by *O* notation.

## 8 comments:

I think that both theory and practical algorithms should be encouraged, for different reasons. The main problem is in the middle: a new algorithm that is simpler than all already-published algorithms but not at the level of being implementable faces several problems when trying to publish it. Thanks for the link to the Knuth Q&A - some very interesting material there.
The topic of the missing implementations of published algorithms has been discussed before (on this blog I think). It seems to me that this is the theoretical CS version of the non-reproducibility crisis currently occurring in disciplines such as psychology. Have you seen any progress over the last 5 years? Maybe giving DOIs for code will allow people to get citation credit and go some way to improving the situation. It seems that the reward system of scholarship is badly broken, when obscure "improvements" of algorithms that could never be implemented get points, but careful comparison of actual running times of useful implementations of more important algorithms does not. Math envy is a major problem in many disciplines! this is a great high voted question on that "powerful algorithms too complex to implement" & rjlipton has a great blog on the subject also re "galactic algorithms". gotta blog on this sometime & include your blog! It's important to tell that 'Algoirthms' is incorrect. Math envy or not, progress on fundamental and difficult problems in algorithms and theoretical computer science is not going to happen by carefully implementing existing algorithms. Pragmatic considerations are of course important but academic TCS has to advance the frontiers. Anon #5: I think I disagree with pretty much everything in your comment. Since that's rare, I'll respond. "...progress on fundamental and difficult problems in algorithms and theoretical computer science is not going to happen by carefully implementing existing algorithms" I just don't think this is correct. Often implementing existing algorithms is what gives you insight allowing for improvements -- including significant theoretical improvements. I could spout examples from myself and students, but I'm sure you can find some of your own. The relationship between theory and practice is a 2-way street. Sure, implementation isn't the ONLY way to make progress, but it's also a way that should not be ignored. "Pragmatic considerations are of course important but academic TCS has to advance the frontiers." In specifically algorithms, advancing the frontiers should definitely include "pragmatic considerations". "Use" has always been at least a criteria for successful algorithmic work, making pragmatic considerations important. "Pragmatic considerations are of course important but academic TCS has to advance the frontiers." how did TCS get to this point... Anonymous X: I was talking about Algoirthms. What are Algorithms? (Just kidding.) Anonymous June 1: I'm not surprised by the "pragmatic" comment. I think TCS should have room for a wide spectrum of researchers, certainly including those that aren't interested in practical implications at all. The "T" is for "Theoretical". But I think I'm on record as noting that TCS has, culturally, tilted pretty far in the "non-practical" direction, so that a comment like Anon May 26 is both unsurprising and, perhaps to many, uncontroversial. Post a Comment
true
true
true
I saw this interesting article up on 20 questions with Don Knuth , worth reading just for the fun of it. But the following question on work...
2024-10-13 00:00:00
2014-05-21 00:00:00
null
null
blogspot.com
mybiasedcoin.blogspot.com
null
null
9,602,176
http://famer.github.io/device.css/
null
null
### About

Device.css is a project that helps you display app screenshots in device frames (phones, tablets and screens) with a pure CSS file sized only 29 KB. The devices are scalable and built with CSS3 styles, so the vector graphics look sharp at any resolution.

### Example

```
<link rel="stylesheet" href="device.css" type="text/css">
<div class="iphone-6 silver"></div>
```

### Results

Check out a more extensive demo here.

### Usage

- Include `device.css` or generate your custom subset of devices: `<link rel="stylesheet" type="text/css" href="css/device.css">`
- Add a model name from the list to your div's classes: `<div class="iphone-6 silver"></div>`
- You can scale the device using the font-size style, with a font size of 12px being 100% screen size, 6px being 50%, and so on: `<div class="iphone-6 silver" style="font-size:6px"></div>`

### Supported devices and options

- iPhone 6: base CSS class **iphone-6**; required color scheme class **space-gray** or **silver**; optional class **landscape** for horizontal orientation. Example: `<div class="iphone-6 space-gray landscape"></div>`
- iPad Mini 3: base CSS class **ipad-mini-3**; required color scheme class **space-gray**, **silver** or **gold**
- Macbook Air: base CSS class **macbook-air**
- iMac: base CSS class **imac**

### Requirements

Safari 6.1+, Chrome 26.0+, Opera 15.0, Firefox 16.0+, IE 10+; iOS Safari 7.1+; Android Browser 4.4+, Chrome for Android 42+

### Sources

Sources are available in the repository famer/device.css.

### Issues

If you find any bugs or issues, feel free to submit a report in the issues section.

### License

Device.css and its sources are released under the MIT license.

### Copyright

Devices are created with CSS magic by Alex Inkin; the project is managed by Timur Tatarshaov. You are free to use it anywhere according to the MIT license. © 2015
true
true
true
Device.css is a tool that helps you to display device on page with pure CSS for web designers
2024-10-13 00:00:00
2015-01-01 00:00:00
/device.css/identity/image.png
null
github.io
famer.github.io
null
null
37,780,724
https://segment.com/blog/analytics-react-native-2-blog/
How to build a holistic tracking implementation with Analytics React Native 2.0
null
# How to build a holistic tracking implementation with Analytics React Native 2.0 Analytics React Native 2.0 makes it easier than ever to customize your tracking implementation to meet the bespoke requirements of your app. Analytics React Native 2.0 makes it easier than ever to customize your tracking implementation to meet the bespoke requirements of your app. Hi I’m Alan, a software engineer at Twilio Segment. I’ll be walking you through a complete implementation of Analytics for React Native using this app. This is an example of a real-world E-Commerce application that sells skateboard decks. We decided to use real products designed and produced by our beloved colleague, Brandon Sneed. In this app, you can walk through a typical E-Commerce purchase flow complete with a tracking implementation that follows Segment’s E-Commerce spec. If you would like to follow along, you can use the starter app found here. If you would like to peruse or follow along with a completed implementation, checkout the finished app. We will start with a functional E-Commerce app built in React Native that does not have tracking implemented. From there, we will walk through a complete tracking implementation with the help of Analytics React Native 2.0, Protocols, and a Tracking Plan. The new architecture implemented in Analytics React Native 2.0 will make it possible for us to add IDFA and advertisingId collection consent without incorporating another third-party dependency and incorporate an analytics hook to track events throughout the application. Finally, with the help of the Firebase Destination Plugin, we will send the events tracked to Firebase. I have been with Segment for a little over three years now. I have helped dozens of companies implement, improve, and sometimes completely reimagine their mobile analytics implementations. In that time, I have seen just how quickly things can change in the mobile analytics landscape and how important it is to have a comprehensive understanding of your tracking goals/needs before you begin your implementation. To that end, the following series of blog posts will show you how you can leverage Segment’s new mobile architecture by applying it to a few relatively standard, real-world use cases in a React Native application. Over the course of the past year we have completely reimagined our mobile libraries. There were a number of reasons for this but the biggest one was customizability. Every customer has their own bespoke needs and we strongly believe that our analytics libraries should reflect and support this reality. With this in mind, we decided to take a *flywheel* approach to the new libraries. The idea is that by keeping the core library small and lightweight, customers can “plugin” to it when and where they need. To accomplish this we decided to break down the lifecycle of an analytics event into simple components. You can think of this as a timeline of when an analytics event is created until the time it leaves a device. We came up with the following: **Before:** What happens before the event is created. For instance, has consent been given by the user? **Enrichment:** Data that needs to be added to the event after it has been invoked. An example of this would be adding contextual data if certain criteria have been met. **Destination:** The destination of the event, either Segment or a third party SDK (Device Mode integrations). **After:** What happens after the event lifecycle. This includes clean up operations and things of that nature. 
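To make the timeline concrete, here is a minimal sketch of an enrichment plugin written against this plugin architecture. The `Plugin` base class, `PluginType` enum and `SegmentEvent` type follow the library's plugin API as I understand it, and the `appTheme` context value is purely illustrative.

```tsx
// A sketch of an "enrichment" plugin: it runs after an event is created
// and adds contextual data before the event continues down the timeline.
// The appTheme property is a made-up example of contextual enrichment.
import { Plugin, PluginType, SegmentEvent } from '@segment/analytics-react-native';

export class ThemeEnrichmentPlugin extends Plugin {
  // Declares where in the event timeline this plugin runs.
  type = PluginType.enrichment;

  // Called for every event; whatever is returned continues on to the
  // destination plugins (returning nothing would drop the event).
  execute(event: SegmentEvent) {
    event.context = {
      ...event.context,
      appTheme: 'dark', // illustrative contextual data
    };
    return event;
  }
}
```

A plugin like this would be registered on the client with something like `segmentClient.add({ plugin: new ThemeEnrichmentPlugin() })` once the client exists; a setup sketch follows further down.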
Approaching an analytics event this way makes it much easier to conceptualize your entire implementation and gives you more control over what and when you track. This has become increasingly important as Apple and Google continue to restrict tracking anything useful without a user’s consent. To get started, make sure you have cloned a copy of the __segment/analytics-react-native-ecommerce-samples repository__ and have run the necessary build/startup commands: Before we do anything else with this app let’s set up a new React Native Source in your Segment workspace. Go to Connections -> Sources -> `Add Source` Search for React Native Fill out the necessary fields and `Add Source` You should now have a new Source! This is technically all you need to get started with your implementation. However, as mentioned earlier, the best analytics implementations are planned. While this may seem like a big lift in the beginning, it will make your life much easier as your analytics needs start to scale. To do this, we’re going to add a Tracking Plan*. *Protocols and Tracking Plans are a Business Tier add-on. However, the steps above are still a helpful exercise to make a plan with Google Sheets or Excel for your own tracking purposes if you are not currently Business Tier. If you don’t have access to Protocols or already know how to implement a Tracking Plan, feel free to skip this section. **The full Segment Shop Tracking Plan can be found here in JSON format,** which can be used to create a tracking plan via our API docs**.** Start by selecting `Schema` in your React Native Source Next, select `Connect Tracking Plan` Finally, select `New Tracking Plan` Before we begin adding events to our tracking plan, let’s take a step back and think about what we need. __Segment’s eCommerce Spec__ is a great place to start thinking about the events we’d like to track and the properties we need to associate with a particular event. You should also consider where you need to send event data as different tools have different requirements. For the sake of this demo, we are ultimately going to send data to Firebase and Braze. We can start by running the app. You should now have successfully built the app in the emulator of your choice: The home page of our Segment Shop is simply a list of skateboards designed by one of our very own engineers, Brandon Sneed. This fits perfectly with the __Product List Viewed__ event in the eCommerce Spec so let’s add that to our tracking plan. Once you’ve added the event and its associated properties, click `Add Event` . We are going to do this for the rest of the events we’ll need for our shop. Now that we have a complete tracking plan for our app, we can start to implement Segment’s Analytics for React Native library. To get started, add the following packages: You may want to rebuild your app to make sure everything is compiling correctly. Next, in `App.tsx` you will need to import the library and set up the client configuration. The Segment Client has a number of useful configuration options. You can find more about those here. For this project, we’ll use the following: The last thing we need to do in `App.tsx` is wrap the app in the analytics provider. This uses the __Context API__ and will allow access to the analytics client anywhere in the application. Rebuild the app and you will now start seeing Application Lifecycle Events in your Source Debugger! 
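To make this step concrete, here is a minimal sketch of an `App.tsx` along the lines described above. The write key is a placeholder for the key from your React Native Source, the configuration options shown are a plausible subset rather than the exact ones used in the sample app, and `AppNavigator` is a stand-in for whatever your root component is.

```tsx
// App.tsx: create the Segment client and make it available app-wide.
import React from 'react';
import { createClient, AnalyticsProvider } from '@segment/analytics-react-native';
import { AppNavigator } from './AppNavigator'; // hypothetical root navigator

// The write key comes from the React Native Source created in the Segment
// workspace. trackAppLifecycleEvents enables the Application Installed /
// Opened / Updated events that appear in the Source Debugger.
const segmentClient = createClient({
  writeKey: 'YOUR_WRITE_KEY',
  trackAppLifecycleEvents: true,
});

// Wrapping the app in AnalyticsProvider uses the Context API so that any
// screen can reach the client through the useAnalytics hook.
const App = () => (
  <AnalyticsProvider client={segmentClient}>
    <AppNavigator />
  </AnalyticsProvider>
);

export default App;
```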
Before we start implementing custom track events, we’re going to add `enrichment` plugins to request tracking permission from the user on both Android and iOS devices. In `Info.plist` add the following: In `App.tsx` In `app/build.gradle` In `AndroidManifest.xml ` All that’s left to do is implement the track events from the tracking plan. An easy one to start with is the `Product List Viewed` event we covered earlier. In `Home.tsx` add the` {useEffect}` hook to your existing React import and import the `useAnalytics` hook. Add the `useAnalytics ` hook and set up the track call inside the` useEffect` method to track the `Product List Viewed` event when the screen is rendered. As an exercise, you can go through the rest of the Tracking Plan and implement the remaining tracking events. If you want to see a final version, checkout the `final-app-ecommerce` in the same repository. Destination Plugins make it possible to send your events to analytics tools directly from the device. This is handy when you want to integrate an SDK that either does not offer a server-side endpoint or has additional functionality you would like to use (in-app messaging, push notifications, etc.). In this example we are going to add the Firebase Destination Plugin as it is one of the most popular plugins we support. Since the `@react-native-firebase SDK` requires adding initialization code to native modules for both iOS and Android builds, we will only walk through Firebase setup for iOS builds. You can find the Android implementation steps here. To get started, add Firebase as a Destination in your Segment workspace. Next, go to the Firebase console and create a new iOS project. Follow steps 1 & 2 for iOS installation. Once you’ve added your `GoogleService-Info.plist` file, add the dependencies. 1. Add dependencies ** you may have to add the following to `ios/podfile` * 2. Add pods 3. Add Plugin to Segment Client You should now be able to rebuild your app and successfully send your events to Firebase. It can take anywhere from 30 minutes to 24 hours for your data to start showing in your Firebase console when you first connect. Destination plugins contain an internal timeline that follows the same process as the analytics timeline, enabling you to modify/augment how events reach the particular destination. For example, if you only wanted to send a subset of events to Firebase to initially test out the integration, you could sample the Segment events by defining a new Plugin as per the following: Add the plugin to the Firebase plugin We have now completed a simple yet holistic tracking implementation with Analytics React Native 2.0. With the help of the eCommerce Spec and Protocols, we have built a tracking plan to standardize the events and data being tracked. The new architecture implemented in Analytics React Native 2.0 made it possible to add `IDFA` and `advertisingId` collection consent without incorporating another third-party dependency and incorporate an `analytics` hook to track events throughout the application. Finally, with the help of the Firebase Destination Plugin, we sent events to Firebase. Analytics React Native 2.0 makes it easier than ever to customize your tracking implementation to meet the bespoke requirements of your app. Now that the foundations of the analytics implementation are complete, customizing it over time or as you scale will typically be as straightforward as adding a few dependencies and updating your tracking plan. 
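To make two of the steps above concrete, here is a minimal sketch of the `Product List Viewed` call in `Home.tsx`. The event properties are illustrative stand-ins for whatever your tracking plan and the eCommerce spec require.

```tsx
// Home.tsx: fire Product List Viewed once when the product list renders.
import React, { useEffect } from 'react';
import { useAnalytics } from '@segment/analytics-react-native';

export const Home = () => {
  const { track } = useAnalytics();

  useEffect(() => {
    // Event name and properties should mirror the tracking plan;
    // these values are placeholders.
    track('Product List Viewed', {
      list_id: 'skateboard-decks',
      products: [{ product_id: 'deck-001', name: 'Segment Shop Deck', price: 59.99 }],
    });
  }, [track]);

  return null; // product list UI omitted from this sketch
};
```

Registering the Firebase device-mode destination is then a matter of adding its plugin to the client created in `App.tsx`, for example `segmentClient.add({ plugin: new FirebasePlugin() })`, assuming the class exported by the `@segment/analytics-react-native-plugin-firebase` package.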
true
true
true
Analytics React Native 2.0 makes it easier than ever to customize your tracking implementation to meet the bespoke requirements of your app.
2024-10-13 00:00:00
2022-09-19 00:00:00
https://segment.com/cont…Social-Image.png
website
segment.com
Segment
null
null
12,476,696
http://www.ft.com/cms/s/2/335e5286-7484-11e6-b60a-de4532d5ea35.html?siteedition=intl
A journey along Japan’s oldest pilgrimage route
Barney Jopson
# A journey along Japan’s oldest pilgrimage route Simply sign up to the Life & Arts myFT Digest -- delivered directly to your inbox. Sayuru Kunihashi had paid the bill for a night on a *tatami* straw mat, eaten a breakfast of fish and rice and absorbed the directions for the day ahead. Her watch said 7.20am and she was dressed to depart. A sedge hat for the strong sun, a wooden staff for the rough terrain and a white funeral robe. The latter was an emblem of her journey — a trek into the realm of death. She had already walked 584 miles along Japan’s oldest pilgrimage route, the Shikoku *henro*, and that day she would make the treacherous ascent of Mount Unpenji, a peak named after the place at its summit, the Temple in the Clouds. It is the highest point on the pilgrim trail, the 66th stop on a circuit of 88 Buddhist temples. The route circles Shikoku, a rugged island in south-west Japan that is synonymous with exile in national folklore, a remote and frightening place of dark forests and stormy seas. For hundreds of years, pilgrims who wore the funeral robe were ready to be struck down by illness or accident. The path is still dotted with the gravestones of those who never made it back. Today, the pilgrimage’s 850 miles no longer threaten walkers’ lives, though some trucks do as they thunder past the ramen-thin sidewalks. Yet the journey is still an act of rupture, a voyage into an “other” world. The symbolism of death remains potent. Those who embark on the journey are severing ties — at least temporarily — with their homes and everyday lives. “To die is to lose everything,” one temple priest told me. “The pilgrimage is like virtually dying. You lose everything you have. It’s a form of ascetic training. You leave all your physical and mental baggage behind.” Kunihashi, then 47, had left most of hers in Tokyo, where she taught cooking. Her accessories honoured the pilgrimage’s spiritual founder, Kobo Daishi, a wandering holy man who helped to establish Buddhism in Japan. His name was celebrated in a mantra written in thick brushstrokes on her robe; his form was said to be embodied in the wooden staff she carried. Born in 774AD, he travelled to China to study as a young man and returned home two years later to found the Shingon sect of Buddhism. According to legend, the pilgrimage passes the places where he worshipped. In truth, its precise origins are unclear. It is doubtful the Daishi established the route himself. More likely is that his disciples created it after his death. There are no definitive counts but each year between 80,000 and 140,000 pilgrims — known as* o-henro* — are estimated to travel at least part of the route. According to one survey, around 60 per cent of them are over the age of 60. The vast majority speed around on air-conditioned bus tours but a hardy band of 2,000-5,000 are estimated to do it on foot, usually completing the circuit in 40-50 days. Kunihashi let me join her on the mulchy trek up through the cedar trees of Mount Unpenji. It took us three hours to emerge from the green gloom on to a stone courtyard hemmed by spotless temple buildings. There, Kunihashi began a series of rituals that culminated in a chant of the Heart Sutra, a Buddhist prayer that she had memorised. “People who do it without reading anything look cool, don’t you think?” she said. That’s why she had decided to learn it from YouTube. 
She pulled out her iPhone to show me a musical version of the sutra sung by Hatsune Miku, a wildly successful Japanese pop star who happens to be a cartoon character. On the way up the mountain, Kunihashi had stopped to bow in front of a glass cabinet that contained a few small Buddhist statues. I asked her what they represented. She said: “I have no idea.” We talked about religion. She told me that like many Japanese people she had married in a Christian ceremony, and that on New Year’s eve she prayed for good luck in the year ahead at a Shinto shrine, the worship place of an indigenous faith that predates Buddhism. She said Japanese people believed in gods of trees and mountains, too. “There’s even a toilet god,” she added. “She’s supposed to be very beautiful.” So was religion what brought her to the pilgrimage? No, she said. She was not religious “in the western sense”. She was doing it because she had won an iPad in a competition by saying she would use it to document the *henro*: “If there’s something you have to do, life has a way of pushing you into doing it.” Japan special **The art of growing old ** Ageing gracefully in Misaki **On the march to power ** Outspoken nationalist Tomomi Inada **Treasures of the Tang trail ** Nara and Kyoto’s Chinese visitors **In the sumo ring ** How sumo wrestlers train — and eat ** ** **Cornucopia utopia ** Japan’s 24-hour convenience store **Tea, trains and Bowie ** Must-do dates for the diary **Q&A: Quintessentially Japanese ** What frequent visitors like most about Japan Japan is dotted with temples and shrines but religion’s imprint on the country’s mental landscape is often barely perceptible. Like Kunihashi, three quarters of Japanese people say religion is not an important part of their daily lives, according to a Gallup survey. At the start of the new millennium, I studied the Japanese language and reported on the country for nearly four years from Tokyo. In all that time, religion never really came up. That was why I was intrigued when I heard about the Shikoku *henro*. Its popularity seemed paradoxical. My chance to take the journey came three years ago, a year before the pilgrimage’s official 1,200th anniversary in 2014. In the time since, as I’ve lived in New York and Washington DC, the journey has never been far from my mind. When the rat race is getting to me, it is soothing to remember that there is always someone out there in a white robe, trekking to the next temple, whatever their motive. . . . In 1927, a bespectacled German teacher called Alfred Bohner embarked on the *henro*. He went on to write a book about it called *Two on a Pilgrimage *and rebuked those non-pilgrims who imagined the journey to be a form of recreation. The labour and humiliations of the trip, he explained, quickly weeded out the halfhearted. The pain of battered feet was compounded by having to stay in “miserable lodgings with a wretched bed and slender fare”. Bohner was particularly troubled by vermin, and reported one morning discovering a “well-nourished body louse” on his small Japanese towel. Pilgrims today move between city pavements, paddy fields, ocean highways and woodland trails. I travelled the route by bicycle, spending three weeks in the saddle from 8am to 6pm every day and getting a leg-sapping lesson in how a mountainous country really feels. Accommodation, however, has improved since Bohner’s time. 
I stayed in neat hotels and inns that were blushingly cushy compared with the park benches and bus stations where the noblest walking pilgrims still sleep. The motivations of my fellow travellers, however, were not radically different from those Bohner encountered. Some pilgrims had set out to reckon with anguish, misfortune or personal failure. Others were seeking to heal illnesses. Many travel to memorialise relatives who have passed away. The spirits of the recently deceased are said to be unstable in Japan, even dangerous, and the pilgrimage can help to calm them. A young woman I met called Junko Kosaka carried a stamp album, one of the pilgrimage’s iconic mementos, which was marked at each temple with red seals and an inscription of the institution’s name and principal deity. Once the album is completed, it is said to take on sacred powers. Kosaka said she would place it in the coffin of one of her parents when they died, to put them on the fast track to heaven. One night, Toyokazu Akita, the owner of an inn where I stayed, told me that after he and his wife lost a son 30 years ago, they did the pilgrimage to mollify his spirit, but that the temples had calmed their own emotions too. “We met a lot of other people going through troubles of their own,” he said. “Sharing things with them made us feel better.” Certain temples specialise in blessings for getting pregnant, passing an exam or resolving eye problems. Some offer protection for people at unlucky ages: 42 for men and 33 for women. “If you’ve got 100 people, you’ll find 100 reasons for doing the pilgrimage,” a temple priest told me. The temple rituals are a kaleidoscope of their own. Pilgrims can wash their hands in holy water. They can ring a giant bell. They can throw coins into a tray. They can burn a stick of incense. They can read the Heart Sutra and at least six other prayers. But pilgrims do as much or as little as they like. There is no correct way to perform the* henro*. I wanted to find out what underlying beliefs might tie it all together. On my first day I had been warned that this would not be easy. As I visited a string of temples along the Yoshino river, I stopped to talk to a junior priest called Naoki Maeda at temple 2. He told me to remember one thing: Japanese people are not great talkers. “Speech is the silver medal. You get the gold medal for not speaking.” The spiritual implications of this had been understood by Lafcadio Hearn, an Irish-Greek who came to Japan in 1890 and became a naturalised Japanese citizen, the priest said. Hearn’s point about Japanese people, as Maeda saw it, was that “their lifestyle, their way of thinking, is religious, but if you put it into words, they dispute that”. He told me to read some of Hearn’s writing and I found a passage in which Hearn said that religion “as mere doctrine” would ultimately fade away but that “religion as feeling” could never die. In the days that followed, this began to make sense. When pilgrims told me they were not religious, I asked them where, or to what, they were directing their wishes, prayers and offerings. If they had any answers at all they were ambiguous. That partly reflected the blurred boundaries between faith and tradition in Japan. One pilgrim, a former Toyota factory worker called Kenzo Oshima, told me he was “absolutely not” religious and was walking the route to fortify his health. Yet at each temple he dutifully read the Heart Sutra because “it’s a matter of good manners”. 
There is also a fuzzy line in Japan between the spiritual and the day-to-day. Rather than existing in some alternate realm, the spirit world envelops everyday experience like the weather. “These are not things that Japanese people think about with great precision,” said Shinichi Takiguchi, who publishes a monthly *henro* newsletter. “People approach spirituality in a kind of foggy way. It’s a bit like they’re daydreaming.” Some pilgrims pick a spiritual figure as a lodestone for their journey, be it a deceased grandparent, Kobo Daishi, a mountain god, Buddha or something else. But there was no conflict between the spirits, said Kunihashi, the woman I followed up the mountain. Instead, the spirits complemented each other. “I don’t say ‘My god is right and yours is wrong’,” she said. Some pilgrims, however, were deeply suspicious of institutional religion. Two priests I spoke to linked this to the scars of the second world war and the emperor worship on which wartime indoctrination depended. A professor cited the 1995 gas attack in which members of the Aum Shinrikyo cult released sarin on the Tokyo subway, killing 13 people and stirring fears of new dangers in organised faith. . . . Behind the temples’ grandiose façades, priests occupy a sedate *tatami* world of slippers and green tea. I wanted to know what they made of the pilgrims’ spirituality. What they revealed was considerable ambivalence. When I asked about the chattering *henro* on bus tours, most priests decided that the dignified thing to do was to say little. They were more respectful of walking pilgrims. But several said they wanted the footsore travellers to appreciate that the journey depended on more than their own stamina. One of them was Hakushou Kubo, the deputy chief at temple 37. “You can’t do the *henro* alone,” he said. “You need the weather to be on your side. You need the help of nature.” Three days earlier a typhoon had swept across Shikoku, drenching me, so I was inclined to agree. Other priests were eager to emphasise the importance of Kobo Daishi. But Shunshou Manabe, the chief at temple 4, said that pilgrims’ awareness of him had become diluted in recent years. “If you put it in terms of sake, it’s as if the people who like pungent sake are falling in number. And the people who like sweet sake — the easy-to-drink ones — are increasing,” he said. I asked if that saddened him. “No, it’s not sad at all. Kobo Daishi’s teachings lead to that kind of approach. To be captured by something is to become a slave to it. That is what you must avoid.” The Daishi’s lesson was to not get hung up on the Daishi. . . . I was locking my bike to a railing in the car park of a noodle restaurant seven days into my journey when I heard a cooing voice behind me: “*O-henro-san, o-henro-san*.” I turned around to see an old lady with grey pigtails, who had spotted my white funeral vest. “You don’t meet a pilgrim every day,” she said excitedly. Her enthusiasm threw me — as did her enquiry into whether I had any “name slips”. These pieces of paper, sold in packs of 200 in temple shops, come printed with a dedication to Kobo Daishi and space for your name, address and the date; it is tradition to leave one in a box at each temple’s main building. I knew pilgrims sometimes gave them to people they met, but I didn’t expect to be asked for one outright. I handed over a slip and the old lady asked me to add my age and birthday. In return, she dug into her handbag and gave me two boiled sweets. 
There is a long tradition of local residents supporting pilgrims by offering them alms, or *o-settai*, which can take the form of anything from candy to overnight lodgings. Opinions vary on their motivations but the old lady seemed to be in the market for blessings. She watched my penmanship carefully to make sure that I did not overwrite the character **吉**, which others had told me to put on the piece of paper. It means “good fortune” — and one way to net some is to receive a pilgrim’s name slip. Here is a sample of what I was given on the road: a cup of hot chocolate; a vitamin drink; a rice triangle; a cotton tissue holder; a night’s accommodation in an abandoned bus; a caramel wafer biscuit; a glass of beer; a bread cake filled with melon jam; and two bottles of iced tea from a hotel receptionist who also found a bungee cord so that I could strap them to my bike. The gifts most perfectly attuned to my needs came from a drunken businessman. It was about 8pm and I’d joined the queue to pay for the next day’s breakfast in a convenience store, when a red-faced man lurched towards me and grabbed my basket. “You, come here,” he barked, hauling me over to a till. “You’re a *henro*, right? This is *o-settai*.” He paid ¥637 and would not stand for any protests. . . . Some 250m above sea level, Senyu temple is another tree-shrouded haven atop an unforgiving mountain. On a clear day, it offers a view of a bridge that links Japan’s main island to Shikoku via a series of stepping stone specks of land. As night fell, Kensho Oyamada, the shaven-headed chief priest of the temple, agreed to skip teaching a children’s karate class to talk to me. Japan was an “economic animal”, he said, whose values had been shaped by the gruelling process of postwar rebuilding. “There was nothing then, so the only thing to do was to work and make things. It was competition on exams, competition to get a job, competition to get promoted.” This had locked too many people into emotionally unhealthy lives, he said. Modern pilgrims emerged from that context. The orderliness and expectations of Japanese society can make it a suffocating place for some. The dominant organising system is sometimes the family or more often work. Nearly a century ago, Bohner wrote that the *henro* enabled pilgrims to “escape from the confines of their daily life, which has almost crushed them”. But Oyamada disagreed: “To me, the idea of escape implies turning your back. This is not about that,” he said. “This is about finding serenity. This is about refreshment.” He was clear, however, that institutional religion — feared as a new locus of control — had nothing to do with it. Sociologists have talked about the “privatisation” of faith in the west, a shift from unified religions towards people concocting their own versions of spirituality in the same way they assemble their wardrobes. It has been lamented as an antisocial byproduct of too much individualism. But it is also happening in Japan, a place usually branded as too conformist, because it is a form of liberation. Spirituality is a safety valve and the *henro* is an opportunity to use it. It does not come with a belief system that tells pilgrims what to do. They create their own selection of gods, ancestors, prayers, rituals and charms. “Then on Monday,” said Oyamada, “you go back to work and, even though you’ve got this unpleasant boss, you accept him as he is. Your heart becomes bigger.” The journey’s freewheeling faith explains its reviving power. 
Celestial liberty fosters terrestrial calm. The funeral robe is an ancient shroud, but it carries pilgrims to a new beginning. *Barney Jopson is the FT’s US policy correspondent and a former FT Tokyo correspondent* *Illustrations: Cat O’Neill*
true
true
true
Religion may play an ambiguous role in Japanese life but the arduous pilgrimage remains popular with walkers, cyclists and coachloads
2024-10-13 00:00:00
2016-09-08 00:00:00
https://www.ft.com/__origami/service/image/v2/images/raw/https%3A%2F%2Fwww.ft.com%2F__origami%2Fservice%2Fimage%2Fv2%2Fimages%2Fraw%2Fhttp%253A%252F%252Fcom.ft.imagepublish.upp-prod-eu.s3.amazonaws.com%252Ff00d491e-74b6-11e6-bf48-b372cdb1043a%3Fsource%3Dnext-article%26fit%3Dscale-down%26quality%3Dhighest%26width%3D700%26dpr%3D1?source=next-opengraph&fit=scale-down&width=900
article
ft.com
Financial Times
null
null
7,265,542
http://www.itbusiness.ca/news/well-ca-loses-customer-credit-card-data-in-security-breach/46993
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
571,668
http://www.insidefacebook.com/2009/04/20/6-weeks-after-redesign-a-look-at-the-top-10-app-developers-on-facebook-by-reach/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
10,211,938
http://blog.python.org/2015/09/python-350-has-been-released.html
Python 3.5.0 has been released
Larry Hastings
# Python 3.5.0 has been released

Sunday, September 13, 2015

Python 3.5.0 is now available for download. Python 3.5.0 is the newest version of the Python language, and it contains many exciting new features and optimizations. We hope you enjoy Python 3.5.0!
true
true
true
null
2024-10-13 00:00:00
2015-09-13 00:00:00
null
null
null
null
null
null
4,197,010
http://allthingsd.com/20120702/a-status-symbol-moves-down-market-whats-behind-the-uberx-launch/?refcat=news
A Status Symbol Moves Down Market: The Context for Uber’s Lower-Priced Launch
Liz Gannes
# A Status Symbol Moves Down Market: The Context for Uber’s Lower-Priced Launch Uber this week will add lower-cost hybrid cars to its San Francisco and New York fleets through a new feature called UberX. The move signals Uber’s greater sprawling ambition to be a platform for all sorts of transportation, logistics and deliveries — not just a black-car, status-symbol provider. Because the coverage so far of UberX has been a little skimpy on the logistics, let’s lay this out. Starting Wednesday, Uber will be able to summon something like 50-100 hybrids each in San Francisco and New York (that’s compared to about 400 Uber black-car drivers in San Francisco currently). The hybrids will be offering rides for about 35 percent less than the price of Uber’s town cars, and slightly more than existing taxis. Initially, to balance supply and demand and keep pickup times low, Uber is making the hybrids available to only a part of its user base. Kalanick said “low thousands of users” will get access to UberX through their Uber apps on Wednesday, and they will be able to invite friends to get priority access. When chosen users open the Uber app in San Francisco, they’ll be greeted by a new flow that allows them to choose between UberX, a black car, or a larger SUV (see screenshot above). The lowest estimated average fare for UberX is $14, for black cars $24 and for SUVs $34. At the moment, existing Uber drivers have agreed to buy the hybrid cars themselves. But in order to facilitate the UberX rollout, Uber is working to offer help with financing, purchasing and providing insurance for its driver partners, Kalanick said. I asked Kalanick if most passengers won’t just opt for the cheaper ride, when given a choice. Kalanick said he does anticipate strong demand for UberX rides, and probably a good amount of cannibalization of the black cars. But he thinks there’s still demand for the premium service. “The best way to describe it is that the experience will be efficient, but not as elegant,” said Kalanick. That is, the best drivers in the Uber system — with the strongest ratings and the nicest cars — can make more money from the higher-priced options, so that’s what they’ll do. On Uber, said Kalanick, “If you get a 4.9 [out of five stars] driver, I like to say you’re getting in a car with an artist. Put it this way — you won’t see an UberX driver opening the door for you.” Moving down market brings Uber into closer competition with existing taxi systems as well as apps like Cabulous, Taxi Magic and new ride-sharing alternatives like SideCar and Lyft. And as with all things Uber, there will probably be regulatory headaches involved. The alternative transportation providers may have started out with their own niches, but they all seem to be converging on each other. Asked for comment on the UberX launch today, Cabulous CEO Steve Humphreys told us he thinks the Uber will have trouble going down market. “It’s not hard for a Toyota to become a Lexus, but it’s a lot harder for a Masserati to become a Hyundai,” he said. Uber had already experimented with cheaper options in Chicago, where its apps can be used to book participating taxi cabs. Using existing taxis made sense in Chicago because of the particulars of the market, Kalanick said: There’s a surplus of taxis and relatively cheap fares. In San Francisco and New York, that’s not the case. “We’ve found in Chicago that a lower-cost option opens up Uber to millions more people, and that tide raises all boats, including our higher-priced options,” Kalanick said. 
Kalanick described UberX and giving users car options as Uber’s first push into becoming a broader platform. What will this platform do, exactly? Move people and stuff around, and quickly. “Uber is the cross of lifestyle and logistics,” Kalanick said. Kalanick said previous experiments, like delivering on-demand barbeque to attendees at SXSW and mariachi bands on Cinco de Mayo, showed Uber that its delivery system could expand to other functions. “It’s just different logistics.” And so, this week, the Uber team is hard at work cold-calling ice cream trucks across the country for a summer promotion in which users will be able to summon the delivery of a cold treat. —*Lauren Goode contributed to this report. *
true
true
true
With a new lower-priced option, Uber shows its sprawling ambition to be a platform for all sorts of transportation, logistics and deliveries.
2024-10-13 00:00:00
2012-07-02 00:00:00
/wp-content/themes/atd-2.0/images/staff/liz-gannes-170x170.jpg
article
allthingsd.com
AllThingsD
null
null
37,661,835
https://www.npr.org/2023/09/26/1191099421/amazon-ftc-lawsuit-antitrust-monopoly
U.S. sues Amazon in a monopoly case that could be existential for the retail giant
Alina Selyukh
# U.S. sues Amazon in a monopoly case that could be existential for the retail giant #### U.S. sues Amazon in a monopoly case that could be existential for the retail giant **Audio will be available later today.** U.S. regulators and 17 states sued Amazon on Tuesday in a pivotal case that could prove existential for the retail giant. In the sweeping antitrust lawsuit, the Federal Trade Commission and a bipartisan group of state attorneys general paint Amazon as a monopolist that suffocates competitors and raises costs for both sellers and shoppers. The FTC, tasked with protecting U.S. consumers and market competition, argues that Amazon punishes sellers for offering lower prices elsewhere on the internet and pressures them into paying for Amazon's delivery network. "Amazon is a monopolist and it is exploiting its monopolies in ways that leave shoppers and sellers paying more for worse service," FTC Chair Lina Khan told reporters on Tuesday. "In a competitive world, a monopoly hiking prices and degrading service would create an opening for rivals and potential rivals to ... grow and compete," she said. "But Amazon's unlawful monopolistic strategy has closed off that possibility, and the public is paying dearly as a result." Amazon, in a statement, argued that the FTC's lawsuit "radically departed" from the agency's mission to protect consumers, going after business practices that, in fact, spurred competition and gave shoppers and sellers more and better options. "If the FTC gets its way," Amazon General Counsel David Zapolsky wrote in a post, "the result would be fewer products to choose from, higher prices, slower deliveries for consumers, and reduced options for small businesses—the opposite of what antitrust law is designed to do." Broadly, Tuesday's** **case escalates a long-running criticism of Amazon: It both owns the online platform that many sellers use to reach shoppers, and it sells products on that very same platform. What's more, it owns the shipping and delivery network that everyone on the platform is incentivized to use. Around 60% of items purchased on Amazon are sold by third-party sellers, company executives have said. The FTC says Amazon's fees are so high that sellers effectively keep only half of what they make on the platform. The federal lawsuit did not immediately seek a breakup of the retail giant. Instead, the FTC and states are asking the court for a permanent injunction, although this could change down the road. The case, filed in federal court in Amazon's hometown of Seattle, is expected to play out over several years. ### FTC leader has focused on Amazon for years Though Amazon's growth has slowed, it's the most popular online store in the U.S., capturing over 40% or more of all online shopping, according to private and government research. About two-thirds of U.S. adults are members of Amazon's subscription service, Prime, as estimated by Consumer Intelligence Research Partners. Amazon has built up one of the largest delivery companies in the U.S. with a web of warehouses, air hubs and trucking operations that ship more packages than FedEx. It has also ventured into healthcare, home security, filmmaking and other fields — becoming one of the world's most valuable corporations, worth $1.3 trillion. Amazon's extensive reach and sway have long worried FTC Chair Khan. She rose to prominence as a law student in 2017, when she published "Amazon's Antitrust Paradox." 
The paper argued that the tech giant was anti-competitive even as it gave consumers lower prices and concluded that the company should be broken up. Later, as Democratic counsel for the House Judiciary Committee's antitrust panel in 2020, Khan helped write a 449-page report that called for "structural separations" of Amazon, Apple, Facebook and Google. They "have become the kinds of monopolies we last saw in the era of oil barons and railroad tycoons," the report said. ### Big Tech's power at heart of lawsuits Indeed, the FTC's new lawsuit against Amazon could stack alongside some of the most highest-profile federal antitrust cases, including Standard Oil more than a century ago, Microsoft three decades ago, or Google most recently. (Its domination of the search-engine market is the subject of a trial playing out in federal court right now.) The FTC previously sued Amazon in June in federal court in Seattle. The agency alleged** **the company for years "tricked" people into buying Prime memberships that were purposefully complicated to cancel. An update to the suit specifically named three Amazon executives and disclosed their internal interactions with employees who had raised concerns. The company this year also paid more than $30 million to settle two other FTC lawsuits, which alleged that Amazon failed to delete data on children's conversations with voice assistant Alexa, and that its employees monitored customers' Ring camera recordings without consent. As FTC chair, Khan positioned herself as an aggressive regulator, unafraid to challenge companies in court and undeterred by the prospect of some losses. Indeed, the FTC this year lost a lawsuit against Facebook parent Meta over its acquisition of virtual reality company Within Unlimited, and later struck out on its attempt to block Microsoft's purchase of videogame company Activision Blizzard. Amazon has tried, without success, to have Khan recused from FTC cases about the company. A review, disclosed in a footnote of an internal FTC memo, found no federal ethics grounds to prevent Khan from participating in cases related to Amazon. *NPR's Dara Kerr contributed to this report.* *Editor's note:** Amazon is among NPR's financial supporters and pays to distribute some of our content.*
true
true
true
The Federal Trade Commission and 17 states accuse Amazon of suffocating rivals and raising costs for both sellers and shoppers.
2024-10-13 00:00:00
2023-09-26 00:00:00
https://media.npr.org/in…e-s1400-c100.jpg
article
npr.org
NPR
null
null
11,998,483
http://die.life
die.life
null
This domain has recently been registered with Namecheap. die.life
true
true
true
die.life is your first and best source for all of the information you’re looking for. From general topics to more of what you would expect to find here, die.life has it all. We hope you find what you are searching for!
2024-10-13 00:00:00
null
null
null
null
die.life
null
null
17,859,874
https://medium.com/@ard_adam/the-nature-of-computer-programming-7526789b3af1
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
771,866
http://www.good.is/post/diary-of-a-social-venture-start-up-early-mistakes/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,075,270
https://zipshipit.com/bootstrap-startup-tools-used-zipshipit/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,187,246
http://www.theguardian.com/news/2015/mar/11/corrections-and-clarifications
Corrections and clarifications
Corrections; Clarifications Column Editor
# Corrections and clarifications

**Whisper – a clarification**

Since we published our stories about Whisper between 16 October and 25 October 2014, the company has provided further information. We confirm that Whisper had drafted the changes to its terms of service and privacy policy before Whisper became aware that the Guardian was intending to write about it. We reported that IP addresses can only provide an approximate indication of a person’s whereabouts, not usually more accurate than their country, state or city. We are happy to clarify that this data (which all internet companies receive) is a very rough and unreliable indicator of location. We are also happy to make clear that the public cannot ascertain the identity or location of a Whisper user unless the user publicly discloses this information, that the information Whisper shared with the US Department of Defense’s Suicide Prevention Office did not include personal data, and that Whisper did not store data outside the United States. Whisper’s terms for sharing information proactively with law enforcement authorities where there is a danger of death or serious injury are both lawful and industry standard. The Guardian did not report that any of Whisper’s activities were unlawful. However, we are happy to clarify that there is no evidence for that suggestion. Whisper contests many other aspects of our reporting. The Guardian has clarified an article about Whisper’s terms of service and removed an opinion piece entitled "Think you can Whisper privately? Think again".
true
true
true
Whisper clarification
2024-10-13 00:00:00
2015-03-11 00:00:00
https://assets.guim.co.u…allback-logo.png
article
theguardian.com
The Guardian
null
null
20,665,244
https://www.usgamer.net/articles/eliza-new-visual-novel-zachtronics-reckons-with-generation-city-in-burnout-crisis-interview
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
31,702,970
https://www.miriamsuzanne.com/2022/06/04/indiweb/
Am I on the IndieWeb Yet?
Miriam
## Slow Social A few weeks back there was another round of social-media panic (which is reasonable), and the ever-present reminders to Use A Personal Site. Personal sites are great! But I have some issues with that reply, and I posted about it on Twitter: Personal sites are wonderful. But they can be a lot of work to set up & maintain. Also: personal solutions don’t solve the problem of building networks & public digital spaces. If we want ‘personal sites’ to be an answer to social platforms, we have a lot of work to do. At the core, I’m skeptical about treating “public space” issues as a matter of personal responsibility. Chris Coyier made a point of – a lovely little post about how cool RSS is, and the benefits of “slow social” web interactions.And I agree completely! *[insert requisite mourning for Reader]* But then I look at my site – this site, here – and it’s a struggle to get all the pieces working the way that I want, especially when it comes to syndication. I do have a feed that you can subscribe to, but I’ve struggled to categorize what on my site is a “post” worth syndicating vs a “page” vs ???. And I’m not always sure I have it set up right? Validators flag the `style` attribute (setting custom props in some content), or embedded `iframe` s (for audio/video), or `script` s (usually for embedding content). How much should I worry about these issues? Do I need to run my content through a tool like `sanitize-html` , and if so, how strict should it be? I often have similar questions when setting up microformats, and trying to match the needs of the format to the needs of my content. These are very solvable problems with testing and research – but they start to add up. ## WebMentions Since making that post, I’ve also started to explore WebMentions, following instructions from Matthias Ott, Keith Grant, and Max Böck. I recommend all three articles, but the first thing that becomes clear is that this requires multiple steps, and is not a simple or straight-forward process. The first round required several online services along with HTML & JS changes to my static site, just to verify my indieweb identity. Then more changes to the site and more online services to help fetch any mentions ~~(so far, nothing to see, but that’s probably expected)~~. It seems the only way to test the setup is to launch all those changes publicly, and then ask for other devs to send you mentions. (That’s partly the goal of this post.) Every time I think I have the basics in place, I find some other set of instructions suggesting there’s another step to take. **update [Jun 5, 2022]:** I seem to have things working here now, but I’m still not entirely clear on how it works. In the end I’ve added some metadata to the site `head` , a number of microformats to the markup, several third-party services, and an API call to download the data from one of them. I’m not convinced I have all the details right, and I’m not sure which validators to test against. If I want live updates (this is a static site) there’s still more to learn. ## A Proof Of Concept (for developers only) I’m an experienced web developer, and I can figure it out. But the steps aren’t simple, and most of my friends *are not web developers*. So, to me, this all feels like the prototype of an idea – a proof of concept. We have the technology to implement a slow social network of personal sites. I’m excited to keep playing with that code. But proving the concept is not the same as actually making it easy & accessible in a way that can replace platforms. 
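For reference, here is a rough sketch of the kind of API call involved in pulling mentions into a static build. It assumes the mentions are collected by webmention.io, one common service for this; the post does not say which service this site actually uses, so treat the endpoint and field names as assumptions.

```typescript
// Fetch webmentions for a page and keep a few fields for rendering at
// build time. Endpoint and response shape are assumptions based on
// webmention.io's documented jf2 API.
type Mention = { author?: { name?: string }; url?: string; 'wm-property'?: string };

async function fetchMentions(target: string): Promise<Mention[]> {
  const api = `https://webmention.io/api/mentions.jf2?target=${encodeURIComponent(target)}`;
  const response = await fetch(api);
  if (!response.ok) throw new Error(`webmention.io returned ${response.status}`);
  const data = await response.json();
  return data.children ?? []; // the jf2 feed puts entries under "children"
}

// Example: pull mentions for a post during a static build.
fetchMentions('https://www.miriamsuzanne.com/2022/06/04/indiweb/')
  .then((mentions) => console.log(`${mentions.length} mentions found`));
```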
## What’s Next? I know there are plugins for WordPress and other blogging software to help make the setup simpler. That’s great! What I would like to see is a tool that helps bring the entire system together in one place. Somewhere that non-technical people can: - build their own site, with support for feeds/mentions - see what feeds are available on other sites, and subscribe to them - easily respond to other sites, and see the resulting threads That’s *a large feature set*, I know. In my mind, it would make the most sense for those features to live in a browser, but it might be possible to build as a web service with browser plugins? Whatever it looks like, *it will take a lot of work to get there*.
true
true
true
I haven't signed to any major web labels.
2024-10-13 00:00:00
2022-06-04 00:00:00
https://www.miriamsuzann…aie2zL-1600.jpeg
website
miriamsuzanne.com
Miriam Eric Suzanne
null
null
32,695,891
https://www.thinkadvisor.com/2022/09/02/ubs-wealthfront-cancel-1-4b-deal/
UBS, Wealthfront Cancel $1.4B Deal | ThinkAdvisor
Janet Levaux
When the deal was announced on Jan. 26, Wealthfront had some $27 billion in assets under management and over 470,000 clients in the U.S. In his analysis earlier this year, Nexus Strategist President and CEO Tim Welsh concluded that the average level of assets per client account was about $57,000. “Wealthfront’s pricing schedule of 25 basis points (the first $10,000 being ‘free’) means that UBS is paying $3,000 for just $117 of annual recurring revenue (ARR), which is problematic when looked at on an ROI basis,” he explained in a column for ThinkAdvisor. Late Friday, Wealthfront CEO David Fortunato said in a statement: “Today we announced that together with UBS we decided to terminate our pending acquisition and will instead remain an independent company.” “I am incredibly excited about Wealthfront’s path forward as an independent company and am proud to share that thanks to the hard work of our team and the trust you put in us, we will be cash flow positive and EBITDA profitable in the next few months,” Fortunato explained. The news about the cancelled Wealthfront deal comes four years after UBS shut down SmartWealth, a digital wealth management platform. In August 2018, UBS sold the technology to robo-advisor SigFig, which the Swiss-based bank had invested in two years earlier.
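Welsh's per-client figures can be reconstructed from the numbers quoted above; a quick back-of-the-envelope sketch follows. The inputs are the article's rounded figures, so the outputs are approximate.

```typescript
// Rough reconstruction of the per-client math from the figures in the article.
const dealPrice = 1_400_000_000; // $1.4B purchase price
const aum = 27_000_000_000;      // ~$27B assets under management
const clients = 470_000;         // ~470,000 U.S. clients

const avgAccount = aum / clients;                    // ≈ $57,400 per client
const arrPerClient = (avgAccount - 10_000) * 0.0025; // 25 bps after the free first $10,000 ≈ $118
const pricePerClient = dealPrice / clients;          // ≈ $2,980 paid per client

console.log({ avgAccount, arrPerClient, pricePerClient });
```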
true
true
true
UBS will, however, buy a $69.7 million note that is convertible into the robo-advisor's shares.
2024-10-13 00:00:00
2022-09-02 00:00:00
https://images.thinkadvi…tter_640x640.jpg
newsarticle
thinkadvisor.com
ThinkAdvisor
null
null
17,277,049
https://antipolygraph.org/blog/2018/06/09/ncca-polygraph-countermeasure-course-files-leaked/
NCCA Polygraph Countermeasure Course Files Leaked
Anthony R
AntiPolygraph.org has today published a collection of documents associated with the National Center for Credibility Assessment’s training courses on polygraph countermeasures. These documents, which date to circa 2013, describe the U.S. government’s best efforts to detect polygraph countermeasures. Key revelations include: - In a secret 1995 study, **80 percent**of test subjects who were taught polygraph countermeasures**succeeded in beating**the U.S. government’s primary counterintelligence polygraph screening technique. The training took no more than an hour.**NCCA officials have concealed this study**from the National Academy of Sciences and the American public; - The federal government’s methods for attempting to detect polygraph countermeasures are conjectural in nature and were developed by two **non-scientist**polygraph operators at the federal polygraph school; - The U.S. government’s polygraph countermeasure detection training is centered on countering information found on two “anti-polygraph websites” (including this one) **run by U.S. citizens**(one of whom was**entrapped and sent to prison**and the other of whom was**targeted for entrapment**). Minimal attention is paid to foreign intelligence services. The core of the U.S. government’s training for polygraph operators on how to detect countermeasures is a 40-hour “comprehensive” course taught by the National Center for Credibility Assessment’s Threat Assessment and Strategic Support Branch (TASS). The course, called “CE 839,” is restricted to federally certified polygraph operators. The NCCA website, which disappeared from the Internet shortly after the 2016 presidential election, described the course thus (PDD is short for “Psychophysiological Detection of Deception,” a buzzword for “polgyraphy”): This 40-hour course prepares the PDD examiner to deter and detect employment of polygraph countermeasures in criminal and intelligence testing environments. The course presents background information for a foundation in concepts, theories, and research data related to polygraph countermeasures. Laboratory exercises are included to enhance skills and provide hands-on experience. Detailed discussion of numerous case studies involving examples of confirmed countermeasures. Law enforcement and intelligence PDD examinations are used to demonstrate methods of detecting and defeating this threat. Information provided includes discussion of threats posed by foreign intelligence services, terrorist organizations, and other criminal elements attempting to defeat law enforcement and intelligence PDD examinations. This course is intended as the primary polygraph countermeasures course for criminal and security screening PDD examiners or as a periodic refresher course for examiners supporting intelligence operations. The course included daily directed reading assignments followed by classroom discussions and quizzes. The documents published today by AntiPolygraph.org include an instructor’s template as well as PowerPoint presentations (including presenter’s notes) for the course. 
A note to slide 7 of the first block of instruction (18 MB PDF), dubbed “Back to Basics,” chides AntiPolygraph.org for publishing two Arabic-language jihadist documents on polygraph countermeasures with English translations, averring that “For the Anti-polygraph crowd the ‘end justifies the means’ – they are willing to compromise national security in their quest to rid the world of polygraph.” No explanation is provided as to how AntiPolygraph.org’s documenting of jihadist literature on polygraph countermeasures (and in the process bringing it to the attention of the federal polygraph community) constitutes our compromising national security. We would counter that **NCCA is compromising national security** through its embrace of the pseudoscience of polygraphy and its mulish resistance to independent review of its “research” findings. This “Back to Basics” presentation also includes a suggestion (at slide 22) to overinflate the blood pressure cuff (which causes pain) to punish examinees who breath more slowly than the polygraph operator would like: Block 2 (11.4 MB PDF) of the comprehensive course (“Physiological Features”) is what NCCA considers to be the core of the course. It identifies features of polygraph tracings that NCCA believes are indicative of polygraph countermeasure use. Although none of the NCCA documents provided to AntiPolygraph.org are marked as being classified, the presenter’s notes for slide 5 state that the discussion will be classified “secret”: The presenter’s notes for the concluding slide (52) bemoan the fact that “Today we have many performing high level CM. The average person can go on-line and find not only the test technique that will be used in their polygraph but the questions asked.” **You are at the right site to find that kind of information.** The presentation for the fourth block of instruction (4.5 MB PDF) includes some of the information we found most interesting. It documents a secret U.S. government countermeasure study conducted by Gordon H. Barland circa 1995 in which (including inconclusive results) **80% of test subjects taught to beat the polygraph were successful in doing so.** Countermeasure training lasted no more than an hour and consisted of telling the examinee, upon hearing a “control” question, to pick a number over 600 and count backwards by threes. Slides 7-9, with presenter’s notes, are reproduced below (you can click on the images to view them in a larger size): Barland’s study strongly suggested that **polygraph screening is highly vulnerable to polygraph countermeasures.** The U.S. government has **concealed this study from the public**, including the U.S. National Academy of Sciences’ National Research Council, which in 2001-2002 conducted a review of the scientific evidence on the polygraph. In its report, *The Polygraph and Lie Detection *(10.3 MB PDF), the Council noted (at p. 
118, emphasis added): "…we were advised by officials from [the Department of Energy] and [the Department of Defense Polygraph Institute, since renamed the National Center for Credibility Assessment] that there was information relevant to our work, classified at the secret level, particularly with regard to polygraph countermeasures. In order to review such information, several committee members and staff obtained national security clearances at the secret level. We were subsequently told by officials of the Central Intelligence Agency and DoDPI that there were no completed studies of polygraph countermeasures at the secret level; we do not know whether there are any such studies at a higher level of classification." **NCCA deliberately misled, and concealed relevant documentation from, the National Research Council.** With respect to the mention of officials of the Central Intelligence Agency, note that the most influential person at DoDPI/NCCA at the time was Donald J. Krapohl, a career CIA agent and polygraph operator who for many years was the éminence grise at DoDPI/NCCA. The presentation goes on to document how two non-scientist NCCA staff members, Paul M. Menges and Daniel V. Weatherman, devised a system that purports to identify those who employed countermeasures in Barland's study by examining the polygraph tracings. Their scheme identifies a number of "physiological features" that are posited to be evidence of polygraph countermeasure use. The criteria are so broad that any polygraph operator could likely find indications of countermeasure use in any particular set of polygraph charts if he looks hard enough. Slide 13 notes that when Weatherman applied his "global scoring" approach to detect countermeasures, **only 47% of the innocent were correctly identified.** **The entire NCCA comprehensive countermeasure course is centered on these two non-scientists' unpublished, un-peer-reviewed "research."** Slide 36 notes: "Software unable to identify CM." This is not surprising, given the conjectural, "I know them when I see them!" nature of Menges' and Weatherman's countermeasure "detection" methods. It's worth noting that around the time Menges and Weatherman were attempting to devise a technique for detecting polygraph countermeasures, Menges authored an article published in the American Polygraph Association quarterly, *Polygraph*, suggesting that teaching polygraph countermeasures to the public should be outlawed. A decade later, Doug Williams, who runs Polygraph.com, which features prominently in NCCA's polygraph countermeasure training, was targeted for entrapment in Operation Lie Busters and ultimately sentenced to two years in federal prison. He has since been released but is prohibited from providing in-person training on how to pass a polygraph "test" until July 2020. AntiPolygraph.org co-founder George Maschke was also targeted for entrapment during Operation Lie Busters but was not charged with any crime. Contemporaneous evidence suggests that AntiPolygraph.org was additionally targeted for surveillance by the NSA. The fifth block of instruction (5.7 MB PDF) covers suggested supplemental polygraph counter-countermeasure techniques such as the "Repeat the Last Word Test," the "Focused CM Technique," the "Yes/No Test," and the abandonment of the "control" question "test" (CQT) in favor of the Concealed Information Test.
On a final note, recall our 11 October 2014 blog post, “Senior Official at Federal Polygraph School Accused of Espionage.” Then recently-retired Defense Intelligence Agency counterintelligence investigator Scott W. Carmichael said this about these documents in an e-mail message to retired FBI polygrapher Robert Drdak: The study conducted by Dan [Weatherman] and Paul [Menges] drew on raw data collected by Dr. Gordon H. Barland and the classified report of his earlier study of mental countermeasures dated in 1994. Dan, Paul and Dr. Senter performed their work as employees of DoDPI. Your old colleague and former business partner Don Krapohl provided oversight for their work and edited the report submitted by Dan and Paul back in CY2003. The entire effort was official, and it was based on a classified study. Dan and Paul believed their findings constituted a new, reliable, and therefore extremely valuable diagnostic tool. Dr. Senter tested the tool and determined that, sure enough, the specific diagnostic features identified by Dan and Paul through their study correlated with a high degree of probability to the employment of countermeasures. As you may know, the U.S. government now requires all federal polygraph examiners to receive 40 hours of instruction on polygraph countermeasures to become certified as federal examiners; and, to receive 4 hours of polygraph countermeasures refresher training every year to maintain their certifications. DoDPI/DCCA [sic, correct NCCA] is so confident in the diagnostic tool developed by Dan and Paul, they decided to use the new tool as the very foundation for the 40-hour instruction and the annual 4-hour refresher training. Again, Dr. Barland’s 1994 study, which formed the basis for Weatherman and Menges’s study, was classified. By definition, then, and by DoD Instruction, Weatherman and Menges’s study and findings, were therefore also classified. But DoDPI did something stupid. They committed a security violation. In fact, they did so repeatedly. They found it inconvenient and unwieldy to carry classified briefing slides around with them as they traveled to various parts of the country to teach the diagnostic tool to examiners – and, when they found it necessary to brief uncleared personnel on the tool, rather than follow routine but bothersome administrative procedures which would have enabled them to do so, they elected to simply not stamp their briefing slides at all. Instead, they stamped their materials as Unclassified/FOUO – while handling and treating their materials as though they were classified, in order to ensure their tool did not fall into foreign hands. Why? Because anyone who learns how the entire US government now detects the employment of countermeasures, will be able to device [sic] methods to defeat the US government examination process. Additional NCCA countermeasure training materials, with brief descriptions, are available for download here. I invented these so-called “countermeasures” in 1979 and gave a detailed explanation of them to the US Congress when I testified in support of the EMPLOYEE POLYGRAPH PROTECTION ACT in 1985. As Dr. Maschke has mentioned in the article above, I was sent to the federal prison for two years for teaching them to an undercover agent. I guess this is what passes for justice in this country nowadays. Sad… […] False Confessions is a must-read for students of the history of polygraphy. Doug Williams is indisputably among the most influential persons in polygraphy’s nearly century-long history. 
His revelation of the polygraph trade’s secrets has earned the wrath of polygraph operators across the United States, and his manual “How to Sting the Polygraph” has long been part of the curriculum for the federal polygraph school’s countermeasures course. […]
true
true
true
null
2024-10-13 00:00:00
2018-06-09 00:00:00
null
article
antipolygraph.org
AntiPolygraph.org News
null
null
6,331,970
http://charleshughsmith.blogspot.com/2013/09/have-advances-in-consumer-electronics.html
Have Advances in Consumer Electronics Reached Diminishing Returns?
Charles Hugh Smith
### Have Advances in Consumer Electronics Reached Diminishing Returns? *Consumer and commercial electronic devices in 2053 are likely to look and perform almost exactly like they do today.* **Correspondent Mark G. recently proposed an interesting analogy between the technological product cycle of commercial aircraft and consumer electronics.** Aerospace technology experienced a Golden Age of rapid technological development that leveled off once fundamental technologies had matured. Investment in further advances reached a point of diminishing returns: the cost of squeezing out modest gains exceeded the profit potential of the advances. **Here is Mark's commentary.** "Silicon daddy: Moore's Law about to be repealed, but don't blame physics" is a significant article. The principal expert quoted, Robert Colwell, is one of the true Kelly Johnsons of the golden age of semiconductors. I date this Golden Age period from 1975 to 2015. And like Kelly Johnson (Lockheed's P-38 to SR-71 wunderkind), Colwell was active over an entire 40-year Golden Age. He was already at Bell Labs in 1980 and he's still at DARPA today. I think this semiconductor-aerospace analogy is very strong. Look at what happened from 1930 to 1970 in aerospace engineering and manufacturing. We went from fabric-covered crates flown by barnstorming Great Waldo Peppers to XB-70 Mach 3 bombers (production cancelled) and Boeing 747s. By 1970 a series of real physical limits were reached in an array of basic aerospace technologies, all nearly simultaneously. Come 1971 the US Congress deleted funding for the US supersonic transport (SST). The British-French Concorde and the Soviet Tu-144 went ahead. Both were commercial failures and developmental dead ends. The result is that 44 years later Boeing is still building 747s (and 737s). Boeing undoubtedly now has numerous retirees whose entire careers were spent in the 737 or 747 programs. This is an outcome very few people would have predicted in 1968 when these two programs were beginning. It's ironic that Alvin Toffler published *Future Shock* the same year that phenomenon generally stopped in aerospace. Subsequent margin tweaking in commercial aerospace engineering has focused on three functional areas: safety, manufacturing cost and operational cost. On the other hand, continued attempts to increase military aircraft performance helped lead to the runaway program costs we're still witnessing (for example, the $1 trillion F-35 Lightning program). This will be replicated in military electronics once the limit of Moore's Law is reached. Further increases in capability will start costing more, not less. This will lead to reduced total production runs and further cost increases from loss of scale. The above suggests that consumer and commercial electronic devices in 2053 are likely to look and perform almost exactly like they do today. Subsequent improvements will be aimed at reducing costs, not enhancing raw performance. It doesn't mean the world will stay in stasis. Look at the changes the ever rising population of 747s and 737s helped usher in between 1970 and 2013. **Thank you, Mark, for a provocative analysis of exponential trends.** As Robert Colwell noted in the article cited above: "Let's at least face the fact that Moore's Law is an exponential, and there cannot be an exponential that doesn't end. You can't have it."
**As Mark noted at the end of his commentary, the economic and social changes from mature technologies arise from ubiquity rather than additional capabilities.** The revolution in commercial aerospace was not technological improvement in speed (such as the SST); it was the reduction in cost to passengers and the ubiquity of commercial aircraft and routes.
true
true
true
Consumer and commercial electronic devices in 2053 are likely to look and perform almost exactly like they do today. Correspondent Mark ...
2024-10-13 00:00:00
2013-09-04 00:00:00
https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_vvbeRm6OsEw96UOTRbvH6kxrmcV19VVgx5b2RDIgerquN-Z3CQ0UEU7qLYZ7P6xUCjinIFRdIGtspfuRiWqqGJvdReCVP1ylXwbmpK3YyKfDsda3g=w1200-h630-p-k-no-nu
null
blogspot.com
charleshughsmith.blogspot.com
null
null
21,098,821
https://blog.skyl.ai/what-is-computer-vision/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
6,842,816
http://pando.com/2013/12/03/the-journalist-who-hacked-the-old-system/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,690,078
https://xkcd.com/1363/
xkcd Phone
About
xkcd Phone

Permanent link to this comic: https://xkcd.com/1363/
Image URL (for hotlinking/embedding): https://imgs.xkcd.com/comics/xkcd_phone.png

[[Ad for a phone, with several factoids positioned around a picture of the device]]
((Factoids listed here starting clockwise from the top))
- Runs custom blend of Android and iOS
- Simulates alternative speed of light (default: 100 miles per hour) and adjusts clock as phone accelerates
- Wireless
- Accelerometer detects when phone is in freefall and makes it scream
- When exposed to light, phone says "hi!"
- FlightAware partnership: makes airplane noise when flights pass overhead
- Realistic case
- Clear screen
- Side-facing camera

Introducing the XKCD Phone -- Your mobile world just went digital®

{{Title text: Presented in partnership with Qualcomm, Craigslist, Whirlpool, Hostess, LifeStyles, and the US Chamber of Commerce. Manufactured on equipment which also processes peanuts. Price includes 2-year Knicks contract. Phone may extinguish nearby birthday candles. If phone ships with Siri, return immediately; do not speak to her and ignore any instructions she gives. Do not remove lead casing. Phone may attract trap insects; this is normal. Volume adjustable (requires root). If you experience sudden tingling, nausea, or vomiting, perform a factory reset immediately. Do not submerge in water; phone will drown. Exterior may be frictionless. Prolonged use can cause mood swings, short-term memory loss, and seizures. Avert eyes while replacing battery. Under certain circumstances, wireless transmitter may control God.}}
true
true
true
null
2024-10-13 00:00:00
2017-01-01 00:00:00
https://imgs.xkcd.com/co…kcd_phone_2x.png
null
xkcd.com
Xkcd
null
null
20,702,789
https://www.theguardian.com/world/2019/aug/13/the-fashion-line-designed-to-trick-surveillance-cameras
The fashion line designed to trick surveillance cameras
Alex Hern
Automatic license plate readers, which use networked surveillance cameras and simple image recognition to track the movements of cars around a city, may have met their match, in the form of a T-shirt. Or a dress. Or a hoodie. The anti-surveillance garments were revealed at the DefCon cybersecurity conference in Las Vegas on Saturday by the hacker and fashion designer Kate Rose, who presented the inaugural collection of her Adversarial Fashion line. Rose credits a conversation with a friend, the Electronic Frontier Foundation researcher Dave Maass, for inspiring the project: “He mentioned that the readers themselves are not very good,” she said. “They already read in things like picket fences and other junk. I thought that if they’re fooled by a fence, then maybe I could take a crack at it.” To human eyes, Rose’s fourth amendment T-shirt contains the words of the fourth amendment to the US constitution in bold yellow letters. The amendment, which protects Americans from “unreasonable searches and seizures”, has been an important defense against many forms of government surveillance: in 2012, for instance, the US supreme court ruled that it prevented police departments from hiding GPS trackers on cars without a warrant. But to an automatic license plate reader (ALPR) system, the shirt is a collection of license plates, and they will get added to the license plate reader’s database just like any others it sees. The intention is to make deploying that sort of surveillance less effective, more expensive, and harder to use without human oversight, in order to slow down the transition to what Rose calls “visual personally identifying data collection”. “It’s a highly invasive mass surveillance system that invades every part of our lives, collecting thousands of plates a minute. But if it’s able to be fooled by fabric, then maybe we shouldn’t have a system that hangs things of great importance on it,” she said. Rose likens her work to that of other security researchers at DefCon. “If a phone is discovered to have a vulnerability, we don’t throw our phones away. This is like that, disclosing a vulnerability. I was shocked it was so easy, and I would call on people who think these systems are critical to find better ways to do that verification.” Elsewhere at the convention, Droogie, a hacker, described a rather less successful way of testing the cybersecurity of license plates: registering a custom license plate with the California department of motor vehicles that read “NULL”, the code used in a number of common database systems used to represent an empty entry. Unfortunately, rather than giving him the power of administrative invisibility, Droogie experienced almost exactly the opposite outcome, receiving more than $12,000 in driving tickets. Every single speeding ticket for which no valid license plate could be found was assigned to his car. The Los Angeles police department eventually scrapped the tickets but advised the hacker to change his plates, or the same problem would continue to hit him. The anti-ALPR fabric is just the latest example of “adversarial fashion”, albeit the first to be targeted against car trackers. In 2016, the Berlin-based artist and technologist Adam Harvey worked with international interaction studio Hyphen-Labs to produce the Hyperface textile, fabric printed with a seemingly abstract pattern designed to trigger facial recognition systems. 
On Monday, the owners of the King’s Cross development in central London were revealed to be applying facial recognition without consent on any visitor to the 67-acre estate. The UK’s Information Commissioner warned the landowners that such use may not be legal under existing law.
true
true
true
Adversarial Fashion garments are covered in license plates, aimed at bamboozling a device’s databases
2024-10-13 00:00:00
2019-08-14 00:00:00
https://i.guim.co.uk/img…cdeeaf62d27acb7c
article
theguardian.com
The Guardian
null
null
418,613
http://jdegoes.squarespace.com/journal/2009/1/3/dispersed-development.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
19,089,286
https://www.nextplatform.com/2019/02/05/the-era-of-general-purpose-computers-is-ending/
The Era of General Purpose Computers is Ending
Michael Feldman
Moore's Law has underwritten a remarkable period of growth and stability for the computer industry. The doubling of transistor density at a predictable cadence has fueled not only five decades of increased processor performance, but also the rise of the general-purpose computing model. However, according to a pair of researchers at MIT and Aachen University, that's all coming to an end. Neil Thompson, a research scientist at MIT's Computer Science and A.I. Lab and a visiting professor at Harvard, and Svenja Spanuth, a graduate student from RWTH Aachen University, contend what we have been covering here at *The Next Platform* all along: that the disintegration of Moore's Law, along with new applications like deep learning and cryptocurrency mining, is driving the industry away from general-purpose microprocessors and toward a model that favors specialized microprocessors. "The rise of general-purpose computer chips has been remarkable. So, too, could be their fall," they argue. As they point out, general-purpose computing was not always the norm. In the early days of supercomputing, custom-built vector-based architectures from companies like Cray dominated the HPC industry. A version of this still exists today in the vector systems built by NEC. But thanks to the speed at which Moore's Law has improved the price-performance of transistors over the last few decades, the economic forces have greatly favored general-purpose processors. That's mainly because the cost of developing and manufacturing a custom chip is between $30 million and $80 million. So even for users demanding high-performance microprocessors, the benefit of adopting a specialized architecture quickly dissipates as the shrinking transistors in general-purpose chips erase any initial performance gains afforded by customized solutions. Meanwhile, the costs incurred by transistor shrinking can be amortized across millions of processors. But the computational economics enabled by Moore's Law is now changing. In recent years, shrinking transistors has become much more expensive as the physical limitations of the underlying semiconductor material begin to assert themselves. The authors point out that in the past 25 years, the cost to build a leading-edge fab has risen 11 percent per year. In 2017, the Semiconductor Industry Association estimated that it costs about $7 billion to construct a new fab. Not only does that drive up the fixed costs for chipmakers, it has reduced the number of semiconductor manufacturers from 25, in 2002, to just four today: Intel, Taiwan Semiconductor Manufacturing Company (TSMC), Samsung, and GlobalFoundries. The team also highlights a report by the US Bureau of Labor Statistics (BLS) that attempts to quantify microprocessor performance-per-dollar. By this metric, the BLS determined that improvements have dropped from 48 percent annually in 2000-2004, to 29 percent annually in 2004-2008, to 8 percent annually in 2008-2013. All this has fundamentally changed the cost/benefit of shrinking transistors. As the authors note, for the first time in its history, Intel's fixed costs have exceeded its variable costs due to the escalating expense of building and operating new fabs. Even more disconcerting is the fact that companies like Samsung and Qualcomm now believe that the cost of transistors manufactured on the latest process nodes is now increasing, further discouraging the pursuit of smaller geometries. Such thinking was likely behind GlobalFoundries's recent decision to jettison its plans for its 7nm technology.
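To put those BLS figures in perspective, here is a quick back-of-the-envelope calculation (an illustration added here, not taken from the paper): at a 48 percent annual gain, price-performance doubles in well under two years, while at 8 percent it takes roughly nine.

```
// Rough illustration (not from the paper): how long it takes microprocessor
// price-performance to double at the annual improvement rates the BLS reports.
function yearsToDouble(annualImprovement) {
  // (1 + r)^t = 2  =>  t = ln(2) / ln(1 + r)
  return Math.log(2) / Math.log(1 + annualImprovement);
}

for (const [period, rate] of [
  ['2000-2004', 0.48],
  ['2004-2008', 0.29],
  ['2008-2013', 0.08],
]) {
  console.log(`${period}: ~${yearsToDouble(rate).toFixed(1)} years to double`);
}
// Prints roughly 1.8, 2.7 and 9.0 years, respectively.
```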
It's not just a deteriorating Moore's Law. The other driver toward specialized processors is a new set of applications that are not amenable to general-purpose computing. For starters, you have platforms like mobile devices and the internet of things (IoT) that are so demanding with regard to energy efficiency and cost, and are deployed in such large volumes, that they necessitated customized chips even with a relatively robust Moore's Law in place. Lower-volume applications with even more stringent requirements, such as military and aviation hardware, are also conducive to special-purpose designs. But the authors believe the real watershed moment for the industry is being enabled by deep learning, an application category that cuts across nearly every computing environment – mobile, desktop, embedded, cloud, and supercomputing. Deep learning and its preferred hardware platform, GPUs, represent the most visible example of how computing may travel down the path from general-purpose to specialized processors. GPUs, which can be viewed as a semi-specialized computing architecture, have become the de facto platform for training deep neural networks thanks to their ability to do data-parallel processing much more efficiently than CPUs. The authors point out that although GPUs are also being exploited to accelerate scientific and engineering applications, it's deep learning that will be the high-volume application that makes further specialization possible. Of course, it didn't hurt that GPUs already had a high-volume business in desktop gaming, the application for which they were originally designed. But for deep learning, GPUs may only be the gateway drug. There are already AI and deep learning chips in the pipeline from Intel, Fujitsu, and more than a dozen startups. Google's own Tensor Processing Unit (TPU), which was purpose-built to train and use neural networks, is now in its third iteration. "Creating a customized processor was very costly for Google, with experts estimating the fixed cost as tens of millions of dollars," write the authors. "And yet, the benefits were also great – they claim that their performance gain was equivalent to seven years of Moore's Law – and that the avoided infrastructure costs made it worth it." Thompson and Spanuth also note that specialized processors are increasingly being used in supercomputing. They point to the November 2018 TOP500 rankings, which showed that for the first time specialized processors (mainly Nvidia GPUs) rather than CPUs were responsible for the majority of added performance. The authors also performed a regression analysis on the list to show that supercomputers with specialized processors are "improving the number of calculations that they can perform per watt almost five times as fast as those that only use universal processors, and that this result is highly statistically significant." Thompson and Spanuth offer a mathematical model for determining the cost/benefit of specialization, taking into account the fixed cost of developing custom chips, the chip volume, the speedup delivered by the custom implementation, and the rate of processor improvement. Since the latter is tied to Moore's Law, its slowing pace means that it's getting easier to rationalize specialized chips, even if the expected speedups are relatively modest. "Thus, for many (but not all) applications it will now be economically viable to get specialized processors – at least in terms of hardware," claim the authors.
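The sketch below illustrates the general shape of that break-even reasoning. It is only a sketch: the function name, parameters, and numbers are assumptions chosen for illustration, not the authors' actual model.

```
// Sketch of the break-even logic described above (illustrative only; the
// parameter names and values are assumptions, not the authors' model).
// A specialized chip is worth building when its speedup still beats the gains
// a general-purpose chip will pick up "for free" over the same window, and
// when the fixed development cost can be spread over enough units.
function specializationWorthIt({
  fixedCostUSD,        // non-recurring cost: design, masks, software re-targeting
  volume,              // number of processors the application will consume
  speedup,             // e.g. 2 means 2x faster than the general-purpose chip today
  generalImprovement,  // annual price-performance gain of general-purpose chips
  years,               // how long the specialized design must stay competitive
}) {
  // After `years` of improvement, the general-purpose chip has closed this
  // much of the gap on its own.
  const generalGain = Math.pow(1 + generalImprovement, years);
  const effectiveSpeedup = speedup / generalGain;
  const fixedCostPerUnit = fixedCostUSD / volume;
  return { effectiveSpeedup, fixedCostPerUnit, viable: effectiveSpeedup > 1 };
}

// With general-purpose chips improving only 8% a year, even a modest 2x
// speedup is still ahead at the end of a five-year window.
console.log(specializationWorthIt({
  fixedCostUSD: 50e6,
  volume: 100000,
  speedup: 2,
  generalImprovement: 0.08,
  years: 5,
}));
```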
"Another way of seeing this is to consider that during the 2000-2004 period, an application with a market size of ~83,000 processors would have required that specialization provide a 100x speed-up to be worthwhile. In 2008-2013 such a processor would only need a 2x speedup." Thompson and Spanuth also incorporated the additional expense of re-targeting application software for specialized processors, which they pegged at $11 per line of code. This complicates the model somewhat, since you have to take into account the size of the code base, which is not always easy to track down. Here, they also make the point that once code re-development is complete, it tends to inhibit the movement of the code base back to general-purpose platforms. The bottom line is that the slow demise of Moore's Law is unraveling what used to be a virtuous cycle of innovation, market expansion, and re-investment. As more specialized chips start to siphon off slices of the computer industry, this cycle becomes fragmented. As fewer users adopt the latest manufacturing nodes, financing the fabs becomes harder, slowing further technology advances. This has the effect of fragmenting the computer industry into specialized domains. Some of these domains, like deep learning, will be in the fast lane, by virtue of their size and their suitability for specialized hardware. However, areas like database processing, while widely used, may become a backwater of sorts, since this type of transactional computation does not lend itself to specialized chips, say the authors. Still other areas, like climate modeling, are too small to warrant their own customized hardware, although they could benefit from it. The authors anticipate that cloud computing will, to some extent, blunt the effect of these disparities by offering a variety of infrastructure for smaller and less catered-for communities. The growing availability of more specialized cloud resources like GPUs, FPGAs, and, in the case of Google, TPUs, suggests that the haves and have-nots may be able to operate on a more even playing field. None of this means CPUs or even GPUs are doomed. Although the authors didn't delve into this aspect, it's quite possible that specialized, semi-specialized, and general-purpose compute engines will be integrated on the same chip or processor package. Some chipmakers are already pursuing this path. Nvidia, for example, incorporated Tensor Cores, its own specialized circuitry for deep learning, in its Volta-generation GPUs. By doing so, Nvidia was able to offer a platform that served both traditional supercomputing simulations and deep learning applications. Likewise, CPUs are being integrated with specialized logic blocks for things like encryption/decryption, graphics acceleration, signal processing, and, of course, deep learning. Expect this trend to continue. The complete paper from Thompson and Spanuth is definitely worth a read. You can download it for free here. Nvidia's RTX Turing line of GPUs employs trained Tensor-core-hosted AI to denoise the noisy output of Turing's limited ray tracing cores, along with trained-AI-based upscaling, which Nvidia calls DLSS (Deep Learning Super Sampling), in place of traditional upscaling methods.
So even though RTX Turing's dedicated ray tracing cores can complete 10 billion ray paths per second (RTX 2080 Ti), the average frame times between 30 frames per second (FPS) and 60 FPS are measured in milliseconds: 33.33 ms at 30 FPS, down to 16.67 ms at 60 FPS, and below 16.67 ms for frame rates above 60 FPS. So suddenly there are still not sufficient numbers of rays traced within those limited frame times to generate anything but a grainy ray-traced image that must be denoised by a Tensor-core-hosted AI pass before all of that is mixed in with the traditional raster output. Nvidia employs a hybrid raster/ray tracing approach that's helped along by the Tensor-core-hosted AI trained to denoise an image created with a rather limited number of rays per millisecond of available frame time. And that includes, if desired, any sort of ambient occlusion, reflection, refraction and shadow/lighting passes that can also make use of rays instead of traditional raster methods such as shadow mapping. So each one of those passes needs ray tracing core computations, and there is insufficient ray-generation capacity in gaming workloads to produce anything close to a clear image without AI-based denoising. So most GPUs will eventually be getting dedicated Tensor cores, especially console CPU/APU processors, where upscaling is used more heavily due to the limited power of the integrated graphics in consoles. There are already hints of MCMs with chiplets that will come as GPUs and other specialized die/chiplet processors instead of only CPU dies/chiplets. Look at your average cell phone: those devices have already been using DSPs and neural processors (Tensor cores) in addition to the usual CPU cores and integrated GPUs. Tensor cores will be adopted on most consumer processor systems because of the ability to offload training tasks to giant computing/AI clusters and then host whatever trained AIs are needed on mobile devices, in order to offload more and varied tasks from the power-hungry general-purpose CPU cores. That's how Apple gets within its power budget on its smartphones and tablets, and ditto for Qualcomm, Samsung and others. That era of general-purpose computing has been over for some years now on mobile devices. And really that process started once GPUs became more commonplace and were integrated along with CPU cores in APUs and similar offerings, or even hosted on servers and used for non-gaming computational workloads. Necessity has dictated the use of specialized processors on smartphones and tablets in order to perform ever more complex computing tasks more efficiently and within the limited power budgets and thermal constraints of mobile devices. And the same is going to be true for exascale computing, where general-purpose processors use too much power, in the megawatts range, and GPUs have already been of great help in petascale computing on the way to exascale by delivering more GFlops per watt than any general-purpose processors (CPUs) are capable of. All of that is fine, but it doesn't mean this "law" that Moore's name is used for holds true, or will become true again in the future. Why is everyone ignoring the actual definition of the law? If some law of thermodynamics were shown not to hold true anymore, then would everyone ignore it – keep using the name – and then write about how it doesn't matter because it is basically going to become true again in the future? No… That's insane.
As of 2018, there are possibly more than just four semiconductor manufacturers with 14 – 16nm production capabilities. According to Wikipedia, there are also Toshiba (Fab 5) and United Microelectronics (Fab 12a), which would make it six.
true
true
true
Moore’s Law has underwritten a remarkable period of growth and stability for the computer industry. The doubling of transistor density at a predictable
2024-10-13 00:00:00
2019-02-05 00:00:00
https://www.nextplatform…chip_sd77w78.jpg
article
nextplatform.com
The Next Platform
null
null
12,151,785
https://www.change.org/p/larry-ellison-tell-oracle-to-move-forward-java-ee-as-a-critical-part-of-the-global-it-industry
3,957 people signed and won this petition
Reza Rahman
# Tell Oracle to Move Forward Java EE as a Critical Part of the Global IT Industry

## Why this petition matters

This petition was created by the Java EE Guardians. We are a group of people and organizations very concerned about Oracle's current lack of commitment to Java EE. We are doing everything we can to preserve the interests of the Java EE community and the global IT industry. We believe that working together – *including Oracle* – we can ensure a very bright future for Java, Java EE and server-side computing. To make any of this possible we urgently need your support. Please help us by signing this petition. Every voice counts.

**Java EE is incredibly important to the long-term health of the entire Java ecosystem.** This is because of the basic fact that Java on the server will remain mission critical to global IT in the foreseeable future.

- Hundreds of thousands of applications worldwide are written in Java EE, and many of those applications are regularly being brought to light. Even applications and frameworks that claim they do not use Java EE are in fact heavily dependent on many Java EE APIs today and going forward, regardless of trends like cloud or microservices. Just some of these APIs include Servlet, JAX-RS, WebSocket, JMS, JPA, JSF and so much more.
- There were no fewer than 4,500 input points to the groundbreaking, unprecedented survey to determine Java EE 8 features.
- In major survey after survey, developers continue to show their strong support for Java EE and its APIs.
- Java EE vendors and products are some of the most enviably profitable in our industry, certainly including Oracle and WebLogic.
- Few multi-vendor open standards are as widely implemented, supported, depended upon or as widely participated in as Java EE. Indeed there are no practical alternatives to Java EE as an open-standards-based platform.
- There is an extremely passionate, responsible community behind Java EE – most technologies would be hard pressed to find anything like the Java EE community. The Java EE Guardians is a testament to this fact.

**There is growing evidence that Oracle is conspicuously neglecting Java EE, weakening a very broad ecosystem that depends on strong Java EE development. Almost all work from Oracle on Java EE has ceased for more than six months, with no end to the inactivity in sight. Unless things change soon, Java EE 8 won't be delivered anywhere near the time initially promised, if it is delivered at all.** It is very difficult to determine why this neglect from Oracle is occurring or how long it will last. **Oracle has not shared its motivations even with its closest commercial partners, let alone the community.** A very troubling possibility is that it is being done because Oracle is backing away from an open-standards-based collaborative development approach and is instead pursuing a highly proprietary, unilateral path.

**There is a lot the community is doing together to try to tackle this problem the best we can.**

- We are continuing to enthusiastically evangelize Java EE, including Java EE 8.
- We are strongly supporting active Java EE 8 JSRs like CDI 2 led by companies like Red Hat.
- We are lobbying Oracle to fulfill its commitments to the Java EE community through all channels available to us. This includes Java EE 8 expert groups as well as the Java Community Process (JCP) Executive Committee (EC).
- We are keeping all Java EE 8 expert group discussions active, in many cases despite lack of activity from Oracle.
- We are moving Java EE 8 reference implementations, TCKs and specification documents ahead through open source, in many cases despite inactive Oracle specification leads. Our biggest challenge in this regard is access to the TCK and getting our work accepted by Oracle specification leads.
- We are exploring whether some inactive Oracle-led JSRs can switch ownership to us or vendors like Red Hat, IBM, Tomitribe or Payara. Our biggest challenge in this regard is persuading Oracle to relinquish control of JSRs they are not delivering on.
- In conjunction with the above, in the interim we will provide the functionality that should be standardized in Java EE through open source. We will work with vendors like Oracle, Red Hat, IBM, Tomitribe and Payara to include these features in their Java EE runtimes out-of-the-box. We will provide these features to vendors completely free of charge, with the clear goal of standardization as quickly as possible via the JCP.

**As committed as we are, we still need Oracle to cooperate with us as a responsible, community-focused steward to move Java EE forward.** Persuading Oracle to adapt to the legitimate interests of people outside of itself – even its own customers – has proven challenging in the past. In all likelihood it may not be easy this time either, though there must always remain plentiful room for reasoned optimism. **That is why your voice is so very important.** Please join us in signing this petition to ask Oracle to:

- Clarify how it intends to preserve the best interests of the Java, Java EE and server-side computing ecosystems.
- Commit to delivering Java EE 8 in time with a reasonable feature set that satisfies the needs of the community and the industry.
- Effectively cooperate with the community and other vendors to either accept contributions or transfer ownership of Java EE 8 work.

After signing the petition please join us at javaee-guardians.io. The Java EE Guardians include many technical luminaries, journalists, Java Champions, JCP experts, JUG leaders and Java developers including Dr. James Gosling, Cameron McKenzie, Arjan Tijms, Bauke Scholtz, Werner Keil, Reza Rahman and Kito Mann. The Java EE Guardians include many Java User Groups and companies around the world including Connecticut JUG, Istanbul JUG, the Japan JUG, Columbus, Ohio JUG, Peru JUG, Madras JUG, India, Esprit Tunisian JUG, Pakistan JUG and Bulgarian JUG.

### Victory

This petition made change with 3,957 supporters!

## Decision Makers

- Larry Ellison, Executive Chairman and CTO, Oracle
- Safra Catz, CEO, Oracle
- Mark Hurd, CEO, Oracle
- Thomas Kurian, President, Product Development, Oracle
- Inderjeet Singh, Executive Vice President, Fusion Middleware Development, Oracle
true
true
true
Tell Oracle to Move Forward Java EE as a Critical Part of the Global IT Industry
2024-10-13 00:00:00
2016-06-09 00:00:00
https://assets.change.or…d.jpg?1528795865
change-org:petition
change.org
Change.org
null
null
36,832,701
https://bramcohen.com/p/why-cant-people-agree
Why can't people agree?
Bram Cohen
# Why can't people agree? ### It's mostly because people suck, but there's some altruism in there as well According to Aumann’s Agreement Theorem people should never ‘agree to disagree’, they should always be able to come to consensus. So why don’t they? There are several big reasons for this, some having to do with people sucking and some with them being good. The first reason really sucks: People lie. Often people want to win an argument either because they have some ulterior motive or because they just like winning arguments. Against a completely credulous opponent they can do this by lying or bullshitting about the strength of the evidence in their favor. The subtle but important distinction between a lie and bullshit is that in a lie the person saying it knows it to be untrue while with bullshit the person saying it doesn’t know if it’s true and often doesn’t care. With the most extreme bullshit reasonable bayesian priors would suggest that it’s overwhelmingly likely to be untrue, and hence probably should qualify as a lie, but people who are bullshitters often don’t understand that concept. Maybe lying and bullshit aren’t that different. Ulterior motives are easy to come by. For example, one can say that if you give me money good things will magically happen to you. It’s easy to gain overwhelming evidence for this via simple selection bias, remembering people who it worked out for and forgetting ones who it didn’t. You may laugh at this as being rhetorical, but prosperity gospel is a whole industry. (I’m probably showing a form of bias here myself by assuming that other people are minimally competent at data science fraud. If you want to commit fraud the hard-to-detect way of doing it is to throw out some of the data you’ve collected, that way all the data you present looks legit because it’s literally real data. Instead even Harvard professors do their fraud by the data science equivalent of scribbling on a map with a sharpie.) The second reason for a breakdown of agreement has to do with who’s an authority. One common misidentification of an authority figure is the first person singular, that is to say, people think their opinions deserve the benefit of the doubt and the onus is on others to provide evidence debunking them. They will even accuse others of engaging in a ‘logical fallacy’ and being ‘dismissive’ for, well, being dismissive. They are of course the ones engaging in a logical fallacy themselves. People bullshit enough that bayesian priors should suggest that anything strange someone suggests without justification is probably bullshit. It’s important to point out that appeals to authority are not a logical fallacy. For most matters of science, engineering, politics, and history only a tiny fraction of the general population have much hope of contributing meaningfully to them and everyone else is far better off identifying who appropriate authorities on a subject are and deferring to them. You might notice that the people I’m arguing with here like calling things logical fallacies. One even might say that these overly detailed breakdowns I like doing are mostly explaining things most people find obvious for the benefit of autistics and are only really useful for the very small number of people who simultaneously struggle to understand the world, but very good at thinking about things in the abstract and are earnestly interested in explanations. There might be a lot to that. 
The other big common misidentification of authority is in the form of deciding people to entrust based on prejudice and superstition. Deep in the human mind there’s programming to be part of a tribe and view the tribal leader as the great authority on all things. The people designated by these means tend to be (but are no means exclusively) tall, charismatic, old, male, and have a penchant for righteous ranting. Religion obviously exploits this, but so do the mostly fake management consulting and CEO industries. Who should be viewed as trustworthy is a fraught and difficult subject, but you can do better than most people by using reasoning based on evidence instead of your gut instincts, particularly looking out for the biases mentioned above. Yes I know righteous ranting is fun, sorry to be a downer. There are whole industries based off exploiting your poor instincts, and while your primal judgement is good at many things, picking out trustworthy people isn’t one of them. On a more positive note, some disagreements are due to subject matter experts sacrificing their own accuracy for the greater good. Received wisdom needs to come from somewhere, and prior to its existence experts can individually give their opinions but can only come to general consensus by comparing with other experts, arguing their positions, and weighing the opinions of others. Even after conventional wisdom is achieved it still needs to be questioned and reevaluated, especially as new evidence comes to light. This results in experts actually being more willing to break with the conventional wisdom, both because they’re more likely to have coherent reason to do so but also because they’re engaged in a general process of making the conventional wisdom more accurate. Interestingly experts are often unaware of the reasons for the convention of not simply giving the conventional wisdom, and out of arrogance or habit will explain current internal debates of their field to outsiders instead of switching modes to giving the conventional wisdom. This gives outsiders a lot of confusing and generally irrelevant information and gives them the impression that there’s a lot more internal strife in serious fields than there actually is. If you are an expert and are asked to explain something to the general public you should give a trailing view of the conventional wisdom and maybe hint at current discussions but say they’re speculative. Please note that watching a lot of youtube videos on a subject does not make you an expert. Also note that disagreeing with the conventional wisdom is not evidence of your expertise. Very few people are experts on any given thing, and everyone is an expert on only a few things. Expertise at the level of it being reasonable to challenge the conventional wisdom is a very high bar. Finally people sometimes disagree with the conventional wisdom because of the emperor’s new clothes phenomenon. At some point the conventional wisdom is so obviously wrong that you should question it even if you aren’t a subject matter expert. This is a very high bar, but does sometimes happen. The big example from my experience was reading Freud, who I vaguely accepted as the big authority figure in psychology even though his theories sounded slightly loopy until I read his actual writings and realized they’re utterly deranged. Unsurprisingly Freud has been thoroughly discredited since. Thankfully with ever greater communications technology instances of whole fields being out to lunch are decreasing in frequency. 
More common now is there being multiple 'schools' of a field, where at least one of them has some real credibility. Unfortunately it isn't always the largest school. I think Aumann's assumptions are unrealistic. From a Bayesian point of view, you hold almost all of your beliefs because of a large number of different experiences whose details you have forgotten. You do not have a set of "common knowledge parameters", similar in size to the number of world states, shared across all participants. I disagree with the inclusion of "politics"; it has reached peak derangement and the leadership shows it. Not discussing the system angle itself but just the individuals: I don't say everything was better before, but we have truly given birth to some clownish people on both sides of the aisle. At that point an honest plumber could do a better job, and I don't think that's factually wrong. To give these people the benefit of an expert is a bridge too far for me. Just my five cents.
true
true
true
It's mostly because people suck, but there's some altruism in there as well
2024-10-13 00:00:00
2023-07-22 00:00:00
https://substackcdn.com/image/fetch/f_auto,q_auto:best,fl_progressive:steep/https%3A%2F%2Fbramcohen.substack.com%2Ftwitter%2Fsubscribe-card.jpg%3Fv%3D2020193030%26version%3D9
article
bramcohen.com
Bram’s Thoughts
null
null
11,581,590
http://labs.earthpeople.se/2016/04/controlling-ssh-access-with-github-organizations/
Controlling ssh access with GitHub organizations
null
Ok, I'm coming clean. Controlling access to our various servers has been a mess. Sure, we've stored passwords in a safe way (1Password for teams ftw!) but what happens if someone leaves the company or a root password somehow gets out… Well, we did not have a plan for such a thing. Sure, setting up ssh keys is easy, but we never got around to it. We manage more than a handful of servers, and making sure the authorized_keys on all of these boxes stays up to date just felt unmanageable. This changed today when I got the idea to make use of the GitHub feature which exposes each user's public keys. I wrote a little script that fetches all users within our GitHub organization, pulls down their public keys and updates the ~/.ssh/authorized_keys file nightly with a cron job.

```
<?php
// Fetch every member of the GitHub organisation and install their public
// SSH keys into root's authorized_keys. Intended to run nightly from cron.
$ch = curl_init('https://api.github.com/orgs/EarthPeople/members');
curl_setopt($ch, CURLOPT_USERPWD, 'XXX:XXX');
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
curl_setopt($ch, CURLOPT_USERAGENT, 'EpBot');
$users = json_decode(curl_exec($ch));
curl_close($ch);

if (!is_array($users)) {
    die("error, could not fetch the organisation members\n");
}

echo 'found ' . count($users) . " users in organisation\n";

$keys = array();
foreach ($users as $params) {
    echo 'user: ' . $params->login . "\n";
    // GitHub exposes each user's public keys at https://github.com/<username>.keys
    $keys[] = file_get_contents('https://github.com/' . $params->login . '.keys');
}

echo 'found ' . count($keys) . " keys\n";

if (count($users) === count($keys)) {
    file_put_contents('/root/.ssh/authorized_keys', implode("\n", $keys));
    echo "imported keys to ~/.ssh/authorized_keys\n";
} else {
    echo "error, could not fetch keys for all users\n";
}
```

Yes, this is PHP, but when all you've got is a hammer – everything looks like a nail. This needs some error handling too, but I thought I'd share it anyway. /Peder
true
true
true
null
2024-10-13 00:00:00
2016-04-27 00:00:00
null
null
earthpeople.se
labs.earthpeople.se
null
null
9,143,119
https://github.com/pgte/nock
GitHub - nock/nock: HTTP server mocking and expectations library for Node.js
Nock
**Notice:** We have introduced experimental support for fetch. Please share your feedback with us. You can install it by: `npm install --save-dev nock@beta`

HTTP server mocking and expectations library for Node.js

Nock can be used to test modules that perform HTTP requests in isolation. For instance, if a module performs HTTP requests to a CouchDB server or makes HTTP requests to the Amazon API, you can test that module in isolation.

**Table of Contents**

- How does it work?
- Install
- Usage
- READ THIS! - About interceptors
- Specifying hostname
- Specifying path
- Specifying request body
- Specifying request query string
- Specifying replies
- Specifying headers
- HTTP Verbs
- Support for HTTP and HTTPS
- Non-standard ports
- Repeat response n times
- Delay the response
- Chaining
- Scope filtering
- Conditional scope filtering
- Path filtering
- Request Body filtering
- Request Headers Matching
- Optional Requests
- Allow **unmocked** requests on a mocked hostname
- Expectations
- Restoring
- Activating
- Turning Nock Off (experimental!)
- Enable/Disable real HTTP requests
- Recording
- Events
- Nock Back
- Common issues
- Debugging
- Contributing
- Contributors
- Sponsors
- License

Nock works by overriding Node's `http.request` function. It also overrides `http.ClientRequest` to cover modules that use it directly.

`$ npm install --save-dev nock`

The latest version of nock supports all currently maintained Node versions; see the Node Release Schedule.

Here is a list of past nock versions with their respective node version support:

| node | nock |
| --- | --- |
| 0.10 | up to 8.x |
| 0.11 | up to 8.x |
| 0.12 | up to 8.x |
| 4 | up to 9.x |
| 5 | up to 8.x |
| 6 | up to 10.x |
| 7 | up to 9.x |
| 8 | up to 11.x |
| 9 | up to 9.x |

In your tests, you can set up your mocking object like this:

```
const nock = require('nock')

const scope = nock('https://api.github.com')
  .get('/repos/atom/atom/license')
  .reply(200, {
    license: {
      key: 'mit',
      name: 'MIT License',
      spdx_id: 'MIT',
      url: 'https://api.github.com/licenses/mit',
      node_id: 'MDc6TGljZW5zZTEz',
    },
  })
```

This setup says that we will intercept every HTTP call to `https://api.github.com`. It will intercept an HTTPS GET request to `/repos/atom/atom/license`, reply with a status 200, and the body will contain a (partial) response in JSON.

When you set up an interceptor for a URL and that interceptor is used, it is removed from the interceptor list. This means that you can intercept 2 or more calls to the same URL and return different things on each of them. It also means that you must set up one interceptor for each request you are going to have, otherwise nock will throw an error because that URL was not present in the interceptor list. If you don't want interceptors to be removed as they are used, you can use the .persist() method.

The request hostname can be a string, URL, or a RegExp.

```
const scope = nock('http://www.example.com')
  .get('/resource')
  .reply(200, 'domain matched')
```

```
const scope = nock(new URL('http://www.example.com'))
  .get('/resource')
  .reply(200, 'domain matched')
```

```
const scope = nock(/example\.com/)
  .get('/resource')
  .reply(200, 'domain regex matched')
```

Note: You can choose whether or not to include the protocol in the hostname matching.

The request path can be a string, a RegExp or a filter function, and you can use any HTTP verb.
Using a string: ``` const scope = nock('http://www.example.com') .get('/resource') .reply(200, 'path matched') ``` Using a regular expression: ``` const scope = nock('http://www.example.com') .get(/source$/) .reply(200, 'path using regex matched') ``` Using a function: ``` const scope = nock('http://www.example.com') .get(uri => uri.includes('cats')) .reply(200, 'path using function matched') ``` You can specify the request body to be matched as the second argument to the `get` , `post` , `put` or `delete` specifications. There are five types of second argument allowed: **String**: nock will exact match the stringified request body with the provided string ``` nock('http://www.example.com') .post('/login', 'username=pgte&password=123456') .reply(200, { id: '123ABC' }) ``` **Buffer**: nock will exact match the stringified request body with the provided buffer ``` nock('http://www.example.com') .post('/login', Buffer.from([0xff, 0x11])) .reply(200, { id: '123ABC' }) ``` **RegExp**: nock will test the stringified request body against the provided RegExp ``` nock('http://www.example.com') .post('/login', /username=\w+/gi) .reply(200, { id: '123ABC' }) ``` **JSON object**: nock will exact match the request body with the provided object. In order to increase flexibility, nock also supports RegExp as an attribute value for the keys: ``` nock('http://www.example.com') .post('/login', { username: 'pgte', password: /.+/i }) .reply(200, { id: '123ABC' }) ``` **Function**: nock will evaluate the function providing the request body object as first argument. Return true if it should be considered a match: ``` nock('http://www.example.com') .post('/login', body => body.username && body.password) .reply(200, { id: '123ABC' }) ``` In case you need to perform a partial matching on a complex, nested request body you should have a look at libraries like lodash.matches. Indeed, partial matching can be achieved as: ``` nock('http://www.example.com') .post('/user', _.matches({ address: { country: 'US' } })) .reply(200, { id: '123ABC' }) ``` Nock understands query strings. Search parameters can be included as part of the path: `nock('http://example.com').get('/users?foo=bar').reply(200)` Instead of placing the entire URL, you can specify the query part as an object: ``` nock('http://example.com') .get('/users') .query({ name: 'pedro', surname: 'teixeira' }) .reply(200, { results: [{ id: 'pgte' }] }) ``` Nock supports array-style/object-style query parameters. The encoding format matches with request module. ``` nock('http://example.com') .get('/users') .query({ names: ['alice', 'bob'], tags: { alice: ['admin', 'tester'], bob: ['tester'], }, }) .reply(200, { results: [{ id: 'pgte' }] }) ``` A `URLSearchParams` instance can be provided. ``` const params = new URLSearchParams({ foo: 'bar' }) nock('http://example.com').get('/').query(params).reply(200) ``` Nock supports passing a function to query. The function determines if the actual query matches or not. ``` nock('http://example.com') .get('/users') .query(actualQueryObject => { // do some compare with the actual Query Object // return true for matched // return false for not matched return true }) .reply(200, { results: [{ id: 'pgte' }] }) ``` To mock the entire url regardless of the passed query string: ``` nock('http://example.com') .get('/users') .query(true) .reply(200, { results: [{ id: 'pgte' }] }) ``` A query string that is already URL encoded can be matched by passing the `encodedQueryParams` flag in the options when creating the Scope. 
``` nock('http://example.com', { encodedQueryParams: true }) .get('/users') .query('foo%5Bbar%5D%3Dhello%20world%21') .reply(200, { results: [{ id: 'pgte' }] }) ``` You can specify the return status code for a path on the first argument of reply like this: `const scope = nock('http://myapp.iriscouch.com').get('/users/1').reply(404)` You can also specify the reply body as a string: ``` const scope = nock('http://www.google.com') .get('/') .reply(200, 'Hello from Google!') ``` or as a JSON-encoded object: ``` const scope = nock('http://myapp.iriscouch.com').get('/').reply(200, { username: 'pgte', email: 'pedro.teixeira@gmail.com', _id: '4324243fsd', }) ``` or even as a file: ``` const scope = nock('http://myapp.iriscouch.com') .get('/') .replyWithFile(200, __dirname + '/replies/user.json', { 'Content-Type': 'application/json', }) ``` Instead of an object or a buffer you can also pass in a callback to be evaluated for the value of the response body: ``` const scope = nock('http://www.google.com') .post('/echo') .reply(201, (uri, requestBody) => requestBody) ``` In Nock 11.x it was possible to invoke `.reply()` with a status code and a function that returns an array containing a status code and body. (The status code from the array would take precedence over the one passed directly to reply.) This is no longer allowed. In Nock 12 and later, either call `.reply()` with a status code and a function that returns the body, or call it with a single argument: a function that returns an array containing both the status code and body. An asynchronous function that gets an error-first callback as its last argument also works: ``` const scope = nock('http://www.google.com') .post('/echo') .reply(201, (uri, requestBody, cb) => { fs.readFile('cat-poems.txt', cb) // Error-first callback }) ``` In Nock 11 and later, if an error is passed to the callback, Nock will rethrow it as a programmer error. In Nock 10 and earlier, the error was sent in the response body, with a 500 HTTP response status code. You can also return the status code and body using just one function: ``` const scope = nock('http://www.google.com') .post('/echo') .reply((uri, requestBody) => { return [ 201, 'THIS IS THE REPLY BODY', { header: 'value' }, // optional headers ] }) ``` or, use an error-first callback that also gets the status code: ``` const scope = nock('http://www.google.com') .post('/echo') .reply((uri, requestBody, cb) => { setTimeout(() => cb(null, [201, 'THIS IS THE REPLY BODY']), 1000) }) ``` A Stream works too: ``` const scope = nock('http://www.google.com') .get('/cat-poems') .reply(200, (uri, requestBody) => { return fs.createReadStream('cat-poems.txt') }) ``` If you're using the reply callback style, you can access the original client request using `this.req` like this: ``` const scope = nock('http://www.google.com') .get('/cat-poems') .reply(function (uri, requestBody) { console.log('path:', this.req.path) console.log('headers:', this.req.headers) // ... }) ``` Note: Remember to use normal `function` in that case, as arrow functions are using enclosing scope for`this` binding. You can reply with an error like this: ``` nock('http://www.google.com') .get('/cat-poems') .replyWithError('something awful happened') ``` JSON error responses are allowed too: ``` nock('http://www.google.com').get('/cat-poems').replyWithError({ message: 'something awful happened', code: 'AWFUL_ERROR', }) ``` Note: This will emit an `error` event on the`request` object, not the reply. 
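To make that concrete, here is a minimal sketch (using Node's built-in `http` module with the host and path from the example above) showing where the mocked error surfaces on the client side:

```
const nock = require('nock')
const http = require('http')

nock('http://www.google.com')
  .get('/cat-poems')
  .replyWithError({ message: 'something awful happened', code: 'AWFUL_ERROR' })

const req = http.get('http://www.google.com/cat-poems')
req.on('error', err => {
  // The mocked error is emitted on the request object, not delivered as a response.
  console.log(err.message) // 'something awful happened'
  console.log(err.code) // 'AWFUL_ERROR'
})
```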
Per HTTP/1.1 4.2 Message Headers specification, all message headers are case insensitive and thus internally Nock uses lower-case for all field names even if some other combination of cases was specified either in mocking specification or in mocked requests themselves. You can specify the request headers like this: ``` const scope = nock('http://www.example.com', { reqheaders: { authorization: 'Basic Auth', }, }) .get('/') .reply(200) ``` Or you can use a regular expression or function to check the header values. The function will be passed the header value. ``` const scope = nock('http://www.example.com', { reqheaders: { 'X-My-Headers': headerValue => headerValue.includes('cats'), 'X-My-Awesome-Header': /Awesome/i, }, }) .get('/') .reply(200) ``` If `reqheaders` is not specified or if `host` is not part of it, Nock will automatically add `host` value to request header. If no request headers are specified for mocking then Nock will automatically skip matching of request headers. Since the `host` header is a special case which may get automatically inserted by Nock, its matching is skipped unless it was *also* specified in the request being mocked. You can also have Nock fail the request if certain headers are present: ``` const scope = nock('http://www.example.com', { badheaders: ['cookie', 'x-forwarded-for'], }) .get('/') .reply(200) ``` When invoked with this option, Nock will not match the request if any of the `badheaders` are present. Basic authentication can be specified as follows: ``` const scope = nock('http://www.example.com') .get('/') .basicAuth({ user: 'john', pass: 'doe' }) .reply(200) ``` You can specify the reply headers like this: ``` const scope = nock('https://api.github.com') .get('/repos/atom/atom/license') .reply(200, { license: 'MIT' }, { 'X-RateLimit-Remaining': 4999 }) ``` Or you can use a function to generate the headers values. The function will be passed the request, response, and response body (if available). The body will be either a buffer, a stream, or undefined. ``` const scope = nock('http://www.headdy.com') .get('/') .reply(200, 'Hello World!', { 'Content-Length': (req, res, body) => body.length, ETag: () => `${Date.now()}`, }) ``` You can also specify default reply headers for all responses like this: ``` const scope = nock('http://www.headdy.com') .defaultReplyHeaders({ 'X-Powered-By': 'Rails', 'Content-Type': 'application/json', }) .get('/') .reply(200, 'The default headers should come too') ``` Or you can use a function to generate the default headers values: ``` const scope = nock('http://www.headdy.com') .defaultReplyHeaders({ 'Content-Length': (req, res, body) => body.length, }) .get('/') .reply(200, 'The default headers should come too') ``` When using `interceptor.reply()` to set a response body manually, you can have the `Content-Length` header calculated automatically. ``` const scope = nock('http://www.headdy.com') .replyContentLength() .get('/') .reply(200, { hello: 'world' }) ``` **NOTE:** this does not work with streams or other advanced means of specifying the reply body. You can automatically append a `Date` header to your mock reply: ``` const scope = nock('http://www.headdy.com') .replyDate() .get('/') .reply(200, { hello: 'world' }) ``` Or provide your own `Date` object: ``` const scope = nock('http://www.headdy.com') .replyDate(new Date(2015, 0, 1)) .get('/') .reply(200, { hello: 'world' }) ``` Nock supports any HTTP verb, and it has convenience methods for the GET, POST, PUT, HEAD, DELETE, PATCH, OPTIONS and MERGE HTTP verbs. 
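As a quick illustration of the convenience methods, here is a minimal sketch (the host and paths are placeholders) using a few of the verb helpers directly:

```
const nock = require('nock')

const scope = nock('http://www.example.com')
  .head('/resource') // HEAD
  .reply(200)
  .patch('/resource', { status: 'active' }) // PATCH, with a request body to match
  .reply(204)
  .options('/resource') // OPTIONS
  .reply(200)
```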
You can intercept any HTTP verb using `.intercept(path, verb [, requestBody [, options]])`:

```
const scope = nock('http://my.domain.com')
  .intercept('/path', 'PATCH')
  .reply(304)
```

By default nock assumes HTTP. If you need to use HTTPS you can specify the `https://` prefix like this:

```
const scope = nock('https://secure.my.server.com')
// ...
```

You are able to specify a non-standard port like this:

`const scope = nock('http://my.server.com:8081')`

You are able to specify the number of times to repeat the same response.

**NOTE:** If more requests are made than the number of times you specified, you will get an error until the interceptor is cleaned up.

```
nock('http://zombo.com').get('/').times(4).reply(200, 'Ok')

http.get('http://zombo.com/') // respond body "Ok"
http.get('http://zombo.com/') // respond body "Ok"
http.get('http://zombo.com/') // respond body "Ok"
http.get('http://zombo.com/') // respond body "Ok"

// This code will get an error with message:
// Nock: No match for request
http.get('http://zombo.com/')

// clean your interceptor
nock.cleanAll()

http.get('http://zombo.com/') // real respond with zombo.com result
```

Sugar syntax

```
nock('http://zombo.com').get('/').once().reply(200, 'Ok')
nock('http://zombo.com').get('/').twice().reply(200, 'Ok')
nock('http://zombo.com').get('/').thrice().reply(200, 'Ok')
```

To repeat this response for as long as nock is active, use .persist().

Nock can simulate response latency to allow you to test timeouts, race conditions, and other timing-related scenarios.

You are able to specify the number of milliseconds that your reply should be delayed.

```
nock('http://my.server.com')
  .get('/')
  .delay(2000) // a 2 second delay will be applied to the response header.
  .reply(200, '<html></html>')
```

`delay(1000)` is an alias for `delayConnection(1000).delayBody(0)`

`delay({ head: 1000, body: 2000 })` is an alias for `delayConnection(1000).delayBody(2000)`

Both are covered in detail below.

You are able to specify the number of milliseconds that your connection should be idle before it starts to receive the response. To simulate a socket timeout, provide a larger value than the timeout setting on the request.

```
nock('http://my.server.com')
  .get('/')
  .delayConnection(2000) // 2 seconds
  .reply(200, '<html></html>')

req = http.request('http://my.server.com', { timeout: 1000 })
```

Nock emits timeout events almost immediately by comparing the requested connection delay to the timeout parameter passed to `http.request()` or `http.ClientRequest#setTimeout()`. This allows you to test timeouts without using fake timers or slowing down your tests. If the client chooses *not* to take an action (e.g. abort the request), the request and response will continue on as normal, after real clock time has passed.

Following the `'finish'` event being emitted by `ClientRequest`, Nock will wait for the next event loop iteration before checking if the request has been aborted. At this point, any connection delay value is compared against any request timeout setting and a `'timeout'` is emitted when appropriate from the socket and the request objects. A Node timeout timer is then registered with any connection delay value to delay real time before checking again if the request has been aborted and the `'response'` is emitted by the request.

A similar method, `.socketDelay()`, was removed in version 13. It was thought that having two methods so subtly similar was confusing. The discussion can be found at #1974.
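To make the connection-delay behaviour above concrete, here is a minimal sketch (the host name and values are illustrative) where the client listens for the `'timeout'` event and aborts, as a real client under test typically would:

```
const nock = require('nock')
const http = require('http')

nock('http://my.server.com')
  .get('/')
  .delayConnection(2000) // longer than the 1000 ms timeout below
  .reply(200, '<html></html>')

const req = http.request('http://my.server.com', { timeout: 1000 }, res => {
  // Not reached if the request is destroyed on timeout.
})
req.on('timeout', () => {
  // Emitted almost immediately, because the connection delay exceeds the timeout.
  req.destroy(new Error('ETIMEDOUT'))
})
req.on('error', err => {
  console.log(err.message) // 'ETIMEDOUT'
})
req.end()
```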
You are able to specify the number of milliseconds that the response body should be delayed. This is the time between the headers being received and the body starting to be received. ``` nock('http://my.server.com') .get('/') .delayBody(2000) // 2 seconds .reply(200, '<html></html>') ``` Following the `'response'` being emitted by `ClientRequest` , Nock will register a timeout timer with the body delay value to delay real time before the IncomingMessage emits its first `'data'` or the `'end'` event. You can chain behaviour like this: ``` const scope = nock('http://myapp.iriscouch.com') .get('/users/1') .reply(404) .post('/users', { username: 'pgte', email: 'pedro.teixeira@gmail.com', }) .reply(201, { ok: true, id: '123ABC', rev: '946B7D1C', }) .get('/users/123ABC') .reply(200, { _id: '123ABC', _rev: '946B7D1C', username: 'pgte', email: 'pedro.teixeira@gmail.com', }) ``` You can filter the scope (protocol, domain or port) of nock through a function. The filtering function is accepted at the `filteringScope` field of the `options` argument. This can be useful if you have a node module that randomly changes subdomains to which it sends requests, e.g., the Dropbox node module behaves like this. ``` const scope = nock('https://api.dropbox.com', { filteringScope: scope => /^https:\/\/api[0-9]*.dropbox.com/.test(scope), }) .get('/1/metadata/auto/Photos?include_deleted=false&list=true') .reply(200) ``` You can also choose to filter out a scope based on your system environment (or any external factor). The filtering function is accepted at the `conditionally` field of the `options` argument. This can be useful if you only want certain scopes to apply depending on how your tests are executed. ``` const scope = nock('https://api.myservice.com', { conditionally: () => true, }) ``` You can also filter the URLs based on a function. This can be useful, for instance, if you have random or time-dependent data in your URL. You can use a regexp for replacement, just like String.prototype.replace: ``` const scope = nock('http://api.myservice.com') .filteringPath(/password=[^&]*/g, 'password=XXX') .get('/users/1?password=XXX') .reply(200, 'user') ``` Or you can use a function: ``` const scope = nock('http://api.myservice.com') .filteringPath(path => '/ABC') .get('/ABC') .reply(200, 'user') ``` Note that `scope.filteringPath` is not cumulative: it should only be used once per scope. You can also filter the request body based on a function. This can be useful, for instance, if you have random or time-dependent data in your request body. You can use a regexp for replacement, just like String.prototype.replace: ``` const scope = nock('http://api.myservice.com') .filteringRequestBody(/password=[^&]*/g, 'password=XXX') .post('/users/1', 'data=ABC&password=XXX') .reply(201, 'OK') ``` Or you can use a function to transform the body: ``` const scope = nock('http://api.myservice.com') .filteringRequestBody(body => 'ABC') .post('/', 'ABC') .reply(201, 'OK') ``` If you don't want to match the request body you should omit the `body` argument from the method function: ``` const scope = nock('http://api.myservice.com') .post('/some_uri') // no body argument .reply(200, 'OK') ``` If you need to match requests only if certain request headers match, you can. ``` const scope = nock('http://api.myservice.com') // Interceptors created after here will only match when the header `accept` equals `application/json`. 
.matchHeader('accept', 'application/json') .get('/') .reply(200, { data: 'hello world', }) .get('/') // Only this interceptor will match the header value `x-my-action` with `MyFirstAction` .matchHeader('x-my-action', 'MyFirstAction') .reply(200, { data: 'FirstActionResponse', }) .get('/') // Only this interceptor will match the header value `x-my-action` with `MySecondAction` .matchHeader('x-my-action', 'MySecondAction') .reply(200, { data: 'SecondActionResponse', }) ``` You can also use a regexp for the header body. ``` const scope = nock('http://api.myservice.com') .matchHeader('User-Agent', /Mozilla\/.*/) .get('/') .reply(200, { data: 'hello world', }) ``` You can also use a function for the header body. ``` const scope = nock('http://api.myservice.com') .matchHeader('content-length', val => val >= 1000) .get('/') .reply(200, { data: 'hello world', }) ``` By default every mocked request is expected to be made exactly once, and until it is it'll appear in `scope.pendingMocks()` , and `scope.isDone()` will return false (see expectations). In many cases this is fine, but in some (especially cross-test setup code) it's useful to be able to mock a request that may or may not happen. You can do this with `optionally()` . Optional requests are consumed just like normal ones once matched, but they do not appear in `pendingMocks()` , and `isDone()` will return true for scopes with only optional requests pending. ``` const example = nock('http://example.com') example.pendingMocks() // [] example.get('/pathA').reply(200) example.pendingMocks() // ["GET http://example.com:80/path"] // ...After a request to example.com/pathA: example.pendingMocks() // [] example.get('/pathB').optionally().reply(200) example.pendingMocks() // [] // You can also pass a boolean argument to `optionally()`. This // is useful if you want to conditionally make a mocked request // optional. const getMock = optional => example.get('/pathC').optionally(optional).reply(200) getMock(true) example.pendingMocks() // [] getMock(false) example.pendingMocks() // ["GET http://example.com:80/pathC"] ``` If you need some request on the same host name to be mocked and some others to **really** go through the HTTP stack, you can use the `allowUnmocked` option like this: ``` const scope = nock('http://my.existing.service.com', { allowUnmocked: true }) .get('/my/url') .reply(200, 'OK!') // GET /my/url => goes through nock // GET /other/url => actually makes request to the server ``` Note: When applying `{allowUnmocked: true}` , if the request is made to the real server, no interceptor is removed. Every time an HTTP request is performed for a scope that is mocked, Nock expects to find a handler for it. If it doesn't, it will throw an error. Calls to nock() return a scope which you can assert by calling `scope.done()` . This will assert that all specified calls on that scope were performed. Example: ``` const scope = nock('http://google.com') .get('/') .reply(200, 'Hello from Google!') // do some stuff setTimeout(() => { // Will throw an assertion error if meanwhile a "GET http://google.com" was // not performed. 
scope.done() }, 5000) ``` You can call `isDone()` on a single expectation to determine if the expectation was met: ``` const scope = nock('http://google.com').get('/').reply(200) scope.isDone() // will return false ``` It is also available in the global scope, which will determine if all expectations have been met: `nock.isDone()` You can cleanup all the prepared mocks (could be useful to cleanup some state after a failed test) like this: `nock.cleanAll()` You can abort all current pending request like this: `nock.abortPendingRequests()` You can make all the interceptors for a scope persist by calling `.persist()` on it: ``` const scope = nock('http://example.com') .persist() .get('/') .reply(200, 'Persisting all the way') ``` Note that while a persisted scope will always intercept the requests, it is considered "done" after the first interception. If you want to stop persisting an individual persisted mock you can call `persist(false)` : ``` const scope = nock('http://example.com').persist().get('/').reply(200, 'ok') // Do some tests ... scope.persist(false) ``` You can also use `nock.cleanAll()` which removes all mocks, including persistent mocks. To specify an exact number of times that nock should repeat the response, use .times(). If a scope is not done, you can inspect the scope to infer which ones are still pending using the `scope.pendingMocks()` function: ``` if (!scope.isDone()) { console.error('pending mocks: %j', scope.pendingMocks()) } ``` It is also available in the global scope: `console.error('pending mocks: %j', nock.pendingMocks())` You can see every mock that is currently active (i.e. might potentially reply to requests) in a scope using `scope.activeMocks()` . A mock is active if it is pending, optional but not yet completed, or persisted. Mocks that have intercepted their requests and are no longer doing anything are the only mocks which won't appear here. You probably don't need to use this - it mainly exists as a mechanism to recreate the previous (now-changed) behavior of `pendingMocks()` . `console.error('active mocks: %j', scope.activeMocks())` It is also available in the global scope: `console.error('active mocks: %j', nock.activeMocks())` Your tests may sometimes want to deactivate the nock interceptor. Once deactivated, nock needs to be re-activated to work. You can check if nock interceptor is active or not by using `nock.isActive()` . Sample: ``` if (!nock.isActive()) { nock.activate() } ``` You can clone a scope by calling `.clone()` on it: ``` const scope = nock('http://example.test') const getScope = scope.get('/').reply(200) const postScope = scope.clone().post('/').reply(200) ``` You can restore the HTTP interceptor to the normal unmocked behaviour by calling: `nock.restore()` **note 1**: restore does not clear the interceptor list. Use nock.cleanAll() if you expect the interceptor list to be empty. **note 2**: restore will also remove the http interceptor itself. You need to run nock.activate() to re-activate the http interceptor. Without re-activation, nock will not intercept any calls. Only for cases where nock has been deactivated using nock.restore(), you can reactivate the HTTP interceptor to start intercepting HTTP calls using: `nock.activate()` **note**: To check if nock HTTP interceptor is active or inactive, use nock.isActive(). You can bypass Nock completely by setting the `NOCK_OFF` environment variable to `"true"` . This way you can have your tests hit the real servers just by switching on this environment variable. 
`$ NOCK_OFF=true node my_test.js` By default, any requests made to a host that is not mocked will be executed normally. If you want to block these requests, nock allows you to do so. For disabling real http requests. `nock.disableNetConnect()` So, if you try to request any host not 'nocked', it will throw a `NetConnectNotAllowedError` . ``` nock.disableNetConnect() const req = http.get('http://google.com/') req.on('error', err => { console.log(err) }) // The returned `http.ClientRequest` will emit an error event (or throw if you're not listening for it) // This code will log a NetConnectNotAllowedError with message: // Nock: Disallowed net connect for "google.com:80" ``` For enabling any real HTTP requests (the default behavior): `nock.enableNetConnect()` You could allow real HTTP requests for certain host names by providing a string or a regular expression for the hostname, or a function that accepts the hostname and returns true or false: ``` // Using a string nock.enableNetConnect('amazon.com') // Or a RegExp nock.enableNetConnect(/(amazon|github)\.com/) // Or a Function nock.enableNetConnect( host => host.includes('amazon.com') || host.includes('github.com'), ) http.get('http://www.amazon.com/') http.get('http://github.com/') http.get('http://google.com/') // This will throw NetConnectNotAllowedError with message: // Nock: Disallowed net connect for "google.com:80" ``` A common use case when testing local endpoints would be to disable all but localhost, then add in additional nocks for external requests: ``` nock.disableNetConnect() // Allow localhost connections so we can test local routes and mock servers. nock.enableNetConnect('127.0.0.1') ``` When you're done with the test, you probably want to set everything back to normal: ``` nock.cleanAll() nock.enableNetConnect() ``` This is a cool feature: Guessing what the HTTP calls are is a mess, especially if you are introducing nock on your already-coded tests. For these cases where you want to mock an existing live system you can record and playback the HTTP calls like this: ``` nock.recorder.rec() // Some HTTP calls happen and the nock code necessary to mock // those calls will be outputted to console ``` Recording relies on intercepting real requests and responses and then persisting them for later use. In order to stop recording you should call `nock.restore()` and recording will stop. **ATTENTION!:** when recording is enabled, nock does no validation, nor will any mocks be enabled. Please be sure to turn off recording before attempting to use any mocks in your tests. If you just want to capture the generated code into a var as an array you can use: ``` nock.recorder.rec({ dont_print: true, }) // ... some HTTP calls const nockCalls = nock.recorder.play() ``` The `nockCalls` var will contain an array of strings representing the generated code you need. Copy and paste that code into your tests, customize at will, and you're done! You can call `nock.recorder.clear()` to remove already recorded calls from the array that `nock.recorder.play()` returns. (Remember that you should do this one test at a time). In case you want to generate the code yourself or use the test data in some other way, you can pass the `output_objects` option to `rec` : ``` nock.recorder.rec({ output_objects: true, }) // ... 
some HTTP calls const nockCallObjects = nock.recorder.play() ``` The returned call objects have the following properties: `scope` - the scope of the call including the protocol and non-standard ports (e.g.`'https://github.com:12345'` )`method` - the HTTP verb of the call (e.g.`'GET'` )`path` - the path of the call (e.g.`'/pgte/nock'` )`body` - the body of the call, if any`status` - the HTTP status of the reply (e.g.`200` )`response` - the body of the reply which can be a JSON, string, hex string representing binary buffers or an array of such hex strings (when handling`content-encoded` in reply header)`rawHeaders` - the headers of the reply which are formatted as a flat array containing header name and header value pairs (e.g.`['accept', 'application/json', 'set-cookie', 'my-cookie=value']` )`reqheader` - the headers of the request If you save this as a JSON file, you can load them directly through `nock.load(path)` . Then you can post-process them before using them in the tests. For example, to add request body filtering (shown here fixing timestamps to match the ones captured during recording): ``` nocks = nock.load(pathToJson) nocks.forEach(function (nock) { nock.filteringRequestBody = (body, aRecordedBody) => { if (typeof body !== 'string' || typeof aRecordedBody !== 'string') { return body } const recordedBodyResult = /timestamp:([0-9]+)/.exec(aRecordedBody) if (recordedBodyResult) { const recordedTimestamp = recordedBodyResult[1] return body.replace( /(timestamp):([0-9]+)/g, function (match, key, value) { return key + ':' + recordedTimestamp }, ) } else { return body } } }) ``` Alternatively, if you need to pre-process the captured nock definitions before using them (e.g. to add scope filtering) then you can use `nock.loadDefs(path)` and `nock.define(nockDefs)` . Shown here is scope filtering for Dropbox node module which constantly changes the subdomain to which it sends the requests: ``` // Pre-process the nock definitions as scope filtering has to be defined before the nocks are defined (due to its very hacky nature). const nockDefs = nock.loadDefs(pathToJson) nockDefs.forEach(def => { // Do something with the definition object e.g. scope filtering. def.options = { ...def.options, filteringScope: scope => /^https:\/\/api[0-9]*.dropbox.com/.test(scope), } }) // Load the nocks from pre-processed definitions. const nocks = nock.define(nockDefs) ``` Recording request headers by default is deemed more trouble than it's worth as some of them depend on the timestamp or other values that may change after the tests have been recorded thus leading to complex postprocessing of recorded tests. Thus by default the request headers are not recorded. The genuine use cases for recording request headers (e.g. checking authorization) can be handled manually or by using `enable_reqheaders_recording` in `recorder.rec()` options. ``` nock.recorder.rec({ dont_print: true, output_objects: true, enable_reqheaders_recording: true, }) ``` Note that even when request headers recording is enabled Nock will never record `user-agent` headers. `user-agent` values change with the version of Node and underlying operating system and are thus useless for matching as all that they can indicate is that the user agent isn't the one that was used to record the tests. Nock will print using `console.log` by default (assuming that `dont_print` is `false` ). If a different function is passed into `logging` , nock will send the log string (or object, when using `output_objects` ) to that function. Here's a basic example. 
```
const appendLogToFile = content => {
  fs.appendFile('record.txt', content)
}
nock.recorder.rec({
  logging: appendLogToFile,
})
```

By default, nock will wrap its output with the separator string `<<<<<<-- cut here -->>>>>>` before and after anything it prints, whether to the console or a custom log function given with the `logging` option.

To disable this, set `use_separator` to false.

```
nock.recorder.rec({
  use_separator: false,
})
```

`nock.removeInterceptor()` allows removing a specific interceptor. Its argument can be either an interceptor instance or options for a URL. It's useful when there's a list of common interceptors shared between tests, where an individual test requires one of the shared interceptors to behave differently.

Examples:

```
nock.removeInterceptor({
  hostname: 'localhost',
  path: '/mockedResource',
})
```

```
nock.removeInterceptor({
  hostname: 'localhost',
  path: '/login',
  method: 'POST',
  proto: 'https',
})
```

```
const interceptor = nock('http://example.org').get('somePath')
nock.removeInterceptor(interceptor)
```

**Note:** The `.reply(...)` method returns a Scope, not an Interceptor, so its return value is not a valid argument for `nock.removeInterceptor`. If your method chain ends with `.reply` and you want to use the interceptor with `nock.removeInterceptor`, the chain needs to be broken in between:

```
// this will NOT work
const interceptor = nock('http://example.org').get('somePath').reply(200, 'OK')
nock.removeInterceptor(interceptor)

// this is how it should be
const interceptor = nock('http://example.org').get('somePath')
interceptor.reply(200, 'OK')
nock.removeInterceptor(interceptor)
```

A scope emits the following events:

- `emit('request', function(req, interceptor, body))`
- `emit('replied', function(req, interceptor))`

You can also listen for no match events like this:

`nock.emitter.on('no match', req => {})`

Nock Back provides fixture recording support and playback. You must specify a fixture directory before using it, for example in your test helper:

```
const nockBack = require('nock').back
nockBack.fixtures = '/path/to/fixtures/'
nockBack.setMode('record')
```

- `nockBack.fixtures`: path to the fixture directory
- `nockBack.setMode()`: the mode to use

By default, if the fixture doesn't exist, `nockBack` will create a new fixture and save the recorded output for you. The next time you run the test, if the fixture exists, it will be loaded in.

The `this` context of the callback function will have a property `scopes` to access all of the loaded nock scopes.
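For instance, a minimal sketch (the fixture name is illustrative) of reading `this.scopes` inside the callback; note the regular `function` so that `this` is bound by nockBack:

```
const nockBack = require('nock').back
nockBack.fixtures = __dirname + '/nockFixtures'
nockBack.setMode('record')

nockBack('someFixture.json', function (nockDone) {
  // Scopes loaded from the fixture (empty on the first, recording, run).
  this.scopes.forEach(scope => console.log(scope.pendingMocks()))
  nockDone()
})
```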
For example, recording a fixture and then using it on a later run:

```
const nockBack = require('nock').back
const request = require('request')
nockBack.setMode('record')

nockBack.fixtures = __dirname + '/nockFixtures' // this only needs to be set once in your test helper

// recording of the fixture
nockBack('zomboFixture.json', nockDone => {
  request.get('http://zombo.com', (err, res, body) => {
    nockDone()

    // usage of the created fixture
    nockBack('zomboFixture.json', function (nockDone) {
      http.get('http://zombo.com/').end() // respond body "Ok"

      this.assertScopesFinished() // throws an exception if all nocks in fixture were not satisfied
      http.get('http://zombo.com/').end() // throws exception because zomboFixture.json only had one call
      nockDone() // never gets here
    })
  })
})
```

If your tests are using promises then use `nockBack` like this:

```
return nockBack('promisedFixture.json').then(({ nockDone, context }) => {
  // do your tests returning a promise and chain it with
  // `.then(nockDone)`
})
```

Or, with async/await:

```
const { nockDone, context } = await nockBack('promisedFixture.json')
// your test code
nockDone()
```

As an optional second parameter you can pass the following options:

- `before`: a preprocessing function, gets called before nock.define
- `after`: a postprocessing function, gets called after nock.define
- `afterRecord`: a postprocessing function, gets called after recording. It is passed the array of scopes recorded and should return the intact array, a modified version of the array, or, if custom formatting is desired, a stringified version of the array to save to the fixture
- `recorder`: custom options to pass to the recorder

```
function prepareScope(scope) {
  scope.filteringRequestBody = (body, aRecordedBody) => {
    if (typeof body !== 'string' || typeof aRecordedBody !== 'string') {
      return body
    }

    const recordedBodyResult = /timestamp:([0-9]+)/.exec(aRecordedBody)
    if (recordedBodyResult) {
      const recordedTimestamp = recordedBodyResult[1]
      return body.replace(
        /(timestamp):([0-9]+)/g,
        (match, key, value) => `${key}:${recordedTimestamp}`,
      )
    } else {
      return body
    }
  }
}

nockBack('exampleFixture.json', { before: prepareScope }, nockDone => {
  request.get('http://example.com', function (err, res, body) {
    // do your tests
    nockDone()
  })
})
```

To set the mode call `nockBack.setMode(mode)` or run the tests with the `NOCK_BACK_MODE` environment variable set before loading nock. If the mode needs to be changed programmatically, the following is valid: `nockBack.setMode(nockBack.currentMode)`

- wild: all requests go out to the internet, don't replay anything, doesn't record anything
- dryrun: the default; use recorded nocks, allow http calls, doesn't record anything, useful for writing new tests
- record: use recorded nocks, record new nocks
- update: remove recorded nocks, record nocks
- lockdown: use recorded nocks, disables all http calls even when not nocked, doesn't record

Although you can certainly open the recorded JSON fixtures to manually verify requests recorded by nockBack, it's sometimes useful to put those expectations in your tests. The `context.query` function can be used to return all of the interceptors that were recorded in a given fixture. By itself, this functions as a negative expectation: you can verify that certain calls do NOT happen in the fixture. Since `assertScopesFinished` can verify there are no *extra* calls in a fixture, pairing the two methods allows you to verify the exact set of HTTP interactions recorded in the fixture.
This is especially useful when re-recording, for instance, a service that synchronizes via several HTTP calls to an external API.

**NB**: The list of fixtures is only available when reading. It will only be populated for nocks that are played back from fixtures.

```
it('#synchronize - synchronize with the external API', async localState => {
  const { nockDone, context } = await back('http-interaction.json')

  const synchronizer = new Synchronizer(localState)
  synchronizer.synchronize()

  nockDone()
  context.assertScopesFinished()

  expect(context.query()).toEqual(
    expect.arrayContaining([
      expect.objectContaining({
        method: 'POST',
        path: '/create/thing',
      }),
      expect.objectContaining({
        method: 'POST',
        path: 'create/thing',
      }),
    ]),
  )
})
```

**"No match for request" when using got with error responses**

Got automatically retries failed requests twice. That means if you have a test which mocks a 4xx or 5xx response, got will immediately reissue it. At that point, the mock will have been consumed and the second request will error out with **Nock: No match for request**. The same is true for `.replyWithError()`.

Adding `{ retry: 0 }` to the `got` invocations will disable retrying, e.g.:

`await got('http://example.test/', { retry: 0 })`

If you need to do this in all your tests, you can create a module `got_client.js` which exports a custom got instance:

```
const got = require('got')

module.exports = got.extend({ retry: 0 })
```

This is how it's handled in Nock itself (see #1523).

To use Nock with Axios, you may need to configure Axios to use the Node adapter as in the example below:

```
import axios from 'axios'
import nock from 'nock'
import test from 'ava' // You can use any test framework.

// If you are using jsdom, axios will default to using the XHR adapter which
// can't be intercepted by nock. So, configure axios to use the node adapter.
//
// References:
// https://github.com/axios/axios/pull/5277
axios.defaults.adapter = 'http'

test('can fetch test response', async t => {
  // Set up the mock request.
  const scope = nock('http://localhost')
    .get('/test')
    .reply(200, 'test response')

  // Make the request. Note that the hostname must match exactly what is passed
  // to `nock()`. Alternatively you can set `axios.defaults.host = 'http://localhost'`
  // and run `axios.get('/test')`.
  await axios.get('http://localhost/test')

  // Assert that the expected request was made.
  scope.done()
})
```

For Nock + Axios + Jest to work, you'll also have to adapt your jest.config.js, like so:

```
const config = {
  moduleNameMapper: {
    // Force CommonJS build for http adapter to be available.
    // via https://github.com/axios/axios/issues/5101#issuecomment-1276572468
    '^axios$': require.resolve('axios'),
  },
}
```

Memory issues when using Jest can be avoided by calling `nock.restore()` after each test suite. One of the core principles of Jest is that it runs tests in isolation. It does this by manipulating Node's module cache in a way that conflicts with how Nock monkey-patches the built-in `http` and `https` modules. Related issue with more details.

Nock uses `debug`, so just run with the environment variable `DEBUG` set to `nock.*`.

`user@local$ DEBUG=nock.* node my_test.js`

Each step in the matching process is logged this way and can be useful when determining why a request was not intercepted by Nock. For example, the following shows that matching failed because the request had an extra search parameter.
``` nock('http://example.com').get('/').query({ foo: 'bar' }).reply() await got('http://example.com/?foo=bar&baz=foz') ``` ``` user@local$ DEBUG=nock.scope:example.com node my_test.js ... nock.scope:example.com Interceptor queries: {"foo":"bar"} +1ms nock.scope:example.com Request queries: {"foo":"bar","baz":"foz"} +0ms nock.scope:example.com query matching failed +0ms ``` Thanks for wanting to contribute! Take a look at our Contributing Guide for notes on our commit message conventions and how to run tests. Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms. Thanks goes to these wonderful people (emoji key): This project follows the all-contributors specification. Contributions of any kind welcome! Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor] Copyright (c) 2011–2019 Pedro Teixeira and other contributors.
true
true
true
HTTP server mocking and expectations library for Node.js - nock/nock
2024-10-13 00:00:00
2011-09-22 00:00:00
https://opengraph.githubassets.com/ca1783fc65504badf682a8a533d1afd8cef1c9b23ee97cfc97433d396a9b9b36/nock/nock
object
github.com
GitHub
null
null
8,088,871
http://www.slate.com/blogs/future_tense/2014/07/25/s_517_leahy_senate_bill_on_cellphone_unlocking_passes_congress.html
It Will Soon Be Legal to Unlock Your Cellphone
Will Oremus
Congress has approved a bill that will make it legal for Americans to unlock their cellphones so they can switch between wireless carriers. The House of Representatives on Friday unanimously passed S. 517, less than two weeks after the Senate approved it. Now it just needs a signature from President Obama to become law. The bill, authored by Sen. Patrick Leahy, reverses a 2012 ruling by the Library of Congress that made cellphone unlocking a violation of federal copyright laws. That meant that people were stuck with their existing carriers—like AT&T or Verizon—even after their contracts had expired. Now, assuming Obama signs the bill, cellphone unlocking will once again be exempted from a highly controversial statute in the 1998 Digital Millennium Copyright Act called Section 1201. That section makes it illegal for consumers to circumvent technologies that control access to a copyrighted work. The Library of Congress first exempted cell phone unlocking from that statute in 2006, but then closed the exemption in 2012. That outraged activists who saw it as a way for wireless carriers to grab power from consumers by limiting what they can do with their own cellphones. The grassroots protests coalesced around a 2013 WhiteHouse.gov petition written by Sina Khanifar, a 27-year-old San Francisco entrepreneur. Against the odds, the petition gathered more than 100,000 signatures and eventually drew a supportive response from the White House. Even then, action from the FCC and/or Congress seemed unlikely. But activists pressed on, and in the end both the FCC and Congress addressed many of their concerns. Friday’s passage of the Senate bill came seven months after U.S. wireless carriers reached a voluntary agreement, under FCC pressure, to unlock customers’ cellphones for them on request once their contracts expired. The new law doesn’t necessarily mean people can unlock their phones whenever the want, however. Many wireless contracts still prohibit the practice for as long as the contract remains in force (often two years), even when customers are traveling abroad. Still, advocates view Congress’ approval of the bill as a rare and much-needed blow for consumer rights in the digital age. “It’s been a long road against powerful, entrenched interests,” Khanifar said, adding that the cellphone unlocking skirmish is part of a much bigger battle over digital copyright laws. “Hopefully this is beginning to highlight how ridiculous the 1201 process really is,” he said. ** Update, July 25, 2014, 4:26 p.m.:** President Obama confirmed to *Bloomberg*that he looks forward to signing the bill. *Previously in Slate:*
true
true
true
Congress has approved a bill that will make it legal for Americans to unlock their cellphones so they can switch between wireless carriers.
2024-10-13 00:00:00
2014-07-25 00:00:00
https://compote.slate.co…e.jpg?width=1560
article
slate.com
Slate
null
null
14,890,652
https://wikileaks.org/macron-emails/
Macron Campaign Emails
null
# Macron Campaign Emails

Today, Monday 31 July 2017, WikiLeaks publishes a searchable archive of 21,075 unique verified emails associated with the French presidential campaign of Emmanuel Macron. The emails range from 20 March 2009 to 24 April 2017.

The 21,075 emails have been individually forensically verified by WikiLeaks through its DKIM system. The full archive of 71,848 emails with 26,506 attachments from 4,493 unique senders is provided for context. WikiLeaks only certifies as verified the 21,075 emails marked with its green "DKIM verified" banner; however, based on statistical sampling, the overwhelming majority of the rest of the email archive is authentic. As the emails are often in chains and include portions of each other, it is usually possible to confirm the integrity of other emails in a chain on the basis of the DKIM-verified emails within it.

Guillaume Poupard, the head of the French government cyber security agency ANSSI, told AP on June 1 this year that the method used to obtain the emails resembled the actions of an "isolated individual". Poupard stated that, contrary to media speculation, ANSSI could not attribute the attack to Russia and that France had previously been subject to hacking attacks designed to falsify attribution.
true
true
true
null
2024-10-13 00:00:00
2009-01-01 00:00:00
null
null
null
null
null
null
13,619,714
https://about.gitlab.com/2017/02/10/postmortem-of-database-outage-of-january-31/
Postmortem of database outage of January 31
GitLab
On January 31st 2017, we experienced a major service outage for one of our products, the online service GitLab.com. The outage was caused by an accidental removal of data from our primary database server. This incident caused the GitLab.com service to be unavailable for many hours. We also lost some production data that we were eventually unable to recover. Specifically, we lost modifications to database data such as projects, comments, user accounts, issues and snippets, that took place between 17:20 and 00:00 UTC on January 31. Our best estimate is that it affected roughly 5,000 projects, 5,000 comments and 700 new user accounts. Code repositories or wikis hosted on GitLab.com were unavailable during the outage, but were not affected by the data loss. GitLab Enterprise customers, GitHost customers, and self-managed GitLab CE users were not affected by the outage, or the data loss. Losing production data is unacceptable. To ensure this does not happen again we're working on multiple improvements to our operations & recovery procedures for GitLab.com. In this article we'll look at what went wrong, what we did to recover, and what we'll do to prevent this from happening in the future. To the GitLab.com users whose data we lost and to the people affected by the outage: we're sorry. I apologize personally, as GitLab's CEO, and on behalf of everyone at GitLab. ## Database setup GitLab.com currently uses a single primary and a single secondary in hot-standby mode. The standby is only used for failover purposes. In this setup a single database has to handle all the load, which is not ideal. The primary's hostname is `db1.cluster.gitlab.com` , while the secondary's hostname is `db2.cluster.gitlab.com` . In the past we've had various other issues with this particular setup due to `db1.cluster.gitlab.com` being a single point of failure. For example: - A database outage on November 28th, 2016 due to project_authorizations having too much bloat - CI distributed heavy polling and exclusive row locking for seconds takes GitLab.com down - Scary DB spikes ## Timeline On January 31st an engineer started setting up multiple PostgreSQL servers in our staging environment. The plan was to try out pgpool-II to see if it would reduce the load on our database by load balancing queries between the available hosts. Here is the issue for that plan: infrastructure#259. **± 17:20 UTC:** prior to starting this work, our engineer took an LVM snapshot of the production database and loaded this into the staging environment. This was necessary to ensure the staging database was up to date, allowing for more accurate load testing. This procedure normally happens automatically once every 24 hours (at 01:00 UTC), but they wanted a more up to date copy of the database. **± 19:00 UTC:** GitLab.com starts experiencing an increase in database load due to what we suspect was spam. In the week leading up to this event GitLab.com had been experiencing similar problems, but not this severe. One of the problems this load caused was that many users were not able to post comments on issues and merge requests. Getting the load under control took several hours. We would later find out that part of the load was caused by a background job trying to remove a GitLab employee and their associated data. This was the result of their account being flagged for abuse and accidentally scheduled for removal. More information regarding this particular problem can be found in the issue "Removal of users by spam should not hard delete". 
**± 23:00 UTC:** Due to the increased load, our PostgreSQL secondary's replication process started to lag behind. The replication failed as WAL segments needed by the secondary were already removed from the primary. As GitLab.com was not using WAL archiving, the secondary had to be re-synchronised manually. This involves removing the existing data directory on the secondary, and running pg_basebackup to copy over the database from the primary to the secondary. One of the engineers went to the secondary and wiped the data directory, then ran `pg_basebackup` . Unfortunately `pg_basebackup` would hang, producing no meaningful output, despite the `--verbose` option being set. After a few tries `pg_basebackup` mentioned that it could not connect due to the master not having enough available replication connections (as controlled by the `max_wal_senders` option). To resolve this our engineers decided to temporarily increase `max_wal_senders` from the default value of `3` to `32` . When applying the settings, PostgreSQL refused to restart, claiming too many semaphores were being created. This can happen when, for example, `max_connections` is set too high. In our case this was set to `8000` . Such a value is way too high, yet it had been applied almost a year ago and was working fine until that point. To resolve this the setting's value was reduced to `2000` , resulting in PostgreSQL restarting without issues. Unfortunately this did not resolve the problem of `pg_basebackup` not starting replication immediately. One of the engineers decided to run it with `strace` to see what it was blocking on. `strace` showed that `pg_basebackup` was hanging in a `poll` call, but that did not provide any other meaningful information that might have explained why. **± 23:30 UTC:** one of the engineers thinks that perhaps `pg_basebackup` created some files in the PostgreSQL data directory of the secondary during the previous attempts to run it. While normally `pg_basebackup` prints an error when this is the case, the engineer in question wasn't too sure what was going on. It would later be revealed by another engineer (who wasn't around at the time) that this is normal behaviour: `pg_basebackup` will wait for the primary to start sending over replication data and it will sit and wait silently until that time. Unfortunately this was not clearly documented in our engineering runbooks nor in the official `pg_basebackup` document. Trying to restore the replication process, an engineer proceeds to wipe the PostgreSQL database directory, errantly thinking they were doing so on the secondary. Unfortunately this process was executed on the primary instead. The engineer terminated the process a second or two after noticing their mistake, but at this point around 300 GB of data had already been removed. Hoping they could restore the database the engineers involved went to look for the database backups, and asked for help on Slack. Unfortunately the process of both finding and using backups failed completely. ## Broken recovery procedures This brings us to the recovery procedures. Normally in an event like this, one should be able to restore a database in relatively little time using a recent backup, though some form of data loss can not always be prevented. For GitLab.com we have the following procedures in place: - Every 24 hours a backup is generated using `pg_dump` , this backup is uploaded to Amazon S3. Old backups are automatically removed after some time. 
- Every 24 hours we generate an LVM snapshot of the disk storing the production database data. This snapshot is then loaded into the staging environment, allowing us to more safely test changes without impacting our production environment. Direct access to the staging database is restricted, similar to our production database. - For various servers (e.g. the NFS servers storing Git data) we use Azure disk snapshots. These snapshots are taken once per 24 hours. - Replication between PostgreSQL hosts, primarily used for failover purposes and not for disaster recovery. At this point the replication process was broken and data had already been wiped from both the primary and secondary, meaning we could not restore from either host. ### Database backups using pg_dump When we went to look for the `pg_dump` backups we found out they were not there. The S3 bucket was empty, and there was no recent backup to be found anywhere. Upon closer inspection we found out that the backup procedure was using `pg_dump` 9.2, while our database is running PostgreSQL 9.6 (for Postgres, 9.x releases are considered major). A difference in major versions results in `pg_dump` producing an error, terminating the backup procedure. The difference is the result of how our Omnibus package works. We currently support both PostgreSQL 9.2 and 9.6, allowing users to upgrade (either manually or using commands provided by the package). To determine the correct version to use the Omnibus package looks at the PostgreSQL version of the database cluster (as determined by `$PGDIR/PG_VERSION` , with `$PGDIR` being the path to the data directory). When PostgreSQL 9.6 is detected Omnibus ensures all binaries use PostgreSQL 9.6, otherwise it defaults to PostgreSQL 9.2. The `pg_dump` procedure was executed on a regular application server, not the database server. As a result there is no PostgreSQL data directory present on these servers, thus Omnibus defaults to PostgreSQL 9.2. This in turn resulted in `pg_dump` terminating with an error. While notifications are enabled for any cronjobs that error, these notifications are sent by email. For GitLab.com we use DMARC. Unfortunately DMARC was not enabled for the cronjob emails, resulting in them being rejected by the receiver. This means we were never aware of the backups failing, until it was too late. ### Azure disk snapshots Azure disk snapshots are used to generate a snapshot of an entire disk. These snapshots don't make it easy to restore individual chunks of data (e.g. a lost user account), though it's possible. The primary purpose is to restore entire disks in case of disk failure. In Azure a snapshot belongs to a storage account, and a storage account in turn is linked to one or more hosts. Each storage account has a limit of roughly 30 TB. When restoring a snapshot using a host in the same storage account, the procedure usually completes very quickly. However, when using a host in a different storage account the procedure can take hours if not days to complete. For example, in one such case it took over a week to restore a snapshot. As a result we try not to rely on this system too much. While enabled for the NFS servers, these snapshots were not enabled for any of the database servers as we assumed that our other backup procedures were sufficient enough. ### LVM snapshots The LVM snapshots are primarily used to easily copy data from our production environment to our staging environment. 
While this process was working as intended, the produced snapshots are not really meant to be used for disaster recovery. At the time of the outage we had two snapshots available: - A snapshot created for our staging environment every 24 hours, almost 24 hours before the outage happened. - A snapshot created manually by one of the engineers roughly 6 hours before the outage. When we generate a snapshot the following steps are taken: - Generate a snapshot of production. - Copy the snapshot to staging. - Create a new disk using this snapshot. - Remove all webhooks from the resulting database, to prevent them from being triggered by accident. ## Recovering GitLab.com To recover GitLab.com we decided to use the LVM snapshot created 6 hours before the outage, as it was our only option to reduce data loss as much as possible (the alternative was to lose almost 24 hours of data). This process would involve the following steps: - Copy the existing staging database to production, which would not contain any webhooks. - In parallel, copy the snapshot used to set up the database as this snapshot might still contain the webhooks (we weren't entirely sure). - Set up a production database using the snapshot from step 1. - Set up a separate database using the snapshot from step 2. - Restore webhooks using the database set up in the previous step. - Increment all database sequences by 100,000 so one can't re-use IDs that might have been used before the outage. - Gradually re-enable GitLab.com. For our staging environment we were using Azure classic, without Premium Storage. This is primarily done to save costs as premium storage is quite expensive. As a result the disks are very slow, resulting in them being the main bottleneck in the restoration process. Because LVM snapshots are stored on the hosts they are taken for we had two options to restore data: - Copy over the LVM snapshot - Copy over the PostgreSQL data directory In both cases the amount of data to copy would be roughly the same. Since copying over and restoring the data directory would be easier we decided to go with this solution. Copying the data from the staging to the production host took around 18 hours. These disks are network disks and are throttled to a really low number (around 60Mbps), there is no way to move from cheap storage to premium, so this was the performance we would get out of it. There was no network or processor bottleneck, the bottleneck was in the drives. Once copied we were able to restore the database (including webhooks) to the state it was at January 31st, 17:20 UTC. On February 1st at 17:00 UTC we managed to restore the GitLab.com database without webhooks. Restoring webhooks was done by creating a separate staging database using the LVM snapshot, but without triggering the removal of webhooks. This allowed us to generate a SQL dump of the table and import this into the restored GitLab.com database. Around 18:00 UTC we finished the final restoration procedures such as restoring the webhooks and confirming everything was operating as expected. ## Publication of the outage In the spirit of transparency we kept track of progress and notes in a publicly visible Google document. We also streamed the recovery procedure on YouTube, with a peak viewer count of around 5000 (resulting in the stream being the #2 live stream on YouTube for several hours). The stream was used to give our users live updates about the recovery procedure. 
We also used Twitter (https://twitter.com/gitlabstatus) to inform those who might not be watching the stream. The document in question was initially private to GitLab employees and contained the name of the engineer who accidentally removed the data. While the name was added by the engineer themselves (and they had no problem with this being public), we will redact names in future cases as other engineers may not be comfortable with their name being published. ## Data loss impact Database data such as projects, issues, snippets, etc. created between January 31st 17:20 UTC and 23:30 UTC has been lost. Git repositories and Wikis were not removed as they are stored separately. It's hard to estimate exactly how much data has been lost, but we estimate we have lost at least 5000 projects, 5000 comments, and roughly 700 users. This only affected users of GitLab.com; self-managed instances and GitHost instances were not affected. ## Impact on GitLab itself Since GitLab uses GitLab.com to develop GitLab, the outage meant that for some it was harder to get work done. Most developers could continue working using their local Git repositories, but creating issues and such had to be delayed. To publish the blog post "GitLab.com Database Incident" we used a private GitLab instance we normally use for private/sensitive workflows (e.g. security releases). This allowed us to build and deploy a new version of the website while GitLab.com was unavailable. We also have a public monitoring website located at https://dashboards.gitlab.com/. Unfortunately the current setup for this website was not able to handle the load produced by users during the outage. Fortunately our internal monitoring systems (which dashboards.gitlab.com is based on) were not affected. ## Root cause analysis To analyse the root cause of these problems we'll use a technique called "The 5 Whys". We'll break up the incident into 2 main problems: GitLab.com being down, and it taking a long time to restore GitLab.com.

**Problem 1:** GitLab.com was down for about 18 hours.

**Why was GitLab.com down?**
- The database directory of the primary database was removed by accident, instead of removing the database directory of the secondary.

**Why was the database directory removed?**
- Database replication stopped, requiring the secondary to be reset/rebuilt. This in turn requires that the PostgreSQL data directory is empty. Restoring this required manual work as it was not automated, nor was it documented properly.

**Why did replication stop?**
- A spike in database load caused the database replication process to stop. This was due to the primary removing WAL segments before the secondary could replicate them.

**Why did the database load increase?**
- This was caused by two events happening at the same time: an increase in spam, and a process trying to remove a GitLab employee and their associated data.

**Why was a GitLab employee scheduled for removal?**
- The employee was reported for abuse by a troll. The current system used for responding to abuse reports makes it too easy to overlook the details of those reported. As a result the employee was accidentally scheduled for removal.

**Problem 2:** restoring GitLab.com took over 18 hours.

**Why did restoring GitLab.com take so long?**
- GitLab.com had to be restored using a copy of the staging database. This was hosted on slower Azure VMs in a different region.

**Why was the staging database needed for restoring GitLab.com?**
- Azure disk snapshots were not enabled for the database servers, and the periodic database backups using `pg_dump` were not working.

**Why could we not fail over to the secondary database host?**
- The secondary database's data was wiped as part of restoring database replication. As such it could not be used for disaster recovery.

**Why could we not use the standard backup procedure?**
- The standard backup procedure uses `pg_dump` to perform a logical backup of the database. This procedure failed silently because it was using PostgreSQL 9.2, while GitLab.com runs on PostgreSQL 9.6.

**Why did the backup procedure fail silently?**
- Notifications were sent upon failure, but because the emails were rejected there was no indication of failure. The sender was an automated process with no other means to report any errors.

**Why were the emails rejected?**
- Emails were rejected by the receiving mail server due to the emails not being signed using DMARC.

**Why were Azure disk snapshots not enabled?**
- We assumed our other backup procedures were sufficient. Furthermore, restoring these snapshots can take days.

**Why was the backup procedure not tested on a regular basis?**
- Because there was no ownership; as a result, nobody was responsible for testing this procedure.

## Improving recovery procedures We are currently working on fixing and improving our various recovery procedures. Work is split across the following issues:

- Overview of status of all issues listed in this blog post (#1684)
- Update PS1 across all hosts to more clearly differentiate between hosts and environments (#1094)
- Prometheus monitoring for backups (#1095)
- Set PostgreSQL's max_connections to a sane value (#1096)
- Investigate Point in time recovery & continuous archiving for PostgreSQL (#1097)
- Hourly LVM snapshots of the production databases (#1098)
- Azure disk snapshots of production databases (#1099)
- Move staging to the ARM environment (#1100)
- Recover production replica(s) (#1101)
- Automated testing of recovering PostgreSQL database backups (#1102)
- Improve PostgreSQL replication documentation/runbooks (#1103)
- Investigate pgbarman for creating PostgreSQL backups (#1105)
- Investigate using WAL-E as a means of Database Backup and Realtime Replication (#494)
- Build Streaming Database Restore
- Assign an owner for data durability

We are also working on setting up multiple secondaries and balancing the load amongst these hosts. More information on this can be found at:

Our main focus is on improving disaster recovery and making it more obvious which host you're working on, rather than on preventing production engineers from running certain commands. For example, one could alias `rm` to something safer, but in doing so you would only protect yourself against accidentally running `rm -rf /important-data`, not against disk corruption or any of the many other ways you can lose data. An ideal environment is one in which you *can* make mistakes but easily and quickly recover from them with minimal to no impact. This in turn requires you to be able to perform these procedures on a regular basis, and make it easy to test and roll back any changes. For example, we are in the process of setting up procedures that allow developers to test their database migrations. More information on this can be found in the issue "Tool for executing and reverting Rails migrations on staging".
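Several of the issues above (Prometheus monitoring for backups, automated testing of backup restores) boil down to making sure a backup failure can never again be silent. The sketch below illustrates the general idea rather than our actual production setup: the paths, thresholds, and the assumption that `psql` and `pg_dump` run on the backup host are placeholders. It exits non-zero if the latest dump is missing, stale, suspiciously small, or produced by a `pg_dump` whose version does not match the database server, which is the mismatch that made our `pg_dump` backups fail silently.

```python
#!/usr/bin/env python3
"""Fail loudly when the latest pg_dump backup looks wrong (illustrative sketch)."""
import os
import subprocess
import sys
import time

BACKUP_PATH = "/var/opt/gitlab/backups/latest.pgdump"  # hypothetical location
MAX_AGE_SECONDS = 26 * 3600   # a daily backup plus some slack
MIN_SIZE_BYTES = 1 * 1024**3  # anything smaller than ~1 GiB is suspicious here


def fail(message: str) -> None:
    print(f"BACKUP CHECK FAILED: {message}", file=sys.stderr)
    sys.exit(1)


if not os.path.exists(BACKUP_PATH):
    fail(f"{BACKUP_PATH} does not exist")

stat = os.stat(BACKUP_PATH)
if time.time() - stat.st_mtime > MAX_AGE_SECONDS:
    fail("latest backup is older than the allowed maximum age")
if stat.st_size < MIN_SIZE_BYTES:
    fail(f"latest backup is only {stat.st_size} bytes")

# Guard against the silent 9.2-vs-9.6 failure: the pg_dump client and the
# server should agree on their version (compared strictly here, for the sketch).
client = subprocess.run(["pg_dump", "--version"], capture_output=True, text=True)
server = subprocess.run(
    ["psql", "-tAc", "SHOW server_version;"], capture_output=True, text=True
)
if client.returncode != 0 or server.returncode != 0:
    fail("could not determine pg_dump or server version")

client_version = client.stdout.strip().split()[-1]  # e.g. "9.6.1"
server_version = server.stdout.strip().split()[0]   # e.g. "9.6.1"
if client_version.split(".")[:2] != server_version.split(".")[:2]:
    fail(f"pg_dump {client_version} does not match server {server_version}")

print("backup check passed")
```

Run from cron with mail on non-zero exit, or exported as a metric, a check like this turns a broken backup into a page instead of a surprise weeks later.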
We're also looking into ways to build better recovery procedures for the entire GitLab.com infrastructure, and not just the database; and to ensure there is ownership of these procedures. The issue for this is "Disaster recovery for everything that is not the database". Monitoring wise we also started working on a public backup monitoring dashboard, which can be found at https://dashboards.gitlab.com/dashboard/db/postgresql-backups. Currently this dashboard only contains data of our `pg_dump` backup procedure, but we aim to add more data over time. One might notice that at the moment our `pg_dump` backups are 3 days old. We perform these backups on a secondary as `pg_dump` can put quite a bit of pressure on a database. Since we are in the process of rebuilding our secondaries the `pg_dump` backup procedure is suspended for the time being. Fear not however, as LVM snapshots are now taken every hour instead of once per 24 hours. Enabling Azure disk snapshots is something we're still looking into. Finally, we're looking into improving our abuse reporting and response system. More information regarding this can be found in the issue "Removal of users by spam should not hard delete". If you think there are additional measures we can take to prevent incidents like this please let us know in the comments. ## Troubleshooting FAQ ### Some of my merge requests are shown as being open, but their commits have already been merged into the default branch. How can I resolve this? Pushing to the default branch will automatically update the merge request so that it's aware of there not being any differences between the source and target branch. At this point you can safely close the merge request. ### My merge request has not yet been merged, and I am not seeing my changes. How can I resolve this? There are 3 options to resolve this: - Close the MR and create a new one - Push new changes to the merge request's source branch - Rebase/amend, and force push to the merge request's source branch ### My GitLab Pages website was not updated. How can I solve this? Go to your project, then "Pipelines", "New Pipeline", use "master" as the branch, then create the pipeline. This will create and start a new pipeline using your master branch, which should result in your website being updated. ### My Pipelines were not executed Most likely they were, but the database is not aware of this. To solve this, create a new pipeline using the right branch and run it. ### Some commits are not showing up Pushing new commits should automatically solve this. Alternatively you can try force pushing to the target branch. ### I created a project after 17:20 UTC and it shows up, but my issues are gone. What happened? Project details are stored in the database. This meant that this data was lost for projects created after 17:20. We ran a procedure to restore these projects based on their Git repositories that were still stored in our NFS cluster. This procedure however was only able to restore projects in their most basic form, without associated data such as issues and merge requests.
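For those who prefer to script the pipeline-related fixes above instead of clicking through the UI, the same "create a new pipeline on master" step can also be driven through the GitLab REST API. This is only a hedged sketch: the project ID and token below are placeholders, and it assumes the current v4 API rather than whatever was available at the time of the incident.

```python
import requests  # assumes the requests library is installed

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = "12345"              # placeholder: your numeric project ID
PRIVATE_TOKEN = "glpat-xxxxxxxx"  # placeholder: a personal access token with the api scope

# Equivalent of "Pipelines" -> "New Pipeline" with branch "master" in the UI.
response = requests.post(
    f"{GITLAB_API}/projects/{PROJECT_ID}/pipeline",
    headers={"PRIVATE-TOKEN": PRIVATE_TOKEN},
    params={"ref": "master"},
)
response.raise_for_status()
pipeline = response.json()
print(f"Started pipeline {pipeline['id']} (status: {pipeline['status']})")
```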
true
true
true
Postmortem on the database outage of January 31 2017 with the lessons we learned.
2024-10-13 00:00:00
2017-02-10 00:00:00
https://images.ctfassets…webp&w=820&h=500
article
gitlab.com
GitLab
null
null
22,766,328
https://archive.org/details/introductiontope00moll/mode/2up
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
40,311,976
https://www.honest-broker.com/p/10-new-albums-im-recommending-right-5e9
Nine New Albums I'm Recommending Right Now
Ted Gioia
# Nine New Albums I'm Recommending Right Now ### I share tracks of distinction from all corners of the world Here’s my latest roundup of swell new music. As always, I dig into the hidden crevices and crannies of the music business to find good stuff others don’t know about. A few of these are recordings you will *only* hear about from *The Honest Broker*. But each of these albums is playlist-worthy and will not disgrace even the most exclusive turntable. Most of them are likely to show up on my Best of Year list. Happy listening! #### If you want to support my work, the best way is by taking out a premium subscription (for just $6 per month). **Mei Semones: Kabutomushi** Japanese Indie Pop from Brooklyn (with Lots of Sweet Guitar) I’ve always liked extreme examples of the cool aesthetic—especially those susurrating vocals that sound like a seductive stranger whispering secrets in your ear. So while others of my generation were body slamming in smelly nightclubs of low repute, I was often back in my dorm room daydreaming to Astrud Gilberto records. I loved those old Chet Baker vocal tracks, too, and even his pop-oriented acolytes. (Anybody else here up for a Kenny Rankin or Michael Franks revival?) I didn’t even need to understand the words—just let João Gilberto sing out the contents of the phone directory or daily news (as Miles Davis famously asserted), and I’m as content as a Copacabana clam. Which is a long way of saying that I love Mei Semones and her understated vocals, even when I don’t know what she is singing about. This Alt J-Pop artist resides in Brooklyn, and sings in both English and Japanese. She also mixes in subversive bits of jazz, especially in her guitar playing. I’d listen to her even if she was just doing instrumentals. But those vocals are the main course—and they’re so delectable that I’m even willing to ignore the pink stuffed animals. **Glass Beams: Mahal** Masked Indian Psychedelic Groove Trio Not long ago, you wore masks on just three occasions: pandemics, Halloween, and bank robberies. But now they are *de rigueur* for music careers, enhancing the mystique of everybody from Daft Punk to The Masked Singer. Glass Beams has the hippest masks of all, definite fashion statements although perhaps not N95 compliant. But I’m more into their slow groove, which is both psychedelic and cinematic, infused with a taste of surf guitar. If Ennio Morricone had worked in Bollywood, he might have conceived of music like this. With all due respect to the band’s disguises, I’d like to blow their covers and give these artists some individual credit. But so far, only one member of the trio has been identified, founder Rajan Silva. So some masked mystery remains, but this band, with its growing following, is no longer a secret. **Fred Hersch: Silent, Listening** Solo Jazz Piano There’s a special excitement when great creative minds join together in partnership. So I’ve been eagerly awaiting this album from the moment I learned that pianist Fred Hersch was working with record producer Manfred Eicher. Eicher, founder of ECM Records, has a magical touch—there’s no living record producer I trust more. And Hersch has a magical touch, too, but in a different way. He demonstrates it at the piano keyboard, where he has delivered a body of work that stands out for its exceptional emotional depth and conceptual intelligence. The results are everything I could have wanted. Let’s hope they do it again.
**Up Next: Two New Albums of Wedding Music** Before my son’s recent wedding, I took him aside for some fatherly advice. “Strange things happen at weddings,” I warned, “so don’t be surprised by bizarre scenes and outbursts. If anybody causes a ruckus, just try to ignore it.” That probably sounds crazy to you. But I’ve attended too many Sicilian weddings in my time, and know the score. As it turned out, everything happened smoothly. Maybe people are tamer nowadays. Nobody pulled out a gun (as happened when my aunt got married). Nobody tried to escape by hiding under a table (as happened at my wedding). Nobody interrupted the ceremony (well, except the photographer).
true
true
true
I share tracks of distinction from all corners of the world
2024-10-13 00:00:00
2024-05-09 00:00:00
https://substackcdn.com/…b1b_640x740.webp
article
honest-broker.com
The Honest Broker
null
null
31,339,718
https://homes.esat.kuleuven.be/~asenol/leaky-forms/
Leaky Forms: A Study of Email and Password Exfiltration Before Form Submission
null
## Leaky Forms: A Study of Email and Password Exfiltration Before Form Submission #### Presented at USENIX Security'22 Email addresses—or identifiers derived from them—are known to be used by data brokers and advertisers for cross-site, cross-platform, and persistent identification of potentially unsuspecting individuals. In order to find out whether access to online forms are misused by online trackers, we present a measurement of *email and password collection that occur before form submission* on the top 100K websites. ## 📈 Highlights - Users' email addresses are exfiltrated to tracking, marketing and analytics domains before form submission and before giving consent on **1,844**websites when visited from the EU and**2,950**when visited from the US. - We found incidental password collection on **52**websites by third-party session replay scripts. (These issues were fixed thanks to our disclosures). - In a follow-up investigation, we found that Meta (formerly, Facebook) and TikTok collect hashed personal information from web forms even when the user does not submit the form and does not give consent. ## 📽️ Screen Captures #### Full list of screen captures: ## 🏁 Findings #### Top ten websites where email addresses are leaked to tracker domains EU | ||| ---|---|---|---| Rank | Website | Third-party | Hash/encoding/compression | 154 | *usatoday.com | taboola.com | Hash (SHA-256) | 242 | *trello.com | bizible.com | Encoded (URL) | 243 | *independent.co.uk | taboola.com | Hash (SHA-256) | 300 | shopify.com | bizible.com | Encoded (URL) | 328 | marriott.com | glassboxdigital.io | Encoded (BASE-64) | 567 | *newsweek.com | rlcdn.com | Hash (MD5, SHA-1, SHA-256) | 705 | *prezi.com | taboola.com | Hash (SHA-256) | 754 | *branch.io | bizible.com | Encoded (URL) | 1,153 | prothomalo.com | facebook.com | Hash (SHA-256) | 1,311 | codecademy.com | fullstory.com | Unencoded | 1,543 | *azcentral.com | taboola.com | Hash (SHA-256) | US | ||| ---|---|---|---| Rank | Website | Third-party | Hash/encoding/compression | 95 | issuu.com | taboola.com | Hash (SHA-256) | 128 | businessinsider.com | taboola.com | Hash (SHA-256) | 154 | usatoday.com | taboola.com | Hash (SHA-256) | 191 | time.com | bouncex.net | Compression (LZW) | 196 | udemy.com | awin1.com | Hash (SHA-256 with salt) | zenaps.com | Hash (SHA-256 with salt) | || 217 | healthline.com | rlcdn.com | Hash (MD5, SHA-1, SHA-256) | 234 | foxnews.com | rlcdn.com | Hash (MD5, SHA-1, SHA-256) | 242 | trello.com | bizible.com | Encoded (URL) | 278 | theverge.com | rlcdn.com | Hash (MD5, SHA-1, SHA-256) | 288 | webmd.com | rlcdn.com | Hash (MD5, SHA-1, SHA-256) | *: Not reproducible anymore as of February 2022. 1. Site rank: Tranco rank 2. Encoding: Encoding or hash algorithm used when sending the email 3. Website: Hostname of the initially visited website (before a potential redirection) 4. Request Domain: eTLD+1 of the leaky request URL 5. Third Party Entitiy: Owner of the tracker domain 6. Tracker Category: Category of the tracker domain, this information comes from DuckDuckGo's Tracker Radar dataset 7. Blocklist: Blocklist that detected the tracker. WTM: whotracks.me, uBO: uBlock Origin, DDG: DuckDuckGo, DS: Disconnect 8. Page URL: (last_page) URL of the page where our crawler filled the email field 9. XPath: XPath of the filled email field 10. Id:ID of the email element that our crawler filled (10, 11 & 12 identify the leaking page and input elements. They can be used for debugging, reproduction etc.) 1. Site rank: Tranco rank 2. 
Encoding: Encoding or hash algorithm used when sending the email 3. Website: Hostname of the initially visited website (before a potential redirection) 4. Request Domain: eTLD+1 of the leaky request URL 5. Third Party Entitiy: Owner of the tracker domain 6. Tracker Category: Category of the tracker domain, this information comes from DuckDuckGo's Tracker Radar dataset 7. Blocklist: Blocklist that detected the tracker. WTM: whotracks.me, uBO: uBlock Origin, DDG: DuckDuckGo, DS: Disconnect 8. Page URL: (last_page) URL of the page where our crawler filled the email field 9. XPath: XPath of the filled email field 10. Id:ID of the email element that our crawler filled (10, 11 & 12 identify the leaking page and input elements. They can be used for debugging, reproduction etc.) ## **Top tracker domains - Email leaks** **Top tracker domains - Email leaks** EU | |||| ---|---|---|---|---| Entity Name | Tracker Domain | Num. sites | Prom. | Min. Rank | Taboola | taboola.com | 327 | 302.9 | 154 | Adobe | bizible.com | 160 | 173.0 | 242 | FullStory | fullstory.com | 182 | 75.6 | 1311 | Awin Inc. | zenaps.com | 113 | 48.7 | 2043 | Awin Inc. | awin1.com | 112 | 48.5 | 2043 | Yandex | yandex.com | 121 | 41.9 | 1688 | AdRoll | adroll.com | 117 | 39.6 | 3753 | Glassbox | glassboxdigital.io | 6 | 31.9 | 328 | Listrak | listrakbi.com | 91 | 24.9 | 2219 | Oracle | bronto.com | 90 | 24.6 | 2332 | LiveRamp | rlcdn.com | 11 | 20.0 | 567 | SaleCycle | salecycle.com | 35 | 17.5 | 2577 | Automattic | gravatar.com | 38 | 16.7 | 2048 | facebook.com | 21 | 14.8 | 1153 | | Salesforce | pardot.com | 36 | 30.8 | 2675 | Oktopost | okt.to | 31 | 11.4 | 6589 | US | |||| ---|---|---|---|---| Entity Name | Tracker Domain | Num. sites | Prom. | Min. Rank | LiveRamp | rlcdn.com | 524 | 553.8 | 217 | Taboola | taboola.com | 383 | 499.0 | 95 | Bounce Exchange | bouncex.net | 189 | 224.7 | 191 | Adobe | bizible.com | 191 | 212.0 | 242 | Awin | zenaps.com | 119 | 212.0 | 196 | Awin | awin1.com | 118 | 111.2 | 196 | FullStory | fullstory.com | 230 | 105.6 | 1311 | Listrak | listrakbi.com | 226 | 66.0 | 1403 | LiveRamp | pippio.com | 138 | 65.1 | 567 | SmarterHQ | smarterhq. | 32 | 63.8 | 556 | Verizon Media | yahoo. | 255 | 62.3 | 4281 | AdRoll | adroll.com | 122 | 48.6 | 2343 | Yandex | yandex.ru | 141 | 48.1 | 1648 | Criteo SA | criteo.com | 134 | 46.0 | 1403 | Neustar | agkn.com | 133 | 45.9 | 1403 | Oracle | addthis.com | 133 | 45.9 | 1403 | Crawls conducted in May’21. We use prominence to sort third parties in this table because it better represents the scale of a given third party’s reach. ## **Top tracker domains - Password leaks** **Top tracker domains - Password leaks**EU | |||| ---|---|---|---|---| Entity Name | Tracker Domain | Num. sites | Prom. | Min. Rank | Yandex | yandex.com | 37 | 12.12 | 4699 | Yandex | yandex.ru | 7 | 2.41 | 12989 | Mixpanel | mixpanel.com | 1 | 0.12 | 84547 | LogRocket | lr-ingest.io | 1 | 0.12 | 82766 | US | |||| ---|---|---|---|---| Entity Name | Tracker Domain | Num. sites | Prom. | Min. Rank | Yandex | yandex.ru | 45 | 17.23 | 1688 | Mixpanel | mixpanel.com | 1 | 0.12 | 84547 | LogRocket | lr-ingest.io | 1 | 0.12 | 82766 | Crawls conducted in May’21. We use prominence to sort third parties in this table because it better represents the scale of a given third party’s reach. 
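The encodings that appear in the tables above (plain text, URL encoding, Base64, MD5/SHA-1/SHA-256 hashes, LZW compression) also hint at how such leaks are detected in the first place: the crawler controls the email address it types, so it can precompute every transformation of that address and search outgoing requests for any of them. The snippet below is a simplified sketch of that idea covering only a handful of common transformations; the study's actual pipeline handles more encodings, nested combinations, and salted variants.

```python
import base64
import hashlib
import urllib.parse

PROBE_EMAIL = "probe+site@example.com"  # the crawler-controlled address


def candidate_tokens(email: str) -> set[str]:
    """Common transformations a script might apply before exfiltrating the value."""
    normalized = email.strip().lower()
    encoded = normalized.encode("utf-8")
    return {
        normalized,
        urllib.parse.quote(normalized, safe=""),
        base64.b64encode(encoded).decode("ascii"),
        hashlib.md5(encoded).hexdigest(),
        hashlib.sha1(encoded).hexdigest(),
        hashlib.sha256(encoded).hexdigest(),
    }


def looks_like_leak(request_url: str, post_body: str) -> bool:
    haystack = (request_url + " " + post_body).lower()
    return any(token.lower() in haystack for token in candidate_tokens(PROBE_EMAIL))


# A request carrying the SHA-256 of the probe email is flagged as a leak.
suspicious = (
    "https://tracker.example/collect?uid="
    + hashlib.sha256(PROBE_EMAIL.encode("utf-8")).hexdigest()
)
print(looks_like_leak(suspicious, ""))  # True
```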
## **Website categories:** Per-category number of websites we crawled, filled, and observed an email leak to a tracker domain **Website categories:**Per-category number of websites we crawled, filled, and observed an email leak to a tracker domain Categories | EU/US Sites | EU Filled sites | EU Leaky sites | EU % (Leaky / Filled) | US Filled sites | US Leaky sites | US % (Leaky / Filled) | ---|---|---|---|---|---|---|---| Fashion/Beauty | 1669 | 1176 | 131 | 11.1 | 1179 | 224 | 19.0 | Online Shopping | 5395 | 3658 | 345 | 9.4 | 3744 | 567 | 15.1 | General News | 7390 | 3579 | 235 | 6.6 | 3848 | 392 | 10.2 | Software/Hardware | 4933 | 2834 | 138 | 4.9 | 2855 | 162 | 5.7 | Business | 13462 | 7805 | 377 | 4.8 | 7924 | 484 | 6.1 | Marketing/Merchandising | 4964 | 3167 | 119 | 3.8 | 3218 | 192 | 6.0 | Internet Services | 7974 | 4627 | 171 | 3.7 | 4671 | 199 | 4.3 | Travel | 2519 | 1355 | 46 | 3.4 | 1379 | 82 | 5.9 | Health | 2516 | 1389 | 44 | 3.2 | 1439 | 69 | 4.8 | Finance/Banking | 3699 | 1505 | 41 | 2.7 | 1518 | 49 | 3.2 | Sports | 1910 | 1044 | 28 | 2.7 | 1002 | 56 | 5.6 | Portal Sites | 1544 | 682 | 17 | 2.5 | 694 | 19 | 2.7 | Education/Reference | 10190 | 4185 | 88 | 2.1 | 4432 | 134 | 3.0 | Entertainment | 5297 | 2610 | 47 | 1.8 | 2619 | 98 | 3.7 | Recreation/Hobbies | 1098 | 754 | 13 | 1.7 | 760 | 95 | 12.5 | Blogs/Wiki | 5415 | 3095 | 42 | 1.4 | 3055 | 237 | 7.8 | Technical/Business Forums | 1297 | 717 | 9 | 1.3 | 734 | 17 | 2.3 | Non-Profit/Advocacy/NGO | 2713 | 1842 | 22 | 1.2 | 1866 | 24 | 1.3 | Games | 2173 | 925 | 9 | 1.0 | 896 | 11 | 1.2 | Public Information | 2346 | 1049 | 8 | 0.8 | 1084 | 27 | 2.5 | Govern.Military | 3754 | 939 | 5 | 0.5 | 974 | 7 | 0.7 | Uncategorized | 1616 | 636 | 3 | 0.5 | 646 | 2 | 0.3 | Pornography | 1388 | 528 | 0 | 0.0 | 534 | 0 | 0.0 | Based on desktop crawls using the no-action mode ## **Password leaks:** Leaky websites that we identified incidental password collection by tracker domains. These issues are already fixed by the involved third parties, thanks to our disclosures. **Password leaks:**Leaky websites that we identified incidental password collection by tracker domains. These issues are already fixed by the involved third parties, thanks to our disclosures. 
EU | || ---|---|---| Rank | Website | Tracker domain | 84544 | bolshayaperemena.online | yandex.com | 95341 | jolly.me | yandex.com | 71216 | unitedtraders.com | yandex.com | 88449 | strelka.com | yandex.com | 82147 | livedune.ru | yandex.com | 63801 | megabonus.com | yandex.ru | 45753 | galaksion.com | yandex.com | 82766 | publicize.co | lr-ingest.io | 4699 | olymptrade.com | yandex.com | 41729 | www.smartfinstories.biz | yandex.com | 12989 | app.travelpayouts.com | yandex.ru | 73186 | vitaexpress.ru | yandex.com | 73456 | www.rookee.ru | yandex.com | 61411 | www.bajajauto.com | yandex.com | 77435 | www.kupibilet.ru | yandex.com | 55176 | prom.md | yandex.com | 26742 | bitmedia.io | yandex.ru | 85477 | kismia.com | yandex.com | 84547 | www.sellingpower.com | mixpanel.com | 12804 | ctc.ru | yandex.com | 87547 | apphud.com | yandex.com | 4922 | www.exness.com | yandex.com | 15339 | www.giraff.io | yandex.com | 92125 | www.darkorbit.com | bigpoint.net | 54164 | app.zeydoo.com | yandex.com | 37783 | autopiter.ru | yandex.com | 58315 | www.technodom.kz | yandex.com | 30800 | fboom.me | yandex.ru | 33030 | smmplanner.com | yandex.com | 23105 | expertoption.com | yandex.com | 89963 | www.wifimap.io | yandex.com | 98251 | www.youhodler.com | yandex.com | 61734 | spartak.com | yandex.com | 31769 | mybook.ru | yandex.com | 54886 | www.amediateka.ru | yandex.com | 49600 | youdo.com | yandex.com | 32302 | www.forumhouse.ru | yandex.ru | 18107 | www.sibnet.ru | yandex.com | 68865 | www.toyota.ru | yandex.com | 54997 | www.b-kontur.ru | yandex.com | 81703 | www.ps.kz | yandex.com | 58760 | propush.me | yandex.com | 63202 | ofd.astralnalog.ru | yandex.com | 31678 | lingualeo.com | yandex.ru | 77561 | www.ligastavok.ru | yandex.com | 16434 | www.open.ru | yandex.com | US | || ---|---|---| Rank | Website | Tracker domain | 95341 | jolly.me | yandex.ru | 84544 | bolshayaperemena.online | yandex.ru | 88449 | strelka.com | yandex.ru | 71216 | unitedtraders.com | yandex.ru | 82147 | livedune.ru | yandex.ru | 63801 | megabonus.com | yandex.ru | 1688 | www.championat.com | yandex.ru | 45753 | galaksion.com | yandex.ru | 44662 | perm.zarplata.ru | yandex.ru | 16639 | satu.kz | yandex.ru | 82766 | publicize.co | lr-ingest.io | 12989 | app.travelpayouts.com | yandex.ru | 73186 | vitaexpress.ru | yandex.ru | 55176 | prom.md | yandex.ru | 61411 | www.bajajauto.com | yandex.ru | 77435 | www.kupibilet.ru | yandex.ru | 85477 | kismia.com | yandex.ru | 73456 | www.rookee.ru | yandex.ru | 26742 | bitmedia.io | yandex.ru | 87547 | apphud.com | yandex.ru | 15339 | www.giraff.io | yandex.ru | 12804 | ctc.ru | yandex.ru | 92125 | www.darkorbit.com | bigpoint.net | 84547 | www.sellingpower.com | mixpanel.com | 54164 | app.zeydoo.com | yandex.ru | 28758 | secretmag.ru | yandex.ru | 30800 | fboom.me | yandex.ru | 89963 | www.wifimap.io | yandex.ru | 47558 | lentaru.media.eagleplatform.com | yandex.ru | 33030 | smmplanner.com | yandex.ru | 98251 | www.youhodler.com | yandex.ru | 58315 | www.technodom.kz | yandex.ru | 23105 | expertoption.com | yandex.ru | 31769 | mybook.ru | yandex.ru | 49600 | youdo.com | yandex.ru | 54886 | www.amediateka.ru | yandex.ru | 68865 | www.toyota.ru | yandex.ru | 61734 | spartak.com | yandex.ru | 32302 | www.forumhouse.ru | yandex.ru | 18107 | www.sibnet.ru | yandex.ru | 81703 | www.ps.kz | yandex.ru | 54821 | www.viennahouse.com | yandex.ru | 54997 | www.b-kontur.ru | yandex.ru | 63202 | ofd.astralnalog.ru | yandex.ru | 58760 | propush.me | yandex.ru | 31678 | lingualeo.com | yandex.ru | 16434 | 
www.open.ru | yandex.ru | ## **EU Mobile leaks:** Email address leaks to tracker domains in the EU-mobile crawl, no-action mode **EU Mobile leaks:**Email address leaks to tracker domains in the EU-mobile crawl, no-action mode 1. Site rank: Tranco rank 2. Encoding: Encoding or hash algorithm used when sending the email 3. Website: Hostname of the initially visited website (before a potential redirection) 4. Request Domain: eTLD+1 of the leaky request URL 5. Third Party Entitiy: Owner of the tracker domain 6. Tracker Category: Category of the tracker domain, this information comes from DuckDuckGo's Tracker Radar dataset 7. Blocklist: Blocklist that detected the tracker. WTM: whotracks.me, uBO: uBlock Origin, DDG: DuckDuckGo, DS: Disconnect 8. Page URL: (last_page) URL of the page where our crawler filled the email field 9. XPath: XPath of the filled email field 10. Id:ID of the email element that our crawler filled (10, 11 & 12 identify the leaking page and input elements. They can be used for debugging, reproduction etc.) ## **US Mobile leaks:** Email address leaks to tracker domains in the US-mobile crawl, no-action mode **US Mobile leaks:**Email address leaks to tracker domains in the US-mobile crawl, no-action mode 1. Site rank: Tranco rank 2. Encoding: Encoding or hash algorithm used when sending the email 3. Website: Hostname of the initially visited website (before a potential redirection) 4. Request Domain: eTLD+1 of the leaky request URL 5. Third Party Entitiy: Owner of the tracker domain 6. Tracker Category: Category of the tracker domain, this information comes from DuckDuckGo's Tracker Radar dataset 7. Blocklist: Blocklist that detected the tracker. WTM: whotracks.me, uBO: uBlock Origin, DDG: DuckDuckGo, DS: Disconnect 8. Page URL: (last_page) URL of the page where our crawler filled the email field 9. XPath: XPath of the filled email field 10. Id:ID of the email element that our crawler filled (10, 11 & 12 identify the leaking page and input elements. They can be used for debugging, reproduction etc.) ## **Previously Unlisted Tracker Domains:** Previously unknown tracker domains that collect email addresses. **Previously Unlisted Tracker Domains:**Previously unknown tracker domains that collect email addresses. ## **Received email samples:** In the six-week period following the crawls, we received 290 emails from 88 distinct sites on the email addresses used in the desktop crawls, despite not submitting any form. **Received email samples:**In the six-week period following the crawls, we received 290 emails from 88 distinct sites on the email addresses used in the desktop crawls, despite not submitting any form. ##### Some emails invite us back to their site ##### Some emails offer a discount to their site ## **Novel compression, encoding and hashing methods:** We extended leak detection from prior work to detect new compression, encoding and hashing methods used to exfiltrate email addresses. **Novel compression, encoding and hashing methods:**We extended leak detection from prior work to detect new compression, encoding and hashing methods used to exfiltrate email addresses. ## ⚠️ Leaks to Meta (Facebook) & TikTok - Both Meta Pixel and TikTok Pixel has a feature called Automatic Advanced Matching [1, 2] that collects hashed personal identifiers from the web forms in an automated manner. The hashed personal identifiers are then used to target ads on the respective platforms, measure conversions, or create new custom audiences. 
( *You can read about privacy issues caused by Meta Pixel on The Markup's Pixel Hunt series.*) - According to Meta's, and TikTok's documentation, Automatic Advanced Matching should trigger data collection when a user submits a form. We found that unlike what is claimed, both Meta and TikTok Pixel collects hashed personal data when the user clicks links or buttons that in no way resemble a submit button. In fact, Meta and TikTok scripts don't even try to recognize submit buttons, or listen to (form) submit events. You can view their overly broad and suspiciously similar list of selectors, which designates what page elements will trigger data collection. That means Meta and TikTok Pixel collect hashed personal information, even when a user decides to abandon a form, and clicks a button/link to navigate away from the page. - In March 2022, we ran additional crawls of top 100K websites to detect leaks triggered by unrelated button or link clicks. Our crawler filled the email and password fields, and then clicked on a no-op (non-functional) button that it injected into the page. Injecting and clicking on the no-op button enabled us to detect leaks that would be triggered by Meta and TikTok's Automatic Advanced Matching. We found that 8,438 (US) / 7,379 (EU) sites may leak to Meta when the user clicks on virtually any button or a link, after filling up a form. In addition, we found 154 (US) / 147 (EU) sites that may leak to TikTok in a similar manner. - Full scale of leaks to Meta, and all leaks to TikTok were discovered after finalizing our paper, through the crawls described in the previous bullet point. There are two main reasons for that: 1) TikTok and Meta leaks are slightly different than the rest of the leaks we studied in our paper--they require further interaction with the page after filling out a form. 2) TikTok started beta testing the Automatic Advanced Matching feature in February 2022, long after we finish the crawls used in the paper (May'21). That also means, unlike the other results in our paper, some of the findings presented above (on Meta & TikTok pixel) are not peer-reviewed. - We filed a bug report with Meta (25 March 2022), and reached out to TikTok using their Contact the Data Protection Officer form and Request privacy information form (21 April 2022). Meta swiftly responded to our bug report and said they assigned the issue to their engineering team (25 March 2022). TikTok hasn't yet responded to our disclosure. (The report to TikTok was filed more recently, since leaks to TikTok were discovered more recently--in fact while investigating the leaks to Meta.) - ##### Disclosure to Meta **SubscribedButtonClick event fires on virtually every click, causing PII collection against user intent**[Link]When Automatic Advanced Matching is enabled, SubscribedButtonClick event is fired after clicking virtually any button or link on a page. That means Meta Pixel collects hashed personal information, even when a user decides to abandon a form, and clicks a button/link to navigate away from the page. According to its official page [1], Automatic Advanced Matching should trigger data collection when a user submits a form: *"After the visitor clicks Submit, the pixel's JavaScript code automatically detects and passes the relevant form fields to Facebook."*Unlike what is claimed, Meta Pixel collects hashed personal data when the user clicks links or buttons that in no way resemble a submit button (attached screenshot). 
In fact, Meta's JavaScript code in question doesn't even try to recognize submit buttons, or listen to (form) submit events. See the attached screen captures: **abcmouse.com**(a website for children): Meta Pixel collects the hashed email address when the user closes the newsletter dialog. In that case, sharing the email address is the exact opposite of the user's intent ( attached screen capture).**prothomalo.com**: clicking Back, Terms of Service or Privacy Policy links triggers the collection of the hashed email address, and (hashed) first and last names. (attached screen capture)We hope you will recognize the disconnect between the described and actual behavior of Automatic Advanced Matching, and take necessary actions to address this issue. [1] https://web.archive.org/web/20220325001706/https://www.facebook.com/business/m/signalshealth/optimize/automatic-advanced-matchingPS: This bug report is based on an academic study. If you need more technical details or have difficulties reproducing, contact [email address]. ##### Disclosure to Tiktok *A slightly modified version of the following text is submitted to conform the word limit of the contact form.***Automatic Advanced Matching causes PII collection against user intent**When Automatic Advanced Matching is enabled, users' hashed email address is collected after clicking virtually any button on the page. That means TikTok Pixel collects hashed personal information, even when a user decides to abandon a form, and clicks a button or link to navigate away from the page. According to TikTok’s help pages [1], Automatic Advanced Matching should trigger data collection when a user submits a form: "Automatic Advanced Matching is programmed to recognize form fields, specifically capturing email and phone data. The capability is only triggered when the visitor submits information in an email or phone input field." Unlike what is claimed in the linked help page, TikTok Pixel also sends hashed personal data when the user clicks links or buttons that in no way resemble a submit button. In fact, the TikTok's JavaScript code in question doesn't even try to recognize submit buttons, or listens to form "submit" events. That means clicking almost any button or a link will trigger the data collection, regardless of the user’s intention. The issue is demonstrated in the following screen captures. Dismissing the consent dialog on kiwico.com/login causes Tiktok to collect the hashed email address from the login form: screen capture Clicking a radio button triggers collection on redcross.ca: screen capture On benjerry.com, hashed email collection is triggered by clicking on the email input field--not even a button or a link: screen capture We hope you will recognize the disconnect between the described [1] and actual behavior of Automatic Advanced Matching, and take necessary actions to address this issue. [1] https://archive.ph/BQ7nt#selection-3501.113-3501.217 PS: This bug report is a result of an academic study. If you need more technical details or have difficulties reproducing, please reach us at: [contact email] 1. Site rank: Tranco rank 2. Encoding: Encoding or hash algorithm used when sending the email 3. Website: Hostname of the initially visited website (before a potential redirection) 4. Request Domain: eTLD+1 of the leaky request URL 5. Page URL: (last_page) URL of the page where our crawler filled the email field 6. XPath: XPath of the filled email field 7. 
Id:ID of the email element that our crawler filled (5, 6 & 7 identify the leaking page and input elements. They can be used for debugging, reproduction etc.) 1. Site rank: Tranco rank 2. Encoding: Encoding or hash algorithm used when sending the email 3. Website: Hostname of the initially visited website (before a potential redirection) 4. Request Domain: eTLD+1 of the leaky request URL 5. Page URL: (last_page) URL of the page where our crawler filled the email field 6. XPath: XPath of the filled email field 7. Id:ID of the email element that our crawler filled (5, 6 & 7 identify the leaking page and input elements. They can be used for debugging, reproduction etc.) ## 📣 Security Disclosures, GDPR Requests, and Leak Notifications Our methods allow us to detect email and password leaks from clients to trackers, but what happens after the leaks reach third party's servers is unknown to us. In order to better understand the server-side processing of collected emails, and to disclose cases of password collection, we have reached out to more than a hundred first and third parties. In all cases we sent the emails using one of the authors' university email addresses, and real name; while also disclosing the nature of the research. #### Password collection disclosure Once again we note that we believe all password leaks to third parties mentioned below are incidental. - Yandex, the most prominent tracker that collects users' passwords, has quickly responded to our disclosure and rolled out a fix to prevent password collection. We have also notified more than 50 websites where passwords were collected. Since the majority of the websites embedding Yandex were in Russian, we have enclosed a Russian translation of our message in the notification email, along with our message in English. - Mixpanel released an update only two days after we disclosed the issue. With this change, even the users with outdated SDKs--which was the root cause of the problem--were protected from collecting passwords involuntarily. - LogRocket, who collected passwords on publicize.co's login page, have never replied to our repeated contact attempts; and the password leak remained on Publicize's website for more than ten weeks, before it was finally fixed. We have also enrolled the help of a contact at the Electronic Frontier Foundation, who tried calling LogRocket's phone number, emailed their privacy contact address, and their cofounder—all to no avail. Our attempts to disclose the issue via LogRocket's chatbot have also failed. We have also contacted Publicize, and have not heard back. #### GDPR requests on email exfiltration to first & third parties We reached out to **58 first** and **28 third** parties with GDPR requests. We avoided sending blanket data access requests to minimize the overhead for the entities who were obliged to respond to our GDPR requests. Instead, we asked specific questions about how the collected emails are processed, retained and shared. ## **Sample GDPR requests:** **Sample GDPR requests:** ##### 1. Sample GDPR request sent to third parties To Whom It May Concern: I and my colleagues from multiple European research institutions are investigating personal data collection on popular websites. During our experiments, we found that third-party.com collects email addresses from input fields before the user submits the form. We detected this behavior on several websites including first-party.com. 
As an EU resident myself, I am requesting access to the following information pursuant to Article 15(1) GDPR: - What are the processing purposes for collecting email addresses before form submission? - What is the legal basis for collecting email addresses before form submission (see article 6(1) GDPR)? - What is the retention period for the collected email addresses? - Do you share email addresses with other parties, including your business partners? The email address I'm sending this request for is "email-prefix+first-party.com@gmail.com" [in CC]. Please feel free to send a verification code to that address to make sure it belongs to me. Please note that this is not a blanket data access request. Rather, our questions aim to bring transparency to the collection of email addresses that occur before form submission. We purposefully limited our questions to minimize the overhead for you. We would appreciate it if you could update us if you change the way email addresses are collected before form submission. We plan to describe responses to our disclosure in our academic paper, which is a collaboration between researchers from law and computer science disciplines. We would like to stress that we have not captured data of your visitors, or any other user in our study. Please feel free to reach out if you need any clarification or additional information regarding our request. Kind regards, Name Surname ##### 2. Sample GDPR request sent to first parties To Whom It May Concern: I and my colleagues from multiple European research institutes are investigating personal data collection on popular websites. During our experiments, we found that a third-party (third-party.com) on your website (first-party.com) collects visitors' email addresses from a form field even if the visitor doesn't submit the form, and doesn't give their consent. Technical details pertaining to the data collection can be found at the end of this email. As an EU resident myself, I would like to request access to the following information pursuant to Article 15(1) GDPR: - Were you aware that (XXX) collects email addresses from input fields on your website before website visitors clicked 'submit'? - What are the processing purposes for collecting email addresses before form submission? - What is the legal basis for collecting email addresses before form submission (see article 6(1) GDPR)? - What is the retention period for email addresses collected before form submission? Please note that this is not a blanket data access request. Rather, our questions aim to bring transparency to the collection of email addresses that occur before form submission. We purposefully limited our questions to minimize the overhead for you. We would appreciate it if you could update us if you take any action to change the way that email addresses are collected on your website. We plan to describe websites' responses to our disclosure in our academic paper, which is a collaboration between researchers from law and computer science disciplines. We would like to stress that we have not captured data of your visitors, or any other user in our study. Please feel free to reach out if you need any clarification or additional information regarding our request. Kind regards, Name Surname Technical details of the email collection: - The email address (email-prefix+first-party.com@gmail.com") was collected on (first- party.com/inner-page) from the input field with the xpath="XXX". 
- If you would like to verify that the collected email address belongs to me, feel free to send a verification code to ("email-prefix+first-party.com@gmail.com"). - The Unix timestamp of the visit during which my email was collected was (XXX). - The email address was sent to (first-party.com) in Base64-encoded/SHA-256-hashed/... form in the following request: third-party.com/leak-endpoint-full-url, post-Data: ... ## **A sample of responses from first parties:** 30/58 first parties replied **A sample of responses from first parties:**30/58 first parties replied - Fivethirtyeight.com (via Walt Disney's DPO), trello.com (Atlassian), lever.co, branch.io and cision.com said they had not been aware of the email collection prior to form submission on their websites and since addressed the issue. - Marriott said that the information collected by Glassbox is used for purposes including customer care, technical support, and fraud prevention. - Tapad, a cross-device tracking company on whose web- site we found an email leak, said that they are not offering their services to UK & EEA users since August, 2021; and they have deleted all data that they held from these regions. - stellamccartney.com explained that the emails on their websites were collected before the submission due to a technical issue, which was fixed upon our disclosure. According to their response, the SaleCycle script that collected email addresses had not been visible to their cookie management tool from OneTrust. ## **A sample of responses from third parties:** 15/28 third parties replied **A sample of responses from third parties:**15/28 third parties replied - Taboola said in certain cases they collect users' email hashes before form submission for ad and content personalization; they keep email hashes for at most 13 months; and they do not share them with other third parties. Taboola also said they only collect email hashes after getting user consent. However, upon sharing our findings showing otherwise, they acknowledged and fixed the issues on the reported websites. Reportedly, the data collection were triggered before consent due to 1) websites outside the EU who do not recognize the GDPR, or 2) misconfiguration of consent management platforms. - Zoominfo said their “FormComplete” product appends contact details of users to forms, when the user exists in ZoomInfo's sales and marketing database. They said the ability to capture form data prior to submission can be enabled or disabled by their clients. - ActiveProspect said their TrustedForm product is used to certify consumer's consent to be contacted for compliance with regulations such as the Telephone Consumer Protection Act in the US. They said data captured from abandoned forms are marked for deletion within 72 hours, is not shared with anyone including the site owner. #### Notification to websites with email leaks in the US crawl We sent a friendly notification to these websites about the email exfiltration, rather than a formal GDPR request. We did not get any response from these **33** websites. ## **Sample notification sent to websites with email leaks in the US crawl:** **Sample notification sent to websites with email leaks in the US crawl:** To Whom It May Concern: I and my colleagues from multiple European research institutes are investigating how and why email addresses are collected from online forms. 
During our investigations, we found that when (first-party.com) is visited from the US, a third-party (third-party.com) collects visitors' email addresses from a form field even if the form is abandoned (never submitted). Technical details to reproduce this issue can be found at the end of this email. We wanted to inform you since we found that websites may not always be aware that their visitors' email addresses (or their hashes) are collected by third-party scripts, before submitting any forms. We would appreciate it if you answer the following questions, but we note that you don't have an obligation to do so: - Were you aware that third-party.com collects email addresses from input fields on your website before website visitors clicked 'submit'? - What are the processing purposes for collecting email addresses before form submission? - What is the retention period for email addresses collected before form submission? Please note that this is not a data access request. We purposefully limited our questions to minimize the overhead for you. Rather, our questions aim to bring transparency to the collection of email addresses that occur before form submission. We would appreciate it if you could let us know if you take any action to change the way that email addresses are collected on your website. We plan to describe websites' responses to our disclosure in our academic paper, which is a collaboration between researchers from law and computer science disciplines. We would like to stress that we have not captured data of your visitors, or any other user in our study. Please feel free to reach out if you need any additional information about our disclosure. Technical details: - The email address was collected on (first- party.com/inner-page) from the input field with the id="XXX" and the xpath="XXX", and was sent to (third-party.com). We will be happy to provide additional technical details and a screen capture of the email collection if that will make it easy for you to verify the issue. Kind regards, Name Surname ## ⏩ Follow-up Crawls We ran additional crawls between 25-31 January 2022 to collect fresh data about the behavior we studied. Here are some highlights: - Many websites where we detected leaks to Taboola started to use modal consent banners, which prevents interaction with the pages before giving consent. We didn't detect any email leaks on these websites without interacting with the consent banners, which substantially reduced the number of leaks to Taboola. - Adroll started showing a consent dialog which reduced the number of leaks to Adroll to zero in the crawl. However, in manual follow up analysis we found several websites where Adroll collected hashed emails when the user simply clicked on the page. We have shared these examples with Adroll, who then addressed the issues and said that their *"work to troubleshoot and deploy a comprehensive solution is underway with testing and incremental roll out".* - We identified incidental password collection by FullStory, Hotjar, Decibel and Yandex. Upon disclosures, Fullstory and Hotjar swiftly fixed the issues, which were due to mistakes on part of the first parties (e.g. copying the password values into other DOM elements' attributes). Yandex said they need some time to solve the issue, which they eventually did. - In a manual follow-up investigation, we found additional password leaks to LogRocket on the login form of the zoning.sandiego.gov website. 
We've disclosed the issue to both LogRocket, zoning.sandiego.gov and opencounter.com (the provider of the web application running on zoning.sandiego.gov). We didn't get any response from those parties, but we verified that password leaks to LogRocket were eventually addressed. ## **EU email leaks** (Follow-up crawl) **EU email leaks**(Follow-up crawl) 1. Site rank: Tranco rank 2. Encoding: Encoding or hash algorithm used when sending the email 3. Website: Hostname of the initially visited website (before a potential redirection) 4. Request Domain: eTLD+1 of the leaky request URL 5. Third Party Entitiy: Owner of the tracker domain 6. Tracker Category: Category of the tracker domain, this information comes from Tracker Radar Collector"s dataset 7. Blocklist: Blocklist that detected the tracker. WTM: whotracks.me, uBO: uBlock Origin, DDG: DuckDuckGo, DS: Disconnect 8. Page URL: (last_page) URL of the page where our crawler filled the email field 9. XPath: XPath of the filled email field 10. Id:ID of the email element that our crawler filled (10, 11 & 12 identify the leaking page and input elements. They can be used for debugging, reproduction etc.) ## **US email leaks** (Follow-up crawl) **US email leaks**(Follow-up crawl) 1. Site rank: Tranco rank 2. Encoding: Encoding or hash algorithm used when sending the email 3. Website: Hostname of the initially visited website (before a potential redirection) 4. Request Domain: eTLD+1 of the leaky request URL 5. Third Party Entitiy: Owner of the tracker domain 6. Tracker Category: Category of the tracker domain, this information comes from Tracker Radar Collector"s dataset 7. Blocklist: Blocklist that detected the tracker. WTM: whotracks.me, uBO: uBlock Origin, DDG: DuckDuckGo, DS: Disconnect 8. Page URL: (last_page) URL of the page where our crawler filled the email field 9. XPath: XPath of the filled email field 10. Id:ID of the email element that our crawler filled (10, 11 & 12 identify the leaking page and input elements. They can be used for debugging, reproduction etc.) ## **Password leaks** (Follow-up crawl) **Password leaks**(Follow-up crawl) EU | || ---|---|---| Rank | Website | Tracker domain | 20674 | clearbanc.com | fullstory.com | 28226 | nexmo.com | decibelinsight.net | 98254 | www.agrofy.com.ar | hotjar.com | US | || ---|---|---| Rank | Website | Tracker domain | 20043 | www.nav.com | fullstory.com | 20674 | clearbanc.com | fullstory.com | 56040 | www.medikforum.ru | yandex.ru | 89002 | www.appjobs.com | hotjar.com | 98254 | www.agrofy.com.ar | hotjar.com | ## 🎇 LeakInspector: an add-on that warns and protects against personal data exfiltration - We developed LeakInspector to help publishers and end-users to audit third parties that harvest personal information from online forms without their knowledge or consent. It has the following features: - Blocks requests containing personal data extracted from the web forms and highlights related form fields by showing add-on's icon. - Logs technical details of the detected sniff and leak attempts to console to enable technical audits. The logged information includes the value and XPath of the sniffed input element, the origin of the sniffer script, and details of the leaky request such as the URL and the POST data. - LeakInspector also features a user interface where recent sniff and leak attempts are listed, along with the tracker domain, company and tracker category. The user interface module is based on DuckDuckGo’s Privacy Essentials add-on. 
- ⚠️ The add-on is a proof-of-concept, and has not been tested at scale. Please use at your own discretion. - Our attempts to publish the add-on on the Chrome Web Store failed, because new uploads of Manifest v2 add-ons are not accepted. For leak detection, our add-on requires access to network request details, which will be disallowed in Manifest v3. ~~We are working on publishing the add-on for Firefox.~~Our attempts to publish the add-on for Firefox have failed. ## 📁 Data and Code Our crawl data (screenshots, request details, HTML sources), the source code for the crawler, analysis scripts, and the LeakInspector add-on can be found in the following links:- The data from ten crawls performed between May 2021 and June 2021 - Main repository containing leak detection and analysis scripts - LeakInspector add-on source code - Crawler source code ## ☝️ Questions & Answers **Q: Why do you only focus data collection prior to form submission?** **A:** We believe it is strongly against users’ expectations to collect personal data from web forms for tracking purposes prior to submitting a form. We wanted to measure this behavior to assess its prevalence. **Q: Do people really abandon forms that they start filling in?** **A:** According to a survey by The Manifest, 81% of the 502 respondents have abandoned forms at least once, and 59% abandoned a form in the last month. **Q: Who decides to collect form data before submission: websites or third parties?** **A:** Depending on the case, it may be the website who configures the third-party script to collect data before form submission; or this may be the third party’s default behavior. **Q: Are websites aware of third parties collecting form data before submission?** **A:** Some websites told us that they were not aware of this data collection and rectified the issue upon our disclosures. **Q: How are Meta and TikTok results different from the ones you present in the paper?** **A:** TikTok and Meta leaks require further interaction with the page after filling out a form. However, our screen captures show how easy it is to trigger this data collection. **Q: Did you share your findings with anybody prior to public release?** **A:** We have shared an earlier version of our paper with certain privacy authorities and browser vendors (Google, Mozilla, Brave, Apple and DuckDuckGo). Brave asked for our dataset, and encouraged us to reach out to the blocklist maintainers to add missing tracker domains. DuckDuckGo invited us to give a talk to present our findings. We had a call with a Mozilla engineer to discuss potential solutions to the issue. A European DPA asked for the list of websites/third parties from their country engaging in email exfiltration, which we have shared. **Q: In the screen captures you use a form (“SHA256 Online”) to convert the email address to a random looking string. What is that for?** **A:** Some third parties collect email addresses after hashing them. Please see this blog post on what hashing is, and why hashing email addresses does not protect your privacy. **Q: How many distinct trackers were found to collect email addresses?** **A:** Emails (or their hashes) were sent to 174 distinct domains (eTLD+1) in the US crawl, and 157 distinct domains in the EU crawl. 
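To make the hashing answer above concrete, here is a small illustration (not code from the study) of why a hashed email address still works as an identifier: the same input always yields the same digest, and anyone holding a list of known addresses can map digests back to them.

```python
import hashlib


def hashed_identifier(email: str) -> str:
    # Trackers typically normalize before hashing so the resulting ID is stable.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


# The same person produces the same digest wherever it is computed,
# so the hash links activity across sites just like the raw address would.
print(hashed_identifier("Jane.Doe@example.com"))
print(hashed_identifier("  jane.doe@EXAMPLE.com "))  # identical digest

# And a party with a list of known addresses can trivially reverse it.
known_addresses = ["jane.doe@example.com", "someone.else@example.org"]
reverse_lookup = {hashed_identifier(a): a for a in known_addresses}
print(reverse_lookup[hashed_identifier("Jane.Doe@example.com")])  # jane.doe@example.com
```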
## Acknowledgments We thank Alexei Miagkov, Arvind Narayanan, Bart Jacobs, Bart Preneel, Claudia Diaz, David Roefs, Dorine Gebbink, Galina Bulbul, Gwendal Le Grand, Hanna Schraffenberger, Konrad Dzwinel, Pete Snyder, Sergey Galich, Steve Englehardt, Vincent Toubiana, our shepherd Alexandros Kapravelos, SecWeb and USENIX Security reviewers for their valuable comments and contributions. The idea for measuring email exfiltration before form submission is initially developed with Steve Englehardt and Arvind Narayanan during an earlier study. Asuman Senol was funded by the Cyber-Defence (CYD) Campus of armasuisse Science and Technology. Gunes Acar was initially supported by a postdoctoral fellowship from the Research Foundation Flanders (FWO). The study was supported by CyberSecurity Research Flanders with reference number VR20192203. ## Corrigendum - *13 May 2022*: The initial version of our website and paper incorrectly referred TowerData as the owner of the rlcdn.com domain. The rlcdn.com domain belongs to LiveRamp. We've also reported this issue to Disconnect, which was one of the sources we used to identify domain ownership.
true
true
true
null
2024-10-13 00:00:00
2018-04-09 00:00:00
null
null
null
null
null
null
31,493,683
https://thenewstack.io/james-webb-space-telescope-and-344-single-points-of-failure/
James Webb Space Telescope and 344 Single Points of Failure
Jennifer Riggins
# James Webb Space Telescope and 344 Single Points of Failure Earlier this year, the single greatest site reliability engineering (SRE) lesson unfolded itself out in space. Last week we saw the very first, better-than-even-expected images from the James Webb Space Telescope or JWST. After ten years of design and build on a $9 billion budget, this was an effort in testing 344 single points of failure — all before deploying to production, with the distributed system a million miles and one month away. Needless to say, there are a lot of reliability lessons to be learned from this endeavor. At his WTF is SRE talk last month, Robert Barron brought his perspective as an IBM SRE architect, amateur space historian, and a hobby space photographer to uncover the patterns of reliability that enabled this feat. And how NASA was able to trust its automation so much that it’d release something with no hopes of fixing it. It’s a real journey into observability at scale. ## Universe-Scale Functional and Nonfunctional Requirements “It’s a great platform for demonstrating site reliability engineering concepts because this is reliability to the extreme,” Barron said of the James Webb Space Telescope (JWST). “If something goes wrong, if it’s not reliable, then it doesn’t work. We can’t just deploy it again. It’s not something logical, it’s something physical that has to work properly and I think there are a lot of lessons and a lot of inspiration that we can take from this work into our day-to-day lives.” After 30 years of amazing photos from the Hubble Telescope, there was a demand for new business and technical capabilities, including to be able to see through and past clouds as they are created. Computer, enhance! Compare the same target — seen by Spitzer & in Webb’s calibration images. Spitzer, NASA’s first infrared Great Observatory, led the way for Webb’s larger primary mirror & improved detectors to see the infrared sky with even more clarity: https://t.co/dIqEpp8hVi pic.twitter.com/g941Ug2rJ8 — NASA Webb Telescope (@NASAWebb) May 9, 2022 When designing JWST, the design engineers kicked off with the functional requirements, which in turn drove a lot of non-functional requirements. For instance, it needed to be much more powerful and larger than Hubble, but to achieve that it needed a significantly larger mirror. However, an operational constraint arose that the mirror is so large that it doesn’t fit into any rocket, so it needed to be broken up into pieces. The non-functional requirement became to create a foldable mirror. A solution arose to break the mirror up into smaller hexagons, which can be aligned together to form a honeycomb-shaped mirror. The second non-functional requirement of the JWST was to go beyond Hubble in not only seeing invisible light, but in seeing hot infrared light. But, to be accurate, the mirror needs to keep cold. “Not just colder, but we need to be able to control the temperatures. Exactly. Because any variation and we’re going to look at something and think ‘Oh, this is a star. This is a galaxy. Not that’s just something there on JWST itself, which is slightly colder or warmer than it should be,” Barron explained. Unlike Hubble which orbits the Earth, JWST is unable to orbit because then its temperatures would vary greatly in sun and shade. Plus, it needs to be much farther away from earth than Hubble has ever gone. 
With this in mind, the controls and antennas face Earth while the telescope faces away. The honeycomb set of mirrors reflects light into a second set of mirrors, which sends the images back to the cameras located in the middle of the honeycomb. Behind it all is a massive set of sunshades that controls the temperature of the telescope.

## When Overhead Costs Soar

When NASA decided back in 1995 to build this next-generation space telescope, the agency assumed it would cost about a billion dollars. In 2003, they started to design it, "and they realized that it's not just scaling up Hubble, we need technological breakthroughs — the foldable mirrors, precise control of the temperature, the unfurling of the heat shields, and so on," said Barron.

Over the next four years of high-level design, the budget moved to $3.5 billion, with another billion planned for a decade of operations. Then, between 2007 and 2021, NASA dove into the design, build and test phase of what was named the James Webb Space Telescope.

"Like good SREs we test and, because we have ten technological breakthroughs that we need to achieve, we have a lot of failures," Barron said. "So we retest and fail, and retest and fail. And this takes a lot of time, and the project is nearly canceled many times. And eventually it costs $9.5 billion just to build it. And that $1 billion that we thought would be enough to operate for 10 years is only going to be enough to operate for five years."

All things considered, the JWST launched in December of last year, kicking off its operation and what Barron referred to as "pirouetting and ballet moves" through space.

"You can see that over a period of 13 days the telescope, like a butterfly, opens up, spreads its wings and starts reporting home. And then it starts going further away from Earth until it reaches the location where it will remain for the next decade," he explained. The journey took a total of 30 days.

As of the WTF is SRE event where Barron spoke at the end of April, the JWST was considered mid-deployment: "Before reaching production we're doing the final tests before we can say that the system is working and can start giving actual scientific data."

During this deployment phase, with so many components and pieces moving and changing, there were many points of failure — 344 to be exact. JWST "is famous for having over 300 single points of failure during this process of 30 days, each of which has to go perfectly, each of which, if it fails, the entire telescope will not be able to function," Barron explained.

When those first exceptional photos came back, revealing new, fainter galaxies, was it luck or a feat of extreme site reliability engineering?

"How did NASA reach the point where they could send $10 billion worth of satellite out into space without being able to fix anything, without being able to reach out with an astronaut to say, 'Oh, I need to move something, I need to restart something, I need to do something manual'? How can the system be completely, fully automated? And can I trust that no dragons will come from outer space and do something to the telescope which will cause it to fail?"

— Robert Barron, @FlyingBarron

## Redundancy. Repairability. Reliability.

You could say this is more than a leap of faith.
That trust that NASA had in all of this working properly, Barron believes, comes from its decades-long history of sending craft into space, which is grounded in the values of:

- Redundancy
- Repairability
- Reliability

Both the Voyager spacecraft that went to Jupiter, Saturn, Uranus and Neptune and the Mars rovers were actually sets of identical twin craft, in case one failed. Similarly, constellations of satellites work in tandem as fail-safes. This redundancy has long been embraced by NASA, but it wasn't an option with the JWST price tag.

When redundancy is out, NASA next reaches for repairability. The Hubble Telescope has been repaired and upgraded multiple times, for both fixes and preventive maintenance. And, according to Barron, 50% of astronaut time on the International Space Station is actually spent on toil. "If the astronauts left the International Space Station, then, in a very short period of time, it would just break down and they'd be forced to send it back down into the atmosphere to burn up," he explained.

But, again, the nonfunctional requirement of repairability was not an option for JWST either, because it is floating far beyond the current reach of astronauts. So the next step toward reliability came from building the JWST out of component architecture.

Barron went through a brief history of the Space Race between the Soviet Union and the U.S. from 1960 to 1988. He uncovered the pattern that redundancy often didn't matter much, because the failure modes were shared by both twin craft each time, like an alloy that wasn't durable enough or a launch during a sandstorm. He did note that the Soviet space program chose not to publish its mistakes, so it was less likely than NASA to learn from them.

> I'm sorry I missed the political and civil servant history of James Webb who played a pervasive role in homophobic discrimination, helping set historical policy to remove/ban LGBTQ people from federal gov. This naming harms the amazing #JWST contributors https://t.co/jA7pRjaSgM — Jennifer Riggins💙💛 (@jkriggins) July 15, 2022

"Redundancy is very good, but sometimes at a system level it doesn't solve a problem, because the problem is much wider," Barron said, adding that the same happens to SREs. Kubernetes, for example, has componentization, redundancy and load balancing built in, but that doesn't matter if the problem is with DNS or an application bug. Often reliability demands more than simple redundancy.

The monolithic Hubble was designed from the start with repairability and upgradeability in mind. With repairability out of the picture, there had to be a lot more testing on JWST than on Hubble, for each single point of failure. For example, each mirror is a smaller component that can be realigned remotely. He analogized this to Kubernetes, where you want to allocate the right amount of CPU, memory and other resources to each and every microservice.

In fact, JWST made some observability trade-offs: it could only allow for so many selfie cameras to observe its own condition, because adding more could affect its temperature and alter its observations.
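Barron's system-level point about redundancy lends itself to a quick bit of availability arithmetic. The sketch below is illustrative only and not from the talk; it assumes independent replica failures and a single shared dependency (the DNS case above), and it shows that past a couple of replicas the shared dependency, not the replica count, sets the ceiling on availability.

```python
# A minimal, illustrative sketch (not from Barron's talk): why adding
# replicas cannot beat a shared single point of failure such as DNS.
# Assumes replica failures are independent and that every replica
# depends on one shared service.

def redundant_availability(replica_availability: float, replicas: int) -> float:
    """Probability that at least one of N independent replicas is up."""
    return 1.0 - (1.0 - replica_availability) ** replicas


def system_availability(replica_availability: float,
                        replicas: int,
                        shared_dependency_availability: float) -> float:
    """Overall availability is capped by whatever all replicas share."""
    return (redundant_availability(replica_availability, replicas)
            * shared_dependency_availability)


if __name__ == "__main__":
    for n in (1, 2, 5, 10):
        overall = system_availability(0.99, n, shared_dependency_availability=0.995)
        print(f"{n:>2} replica(s): {overall:.4%}")
    # Prints roughly 98.5%, 99.49%, 99.50%, 99.50%: past the second replica,
    # the shared dependency, not the redundancy, determines availability.
```

That ceiling is the same reason the JWST team poured its effort into identifying and repeatedly testing its 344 single points of failure rather than relying on redundancy alone.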
## The JWST SRE Strategy

There's no doubt that the James Webb Space Telescope SRE strategy carries higher stakes than any enacted on Earth. It still makes for a fantastic example of how site reliability engineering and observability needs vary with circumstances, and of how sometimes chaos engineering can only be performed before a system goes into production.

Barron observed some of the elements of the JWST's SRE strategy:

- Aim for 100% availability (no room for an error budget)
- Embrace new technologies for a new product
- Invest all efforts in one major deployment
- Maximize functional capacity by reducing monitoring and observability load
- Prioritize nonfunctional requirements, balancing them with functional ones
- Create redundant systems, as far as possible
- Reduce technical debt and avoid problems detected in previous deployments
- Identify as many single points of failure as possible, then test for them again and again
- Balance observability requirements — cost, load, complexity — with benefits
- Always test and recognize how testing increases business value

The JWST experiment is also a good reminder that, with lower stakes than NASA's, a much more frequent and smaller deployment cadence, and less than 100% uptime required, you can experiment more with redundancy, repairability and reliability to continuously improve your systems, ideally under significantly less pressure.

"As SREs, we don't want to aim for 100% availability. We want the right amount of availability, and we don't want to overspend — neither resources nor budget — in order to get there. We don't want to embrace too many new technologies for new products," Barron said. "A lot of the lessons from JWST are what not to do."

*Disclosure: The author of this article was a host of the WTF is SRE conference.*
true
true
true
Earlier this year, the single greatest site reliability engineering (SRE) lesson unfolded itself out in space. Last week we saw
2024-10-13 00:00:00
2022-05-21 00:00:00
https://cdn.thenewstack.…ure-1024x493.png
article
thenewstack.io
The New Stack
null
null
8,856,488
https://6to5.org/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null