{"title": "avg", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nApril 2001, rev. April 2003(This article is derived from a talk given at the 2001 Franz\nDeveloper Symposium.)\nIn the summer of 1995, my friend Robert Morris and I\nstarted a startup called \nViaweb. \nOur plan was to write\nsoftware that would let end users build online stores.\nWhat was novel about this software, at the time, was\nthat it ran on our server, using ordinary Web pages\nas the interface.A lot of people could have been having this idea at the\nsame time, of course, but as far as I know, Viaweb was\nthe first Web-based application. It seemed such\na novel idea to us that we named the company after it:\nViaweb, because our software worked via the Web,\ninstead of running on your desktop computer.Another unusual thing about this software was that it\nwas written primarily in a programming language called\nLisp. It was one of the first big end-user\napplications to be written in Lisp, which up till then\nhad been used mostly in universities and research labs. [1]The Secret WeaponEric Raymond has written an essay called \"How to Become a Hacker,\"\nand in it, among other things, he tells would-be hackers what\nlanguages they should learn. He suggests starting with Python and\nJava, because they are easy to learn. The serious hacker will also\nwant to learn C, in order to hack Unix, and Perl for system\nadministration and cgi scripts. Finally, the truly serious hacker\nshould consider learning Lisp:\n\n Lisp is worth learning for the profound enlightenment experience\n you will have when you finally get it; that experience will make\n you a better programmer for the rest of your days, even if you\n never actually use Lisp itself a lot.\n\nThis is the same argument you tend to hear for learning Latin. It\nwon't get you a job, except perhaps as a classics professor, but\nit will improve your mind, and make you a better writer in languages\nyou do want to use, like English.But wait a minute. This metaphor doesn't stretch that far. The\nreason Latin won't get you a job is that no one speaks it. If you\nwrite in Latin, no one can understand you. But Lisp is a computer\nlanguage, and computers speak whatever language you, the programmer,\ntell them to.So if Lisp makes you a better programmer, like he says, why wouldn't\nyou want to use it? If a painter were offered a brush that would\nmake him a better painter, it seems to me that he would want to\nuse it in all his paintings, wouldn't he? I'm not trying to make\nfun of Eric Raymond here. On the whole, his advice is good. What\nhe says about Lisp is pretty much the conventional wisdom. But\nthere is a contradiction in the conventional wisdom: Lisp will\nmake you a better programmer, and yet you won't use it.Why not? Programming languages are just tools, after all. If Lisp\nreally does yield better programs, you should use it. And if it\ndoesn't, then who needs it?This is not just a theoretical question. Software is a very\ncompetitive business, prone to natural monopolies. A company that\ngets software written faster and better will, all other things\nbeing equal, put its competitors out of business. And when you're\nstarting a startup, you feel this very keenly. Startups tend to\nbe an all or nothing proposition. You either get rich, or you get\nnothing. In a startup, if you bet on the wrong technology, your\ncompetitors will crush you.Robert and I both knew Lisp well, and we couldn't see any reason\nnot to trust our instincts and go with Lisp. 
We knew that everyone\nelse was writing their software in C++ or Perl. But we also knew\nthat that didn't mean anything. If you chose technology that way,\nyou'd be running Windows. When you choose technology, you have to\nignore what other people are doing, and consider only what will\nwork the best.This is especially true in a startup. In a big company, you can\ndo what all the other big companies are doing. But a startup can't\ndo what all the other startups do. I don't think a lot of people\nrealize this, even in startups.The average big company grows at about ten percent a year. So if\nyou're running a big company and you do everything the way the\naverage big company does it, you can expect to do as well as the\naverage big company-- that is, to grow about ten percent a year.The same thing will happen if you're running a startup, of course.\nIf you do everything the way the average startup does it, you should\nexpect average performance. The problem here is, average performance\nmeans that you'll go out of business. The survival rate for startups\nis way less than fifty percent. So if you're running a startup,\nyou had better be doing something odd. If not, you're in trouble.Back in 1995, we knew something that I don't think our competitors\nunderstood, and few understand even now: when you're writing\nsoftware that only has to run on your own servers, you can use\nany language you want. When you're writing desktop software,\nthere's a strong bias toward writing applications in the same\nlanguage as the operating system. Ten years ago, writing applications\nmeant writing applications in C. But with Web-based software,\nespecially when you have the source code of both the language and\nthe operating system, you can use whatever language you want.This new freedom is a double-edged sword, however. Now that you\ncan use any language, you have to think about which one to use.\nCompanies that try to pretend nothing has changed risk finding that\ntheir competitors do not.If you can use any language, which do you use? We chose Lisp.\nFor one thing, it was obvious that rapid development would be\nimportant in this market. We were all starting from scratch, so\na company that could get new features done before its competitors\nwould have a big advantage. We knew Lisp was a really good language\nfor writing software quickly, and server-based applications magnify\nthe effect of rapid development, because you can release software\nthe minute it's done.If other companies didn't want to use Lisp, so much the better.\nIt might give us a technological edge, and we needed all the help\nwe could get. When we started Viaweb, we had no experience in\nbusiness. We didn't know anything about marketing, or hiring\npeople, or raising money, or getting customers. Neither of us had\never even had what you would call a real job. The only thing we\nwere good at was writing software. We hoped that would save us.\nAny advantage we could get in the software department, we would\ntake.So you could say that using Lisp was an experiment. Our hypothesis\nwas that if we wrote our software in Lisp, we'd be able to get\nfeatures done faster than our competitors, and also to do things\nin our software that they couldn't do. And because Lisp was so\nhigh-level, we wouldn't need a big development team, so our costs\nwould be lower. If this were so, we could offer a better product\nfor less money, and still make a profit. 
We would end up getting\nall the users, and our competitors would get none, and eventually\ngo out of business. That was what we hoped would happen, anyway.What were the results of this experiment? Somewhat surprisingly,\nit worked. We eventually had many competitors, on the order of\ntwenty to thirty of them, but none of their software could compete\nwith ours. We had a wysiwyg online store builder that ran on the\nserver and yet felt like a desktop application. Our competitors\nhad cgi scripts. And we were always far ahead of them in features.\nSometimes, in desperation, competitors would try to introduce\nfeatures that we didn't have. But with Lisp our development cycle\nwas so fast that we could sometimes duplicate a new feature within\na day or two of a competitor announcing it in a press release. By\nthe time journalists covering the press release got round to calling\nus, we would have the new feature too.It must have seemed to our competitors that we had some kind of\nsecret weapon-- that we were decoding their Enigma traffic or\nsomething. In fact we did have a secret weapon, but it was simpler\nthan they realized. No one was leaking news of their features to\nus. We were just able to develop software faster than anyone\nthought possible.When I was about nine I happened to get hold of a copy of The Day\nof the Jackal, by Frederick Forsyth. The main character is an\nassassin who is hired to kill the president of France. The assassin\nhas to get past the police to get up to an apartment that overlooks\nthe president's route. He walks right by them, dressed up as an\nold man on crutches, and they never suspect him.Our secret weapon was similar. We wrote our software in a weird\nAI language, with a bizarre syntax full of parentheses. For years\nit had annoyed me to hear Lisp described that way. But now it\nworked to our advantage. In business, there is nothing more valuable\nthan a technical advantage your competitors don't understand. In\nbusiness, as in war, surprise is worth as much as force.And so, I'm a little embarrassed to say, I never said anything\npublicly about Lisp while we were working on Viaweb. We never\nmentioned it to the press, and if you searched for Lisp on our Web\nsite, all you'd find were the titles of two books in my bio. This\nwas no accident. A startup should give its competitors as little\ninformation as possible. If they didn't know what language our\nsoftware was written in, or didn't care, I wanted to keep it that\nway.[2]The people who understood our technology best were the customers.\nThey didn't care what language Viaweb was written in either, but\nthey noticed that it worked really well. It let them build great\nlooking online stores literally in minutes. And so, by word of\nmouth mostly, we got more and more users. By the end of 1996 we\nhad about 70 stores online. At the end of 1997 we had 500. Six\nmonths later, when Yahoo bought us, we had 1070 users. Today, as\nYahoo Store, this software continues to dominate its market. It's\none of the more profitable pieces of Yahoo, and the stores built\nwith it are the foundation of Yahoo Shopping. I left Yahoo in\n1999, so I don't know exactly how many users they have now, but\nthe last I heard there were about 20,000.\nThe Blub ParadoxWhat's so great about Lisp? And if Lisp is so great, why doesn't\neveryone use it? These sound like rhetorical questions, but actually\nthey have straightforward answers. 
Lisp is so great not because\nof some magic quality visible only to devotees, but because it is\nsimply the most powerful language available. And the reason everyone\ndoesn't use it is that programming languages are not merely\ntechnologies, but habits of mind as well, and nothing changes\nslower. Of course, both these answers need explaining.I'll begin with a shockingly controversial statement: programming\nlanguages vary in power.Few would dispute, at least, that high level languages are more\npowerful than machine language. Most programmers today would agree\nthat you do not, ordinarily, want to program in machine language.\nInstead, you should program in a high-level language, and have a\ncompiler translate it into machine language for you. This idea is\neven built into the hardware now: since the 1980s, instruction sets\nhave been designed for compilers rather than human programmers.Everyone knows it's a mistake to write your whole program by hand\nin machine language. What's less often understood is that there\nis a more general principle here: that if you have a choice of\nseveral languages, it is, all other things being equal, a mistake\nto program in anything but the most powerful one. [3]There are many exceptions to this rule. If you're writing a program\nthat has to work very closely with a program written in a certain\nlanguage, it might be a good idea to write the new program in the\nsame language. If you're writing a program that only has to do\nsomething very simple, like number crunching or bit manipulation,\nyou may as well use a less abstract language, especially since it\nmay be slightly faster. And if you're writing a short, throwaway\nprogram, you may be better off just using whatever language has\nthe best library functions for the task. But in general, for\napplication software, you want to be using the most powerful\n(reasonably efficient) language you can get, and using anything\nelse is a mistake, of exactly the same kind, though possibly in a\nlesser degree, as programming in machine language.You can see that machine language is very low level. But, at least\nas a kind of social convention, high-level languages are often all\ntreated as equivalent. They're not. Technically the term \"high-level\nlanguage\" doesn't mean anything very definite. There's no dividing\nline with machine languages on one side and all the high-level\nlanguages on the other. Languages fall along a continuum [4] of\nabstractness, from the most powerful all the way down to machine\nlanguages, which themselves vary in power.Consider Cobol. Cobol is a high-level language, in the sense that\nit gets compiled into machine language. Would anyone seriously\nargue that Cobol is equivalent in power to, say, Python? It's\nprobably closer to machine language than Python.Or how about Perl 4? Between Perl 4 and Perl 5, lexical closures\ngot added to the language. Most Perl hackers would agree that Perl\n5 is more powerful than Perl 4. But once you've admitted that,\nyou've admitted that one high level language can be more powerful\nthan another. And it follows inexorably that, except in special\ncases, you ought to use the most powerful you can get.This idea is rarely followed to its conclusion, though. 
After a\ncertain age, programmers rarely switch languages voluntarily.\nWhatever language people happen to be used to, they tend to consider\njust good enough.Programmers get very attached to their favorite languages, and I\ndon't want to hurt anyone's feelings, so to explain this point I'm\ngoing to use a hypothetical language called Blub. Blub falls right\nin the middle of the abstractness continuum. It is not the most\npowerful language, but it is more powerful than Cobol or machine\nlanguage.And in fact, our hypothetical Blub programmer wouldn't use either\nof them. Of course he wouldn't program in machine language. That's\nwhat compilers are for. And as for Cobol, he doesn't know how\nanyone can get anything done with it. It doesn't even have x (Blub\nfeature of your choice).As long as our hypothetical Blub programmer is looking down the\npower continuum, he knows he's looking down. Languages less powerful\nthan Blub are obviously less powerful, because they're missing some\nfeature he's used to. But when our hypothetical Blub programmer\nlooks in the other direction, up the power continuum, he doesn't\nrealize he's looking up. What he sees are merely weird languages.\nHe probably considers them about equivalent in power to Blub, but\nwith all this other hairy stuff thrown in as well. Blub is good\nenough for him, because he thinks in Blub.When we switch to the point of view of a programmer using any of\nthe languages higher up the power continuum, however, we find that\nhe in turn looks down upon Blub. How can you get anything done in\nBlub? It doesn't even have y.By induction, the only programmers in a position to see all the\ndifferences in power between the various languages are those who\nunderstand the most powerful one. (This is probably what Eric\nRaymond meant about Lisp making you a better programmer.) You can't\ntrust the opinions of the others, because of the Blub paradox:\nthey're satisfied with whatever language they happen to use, because\nit dictates the way they think about programs.I know this from my own experience, as a high school kid writing\nprograms in Basic. That language didn't even support recursion.\nIt's hard to imagine writing programs without using recursion, but\nI didn't miss it at the time. I thought in Basic. And I was a\nwhiz at it. Master of all I surveyed.The five languages that Eric Raymond recommends to hackers fall at\nvarious points on the power continuum. Where they fall relative\nto one another is a sensitive topic. What I will say is that I\nthink Lisp is at the top. And to support this claim I'll tell you\nabout one of the things I find missing when I look at the other\nfour languages. How can you get anything done in them, I think,\nwithout macros? [5]Many languages have something called a macro. But Lisp macros are\nunique. And believe it or not, what they do is related to the\nparentheses. The designers of Lisp didn't put all those parentheses\nin the language just to be different. To the Blub programmer, Lisp\ncode looks weird. But those parentheses are there for a reason.\nThey are the outward evidence of a fundamental difference between\nLisp and other languages.Lisp code is made out of Lisp data objects. And not in the trivial\nsense that the source files contain characters, and strings are\none of the data types supported by the language. 
Lisp code, after\nit's read by the parser, is made of data structures that you can\ntraverse.\n\nIf you understand how compilers work, what's really going on is\nnot so much that Lisp has a strange syntax as that Lisp has no\nsyntax. You write programs in the parse trees that get generated\nwithin the compiler when other languages are parsed. But these\nparse trees are fully accessible to your programs. You can write\nprograms that manipulate them. In Lisp, these programs are called\nmacros. They are programs that write programs.\n\nPrograms that write programs? When would you ever want to do that?\nNot very often, if you think in Cobol. All the time, if you think\nin Lisp. It would be convenient here if I could give an example\nof a powerful macro, and say there! how about that? But if I did,\nit would just look like gibberish to someone who didn't know Lisp;\nthere isn't room here to explain everything you'd need to know to\nunderstand what it meant. In Ansi Common Lisp I tried to move\nthings along as fast as I could, and even so I didn't get to macros\nuntil page 160.\n\nBut I think I can give a kind of argument that might be convincing.\nThe source code of the Viaweb editor was probably about 20-25%\nmacros. Macros are harder to write than ordinary Lisp functions,\nand it's considered to be bad style to use them when they're not\nnecessary. So every macro in that code is there because it has to\nbe. What that means is that at least 20-25% of the code in this\nprogram is doing things that you can't easily do in any other\nlanguage. However skeptical the Blub programmer might be about my\nclaims for the mysterious powers of Lisp, this ought to make him\ncurious. We weren't writing this code for our own amusement. We\nwere a tiny startup, programming as hard as we could in order to\nput technical barriers between us and our competitors.\n\nA suspicious person might begin to wonder if there was some\ncorrelation here. A big chunk of our code was doing things that\nare very hard to do in other languages. The resulting software\ndid things our competitors' software couldn't do. Maybe there was\nsome kind of connection. I encourage you to follow that thread.\nThere may be more to that old man hobbling along on his crutches\nthan meets the eye.
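If you do know a little Lisp, though, even a trivial macro shows the
principle. The sketch below is nothing like the powerful macros meant
above, and it is not one of the Viaweb macros; it is the kind of
example Lisp textbooks start with. A while macro receives its
arguments as unevaluated Lisp data, builds a new expression out of
them, and returns that expression for the compiler to use in its
place:

    (defmacro while (test &body body)
      ;; The backquoted template is itself a Lisp list. The macro
      ;; fills in TEST and BODY and returns the resulting list,
      ;; which is then compiled in place of the original call.
      `(do ()
           ((not ,test))
         ,@body))

    ;; (while (< i 10) (print i) (incf i))
    ;; expands, before the program ever runs, into:
    ;; (DO () ((NOT (< I 10))) (PRINT I) (INCF I))

The loop itself is nothing; the point is that the rewriting is done
in ordinary Lisp, operating on code as data.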
Aikido for Startups\n\nBut I don't expect to convince anyone (over 25) to go out and learn\nLisp. The purpose of this article is not to change anyone's mind,\nbut to reassure people already interested in using Lisp-- people\nwho know that Lisp is a powerful language, but worry because it\nisn't widely used. In a competitive situation, that's an advantage.\nLisp's power is multiplied by the fact that your competitors don't\nget it.\n\nIf you think of using Lisp in a startup, you shouldn't worry that\nit isn't widely understood. You should hope that it stays that\nway. And it's likely to. It's the nature of programming languages\nto make most people satisfied with whatever they currently use.\nComputer hardware changes so much faster than personal habits that\nprogramming practice is usually ten to twenty years behind the\nprocessor. At places like MIT they were writing programs in\nhigh-level languages in the early 1960s, but many companies continued\nto write code in machine language well into the 1980s. I bet a\nlot of people continued to write machine language until the processor,\nlike a bartender eager to close up and go home, finally kicked them\nout by switching to a risc instruction set.\n\nOrdinarily technology changes fast. But programming languages are\ndifferent: programming languages are not just technology, but what\nprogrammers think in. They're half technology and half religion. [6]\nAnd so the median language, meaning whatever language the median\nprogrammer uses, moves as slow as an iceberg. Garbage collection,\nintroduced by Lisp in about 1960, is now widely considered to be\na good thing. Runtime typing, ditto, is growing in popularity.\nLexical closures, introduced by Lisp in the early 1970s, are now,\njust barely, on the radar screen. Macros, introduced by Lisp in the\nmid 1960s, are still terra incognita.\n\nObviously, the median language has enormous momentum. I'm not\nproposing that you can fight this powerful force. What I'm proposing\nis exactly the opposite: that, like a practitioner of Aikido, you\ncan use it against your opponents.\n\nIf you work for a big company, this may not be easy. You will have\na hard time convincing the pointy-haired boss to let you build\nthings in Lisp, when he has just read in the paper that some other\nlanguage is poised, like Ada was twenty years ago, to take over\nthe world. But if you work for a startup that doesn't have\npointy-haired bosses yet, you can, like we did, turn the Blub\nparadox to your advantage: you can use technology that your\ncompetitors, glued immovably to the median language, will never be\nable to match.\n\nIf you ever do find yourself working for a startup, here's a handy\ntip for evaluating competitors. Read their job listings. Everything\nelse on their site may be stock photos or the prose equivalent,\nbut the job listings have to be specific about what they want, or\nthey'll get the wrong candidates.\n\nDuring the years we worked on Viaweb I read a lot of job descriptions.\nA new competitor seemed to emerge out of the woodwork every month\nor so. The first thing I would do, after checking to see if they\nhad a live online demo, was look at their job listings. After a\ncouple years of this I could tell which companies to worry about\nand which not to. The more of an IT flavor the job descriptions\nhad, the less dangerous the company was. The safest kind were the\nones that wanted Oracle experience. You never had to worry about\nthose. You were also safe if they said they wanted C++ or Java\ndevelopers. If they wanted Perl or Python programmers, that would\nbe a bit frightening-- that's starting to sound like a company\nwhere the technical side, at least, is run by real hackers. If I\nhad ever seen a job posting looking for Lisp hackers, I would have\nbeen really worried.\n\nNotes\n\n[1] Viaweb at first had two parts: the editor, written in Lisp,\nwhich people used to build their sites, and the ordering system,\nwritten in C, which handled orders. The first version was mostly\nLisp, because the ordering system was small. Later we added two\nmore modules, an image generator written in C, and a back-office\nmanager written mostly in Perl.\n\nIn January 2003, Yahoo released a new version of the editor\nwritten in C++ and Perl. It's hard to say whether the program is no\nlonger written in Lisp, though, because to translate this program\ninto C++ they literally had to write a Lisp interpreter: the source\nfiles of all the page-generating templates are still, as far as I\nknow, Lisp code. 
(See Greenspun's Tenth Rule.)[2] Robert Morris says that I didn't need to be secretive, because\neven if our competitors had known we were using Lisp, they wouldn't\nhave understood why: \"If they were that smart they'd already be\nprogramming in Lisp.\"[3] All languages are equally powerful in the sense of being Turing\nequivalent, but that's not the sense of the word programmers care\nabout. (No one wants to program a Turing machine.) The kind of\npower programmers care about may not be formally definable, but\none way to explain it would be to say that it refers to features\nyou could only get in the less powerful language by writing an\ninterpreter for the more powerful language in it. If language A\nhas an operator for removing spaces from strings and language B\ndoesn't, that probably doesn't make A more powerful, because you\ncan probably write a subroutine to do it in B. But if A supports,\nsay, recursion, and B doesn't, that's not likely to be something\nyou can fix by writing library functions.[4] Note to nerds: or possibly a lattice, narrowing toward the top;\nit's not the shape that matters here but the idea that there is at\nleast a partial order.[5] It is a bit misleading to treat macros as a separate feature.\nIn practice their usefulness is greatly enhanced by other Lisp\nfeatures like lexical closures and rest parameters.[6] As a result, comparisons of programming languages either take\nthe form of religious wars or undergraduate textbooks so determinedly\nneutral that they're really works of anthropology. People who\nvalue their peace, or want tenure, avoid the topic. But the question\nis only half a religious one; there is something there worth\nstudying, especially if you want to design new languages."} {"title": "goodtaste", "text": "November 2021(This essay is derived from a talk at the Cambridge Union.)When I was a kid, I'd have said there wasn't. My father told me so.\nSome people like some things, and other people like other things,\nand who's to say who's right?It seemed so obvious that there was no such thing as good taste\nthat it was only through indirect evidence that I realized my father\nwas wrong. And that's what I'm going to give you here: a proof by\nreductio ad absurdum. If we start from the premise that there's no\nsuch thing as good taste, we end up with conclusions that are\nobviously false, and therefore the premise must be wrong.We'd better start by saying what good taste is. There's a narrow\nsense in which it refers to aesthetic judgements and a broader one\nin which it refers to preferences of any kind. The strongest proof\nwould be to show that taste exists in the narrowest sense, so I'm\ngoing to talk about taste in art. You have better taste than me if\nthe art you like is better than the art I like.If there's no such thing as good taste, then there's no such thing\nas good art. Because if there is such a\nthing as good art, it's\neasy to tell which of two people has better taste. Show them a lot\nof works by artists they've never seen before and ask them to\nchoose the best, and whoever chooses the better art has better\ntaste.So if you want to discard the concept of good taste, you also have\nto discard the concept of good art. And that means you have to\ndiscard the possibility of people being good at making it. Which\nmeans there's no way for artists to be good at their jobs. And not\njust visual artists, but anyone who is in any sense an artist. You\ncan't have good actors, or novelists, or composers, or dancers\neither. 
You can have popular novelists, but not good ones.We don't realize how far we'd have to go if we discarded the concept\nof good taste, because we don't even debate the most obvious cases.\nBut it doesn't just mean we can't say which of two famous painters\nis better. It means we can't say that any painter is better than a\nrandomly chosen eight year old.That was how I realized my father was wrong. I started studying\npainting. And it was just like other kinds of work I'd done: you\ncould do it well, or badly, and if you tried hard, you could get\nbetter at it. And it was obvious that Leonardo and Bellini were\nmuch better at it than me. That gap between us was not imaginary.\nThey were so good. And if they could be good, then art could be\ngood, and there was such a thing as good taste after all.Now that I've explained how to show there is such a thing as good\ntaste, I should also explain why people think there isn't. There\nare two reasons. One is that there's always so much disagreement\nabout taste. Most people's response to art is a tangle of unexamined\nimpulses. Is the artist famous? Is the subject attractive? Is this\nthe sort of art they're supposed to like? Is it hanging in a famous\nmuseum, or reproduced in a big, expensive book? In practice most\npeople's response to art is dominated by such extraneous factors.And the people who do claim to have good taste are so often mistaken.\nThe paintings admired by the so-called experts in one generation\nare often so different from those admired a few generations later.\nIt's easy to conclude there's nothing real there at all. It's only\nwhen you isolate this force, for example by trying to paint and\ncomparing your work to Bellini's, that you can see that it does in\nfact exist.The other reason people doubt that art can be good is that there\ndoesn't seem to be any room in the art for this goodness. The\nargument goes like this. Imagine several people looking at a work\nof art and judging how good it is. If being good art really is a\nproperty of objects, it should be in the object somehow. But it\ndoesn't seem to be; it seems to be something happening in the heads\nof each of the observers. And if they disagree, how do you choose\nbetween them?The solution to this puzzle is to realize that the purpose of art\nis to work on its human audience, and humans have a lot in common.\nAnd to the extent the things an object acts upon respond in the\nsame way, that's arguably what it means for the object to have the\ncorresponding property. If everything a particle interacts with\nbehaves as if the particle had a mass of m, then it has a mass of\nm. So the distinction between \"objective\" and \"subjective\" is not\nbinary, but a matter of degree, depending on how much the subjects\nhave in common. Particles interacting with one another are at one\npole, but people interacting with art are not all the way at the\nother; their reactions aren't random.Because people's responses to art aren't random, art can be designed\nto operate on people, and be good or bad depending on how effectively\nit does so. Much as a vaccine can be. If someone were talking about\nthe ability of a vaccine to confer immunity, it would seem very\nfrivolous to object that conferring immunity wasn't really a property\nof vaccines, because acquiring immunity is something that happens\nin the immune system of each individual person. 
Sure, people's\nimmune systems vary, and a vaccine that worked on one might not\nwork on another, but that doesn't make it meaningless to talk about\nthe effectiveness of a vaccine.The situation with art is messier, of course. You can't measure\neffectiveness by simply taking a vote, as you do with vaccines.\nYou have to imagine the responses of subjects with a deep knowledge\nof art, and enough clarity of mind to be able to ignore extraneous\ninfluences like the fame of the artist. And even then you'd still\nsee some disagreement. People do vary, and judging art is hard,\nespecially recent art. There is definitely not a total order either\nof works or of people's ability to judge them. But there is equally\ndefinitely a partial order of both. So while it's not possible to\nhave perfect taste, it is possible to have good taste.\nThanks to the Cambridge Union for inviting me, and to Trevor\nBlackwell, Jessica Livingston, and Robert Morris for reading drafts\nof this.\n"} {"title": "newideas", "text": "May 2021There's one kind of opinion I'd be very afraid to express publicly.\nIf someone I knew to be both a domain expert and a reasonable person\nproposed an idea that sounded preposterous, I'd be very reluctant\nto say \"That will never work.\"Anyone who has studied the history of ideas, and especially the\nhistory of science, knows that's how big things start. Someone\nproposes an idea that sounds crazy, most people dismiss it, then\nit gradually takes over the world.Most implausible-sounding ideas are in fact bad and could be safely\ndismissed. But not when they're proposed by reasonable domain\nexperts. If the person proposing the idea is reasonable, then they\nknow how implausible it sounds. And yet they're proposing it anyway.\nThat suggests they know something you don't. And if they have deep\ndomain expertise, that's probably the source of it.\n[1]Such ideas are not merely unsafe to dismiss, but disproportionately\nlikely to be interesting. When the average person proposes an\nimplausible-sounding idea, its implausibility is evidence of their\nincompetence. But when a reasonable domain expert does it, the\nsituation is reversed. There's something like an efficient market\nhere: on average the ideas that seem craziest will, if correct,\nhave the biggest effect. So if you can eliminate the theory that\nthe person proposing an implausible-sounding idea is incompetent,\nits implausibility switches from evidence that it's boring to\nevidence that it's exciting.\n[2]Such ideas are not guaranteed to work. But they don't have to be.\nThey just have to be sufficiently good bets \u2014 to have sufficiently\nhigh expected value. And I think on average they do. I think if you\nbet on the entire set of implausible-sounding ideas proposed by\nreasonable domain experts, you'd end up net ahead.The reason is that everyone is too conservative. The word \"paradigm\"\nis overused, but this is a case where it's warranted. Everyone is\ntoo much in the grip of the current paradigm. Even the people who\nhave the new ideas undervalue them initially. Which means that\nbefore they reach the stage of proposing them publicly, they've\nalready subjected them to an excessively strict filter.\n[3]The wise response to such an idea is not to make statements, but\nto ask questions, because there's a real mystery here. Why has this\nsmart and reasonable person proposed an idea that seems so wrong?\nAre they mistaken, or are you? One of you has to be. 
If you're the\none who's mistaken, that would be good to know, because it means\nthere's a hole in your model of the world. But even if they're\nmistaken, it should be interesting to learn why. A trap that an\nexpert falls into is one you have to worry about too.This all seems pretty obvious. And yet there are clearly a lot of\npeople who don't share my fear of dismissing new ideas. Why do they\ndo it? Why risk looking like a jerk now and a fool later, instead\nof just reserving judgement?One reason they do it is envy. If you propose a radical new idea\nand it succeeds, your reputation (and perhaps also your wealth)\nwill increase proportionally. Some people would be envious if that\nhappened, and this potential envy propagates back into a conviction\nthat you must be wrong.Another reason people dismiss new ideas is that it's an easy way\nto seem sophisticated. When a new idea first emerges, it usually\nseems pretty feeble. It's a mere hatchling. Received wisdom is a\nfull-grown eagle by comparison. So it's easy to launch a devastating\nattack on a new idea, and anyone who does will seem clever to those\nwho don't understand this asymmetry.This phenomenon is exacerbated by the difference between how those\nworking on new ideas and those attacking them are rewarded. The\nrewards for working on new ideas are weighted by the value of the\noutcome. So it's worth working on something that only has a 10%\nchance of succeeding if it would make things more than 10x better.\nWhereas the rewards for attacking new ideas are roughly constant;\nsuch attacks seem roughly equally clever regardless of the target.People will also attack new ideas when they have a vested interest\nin the old ones. It's not surprising, for example, that some of\nDarwin's harshest critics were churchmen. People build whole careers\non some ideas. When someone claims they're false or obsolete, they\nfeel threatened.The lowest form of dismissal is mere factionalism: to automatically\ndismiss any idea associated with the opposing faction. The lowest\nform of all is to dismiss an idea because of who proposed it.But the main thing that leads reasonable people to dismiss new ideas\nis the same thing that holds people back from proposing them: the\nsheer pervasiveness of the current paradigm. It doesn't just affect\nthe way we think; it is the Lego blocks we build thoughts out of.\nPopping out of the current paradigm is something only a few people\ncan do. And even they usually have to suppress their intuitions at\nfirst, like a pilot flying through cloud who has to trust his\ninstruments over his sense of balance.\n[4]Paradigms don't just define our present thinking. They also vacuum\nup the trail of crumbs that led to them, making our standards for\nnew ideas impossibly high. The current paradigm seems so perfect\nto us, its offspring, that we imagine it must have been accepted\ncompletely as soon as it was discovered \u2014 that whatever the church thought\nof the heliocentric model, astronomers must have been convinced as\nsoon as Copernicus proposed it. Far, in fact, from it. 
Copernicus\npublished the heliocentric model in 1543, but it wasn't till the\nmid seventeenth century that the balance of scientific opinion\nshifted in its favor. [5]\n\nFew understand how feeble new ideas look when they first appear.\nSo if you want to have new ideas yourself, one of the most valuable\nthings you can do is to learn what they look like when they're born.\nRead about how new ideas happened, and try to get yourself into the\nheads of people at the time. How did things look to them, when the\nnew idea was only half-finished, and even the person who had it was\nonly half-convinced it was right?\n\nBut you don't have to stop at history. You can observe big new ideas\nbeing born all around you right now. Just look for a reasonable\ndomain expert proposing something that sounds wrong.\n\nIf you're nice, as well as wise, you won't merely resist attacking\nsuch people, but encourage them. Having new ideas is a lonely\nbusiness. Only those who've tried it know how lonely. These people\nneed your help. And if you help them, you'll probably learn something\nin the process.\n\nNotes\n\n[1] This domain expertise could be in another field. Indeed,\nsuch crossovers tend to be particularly promising.\n\n[2] I'm not claiming this principle extends much beyond math,\nengineering, and the hard sciences. In politics, for example,\ncrazy-sounding ideas generally are as bad as they sound. Though\narguably this is not an exception, because the people who propose\nthem are not in fact domain experts; politicians are domain experts\nin political tactics, like how to get elected and how to get\nlegislation passed, but not in the world that policy acts upon.\nPerhaps no one could be.\n\n[3] This sense of \"paradigm\" was defined by Thomas Kuhn in his\nStructure of Scientific Revolutions, but I also recommend his\nCopernican Revolution, where you can see him at work developing the\nidea.\n\n[4] This is one reason people with a touch of Asperger's may have\nan advantage in discovering new ideas. They're always flying on\ninstruments.\n\n[5] Hall, Rupert. From Galileo to Newton. Collins, 1963. This\nbook is particularly good at getting into contemporaries' heads.\n\nThanks to Trevor Blackwell, Patrick Collison, Suhail Doshi, Daniel\nGackle, Jessica Livingston, and Robert Morris for reading drafts of this."} {"title": "superangels", "text": "October 2010\n\nAfter barely changing at all for decades, the startup funding\nbusiness is now in what could, at least by comparison, be called\nturmoil. At Y Combinator we've seen dramatic changes in the funding\nenvironment for startups. Fortunately one of them is much higher\nvaluations.\n\nThe trends we've been seeing are probably not YC-specific. I wish\nI could say they were, but the main cause is probably just that we\nsee trends first\u2014partly because the startups we fund are very\nplugged into the Valley and are quick to take advantage of anything\nnew, and partly because we fund so many that we have enough data\npoints to see patterns clearly.\n\nWhat we're seeing now, everyone's probably going to be seeing in\nthe next couple years. So I'm going to explain what we're seeing,\nand what that will mean for you if you try to raise money.\n\nSuper-Angels\n\nLet me start by describing what the world of startup funding used\nto look like. There used to be two sharply differentiated types\nof investors: angels and venture capitalists. 
Angels are individual\nrich people who invest small amounts of their own money, while VCs\nare employees of funds that invest large amounts of other people's.For decades there were just those two types of investors, but now\na third type has appeared halfway between them: the so-called\nsuper-angels. \n[1]\n And VCs have been provoked by their arrival\ninto making a lot of angel-style investments themselves. So the\npreviously sharp line between angels and VCs has become hopelessly\nblurred.There used to be a no man's land between angels and VCs. Angels\nwould invest $20k to $50k apiece, and VCs usually a million or more.\nSo an angel round meant a collection of angel investments that\ncombined to maybe $200k, and a VC round meant a series A round in\nwhich a single VC fund (or occasionally two) invested $1-5 million.The no man's land between angels and VCs was a very inconvenient\none for startups, because it coincided with the amount many wanted\nto raise. Most startups coming out of Demo Day wanted to raise\naround $400k. But it was a pain to stitch together that much out\nof angel investments, and most VCs weren't interested in investments\nso small. That's the fundamental reason the super-angels have\nappeared. They're responding to the market.The arrival of a new type of investor is big news for startups,\nbecause there used to be only two and they rarely competed with one\nanother. Super-angels compete with both angels and VCs. That's\ngoing to change the rules about how to raise money. I don't know\nyet what the new rules will be, but it looks like most of the changes\nwill be for the better.A super-angel has some of the qualities of an angel, and some of\nthe qualities of a VC. They're usually individuals, like angels.\nIn fact many of the current super-angels were initially angels of\nthe classic type. But like VCs, they invest other people's money.\nThis allows them to invest larger amounts than angels: a typical\nsuper-angel investment is currently about $100k. They make investment\ndecisions quickly, like angels. And they make a lot more investments\nper partner than VCs\u2014up to 10 times as many.The fact that super-angels invest other people's money makes them\ndoubly alarming to VCs. They don't just compete for startups; they\nalso compete for investors. What super-angels really are is a new\nform of fast-moving, lightweight VC fund. And those of us in the\ntechnology world know what usually happens when something comes\nalong that can be described in terms like that. Usually it's the\nreplacement.Will it be? As of now, few of the startups that take money from\nsuper-angels are ruling out taking VC money. They're just postponing\nit. But that's still a problem for VCs. Some of the startups that\npostpone raising VC money may do so well on the angel money they\nraise that they never bother to raise more. And those who do raise\nVC rounds will be able to get higher valuations when they do. If\nthe best startups get 10x higher valuations when they raise series\nA rounds, that would cut VCs' returns from winners at least tenfold.\n[2]So I think VC funds are seriously threatened by the super-angels.\nBut one thing that may save them to some extent is the uneven\ndistribution of startup outcomes: practically all the returns are\nconcentrated in a few big successes. The expected value of a startup\nis the percentage chance it's Google. 
So to the extent that winning\nis a matter of absolute returns, the super-angels could win practically\nall the battles for individual startups and yet lose the war, if\nthey merely failed to get those few big winners. And there's a\nchance that could happen, because the top VC funds have better\nbrands, and can also do more for their portfolio companies. \n[3]Because super-angels make more investments per partner, they have\nless partner per investment. They can't pay as much attention to\nyou as a VC on your board could. How much is that extra attention\nworth? It will vary enormously from one partner to another. There's\nno consensus yet in the general case. So for now this is something\nstartups are deciding individually.Till now, VCs' claims about how much value they added were sort of\nlike the government's. Maybe they made you feel better, but you\nhad no choice in the matter, if you needed money on the scale only\nVCs could supply. Now that VCs have competitors, that's going to\nput a market price on the help they offer. The interesting thing\nis, no one knows yet what it will be.Do startups that want to get really big need the sort of advice and\nconnections only the top VCs can supply? Or would super-angel money\ndo just as well? The VCs will say you need them, and the super-angels\nwill say you don't. But the truth is, no one knows yet, not even\nthe VCs and super-angels themselves. All the super-angels know\nis that their new model seems promising enough to be worth trying,\nand all the VCs know is that it seems promising enough to worry\nabout.RoundsWhatever the outcome, the conflict between VCs and super-angels is\ngood news for founders. And not just for the obvious reason that\nmore competition for deals means better terms. The whole shape of\ndeals is changing.One of the biggest differences between angels and VCs is the amount\nof your company they want. VCs want a lot. In a series A round\nthey want a third of your company, if they can get it. They don't\ncare much how much they pay for it, but they want a lot because the\nnumber of series A investments they can do is so small. In a\ntraditional series A investment, at least one partner from the VC\nfund takes a seat on your board. \n[4]\n Since board seats last about\n5 years and each partner can't handle more than about 10 at once,\nthat means a VC fund can only do about 2 series A deals per partner\nper year. And that means they need to get as much of the company\nas they can in each one. You'd have to be a very promising startup\nindeed to get a VC to use up one of his 10 board seats for only a\nfew percent of you.Since angels generally don't take board seats, they don't have this\nconstraint. They're happy to buy only a few percent of you. And\nalthough the super-angels are in most respects mini VC funds, they've\nretained this critical property of angels. They don't take board\nseats, so they don't need a big percentage of your company.Though that means you'll get correspondingly less attention from\nthem, it's good news in other respects. Founders never really liked\ngiving up as much equity as VCs wanted. It was a lot of the company\nto give up in one shot. Most founders doing series A deals would\nprefer to take half as much money for half as much stock, and then\nsee what valuation they could get for the second half of the stock\nafter using the first half of the money to increase its value. But\nVCs never offered that option.Now startups have another alternative. 
Now it's easy to raise angel\nrounds about half the size of series A rounds. Many of the startups\nwe fund are taking this route, and I predict that will be true of\nstartups in general.\n\nA typical big angel round might be $600k on a convertible note with\na valuation cap of $4 million premoney. Meaning that when the note\nconverts into stock (in a later round, or upon acquisition), the\ninvestors in that round will get .6 / 4.6, or 13% of the company.\nThat's a lot less than the 30 to 40% of the company you usually\ngive up in a series A round if you do it so early. [5]
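The arithmetic generalizes: when a capped note converts at the cap,
the note holders' share is the amount invested divided by the cap
plus that amount. As a two-line sketch (simplified: it ignores
interest, discounts, and the option pool; the function name is
illustrative):

    ;; Share of the company that note holders get when a capped
    ;; convertible note converts at the cap. Simplified: ignores
    ;; interest, discounts, and the option pool.
    (defun note-fraction (amount premoney-cap)
      (/ amount (+ premoney-cap amount)))

    ;; (note-fraction 0.6 4.0) returns about .13, the .6 / 4.6 above.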
But the advantage of these medium-sized rounds is not just that\nthey cause less dilution. You also lose less control. After an\nangel round, the founders almost always still have control of the\ncompany, whereas after a series A round they often don't. The\ntraditional board structure after a series A round is two founders,\ntwo VCs, and a (supposedly) neutral fifth person. Plus series A\nterms usually give the investors a veto over various kinds of\nimportant decisions, including selling the company. Founders usually\nhave a lot of de facto control after a series A, as long as things\nare going well. But that's not the same as just being able to do\nwhat you want, like you could before.\n\nA third and quite significant advantage of angel rounds is that\nthey're less stressful to raise. Raising a traditional series A\nround has in the past taken weeks, if not months. When a VC firm\ncan only do 2 deals per partner per year, they're careful about\nwhich they do. To get a traditional series A round you have to go\nthrough a series of meetings, culminating in a full partner meeting\nwhere the firm as a whole says yes or no. That's the really scary\npart for founders: not just that series A rounds take so long, but\nat the end of this long process the VCs might still say no. The\nchance of getting rejected after the full partner meeting averages\nabout 25%. At some firms it's over 50%.\n\nFortunately for founders, VCs have been getting a lot faster.\nNowadays Valley VCs are more likely to take 2 weeks than 2 months.\nBut they're still not as fast as angels and super-angels, the most\ndecisive of whom sometimes decide in hours.\n\nRaising an angel round is not only quicker, but you get feedback\nas it progresses. An angel round is not an all or nothing thing\nlike a series A. It's composed of multiple investors with varying\ndegrees of seriousness, ranging from the upstanding ones who commit\nunequivocally to the jerks who give you lines like \"come back to\nme to fill out the round.\" You usually start collecting money from\nthe most committed investors and work your way out toward the\nambivalent ones, whose interest increases as the round fills up.\n\nBut at each point you know how you're doing. If investors turn\ncold you may have to raise less, but when investors in an angel\nround turn cold the process at least degrades gracefully, instead\nof blowing up in your face and leaving you with nothing, as happens\nif you get rejected by a VC fund after a full partner meeting.\nWhereas if investors seem hot, you can not only close the round\nfaster, but now that convertible notes are becoming the norm,\nactually raise the price to reflect demand.\n\nValuation\n\nHowever, the VCs have a weapon they can use against the super-angels,\nand they have started to use it. VCs have started making angel-sized\ninvestments too. The term \"angel round\" doesn't mean that all the\ninvestors in it are angels; it just describes the structure of the\nround. Increasingly the participants include VCs making investments\nof a hundred thousand or two. And when VCs invest in angel rounds\nthey can do things that super-angels don't like. VCs are quite\nvaluation-insensitive in angel rounds\u2014partly because they are\nin general, and partly because they don't care that much about the\nreturns on angel rounds, which they still view mostly as a way to\nrecruit startups for series A rounds later. So VCs who invest in\nangel rounds can blow up the valuations for angels and super-angels\nwho invest in them. [6]\n\nSome super-angels seem to care about valuations. Several turned\ndown YC-funded startups after Demo Day because their valuations\nwere too high. This was not a problem for the startups; by definition\na high valuation means enough investors were willing to accept it.\nBut it was mysterious to me that the super-angels would quibble\nabout valuations. Did they not understand that the big returns\ncome from a few big successes, and that it therefore mattered far\nmore which startups you picked than how much you paid for them?\n\nAfter thinking about it for a while and observing certain other\nsigns, I have a theory that explains why the super-angels may be\nsmarter than they seem. It would make sense for super-angels to\nwant low valuations if they're hoping to invest in startups that\nget bought early. If you're hoping to hit the next Google, you\nshouldn't care if the valuation is 20 million. But if you're looking\nfor companies that are going to get bought for 30 million, you care.\nIf you invest at 20 and the company gets bought for 30, you only\nget 1.5x. You might as well buy Apple.\n\nSo if some of the super-angels were looking for companies that could\nget acquired quickly, that would explain why they'd care about\nvaluations. But why would they be looking for those? Because\ndepending on the meaning of \"quickly,\" it could actually be very\nprofitable. A company that gets acquired for 30 million is a failure\nto a VC, but it could be a 10x return for an angel, and moreover,\na quick 10x return. Rate of return is what matters in\ninvesting\u2014not the multiple you get, but the multiple per year.\nIf a super-angel gets 10x in one year, that's a higher rate of\nreturn than a VC could ever hope to get from a company that took 6\nyears to go public. To get the same rate of return, the VC would\nhave to get a multiple of 10^6\u2014one million x. Even Google\ndidn't come close to that.\n\nSo I think at least some super-angels are looking for companies\nthat will get bought. That's the only rational explanation for\nfocusing on getting the right valuations, instead of the right\ncompanies. And if so they'll be different to deal with than VCs.\nThey'll be tougher on valuations, but more accommodating if you want\nto sell early.\n\nPrognosis\n\nWho will win, the super-angels or the VCs? I think the answer to\nthat is, some of each. They'll each become more like one another.\nThe super-angels will start to invest larger amounts, and the VCs\nwill gradually figure out ways to make more, smaller investments\nfaster. A decade from now the players will be hard to tell apart,\nand there will probably be survivors from each group.\n\nWhat does that mean for founders? One thing it means is that the\nhigh valuations startups are presently getting may not last forever.\nTo the extent that valuations are being driven up by price-insensitive\nVCs, they'll fall again if VCs become more like super-angels and\nstart to become more miserly about valuations. 
Fortunately if this\ndoes happen it will take years.The short term forecast is more competition between investors, which\nis good news for you. The super-angels will try to undermine the\nVCs by acting faster, and the VCs will try to undermine the\nsuper-angels by driving up valuations. Which for founders will\nresult in the perfect combination: funding rounds that close fast,\nwith high valuations.But remember that to get that combination, your startup will have\nto appeal to both super-angels and VCs. If you don't seem like you\nhave the potential to go public, you won't be able to use VCs to\ndrive up the valuation of an angel round.There is a danger of having VCs in an angel round: the so-called\nsignalling risk. If VCs are only doing it in the hope of investing\nmore later, what happens if they don't? That's a signal to everyone\nelse that they think you're lame.How much should you worry about that? The seriousness of signalling\nrisk depends on how far along you are. If by the next time you\nneed to raise money, you have graphs showing rising revenue or\ntraffic month after month, you don't have to worry about any signals\nyour existing investors are sending. Your results will speak for\nthemselves. \n[7]Whereas if the next time you need to raise money you won't yet have\nconcrete results, you may need to think more about the message your\ninvestors might send if they don't invest more. I'm not sure yet\nhow much you have to worry, because this whole phenomenon of VCs\ndoing angel investments is so new. But my instincts tell me you\ndon't have to worry much. Signalling risk smells like one of those\nthings founders worry about that's not a real problem. As a rule,\nthe only thing that can kill a good startup is the startup itself.\nStartups hurt themselves way more often than competitors hurt them,\nfor example. I suspect signalling risk is in this category too.One thing YC-funded startups have been doing to mitigate the risk\nof taking money from VCs in angel rounds is not to take too much\nfrom any one VC. Maybe that will help, if you have the luxury of\nturning down money.Fortunately, more and more startups will. After decades of competition\nthat could best be described as intramural, the startup funding\nbusiness is finally getting some real competition. That should\nlast several years at least, and maybe a lot longer. Unless there's\nsome huge market crash, the next couple years are going to be a\ngood time for startups to raise money. And that's exciting because\nit means lots more startups will happen.\nNotes[1]\nI've also heard them called \"Mini-VCs\" and \"Micro-VCs.\" I\ndon't know which name will stick.There were a couple predecessors. Ron Conway had angel funds\nstarting in the 1990s, and in some ways First Round Capital is closer to a\nsuper-angel than a VC fund.[2]\nIt wouldn't cut their overall returns tenfold, because investing\nlater would probably (a) cause them to lose less on investments\nthat failed, and (b) not allow them to get as large a percentage\nof startups as they do now. So it's hard to predict precisely what\nwould happen to their returns.[3]\nThe brand of an investor derives mostly from the success of\ntheir portfolio companies. The top VCs thus have a big brand\nadvantage over the super-angels. They could make it self-perpetuating\nif they used it to get all the best new startups. But I don't think\nthey'll be able to. To get all the best startups, you have to do\nmore than make them want you. 
You also have to want them; you have\nto recognize them when you see them, and that's much harder.\nSuper-angels will snap up stars that VCs miss. And that will cause\nthe brand gap between the top VCs and the super-angels gradually\nto erode.[4]\nThough in a traditional series A round VCs put two partners\non your board, there are signs now that VCs may begin to conserve\nboard seats by switching to what used to be considered an angel-round\nboard, consisting of two founders and one VC. Which is also to the\nfounders' advantage if it means they still control the company.[5]\nIn a series A round, you usually have to give up more than\nthe actual amount of stock the VCs buy, because they insist you\ndilute yourselves to set aside an \"option pool\" as well. I predict\nthis practice will gradually disappear though.[6]\nThe best thing for founders, if they can get it, is a convertible\nnote with no valuation cap at all. In that case the money invested\nin the angel round just converts into stock at the valuation of the\nnext round, no matter how large. Angels and super-angels tend not\nto like uncapped notes. They have no idea how much of the company\nthey're buying. If the company does well and the valuation of the\nnext round is high, they may end up with only a sliver of it. So\nby agreeing to uncapped notes, VCs who don't care about valuations\nin angel rounds can make offers that super-angels hate to match.[7]\nObviously signalling risk is also not a problem if you'll\nnever need to raise more money. But startups are often mistaken\nabout that.Thanks to Sam Altman, John Bautista, Patrick Collison, James\nLindenbaum, Reid Hoffman, Jessica Livingston and Harj Taggar\nfor reading drafts\nof this."} {"title": "useful", "text": "February 2020What should an essay be? Many people would say persuasive. That's\nwhat a lot of us were taught essays should be. But I think we can\naim for something more ambitious: that an essay should be useful.To start with, that means it should be correct. But it's not enough\nmerely to be correct. It's easy to make a statement correct by\nmaking it vague. That's a common flaw in academic writing, for\nexample. If you know nothing at all about an issue, you can't go\nwrong by saying that the issue is a complex one, that there are\nmany factors to be considered, that it's a mistake to take too\nsimplistic a view of it, and so on.Though no doubt correct, such statements tell the reader nothing.\nUseful writing makes claims that are as strong as they can be made\nwithout becoming false.For example, it's more useful to say that Pike's Peak is near the\nmiddle of Colorado than merely somewhere in Colorado. But if I say\nit's in the exact middle of Colorado, I've now gone too far, because\nit's a bit east of the middle.Precision and correctness are like opposing forces. It's easy to\nsatisfy one if you ignore the other. The converse of vaporous\nacademic writing is the bold, but false, rhetoric of demagogues.\nUseful writing is bold, but true.It's also two other things: it tells people something important,\nand that at least some of them didn't already know.Telling people something they didn't know doesn't always mean\nsurprising them. Sometimes it means telling them something they\nknew unconsciously but had never put into words. In fact those may\nbe the more valuable insights, because they tend to be more\nfundamental.Let's put them all together. 
Useful writing tells people something\ntrue and important that they didn't already know, and tells them\nas unequivocally as possible.Notice these are all a matter of degree. For example, you can't\nexpect an idea to be novel to everyone. Any insight that you have\nwill probably have already been had by at least one of the world's\n7 billion people. But it's sufficient if an idea is novel to a lot\nof readers.Ditto for correctness, importance, and strength. In effect the four\ncomponents are like numbers you can multiply together to get a score\nfor usefulness. Which I realize is almost awkwardly reductive, but\nnonetheless true._____\nHow can you ensure that the things you say are true and novel and\nimportant? Believe it or not, there is a trick for doing this. I\nlearned it from my friend Robert Morris, who has a horror of saying\nanything dumb. His trick is not to say anything unless he's sure\nit's worth hearing. This makes it hard to get opinions out of him,\nbut when you do, they're usually right.Translated into essay writing, what this means is that if you write\na bad sentence, you don't publish it. You delete it and try again.\nOften you abandon whole branches of four or five paragraphs. Sometimes\na whole essay.You can't ensure that every idea you have is good, but you can\nensure that every one you publish is, by simply not publishing the\nones that aren't.In the sciences, this is called publication bias, and is considered\nbad. When some hypothesis you're exploring gets inconclusive results,\nyou're supposed to tell people about that too. But with essay\nwriting, publication bias is the way to go.My strategy is loose, then tight. I write the first draft of an\nessay fast, trying out all kinds of ideas. Then I spend days rewriting\nit very carefully.I've never tried to count how many times I proofread essays, but\nI'm sure there are sentences I've read 100 times before publishing\nthem. When I proofread an essay, there are usually passages that\nstick out in an annoying way, sometimes because they're clumsily\nwritten, and sometimes because I'm not sure they're true. The\nannoyance starts out unconscious, but after the tenth reading or\nso I'm saying \"Ugh, that part\" each time I hit it. They become like\nbriars that catch your sleeve as you walk past. Usually I won't\npublish an essay till they're all gone \u2014 till I can read through\nthe whole thing without the feeling of anything catching.I'll sometimes let through a sentence that seems clumsy, if I can't\nthink of a way to rephrase it, but I will never knowingly let through\none that doesn't seem correct. You never have to. If a sentence\ndoesn't seem right, all you have to do is ask why it doesn't, and\nyou've usually got the replacement right there in your head.This is where essayists have an advantage over journalists. You\ndon't have a deadline. You can work for as long on an essay as you\nneed to get it right. You don't have to publish the essay at all,\nif you can't get it right. Mistakes seem to lose courage in the\nface of an enemy with unlimited resources. Or that's what it feels\nlike. What's really going on is that you have different expectations\nfor yourself. You're like a parent saying to a child \"we can sit\nhere all night till you eat your vegetables.\" Except you're the\nchild too.I'm not saying no mistake gets through. For example, I added condition\n(c) in \"A Way to Detect Bias\" \nafter readers pointed out that I'd\nomitted it. 
But in practice you can catch nearly all of them.There's a trick for getting importance too. It's like the trick I\nsuggest to young founders for getting startup ideas: to make something\nyou yourself want. You can use yourself as a proxy for the reader.\nThe reader is not completely unlike you, so if you write about\ntopics that seem important to you, they'll probably seem important\nto a significant number of readers as well.Importance has two factors. It's the number of people something\nmatters to, times how much it matters to them. Which means of course\nthat it's not a rectangle, but a sort of ragged comb, like a Riemann\nsum.The way to get novelty is to write about topics you've thought about\na lot. Then you can use yourself as a proxy for the reader in this\ndepartment too. Anything you notice that surprises you, who've\nthought about the topic a lot, will probably also surprise a\nsignificant number of readers. And here, as with correctness and\nimportance, you can use the Morris technique to ensure that you\nwill. If you don't learn anything from writing an essay, don't\npublish it.You need humility to measure novelty, because acknowledging the\nnovelty of an idea means acknowledging your previous ignorance of\nit. Confidence and humility are often seen as opposites, but in\nthis case, as in many others, confidence helps you to be humble.\nIf you know you're an expert on some topic, you can freely admit\nwhen you learn something you didn't know, because you can be confident\nthat most other people wouldn't know it either.The fourth component of useful writing, strength, comes from two\nthings: thinking well, and the skillful use of qualification. These\ntwo counterbalance each other, like the accelerator and clutch in\na car with a manual transmission. As you try to refine the expression\nof an idea, you adjust the qualification accordingly. Something\nyou're sure of, you can state baldly with no qualification at all,\nas I did the four components of useful writing. Whereas points that\nseem dubious have to be held at arm's length with perhapses.As you refine an idea, you're pushing in the direction of less\nqualification. But you can rarely get it down to zero. Sometimes\nyou don't even want to, if it's a side point and a fully refined\nversion would be too long.Some say that qualifications weaken writing. For example, that you\nshould never begin a sentence in an essay with \"I think,\" because\nif you're saying it, then of course you think it. And it's true\nthat \"I think x\" is a weaker statement than simply \"x.\" Which is\nexactly why you need \"I think.\" You need it to express your degree\nof certainty.But qualifications are not scalars. They're not just experimental\nerror. There must be 50 things they can express: how broadly something\napplies, how you know it, how happy you are it's so, even how it\ncould be falsified. I'm not going to try to explore the structure\nof qualification here. It's probably more complex than the whole\ntopic of writing usefully. Instead I'll just give you a practical\ntip: Don't underestimate qualification. It's an important skill in\nits own right, not just a sort of tax you have to pay in order to\navoid saying things that are false. So learn and use its full range.\nIt may not be fully half of having good ideas, but it's part of\nhaving them.There's one other quality I aim for in essays: to say things as\nsimply as possible. But I don't think this is a component of\nusefulness. It's more a matter of consideration for the reader. 
And\nit's a practical aid in getting things right; a mistake is more\nobvious when expressed in simple language. But I'll admit that the\nmain reason I write simply is not for the reader's sake or because\nit helps get things right, but because it bothers me to use more\nor fancier words than I need to. It seems inelegant, like a program\nthat's too long.I realize florid writing works for some people. But unless you're\nsure you're one of them, the best advice is to write as simply as\nyou can._____\nI believe the formula I've given you, importance + novelty +\ncorrectness + strength, is the recipe for a good essay. But I should\nwarn you that it's also a recipe for making people mad.The root of the problem is novelty. When you tell people something\nthey didn't know, they don't always thank you for it. Sometimes the\nreason people don't know something is because they don't want to\nknow it. Usually because it contradicts some cherished belief. And\nindeed, if you're looking for novel ideas, popular but mistaken\nbeliefs are a good place to find them. Every popular mistaken belief\ncreates a dead zone of ideas around \nit that are relatively unexplored because they contradict it.The strength component just makes things worse. If there's anything\nthat annoys people more than having their cherished assumptions\ncontradicted, it's having them flatly contradicted.Plus if you've used the Morris technique, your writing will seem\nquite confident. Perhaps offensively confident, to people who\ndisagree with you. The reason you'll seem confident is that you are\nconfident: you've cheated, by only publishing the things you're\nsure of. It will seem to people who try to disagree with you that\nyou never admit you're wrong. In fact you constantly admit you're\nwrong. You just do it before publishing instead of after.And if your writing is as simple as possible, that just makes things\nworse. Brevity is the diction of command. If you watch someone\ndelivering unwelcome news from a position of inferiority, you'll\nnotice they tend to use lots of words, to soften the blow. Whereas\nto be short with someone is more or less to be rude to them.It can sometimes work to deliberately phrase statements more weakly\nthan you mean. To put \"perhaps\" in front of something you're actually\nquite sure of. But you'll notice that when writers do this, they\nusually do it with a wink.I don't like to do this too much. It's cheesy to adopt an ironic\ntone for a whole essay. I think we just have to face the fact that\nelegance and curtness are two names for the same thing.You might think that if you work sufficiently hard to ensure that\nan essay is correct, it will be invulnerable to attack. That's sort\nof true. It will be invulnerable to valid attacks. But in practice\nthat's little consolation.In fact, the strength component of useful writing will make you\nparticularly vulnerable to misrepresentation. If you've stated an\nidea as strongly as you could without making it false, all anyone\nhas to do is to exaggerate slightly what you said, and now it is\nfalse.Much of the time they're not even doing it deliberately. One of the\nmost surprising things you'll discover, if you start writing essays,\nis that people who disagree with you rarely disagree with what\nyou've actually written. Instead they make up something you said\nand disagree with that.For what it's worth, the countermove is to ask someone who does\nthis to quote a specific sentence or passage you wrote that they\nbelieve is false, and explain why. 
I say \"for what it's worth\"\nbecause they never do. So although it might seem that this could\nget a broken discussion back on track, the truth is that it was\nnever on track in the first place.Should you explicitly forestall likely misinterpretations? Yes, if\nthey're misinterpretations a reasonably smart and well-intentioned\nperson might make. In fact it's sometimes better to say something\nslightly misleading and then add the correction than to try to get\nan idea right in one shot. That can be more efficient, and can also\nmodel the way such an idea would be discovered.But I don't think you should explicitly forestall intentional\nmisinterpretations in the body of an essay. An essay is a place to\nmeet honest readers. You don't want to spoil your house by putting\nbars on the windows to protect against dishonest ones. The place\nto protect against intentional misinterpretations is in end-notes.\nBut don't think you can predict them all. People are as ingenious\nat misrepresenting you when you say something they don't want to\nhear as they are at coming up with rationalizations for things they\nwant to do but know they shouldn't. I suspect it's the same skill._____\nAs with most other things, the way to get better at writing essays\nis to practice. But how do you start? Now that we've examined the\nstructure of useful writing, we can rephrase that question more\nprecisely. Which constraint do you relax initially? The answer is,\nthe first component of importance: the number of people who care\nabout what you write.If you narrow the topic sufficiently, you can probably find something\nyou're an expert on. Write about that to start with. If you only\nhave ten readers who care, that's fine. You're helping them, and\nyou're writing. Later you can expand the breadth of topics you write\nabout.The other constraint you can relax is a little surprising: publication.\nWriting essays doesn't have to mean publishing them. That may seem\nstrange now that the trend is to publish every random thought, but\nit worked for me. I wrote what amounted to essays in notebooks for\nabout 15 years. I never published any of them and never expected\nto. I wrote them as a way of figuring things out. But when the web\ncame along I'd had a lot of practice.Incidentally, \nSteve \nWozniak did the same thing. In high school he\ndesigned computers on paper for fun. He couldn't build them because\nhe couldn't afford the components. But when Intel launched 4K DRAMs\nin 1975, he was ready._____\nHow many essays are there left to write though? The answer to that\nquestion is probably the most exciting thing I've learned about\nessay writing. Nearly all of them are left to write.Although the essay \nis an old form, it hasn't been assiduously\ncultivated. In the print era, publication was expensive, and there\nwasn't enough demand for essays to publish that many. You could\npublish essays if you were already well known for writing something\nelse, like novels. Or you could write book reviews that you took\nover to express your own ideas. But there was not really a direct\npath to becoming an essayist. Which meant few essays got written,\nand those that did tended to be about a narrow range of subjects.Now, thanks to the internet, there's a path. Anyone can publish\nessays online. You start in obscurity, perhaps, but at least you\ncan start. You don't need anyone's permission.It sometimes happens that an area of knowledge sits quietly for\nyears, till some change makes it explode. Cryptography did this to\nnumber theory. 
The internet is doing it to the essay.The exciting thing is not that there's a lot left to write, but\nthat there's a lot left to discover. There's a certain kind of idea\nthat's best discovered by writing essays. If most essays are still\nunwritten, most such ideas are still undiscovered.Notes[1] Put railings on the balconies, but don't put bars on the windows.[2] Even now I sometimes write essays that are not meant for\npublication. I wrote several to figure out what Y Combinator should\ndo, and they were really helpful.Thanks to Trevor Blackwell, Daniel Gackle, Jessica Livingston, and\nRobert Morris for reading drafts of this."} {"title": "aord", "text": "October 2015When I talk to a startup that's been operating for more than 8 or\n9 months, the first thing I want to know is almost always the same.\nAssuming their expenses remain constant and their revenue growth\nis what it has been over the last several months, do they make it to\nprofitability on the money they have left? Or to put it more\ndramatically, by default do they live or die?The startling thing is how often the founders themselves don't know.\nHalf the founders I talk to don't know whether they're default alive\nor default dead.If you're among that number, Trevor Blackwell has made a handy\ncalculator you can use to find out.The reason I want to know first whether a startup is default alive\nor default dead is that the rest of the conversation depends on the\nanswer. If the company is default alive, we can talk about ambitious\nnew things they could do. If it's default dead, we probably need\nto talk about how to save it. We know the current trajectory ends\nbadly. How can they get off that trajectory?Why do so few founders know whether they're default alive or default\ndead? Mainly, I think, because they're not used to asking that.\nIt's not a question that makes sense to ask early on, any more than\nit makes sense to ask a 3 year old how he plans to support\nhimself. But as the company grows older, the question switches from\nmeaningless to critical. That kind of switch often takes people\nby surprise.I propose the following solution: instead of starting to ask too\nlate whether you're default alive or default dead, start asking too\nearly. It's hard to say precisely when the question switches\npolarity. But it's probably not that dangerous to start worrying\ntoo early that you're default dead, whereas it's very dangerous to\nstart worrying too late.The reason is a phenomenon I wrote about earlier: the\nfatal pinch.\nThe fatal pinch is default dead + slow growth + not enough\ntime to fix it. And the way founders end up in it is by not realizing\nthat's where they're headed.There is another reason founders don't ask themselves whether they're\ndefault alive or default dead: they assume it will be easy to raise\nmore money. But that assumption is often false, and worse still, the\nmore you depend on it, the falser it becomes.Maybe it will help to separate facts from hopes. Instead of thinking\nof the future with vague optimism, explicitly separate the components.\nSay \"We're default dead, but we're counting on investors to save\nus.\" Maybe as you say that, it will set off the same alarms in your\nhead that it does in mine. And if you set off the alarms sufficiently\nearly, you may be able to avoid the fatal pinch.It would be safe to be default dead if you could count on investors\nsaving you. As a rule their interest is a function of\ngrowth. 
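The default-alive test just described is mechanical enough to compute. Below is a minimal sketch of it in Python, under the essay's stated assumptions (expenses stay constant, revenue keeps growing at its recent month-over-month rate); the function name and example numbers are illustrative, not taken from Trevor Blackwell's actual calculator.

    def default_alive(cash, monthly_expenses, monthly_revenue, monthly_growth):
        """Return True if the startup reaches profitability on its current cash.

        monthly_growth is the recent month-over-month revenue growth
        rate as a decimal, e.g. 0.15 for 15% a month.
        """
        while monthly_revenue < monthly_expenses:
            cash -= monthly_expenses - monthly_revenue  # this month's burn
            if cash < 0:
                return False  # money runs out first: default dead
            monthly_revenue *= 1 + monthly_growth
        return True  # revenue catches expenses: default alive

    # Example: $400k in the bank, $50k/month expenses, $10k/month revenue
    # growing 15% a month. Prints True: revenue catches expenses around
    # month 12, with cash left over.
    print(default_alive(400_000, 50_000, 10_000, 0.15))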
If you have steep revenue growth, say over 5x a year, you\ncan start to count on investors being interested even if you're not\nprofitable.\n[1]\nBut investors are so fickle that you can never\ndo more than start to count on them. Sometimes something about your\nbusiness will spook investors even if your growth is great. So no\nmatter how good your growth is, you can never safely treat fundraising\nas more than a plan A. You should always have a plan B as well: you\nshould know (as in write down) precisely what you'll need to do to\nsurvive if you can't raise more money, and precisely when you'll \nhave to switch to plan B if plan A isn't working.In any case, growing fast versus operating cheaply is far from the\nsharp dichotomy many founders assume it to be. In practice there\nis surprisingly little connection between how much a startup spends\nand how fast it grows. When a startup grows fast, it's usually\nbecause the product hits a nerve, in the sense of hitting some big\nneed straight on. When a startup spends a lot, it's usually because\nthe product is expensive to develop or sell, or simply because\nthey're wasteful.If you're paying attention, you'll be asking at this point not just\nhow to avoid the fatal pinch, but how to avoid being default dead.\nThat one is easy: don't hire too fast. Hiring too fast is by far\nthe biggest killer of startups that raise money.\n[2]Founders tell themselves they need to hire in order to grow. But\nmost err on the side of overestimating this need rather than\nunderestimating it. Why? Partly because there's so much work to\ndo. Naive founders think that if they can just hire enough\npeople, it will all get done. Partly because successful startups have\nlots of employees, so it seems like that's what one does in order\nto be successful. In fact the large staffs of successful startups\nare probably more the effect of growth than the cause. And\npartly because when founders have slow growth they don't want to\nface what is usually the real reason: the product is not appealing\nenough.Plus founders who've just raised money are often encouraged to\noverhire by the VCs who funded them. Kill-or-cure strategies are\noptimal for VCs because they're protected by the portfolio effect.\nVCs want to blow you up, in one sense of the phrase or the other.\nBut as a founder your incentives are different. You want above all\nto survive.\n[3]Here's a common way startups die. They make something moderately\nappealing and have decent initial growth. They raise their first\nround fairly easily, because the founders seem smart and the idea\nsounds plausible. But because the product is only moderately\nappealing, growth is ok but not great. The founders convince\nthemselves that hiring a bunch of people is the way to boost growth.\nTheir investors agree. But (because the product is only moderately\nappealing) the growth never comes. Now they're rapidly running out\nof runway. They hope further investment will save them. But because\nthey have high expenses and slow growth, they're now unappealing\nto investors. They're unable to raise more, and the company dies.What the company should have done is address the fundamental problem:\nthat the product is only moderately appealing. Hiring people is\nrarely the way to fix that. More often than not it makes it harder.\nAt this early stage, the product needs to evolve more than to be\n\"built out,\" and that's usually easier with fewer people.\n[4]Asking whether you're default alive or default dead may save you\nfrom this. 
Maybe the alarm bells it sets off will counteract the\nforces that push you to overhire. Instead you'll be compelled to\nseek growth in other ways. For example, by doing\nthings that don't scale, or by redesigning the product in the\nway only founders can.\nAnd for many if not most startups, these paths to growth will be\nthe ones that actually work.Airbnb waited 4 months after raising money at the end of Y\u00a0Combinator\nbefore they hired their first employee. In the meantime the founders\nwere terribly overworked. But they were overworked evolving Airbnb\ninto the astonishingly successful organism it is now.Notes[1]\nSteep usage growth will also interest investors. Revenue\nwill ultimately be a constant multiple of usage, so x% usage growth\npredicts x% revenue growth. But in practice investors discount\nmerely predicted revenue, so if you're measuring usage you need a\nhigher growth rate to impress investors.[2]\nStartups that don't raise money are saved from hiring too\nfast because they can't afford to. But that doesn't mean you should\navoid raising money in order to avoid this problem, any more than\nthat total abstinence is the only way to avoid becoming an alcoholic.[3]\nI would not be surprised if VCs' tendency to push founders\nto overhire is not even in their own interest. They don't know how\nmany of the companies that get killed by overspending might have\ndone well if they'd survived. My guess is a significant number.[4]\nAfter reading a draft, Sam Altman wrote:\"I think you should make the hiring point more strongly. I think\nit's roughly correct to say that YC's most successful companies\nhave never been the fastest to hire, and one of the marks of a great\nfounder is being able to resist this urge.\"Paul Buchheit adds:\"A related problem that I see a lot is premature scaling\u2014founders\ntake a small business that isn't really working (bad unit economics,\ntypically) and then scale it up because they want impressive growth\nnumbers. This is similar to over-hiring in that it makes the business\nmuch harder to fix once it's big, plus they are bleeding cash really\nfast.\"\nThanks to Sam Altman, Paul Buchheit, Joe Gebbia, Jessica Livingston,\nand Geoff Ralston for reading drafts of this."} {"title": "before", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nOctober 2014(This essay is derived from a guest lecture in Sam Altman's startup class at\nStanford. It's intended for college students, but much of it is\napplicable to potential founders at other ages.)One of the advantages of having kids is that when you have to give\nadvice, you can ask yourself \"what would I tell my own kids?\" My\nkids are little, but I can imagine what I'd tell them about startups\nif they were in college, and that's what I'm going to tell you.Startups are very counterintuitive. I'm not sure why. Maybe it's\njust because knowledge about them hasn't permeated our culture yet.\nBut whatever the reason, starting a startup is a task where you\ncan't always trust your instincts.It's like skiing in that way. When you first try skiing and you\nwant to slow down, your instinct is to lean back. But if you lean\nback on skis you fly down the hill out of control. So part of\nlearning to ski is learning to suppress that impulse. Eventually\nyou get new habits, but at first it takes a conscious effort. At\nfirst there's a list of things you're trying to remember as you\nstart down the hill.Startups are as unnatural as skiing, so there's a similar list for\nstartups. 
Here I'm going to give you the first part of it \u2014 the things\nto remember if you want to prepare yourself to start a startup.\nCounterintuitiveThe first item on it is the fact I already mentioned: that startups\nare so weird that if you trust your instincts, you'll make a lot\nof mistakes. If you know nothing more than this, you may at least\npause before making them.When I was running Y Combinator I used to joke that our function\nwas to tell founders things they would ignore. It's really true.\nBatch after batch, the YC partners warn founders about mistakes\nthey're about to make, and the founders ignore them, and then come\nback a year later and say \"I wish we'd listened.\"Why do the founders ignore the partners' advice? Well, that's the\nthing about counterintuitive ideas: they contradict your intuitions.\nThey seem wrong. So of course your first impulse is to disregard\nthem. And in fact my joking description is not merely the curse\nof Y Combinator but part of its raison d'etre. If founders' instincts\nalready gave them the right answers, they wouldn't need us. You\nonly need other people to give you advice that surprises you. That's\nwhy there are a lot of ski instructors and not many running\ninstructors.\n[1]You can, however, trust your instincts about people. And in fact\none of the most common mistakes young founders make is not to\ndo that enough. They get involved with people who seem impressive,\nbut about whom they feel some misgivings personally. Later when\nthings blow up they say \"I knew there was something off about him,\nbut I ignored it because he seemed so impressive.\"If you're thinking about getting involved with someone \u2014 as a\ncofounder, an employee, an investor, or an acquirer \u2014 and you\nhave misgivings about them, trust your gut. If someone seems\nslippery, or bogus, or a jerk, don't ignore it.This is one case where it pays to be self-indulgent. Work with\npeople you genuinely like, and you've known long enough to be sure.\nExpertiseThe second counterintuitive point is that it's not that important\nto know a lot about startups. The way to succeed in a startup is\nnot to be an expert on startups, but to be an expert on your users\nand the problem you're solving for them.\nMark Zuckerberg didn't succeed because he was an expert on startups.\nHe succeeded despite being a complete noob at startups, because he\nunderstood his users really well.If you don't know anything about, say, how to raise an angel round,\ndon't feel bad on that account. That sort of thing you can learn\nwhen you need to, and forget after you've done it.In fact, I worry it's not merely unnecessary to learn in great\ndetail about the mechanics of startups, but possibly somewhat\ndangerous. If I met an undergrad who knew all about convertible\nnotes and employee agreements and (God forbid) class FF stock, I\nwouldn't think \"here is someone who is way ahead of their peers.\"\nIt would set off alarms. Because another of the characteristic\nmistakes of young founders is to go through the motions of starting\na startup. They make up some plausible-sounding idea, raise money\nat a good valuation, rent a cool office, hire a bunch of people.\nFrom the outside that seems like what startups do. 
But the next\nstep after rent a cool office and hire a bunch of people is: gradually\nrealize how completely fucked they are, because while imitating all\nthe outward forms of a startup they have neglected the one thing\nthat's actually essential: making something people want.\nGameWe saw this happen so often that we made up a name for it: playing\nhouse. Eventually I realized why it was happening. The reason\nyoung founders go through the motions of starting a startup is\nbecause that's what they've been trained to do for their whole lives\nup to that point. Think about what you have to do to get into\ncollege, for example. Extracurricular activities, check. Even in\ncollege classes most of the work is as artificial as running laps.I'm not attacking the educational system for being this way. There\nwill always be a certain amount of fakeness in the work you do when\nyou're being taught something, and if you measure their performance\nit's inevitable that people will exploit the difference to the point\nwhere much of what you're measuring is artifacts of the fakeness.I confess I did it myself in college. I found that in a lot of\nclasses there might only be 20 or 30 ideas that were the right shape\nto make good exam questions. The way I studied for exams in these\nclasses was not (except incidentally) to master the material taught\nin the class, but to make a list of potential exam questions and\nwork out the answers in advance. When I walked into the final, the\nmain thing I'd be feeling was curiosity about which of my questions\nwould turn up on the exam. It was like a game.It's not surprising that after being trained for their whole lives\nto play such games, young founders' first impulse on starting a\nstartup is to try to figure out the tricks for winning at this new\ngame. Since fundraising appears to be the measure of success for\nstartups (another classic noob mistake), they always want to know what the\ntricks are for convincing investors. We tell them the best way to\nconvince investors is to make a startup\nthat's actually doing well, meaning growing fast, and then simply\ntell investors so. Then they want to know what the tricks are for\ngrowing fast. And we have to tell them the best way to do that is\nsimply to make something people want.So many of the conversations YC partners have with young founders\nbegin with the founder asking \"How do we...\" and the partner replying\n\"Just...\"Why do the founders always make things so complicated? The reason,\nI realized, is that they're looking for the trick.So this is the third counterintuitive thing to remember about\nstartups: starting a startup is where gaming the system stops\nworking. Gaming the system may continue to work if you go to work\nfor a big company. Depending on how broken the company is, you can\nsucceed by sucking up to the right people, giving the impression\nof productivity, and so on. \n[2]\nBut that doesn't work with startups.\nThere is no boss to trick, only users, and all users care about is\nwhether your product does what they want. Startups are as impersonal\nas physics. You have to make something people want, and you prosper\nonly to the extent you do.The dangerous thing is, faking does work to some degree on investors.\nIf you're super good at sounding like you know what you're talking\nabout, you can fool investors for at least one and perhaps even two\nrounds of funding. But it's not in your interest to. The company\nis ultimately doomed. 
All you're doing is wasting your own time\nriding it down.So stop looking for the trick. There are tricks in startups, as\nthere are in any domain, but they are an order of magnitude less\nimportant than solving the real problem. A founder who knows nothing\nabout fundraising but has made something users love will have an\neasier time raising money than one who knows every trick in the\nbook but has a flat usage graph. And more importantly, the founder\nwho has made something users love is the one who will go on to\nsucceed after raising the money.Though in a sense it's bad news in that you're deprived of one of\nyour most powerful weapons, I think it's exciting that gaming the\nsystem stops working when you start a startup. It's exciting that\nthere even exist parts of the world where you win by doing good\nwork. Imagine how depressing the world would be if it were all\nlike school and big companies, where you either have to spend a lot\nof time on bullshit things or lose to people who do.\n[3]\nI would\nhave been delighted if I'd realized in college that there were parts\nof the real world where gaming the system mattered less than others,\nand a few where it hardly mattered at all. But there are, and this\nvariation is one of the most important things to consider when\nyou're thinking about your future. How do you win in each type of\nwork, and what would you like to win by doing?\n[4]\nAll-ConsumingThat brings us to our fourth counterintuitive point: startups are\nall-consuming. If you start a startup, it will take over your life\nto a degree you cannot imagine. And if your startup succeeds, it\nwill take over your life for a long time: for several years at the\nvery least, maybe for a decade, maybe for the rest of your working\nlife. So there is a real opportunity cost here.Larry Page may seem to have an enviable life, but there are aspects\nof it that are unenviable. Basically at 25 he started running as\nfast as he could and it must seem to him that he hasn't stopped to\ncatch his breath since. Every day new shit happens in the Google\nempire that only the CEO can deal with, and he, as CEO, has to deal\nwith it. If he goes on vacation for even a week, a whole week's\nbacklog of shit accumulates. And he has to bear this uncomplainingly,\npartly because as the company's daddy he can never show fear or\nweakness, and partly because billionaires get less than zero sympathy\nif they talk about having difficult lives. Which has the strange\nside effect that the difficulty of being a successful startup founder\nis concealed from almost everyone except those who've done it.Y Combinator has now funded several companies that can be called\nbig successes, and in every single case the founders say the same\nthing. It never gets any easier. The nature of the problems change.\nYou're worrying about construction delays at your London office\ninstead of the broken air conditioner in your studio apartment.\nBut the total volume of worry never decreases; if anything it\nincreases.Starting a successful startup is similar to having kids in that\nit's like a button you push that changes your life irrevocably.\nAnd while it's truly wonderful having kids, there are a lot of\nthings that are easier to do before you have them than after. Many\nof which will make you a better parent when you do have kids. 
And\nsince you can delay pushing the button for a while, most people in\nrich countries do.Yet when it comes to startups, a lot of people seem to think they're\nsupposed to start them while they're still in college. Are you\ncrazy? And what are the universities thinking? They go out of\ntheir way to ensure their students are well supplied with contraceptives,\nand yet they're setting up entrepreneurship programs and startup\nincubators left and right.To be fair, the universities have their hand forced here. A lot\nof incoming students are interested in startups. Universities are,\nat least de facto, expected to prepare them for their careers. So\nstudents who want to start startups hope universities can teach\nthem about startups. And whether universities can do this or not,\nthere's some pressure to claim they can, lest they lose applicants\nto other universities that do.Can universities teach students about startups? Yes and no. They\ncan teach students about startups, but as I explained before, this\nis not what you need to know. What you need to learn about are the\nneeds of your own users, and you can't do that until you actually\nstart the company.\n[5]\nSo starting a startup is intrinsically\nsomething you can only really learn by doing it. And it's impossible\nto do that in college, for the reason I just explained: startups\ntake over your life. You can't start a startup for real as a\nstudent, because if you start a startup for real you're not a student\nanymore. You may be nominally a student for a bit, but you won't even\nbe that for long.\n[6]Given this dichotomy, which of the two paths should you take? Be\na real student and not start a startup, or start a real startup and\nnot be a student? I can answer that one for you. Do not start a\nstartup in college. How to start a startup is just a subset of a\nbigger problem you're trying to solve: how to have a good life.\nAnd though starting a startup can be part of a good life for a lot\nof ambitious people, age 20 is not the optimal time to do it.\nStarting a startup is like a brutally fast depth-first search. Most\npeople should still be searching breadth-first at 20.You can do things in your early 20s that you can't do as well before\nor after, like plunge deeply into projects on a whim and travel\nsuper cheaply with no sense of a deadline. For unambitious people,\nthis sort of thing is the dreaded \"failure to launch,\" but for the\nambitious ones it can be an incomparably valuable sort of exploration.\nIf you start a startup at 20 and you're sufficiently successful,\nyou'll never get to do it.\n[7]Mark Zuckerberg will never get to bum around a foreign country. He\ncan do other things most people can't, like charter jets to fly him\nto foreign countries. But success has taken a lot of the serendipity\nout of his life. Facebook is running him as much as he's running\nFacebook. And while it can be very cool to be in the grip of a\nproject you consider your life's work, there are advantages to\nserendipity too, especially early in life. Among other things it\ngives you more options to choose your life's work from.There's not even a tradeoff here. You're not sacrificing anything\nif you forgo starting a startup at 20, because you're more likely\nto succeed if you wait. In the unlikely case that you're 20 and\none of your side projects takes off like Facebook did, you'll face\na choice of running with it or not, and it may be reasonable to run\nwith it. 
But the usual way startups take off is for the founders\nto make them take off, and it's gratuitously\nstupid to do that at 20.\nTryShould you do it at any age? I realize I've made startups sound\npretty hard. If I haven't, let me try again: starting a startup\nis really hard. What if it's too hard? How can you tell if you're\nup to this challenge?The answer is the fifth counterintuitive point: you can't tell. Your\nlife so far may have given you some idea what your prospects might\nbe if you tried to become a mathematician, or a professional football\nplayer. But unless you've had a very strange life you haven't done\nmuch that was like being a startup founder.\nStarting a startup will change you a lot. So what you're trying\nto estimate is not just what you are, but what you could grow into,\nand who can do that?For the past 9 years it was my job to predict whether people would\nhave what it took to start successful startups. It was easy to\ntell how smart they were, and most people reading this will be over\nthat threshold. The hard part was predicting how tough and ambitious they would become. There\nmay be no one who has more experience at trying to predict that,\nso I can tell you how much an expert can know about it, and the\nanswer is: not much. I learned to keep a completely open mind about\nwhich of the startups in each batch would turn out to be the stars.The founders sometimes think they know. Some arrive feeling sure\nthey will ace Y Combinator just as they've aced every one of the (few,\nartificial, easy) tests they've faced in life so far. Others arrive\nwondering how they got in, and hoping YC doesn't discover whatever\nmistake caused it to accept them. But there is little correlation\nbetween founders' initial attitudes and how well their companies\ndo.I've read that the same is true in the military \u2014 that the\nswaggering recruits are no more likely to turn out to be really\ntough than the quiet ones. And probably for the same reason: that\nthe tests involved are so different from the ones in their previous\nlives.If you're absolutely terrified of starting a startup, you probably\nshouldn't do it. But if you're merely unsure whether you're up to\nit, the only way to find out is to try. Just not now.\nIdeasSo if you want to start a startup one day, what should you do in\ncollege? There are only two things you need initially: an idea and\ncofounders. And the m.o. for getting both is the same. Which leads\nto our sixth and last counterintuitive point: that the way to get\nstartup ideas is not to try to think of startup ideas.I've written a whole essay on this,\nso I won't repeat it all here. But the short version is that if\nyou make a conscious effort to think of startup ideas, the ideas\nyou come up with will not merely be bad, but bad and plausible-sounding,\nmeaning you'll waste a lot of time on them before realizing they're\nbad.The way to come up with good startup ideas is to take a step back.\nInstead of making a conscious effort to think of startup ideas,\nturn your mind into the type that startup ideas form in without any\nconscious effort. In fact, so unconsciously that you don't even\nrealize at first that they're startup ideas.This is not only possible, it's how Apple, Yahoo, Google, and\nFacebook all got started. None of these companies were even meant\nto be companies at first. They were all just side projects. 
The\nbest startups almost have to start as side projects, because great\nideas tend to be such outliers that your conscious mind would reject\nthem as ideas for companies.Ok, so how do you turn your mind into the type that startup ideas\nform in unconsciously? (1) Learn a lot about things that matter,\nthen (2) work on problems that interest you (3) with people you\nlike and respect. The third part, incidentally, is how you get\ncofounders at the same time as the idea.The first time I wrote that paragraph, instead of \"learn a lot about\nthings that matter,\" I wrote \"become good at some technology.\" But\nthat prescription, though sufficient, is too narrow. What was\nspecial about Brian Chesky and Joe Gebbia was not that they were\nexperts in technology. They were good at design, and perhaps even\nmore importantly, they were good at organizing groups and making\nprojects happen. So you don't have to work on technology per se,\nso long as you work on problems demanding enough to stretch you.What kind of problems are those? That is very hard to answer in\nthe general case. History is full of examples of young people who\nwere working on important problems that no\none else at the time thought were important, and in particular\nthat their parents didn't think were important. On the other hand,\nhistory is even fuller of examples of parents who thought their\nkids were wasting their time and who were right. So how do you\nknow when you're working on real stuff?\n[8]I know how I know. Real problems are interesting, and I am\nself-indulgent in the sense that I always want to work on interesting\nthings, even if no one else cares about them (in fact, especially\nif no one else cares about them), and find it very hard to make\nmyself work on boring things, even if they're supposed to be\nimportant.My life is full of case after case where I worked on something just\nbecause it seemed interesting, and it turned out later to be useful\nin some worldly way. Y\nCombinator itself was something I only did because it seemed\ninteresting. So I seem to have some sort of internal compass that\nhelps me out. But I don't know what other people have in their\nheads. Maybe if I think more about this I can come up with heuristics\nfor recognizing genuinely interesting problems, but for the moment\nthe best I can offer is the hopelessly question-begging advice that\nif you have a taste for genuinely interesting problems, indulging\nit energetically is the best way to prepare yourself for a startup.\nAnd indeed, probably also the best way to live.\n[9]But although I can't explain in the general case what counts as an\ninteresting problem, I can tell you about a large subset of them.\nIf you think of technology as something that's spreading like a\nsort of fractal stain, every moving point on the edge represents\nan interesting problem. So one guaranteed way to turn your mind\ninto the type that has good startup ideas is to get yourself to the\nleading edge of some technology \u2014 to cause yourself, as Paul\nBuchheit put it, to \"live in the future.\" When you reach that point,\nideas that will seem to other people uncannily prescient will seem\nobvious to you. You may not realize they're startup ideas, but\nyou'll know they're something that ought to exist.For example, back at Harvard in the mid 90s a fellow grad student\nof my friends Robert and Trevor wrote his own voice over IP software.\nHe didn't mean it to be a startup, and he never tried to turn it\ninto one. 
He just wanted to talk to his girlfriend in Taiwan without\npaying for long distance calls, and since he was an expert on\nnetworks it seemed obvious to him that the way to do it was turn\nthe sound into packets and ship it over the Internet. He never did\nany more with his software than talk to his girlfriend, but this\nis exactly the way the best startups get started.So strangely enough the optimal thing to do in college if you want\nto be a successful startup founder is not some sort of new, vocational\nversion of college focused on \"entrepreneurship.\" It's the classic\nversion of college as education for its own sake. If you want to\nstart a startup after college, what you should do in college is\nlearn powerful things. And if you have genuine intellectual\ncuriosity, that's what you'll naturally tend to do if you just\nfollow your own inclinations.\n[10]The component of entrepreneurship that really matters is domain\nexpertise. The way to become Larry Page was to become an expert\non search. And the way to become an expert on search was to be\ndriven by genuine curiosity, not some ulterior motive.At its best, starting a startup is merely an ulterior motive for\ncuriosity. And you'll do it best if you introduce the ulterior\nmotive toward the end of the process.So here is the ultimate advice for young would-be startup founders,\nboiled down to two words: just learn.\nNotes[1]\nSome founders listen more than others, and this tends to be a\npredictor of success. One of the things I\nremember about the Airbnbs during YC is how intently they listened.[2]\nIn fact, this is one of the reasons startups are possible. If\nbig companies weren't plagued by internal inefficiencies, they'd\nbe proportionately more effective, leaving less room for startups.[3]\nIn a startup you have to spend a lot of time on schleps, but this sort of work is merely\nunglamorous, not bogus.[4]\nWhat should you do if your true calling is gaming the system?\nManagement consulting.[5]\nThe company may not be incorporated, but if you start to get\nsignificant numbers of users, you've started it, whether you realize\nit yet or not.[6]\nIt shouldn't be that surprising that colleges can't teach\nstudents how to be good startup founders, because they can't teach\nthem how to be good employees either.The way universities \"teach\" students how to be employees is to\nhand off the task to companies via internship programs. But you\ncouldn't do the equivalent thing for startups, because by definition\nif the students did well they would never come back.[7]\nCharles Darwin was 22 when he received an invitation to travel\naboard the HMS Beagle as a naturalist. It was only because he was\notherwise unoccupied, to a degree that alarmed his family, that he\ncould accept it. And yet if he hadn't we probably would not know\nhis name.[8]\nParents can sometimes be especially conservative in this\ndepartment. There are some whose definition of important problems\nincludes only those on the critical path to med school.[9]\nI did manage to think of a heuristic for detecting whether you\nhave a taste for interesting ideas: whether you find known boring\nideas intolerable. Could you endure studying literary theory, or\nworking in middle management at a large company?[10]\nIn fact, if your goal is to start a startup, you can stick\neven more closely to the ideal of a liberal education than past\ngenerations have. 
Back when students focused mainly on getting a\njob after college, they thought at least a little about how the\ncourses they took might look to an employer. And perhaps even\nworse, they might shy away from taking a difficult class lest they\nget a low grade, which would harm their all-important GPA. Good\nnews: users don't care what your GPA\nwas. And I've never heard of investors caring either. Y Combinator\ncertainly never asks what classes you took in college or what grades\nyou got in them.\nThanks to Sam Altman, Paul Buchheit, John Collison, Patrick\nCollison, Jessica Livingston, Robert Morris, Geoff Ralston, and\nFred Wilson for reading drafts of this."} {"title": "bias", "text": "October 2015This will come as a surprise to a lot of people, but in some cases\nit's possible to detect bias in a selection process without knowing\nanything about the applicant pool. Which is exciting because among\nother things it means third parties can use this technique to detect\nbias whether those doing the selecting want them to or not.You can use this technique whenever (a) you have at least\na random sample of the applicants that were selected, (b) their\nsubsequent performance is measured, and (c) the groups of\napplicants you're comparing have roughly equal distribution of ability.How does it work? Think about what it means to be biased. What\nit means for a selection process to be biased against applicants\nof type x is that it's harder for them to make it through. Which\nmeans applicants of type x have to be better to get selected than\napplicants not of type x.\n[1]\nWhich means applicants of type x\nwho do make it through the selection process will outperform other\nsuccessful applicants. And if the performance of all the successful\napplicants is measured, you'll know if they do.Of course, the test you use to measure performance must be a valid\none. And in particular it must not be invalidated by the bias you're\ntrying to measure.\nBut there are some domains where performance can be measured, and\nin those detecting bias is straightforward. Want to know if the\nselection process was biased against some type of applicant? Check\nwhether they outperform the others. This is not just a heuristic\nfor detecting bias. It's what bias means.For example, many suspect that venture capital firms are biased\nagainst female founders. This would be easy to detect: among their\nportfolio companies, do startups with female founders outperform\nthose without? A couple months ago, one VC firm (almost certainly\nunintentionally) published a study showing bias of this type. First\nRound Capital found that among its portfolio companies, startups\nwith female founders outperformed\nthose without by 63%. \n[2]The reason I began by saying that this technique would come as a\nsurprise to many people is that we so rarely see analyses of this\ntype. I'm sure it will come as a surprise to First Round that they\nperformed one. I doubt anyone there realized that by limiting their\nsample to their own portfolio, they were producing a study not of\nstartup trends but of their own biases when selecting companies.I predict we'll see this technique used more in the future. 
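Since the test is so mechanical, here is a minimal sketch of it in Python, under the conditions given earlier (a sample of the selected applicants, measured subsequent performance, and groups of roughly equal ability); the data are invented for illustration.

    from statistics import mean

    def outperforms(selected_x, selected_rest):
        """Bias check: do selected type-x applicants outperform the rest?

        If the process was biased against type x, the type-x applicants
        who made it through had to clear a higher bar, so on average
        they should do better afterward.
        """
        return mean(selected_x) > mean(selected_rest)

    # e.g. returns of portfolio companies with and without female founders
    print(outperforms([1.9, 0.4, 3.2], [1.1, 0.2, 1.6]))  # True

A real study would of course ask whether the gap is statistically significant rather than compare bare means, but the logic is no more than this.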
The\ninformation needed to conduct such studies is increasingly available.\nData about who applies for things is usually closely guarded by the\norganizations selecting them, but nowadays data about who gets\nselected is often publicly available to anyone who takes the trouble\nto aggregate it.\nNotes[1]\nThis technique wouldn't work if the selection process looked\nfor different things from different types of applicants\u2014for\nexample, if an employer hired men based on their ability but women\nbased on their appearance.[2]\nAs Paul Buchheit points out, First Round excluded their most \nsuccessful investment, Uber, from the study. And while it \nmakes sense to exclude outliers from some types of studies, \nstudies of returns from startup investing, which is all about \nhitting outliers, are not one of them.\nThanks to Sam Altman, Jessica Livingston, and Geoff Ralston for reading\ndrafts of this."} {"title": "copy", "text": "July 2006\nWhen I was in high school I spent a lot of time imitating bad\nwriters. What we studied in English classes was mostly fiction,\nso I assumed that was the highest form of writing. Mistake number\none. The stories that seemed to be most admired were ones in which\npeople suffered in complicated ways. Anything funny or\ngripping was ipso facto suspect, unless it was old enough to be hard to\nunderstand, like Shakespeare or Chaucer. Mistake number two. The\nideal medium seemed the short story, which I've since learned had\nquite a brief life, roughly coincident with the peak of magazine\npublishing. But since their size made them perfect for use in\nhigh school classes, we read a lot of them, which gave us the\nimpression the short story was flourishing. Mistake number three.\nAnd because they were so short, nothing really had to happen; you\ncould just show a randomly truncated slice of life, and that was\nconsidered advanced. Mistake number four. The result was that I\nwrote a lot of stories in which nothing happened except that someone\nwas unhappy in a way that seemed deep.For most of college I was a philosophy major. I was very impressed\nby the papers published in philosophy journals. They were so\nbeautifully typeset, and their tone was just captivating\u2014alternately\ncasual and buffer-overflowingly technical. A fellow would be walking\nalong a street and suddenly modality qua modality would spring upon\nhim. I didn't ever quite understand these papers, but I figured\nI'd get around to that later, when I had time to reread them more\nclosely. In the meantime I tried my best to imitate them. This\nwas, I can now see, a doomed undertaking, because they weren't\nreally saying anything. No philosopher ever refuted another, for\nexample, because no one said anything definite enough to refute.\nNeedless to say, my imitations didn't say anything either.In grad school I was still wasting time imitating the wrong things.\nThere was then a fashionable type of program called an expert system,\nat the core of which was something called an inference engine. I\nlooked at what these things did and thought \"I could write that in\na thousand lines of code.\" And yet eminent professors were writing\nbooks about them, and startups were selling them for a year's salary\na copy. What an opportunity, I thought; these impressive things\nseem easy to me; I must be pretty sharp. Wrong. It was simply a\nfad. The books the professors wrote about expert systems are now\nignored. 
They were not even on a path to anything interesting. And the customers paying so much for them were largely the same government agencies that paid thousands for screwdrivers and toilet seats.
How do you avoid copying the wrong things? Copy only what you genuinely like. That would have saved me in all three cases. I didn't enjoy the short stories we had to read in English classes; I didn't learn anything from philosophy papers; I didn't use expert systems myself. I believed these things were good because they were admired.
It can be hard to separate the things you like from the things you're impressed with. One trick is to ignore presentation. Whenever I see a painting impressively hung in a museum, I ask myself: how much would I pay for this if I found it at a garage sale, dirty and frameless, and with no idea who painted it? If you walk around a museum trying this experiment, you'll find you get some truly startling results. Don't ignore this data point just because it's an outlier.
Another way to figure out what you like is to look at what you enjoy as guilty pleasures. Many things people like, especially if they're young and ambitious, they like largely for the feeling of virtue in liking them. 99% of people reading Ulysses are thinking "I'm reading Ulysses" as they do it. A guilty pleasure is at least a pure one. What do you read when you don't feel up to being virtuous? What kind of book do you read and feel sad that there's only half of it left, instead of being impressed that you're half way through? That's what you really like.
Even when you find genuinely good things to copy, there's another pitfall to be avoided. Be careful to copy what makes them good, rather than their flaws. It's easy to be drawn into imitating flaws, because they're easier to see, and of course easier to copy too. For example, most painters in the eighteenth and nineteenth centuries used brownish colors. They were imitating the great painters of the Renaissance, whose paintings by that time were brown with dirt. Those paintings have since been cleaned, revealing brilliant colors; their imitators are of course still brown.
It was painting, incidentally, that cured me of copying the wrong things. Halfway through grad school I decided I wanted to try being a painter, and the art world was so manifestly corrupt that it snapped the leash of credulity. These people made philosophy professors seem as scrupulous as mathematicians. It was so clearly a choice of doing good work xor being an insider that I was forced to see the distinction. It's there to some degree in almost every field, but I had till then managed to avoid facing it.
That was one of the most valuable things I learned from painting: you have to figure out for yourself what's good. You can't trust authorities. They'll lie to you on this one."} {"title": "ecw", "text": "December 2014
If the world were static, we could have monotonically increasing confidence in our beliefs. The more (and more varied) experience a belief survived, the less likely it would be false. Most people implicitly believe something like this about their opinions. And they're justified in doing so with opinions about things that don't change much, like human nature. But you can't trust your opinions in the same way about things that change, which could include practically everything else.
When experts are wrong, it's often because they're experts on an earlier version of the world.
Is it possible to avoid that?
Can you protect yourself against obsolete beliefs? To some extent, yes. I spent almost a decade investing in early stage startups, and curiously enough protecting yourself against obsolete beliefs is exactly what you have to do to succeed as a startup investor. Most really good startup ideas look like bad ideas at first, and many of those look bad specifically because some change in the world just switched them from bad to good. I spent a lot of time learning to recognize such ideas, and the techniques I used may be applicable to ideas in general.
The first step is to have an explicit belief in change. People who fall victim to a monotonically increasing confidence in their opinions are implicitly concluding the world is static. If you consciously remind yourself it isn't, you start to look for change.
Where should one look for it? Beyond the moderately useful generalization that human nature doesn't change much, the unfortunate fact is that change is hard to predict. This is largely a tautology but worth remembering all the same: change that matters usually comes from an unforeseen quarter.
So I don't even try to predict it. When I get asked in interviews to predict the future, I always have to struggle to come up with something plausible-sounding on the fly, like a student who hasn't prepared for an exam. [1] But it's not out of laziness that I haven't prepared. It seems to me that beliefs about the future are so rarely correct that they usually aren't worth the extra rigidity they impose, and that the best strategy is simply to be aggressively open-minded. Instead of trying to point yourself in the right direction, admit you have no idea what the right direction is, and try instead to be super sensitive to the winds of change.
It's ok to have working hypotheses, even though they may constrain you a bit, because they also motivate you. It's exciting to chase things and exciting to try to guess answers. But you have to be disciplined about not letting your hypotheses harden into anything more. [2]
I believe this passive m.o. works not just for evaluating new ideas but also for having them. The way to come up with new ideas is not to try explicitly to, but to try to solve problems and simply not discount weird hunches you have in the process.
The winds of change originate in the unconscious minds of domain experts. If you're sufficiently expert in a field, any weird idea or apparently irrelevant question that occurs to you is ipso facto worth exploring. [3] Within Y Combinator, when an idea is described as crazy, it's a compliment—in fact, on average probably a higher compliment than when an idea is described as good.
Startup investors have extraordinary incentives for correcting obsolete beliefs. If they can realize before other investors that some apparently unpromising startup isn't, they can make a huge amount of money. But the incentives are more than just financial. Investors' opinions are explicitly tested: startups come to them and they have to say yes or no, and then, fairly quickly, they learn whether they guessed right. The investors who say no to a Google (and there were several) will remember it for the rest of their lives.
Anyone who must in some sense bet on ideas rather than merely commenting on them has similar incentives.
Which means anyone who wants such incentives can have them, by turning their comments into bets: if you write about a topic in some fairly durable and public form, you'll find you worry much more about getting things right than most people would in a casual conversation. [4]
Another trick I've found to protect myself against obsolete beliefs is to focus initially on people rather than ideas. Though the nature of future discoveries is hard to predict, I've found I can predict quite well what sort of people will make them. Good new ideas come from earnest, energetic, independent-minded people.
Betting on people over ideas saved me countless times as an investor. We thought Airbnb was a bad idea, for example. But we could tell the founders were earnest, energetic, and independent-minded. (Indeed, almost pathologically so.) So we suspended disbelief and funded them.
This too seems a technique that should be generally applicable. Surround yourself with the sort of people new ideas come from. If you want to notice quickly when your beliefs become obsolete, you can't do better than to be friends with the people whose discoveries will make them so.
It's hard enough already not to become the prisoner of your own expertise, but it will only get harder, because change is accelerating. That's not a recent trend; change has been accelerating since the paleolithic era. Ideas beget ideas. I don't expect that to change. But I could be wrong.
Notes
[1] My usual trick is to talk about aspects of the present that most people haven't noticed yet.
[2] Especially if they become well enough known that people start to identify them with you. You have to be extra skeptical about things you want to believe, and once a hypothesis starts to be identified with you, it will almost certainly start to be in that category.
[3] In practice "sufficiently expert" doesn't require one to be recognized as an expert—which is a trailing indicator in any case. In many fields a year of focused work plus caring a lot would be enough.
[4] Though they are public and persist indefinitely, comments on e.g. forums and places like Twitter seem empirically to work like casual conversation. The threshold may be whether what you write has a title.
Thanks to Sam Altman, Patrick Collison, and Robert Morris for reading drafts of this."} {"title": "foundervisa", "text": "
April 2009
I usually avoid politics, but since we now seem to have an administration that's open to suggestions, I'm going to risk making one. The single biggest thing the government could do to increase the number of startups in this country is a policy that would cost nothing: establish a new class of visa for startup founders.
The biggest constraint on the number of new startups that get created in the US is not tax policy or employment law or even Sarbanes-Oxley. It's that we won't let the people who want to start them into the country.
Letting just 10,000 startup founders into the country each year could have a visible effect on the economy. If we assume 4 people per startup, which is probably an overestimate, that's 2500 new companies. Each year. They wouldn't all grow as big as Google, but out of 2500 some would come close.
By definition these 10,000 founders wouldn't be taking jobs from Americans: it could be part of the terms of the visa that they couldn't work for existing companies, only new ones they'd founded.
In fact they'd cause there to be more jobs for Americans, because the companies they started would hire more employees as they grew.
The tricky part might seem to be how one defined a startup. But that could be solved quite easily: let the market decide. Startup investors work hard to find the best startups. The government could not do better than to piggyback on their expertise, and use investment by recognized startup investors as the test of whether a company was a real startup.
How would the government decide who's a startup investor? The same way they decide what counts as a university for student visas. We'll establish our own accreditation procedure. We know who one another are.
10,000 people is a drop in the bucket by immigration standards, but would represent a huge increase in the pool of startup founders. I think this would have such a visible effect on the economy that it would make the legislator who introduced the bill famous. The only way to know for sure would be to try it, and that would cost practically nothing.
Thanks to Trevor Blackwell, Paul Buchheit, Jeff Clavier, David Hornik, Jessica Livingston, Greg Mcadoo, Aydin Senkut, and Fred Wilson for reading drafts of this."} {"title": "gap", "text": "May 2004
When people care enough about something to do it well, those who do it best tend to be far better than everyone else. There's a huge gap between Leonardo and second-rate contemporaries like Borgognone. You see the same gap between Raymond Chandler and the average writer of detective novels. A top-ranked professional chess player could play ten thousand games against an ordinary club player without losing once.
Like chess or painting or writing novels, making money is a very specialized skill. But for some reason we treat this skill differently. No one complains when a few people surpass all the rest at playing chess or writing novels, but when a few people make more money than the rest, we get editorials saying this is wrong.
Why? The pattern of variation seems no different than for any other skill. What causes people to react so strongly when the skill is making money?
I think there are three reasons we treat making money as different: the misleading model of wealth we learn as children; the disreputable way in which, till recently, most fortunes were accumulated; and the worry that great variations in income are somehow bad for society. As far as I can tell, the first is mistaken, the second outdated, and the third empirically false. Could it be that, in a modern democracy, variation in income is actually a sign of health?
The Daddy Model of Wealth
When I was five I thought electricity was created by electric sockets. I didn't realize there were power plants out there generating it. Likewise, it doesn't occur to most kids that wealth is something that has to be generated. It seems to be something that flows from parents.
Because of the circumstances in which they encounter it, children tend to misunderstand wealth. They confuse it with money. They think that there is a fixed amount of it. And they think of it as something that's distributed by authorities (and so should be distributed equally), rather than something that has to be created (and might be created unequally).
In fact, wealth is not money. Money is just a convenient way of trading one form of wealth for another. Wealth is the underlying stuff—the goods and services we buy.
When you travel to a rich or poor country, you don't have to look at people's bank accounts to tell which kind you're in. You can see wealth—in buildings and streets, in the clothes and the health of the people.
Where does wealth come from? People make it. This was easier to grasp when most people lived on farms, and made many of the things they wanted with their own hands. Then you could see in the house, the herds, and the granary the wealth that each family created. It was obvious then too that the wealth of the world was not a fixed quantity that had to be shared out, like slices of a pie. If you wanted more wealth, you could make it.
This is just as true today, though few of us create wealth directly for ourselves (except for a few vestigial domestic tasks). Mostly we create wealth for other people in exchange for money, which we then trade for the forms of wealth we want. [1]
Because kids are unable to create wealth, whatever they have has to be given to them. And when wealth is something you're given, then of course it seems that it should be distributed equally. [2] As in most families it is. The kids see to that. "Unfair," they cry, when one sibling gets more than another.
In the real world, you can't keep living off your parents. If you want something, you either have to make it, or do something of equivalent value for someone else, in order to get them to give you enough money to buy it. In the real world, wealth is (except for a few specialists like thieves and speculators) something you have to create, not something that's distributed by Daddy. And since the ability and desire to create it vary from person to person, it's not made equally.
You get paid by doing or making something people want, and those who make more money are often simply better at doing what people want. Top actors make a lot more money than B-list actors. The B-list actors might be almost as charismatic, but when people go to the theater and look at the list of movies playing, they want that extra oomph that the big stars have.
Doing what people want is not the only way to get money, of course. You could also rob banks, or solicit bribes, or establish a monopoly. Such tricks account for some variation in wealth, and indeed for some of the biggest individual fortunes, but they are not the root cause of variation in income. The root cause of variation in income, as Occam's Razor implies, is the same as the root cause of variation in every other human skill.
In the United States, the CEO of a large public company makes about 100 times as much as the average person. [3] Basketball players make about 128 times as much, and baseball players 72 times as much. Editorials quote this kind of statistic with horror. But I have no trouble imagining that one person could be 100 times as productive as another. In ancient Rome the price of slaves varied by a factor of 50 depending on their skills. [4] And that's without considering motivation, or the extra leverage in productivity that you can get from modern technology.
Editorials about athletes' or CEOs' salaries remind me of early Christian writers, arguing from first principles about whether the Earth was round, when they could just walk outside and check. [5] How much someone's work is worth is not a policy question. It's something the market already determines.
"Are they really worth 100 of us?" editorialists ask. Depends on what you mean by worth.
If you mean worth in the sense of what people will pay for their skills, the answer is yes, apparently.
A few CEOs' incomes reflect some kind of wrongdoing. But are there not others whose incomes really do reflect the wealth they generate? Steve Jobs saved a company that was in a terminal decline. And not merely in the way a turnaround specialist does, by cutting costs; he had to decide what Apple's next products should be. Few others could have done it. And regardless of the case with CEOs, it's hard to see how anyone could argue that the salaries of professional basketball players don't reflect supply and demand.
It may seem unlikely in principle that one individual could really generate so much more wealth than another. The key to this mystery is to revisit that question, are they really worth 100 of us? Would a basketball team trade one of their players for 100 random people? What would Apple's next product look like if you replaced Steve Jobs with a committee of 100 random people? [6] These things don't scale linearly. Perhaps the CEO or the professional athlete has only ten times (whatever that means) the skill and determination of an ordinary person. But it makes all the difference that it's concentrated in one individual.
When we say that one kind of work is overpaid and another underpaid, what are we really saying? In a free market, prices are determined by what buyers want. People like baseball more than poetry, so baseball players make more than poets. To say that a certain kind of work is underpaid is thus identical with saying that people want the wrong things.
Well, of course people want the wrong things. It seems odd to be surprised by that. And it seems even odder to say that it's unjust that certain kinds of work are underpaid. [7] Then you're saying that it's unjust that people want the wrong things. It's lamentable that people prefer reality TV and corndogs to Shakespeare and steamed vegetables, but unjust? That seems like saying that blue is heavy, or that up is circular.
The appearance of the word "unjust" here is the unmistakable spectral signature of the Daddy Model. Why else would this idea occur in this odd context? Whereas if the speaker were still operating on the Daddy Model, and saw wealth as something that flowed from a common source and had to be shared out, rather than something generated by doing what other people wanted, this is exactly what you'd get on noticing that some people made much more than others.
When we talk about "unequal distribution of income," we should also ask, where does that income come from? [8] Who made the wealth it represents? Because to the extent that income varies simply according to how much wealth people create, the distribution may be unequal, but it's hardly unjust.
Stealing It
The second reason we tend to find great disparities of wealth alarming is that for most of human history the usual way to accumulate a fortune was to steal it: in pastoral societies by cattle raiding; in agricultural societies by appropriating others' estates in times of war, and taxing them in times of peace.
In conflicts, those on the winning side would receive the estates confiscated from the losers. In England in the 1060s, when William the Conqueror distributed the estates of the defeated Anglo-Saxon nobles to his followers, the conflict was military. By the 1530s, when Henry VIII distributed the estates of the monasteries to his followers, it was mostly political. [9] But the principle was the same. Indeed, the same principle is at work now in Zimbabwe.
In more organized societies, like China, the ruler and his officials used taxation instead of confiscation. But here too we see the same principle: the way to get rich was not to create wealth, but to serve a ruler powerful enough to appropriate it.
This started to change in Europe with the rise of the middle class. Now we think of the middle class as people who are neither rich nor poor, but originally they were a distinct group. In a feudal society, there are just two classes: a warrior aristocracy, and the serfs who work their estates. The middle class were a new, third group who lived in towns and supported themselves by manufacturing and trade.
Starting in the tenth and eleventh centuries, petty nobles and former serfs banded together in towns that gradually became powerful enough to ignore the local feudal lords. [10] Like serfs, the middle class made a living largely by creating wealth. (In port cities like Genoa and Pisa, they also engaged in piracy.) But unlike serfs they had an incentive to create a lot of it. Any wealth a serf created belonged to his master. There was not much point in making more than you could hide. Whereas the independence of the townsmen allowed them to keep whatever wealth they created.
Once it became possible to get rich by creating wealth, society as a whole started to get richer very rapidly. Nearly everything we have was created by the middle class. Indeed, the other two classes have effectively disappeared in industrial societies, and their names have been given to either end of the middle class. (In the original sense of the word, Bill Gates is middle class.)
But it was not till the Industrial Revolution that wealth creation definitively replaced corruption as the best way to get rich. In England, at least, corruption only became unfashionable (and in fact only started to be called "corruption") when there started to be other, faster ways to get rich.
Seventeenth-century England was much like the third world today, in that government office was a recognized route to wealth. The great fortunes of that time still derived more from what we would now call corruption than from commerce. [11] By the nineteenth century that had changed. There continued to be bribes, as there still are everywhere, but politics had by then been left to men who were driven more by vanity than greed. Technology had made it possible to create wealth faster than you could steal it. The prototypical rich man of the nineteenth century was not a courtier but an industrialist.
With the rise of the middle class, wealth stopped being a zero-sum game. Jobs and Wozniak didn't have to make us poor to make themselves rich. Quite the opposite: they created things that made our lives materially richer. They had to, or we wouldn't have paid for them.
But since for most of the world's history the main route to wealth was to steal it, we tend to be suspicious of rich people. Idealistic undergraduates find their unconsciously preserved child's model of wealth confirmed by eminent writers of the past. It is a case of the mistaken meeting the outdated.
"Behind every great fortune, there is a crime," Balzac wrote. Except he didn't. What he actually said was that a great fortune with no apparent cause was probably due to a crime well enough executed that it had been forgotten.
If we were talking about Europe in 1000, or most of the third world today, the standard misquotation would be spot on. But Balzac lived in nineteenth-century France, where the Industrial Revolution was well advanced. He knew you could make a fortune without stealing it. After all, he did himself, as a popular novelist. [12]
Only a few countries (by no coincidence, the richest ones) have reached this stage. In most, corruption still has the upper hand. In most, the fastest way to get wealth is by stealing it. And so when we see increasing differences in income in a rich country, there is a tendency to worry that it's sliding back toward becoming another Venezuela. I think the opposite is happening. I think you're seeing a country a full step ahead of Venezuela.
The Lever of Technology
Will technology increase the gap between rich and poor? It will certainly increase the gap between the productive and the unproductive. That's the whole point of technology. With a tractor an energetic farmer could plow six times as much land in a day as he could with a team of horses. But only if he mastered a new kind of farming.
I've seen the lever of technology grow visibly in my own time. In high school I made money by mowing lawns and scooping ice cream at Baskin-Robbins. This was the only kind of work available at the time. Now high school kids could write software or design web sites. But only some of them will; the rest will still be scooping ice cream.
I remember very vividly when in 1985 improved technology made it possible for me to buy a computer of my own. Within months I was using it to make money as a freelance programmer. A few years before, I couldn't have done this. A few years before, there was no such thing as a freelance programmer. But Apple created wealth, in the form of powerful, inexpensive computers, and programmers immediately set to work using it to create more.
As this example suggests, the rate at which technology increases our productive capacity is probably exponential, rather than linear. So we should expect to see ever-increasing variation in individual productivity as time goes on. Will that increase the gap between rich and poor? Depends which gap you mean.
Technology should increase the gap in income, but it seems to decrease other gaps. A hundred years ago, the rich led a different kind of life from ordinary people. They lived in houses full of servants, wore elaborately uncomfortable clothes, and travelled about in carriages drawn by teams of horses which themselves required their own houses and servants. Now, thanks to technology, the rich live more like the average person.
Cars are a good example of why. It's possible to buy expensive, handmade cars that cost hundreds of thousands of dollars. But there is not much point. Companies make more money by building a large number of ordinary cars than a small number of expensive ones. So a company making a mass-produced car can afford to spend a lot more on its design. If you buy a custom-made car, something will always be breaking. The only point of buying one now is to advertise that you can.
Or consider watches. Fifty years ago, by spending a lot of money on a watch you could get better performance. When watches had mechanical movements, expensive watches kept better time. Not any more.
Since the invention of the quartz movement, an ordinary Timex is more accurate than a Patek Philippe costing hundreds of thousands of dollars. [13] Indeed, as with expensive cars, if you're determined to spend a lot of money on a watch, you have to put up with some inconvenience to do it: as well as keeping worse time, mechanical watches have to be wound.
The only thing technology can't cheapen is brand. Which is precisely why we hear ever more about it. Brand is the residue left as the substantive differences between rich and poor evaporate. But what label you have on your stuff is a much smaller matter than having it versus not having it. In 1900, if you kept a carriage, no one asked what year or brand it was. If you had one, you were rich. And if you weren't rich, you took the omnibus or walked. Now even the poorest Americans drive cars, and it is only because we're so well trained by advertising that we can even recognize the especially expensive ones. [14]
The same pattern has played out in industry after industry. If there is enough demand for something, technology will make it cheap enough to sell in large volumes, and the mass-produced versions will be, if not better, at least more convenient. [15] And there is nothing the rich like more than convenience. The rich people I know drive the same cars, wear the same clothes, have the same kind of furniture, and eat the same foods as my other friends. Their houses are in different neighborhoods, or if in the same neighborhood are different sizes, but within them life is similar. The houses are made using the same construction techniques and contain much the same objects. It's inconvenient to do something expensive and custom.
The rich spend their time more like everyone else too. Bertie Wooster seems long gone. Now, most people who are rich enough not to work do anyway. It's not just social pressure that makes them; idleness is lonely and demoralizing.
Nor do we have the social distinctions there were a hundred years ago. The novels and etiquette manuals of that period read now like descriptions of some strange tribal society. "With respect to the continuance of friendships..." hints Mrs. Beeton's Book of Household Management (1880), "it may be found necessary, in some cases, for a mistress to relinquish, on assuming the responsibility of a household, many of those commenced in the earlier part of her life." A woman who married a rich man was expected to drop friends who didn't. You'd seem a barbarian if you behaved that way today. You'd also have a very boring life. People still tend to segregate themselves somewhat, but much more on the basis of education than wealth. [16]
Materially and socially, technology seems to be decreasing the gap between the rich and the poor, not increasing it. If Lenin walked around the offices of a company like Yahoo or Intel or Cisco, he'd think communism had won. Everyone would be wearing the same clothes, have the same kind of office (or rather, cubicle) with the same furnishings, and address one another by their first names instead of by honorifics. Everything would seem exactly as he'd predicted, until he looked at their bank accounts. Oops.
Is it a problem if technology increases that gap? It doesn't seem to be so far. As it increases the gap in income, it seems to decrease most other gaps.
Alternative to an Axiom
One often hears a policy criticized on the grounds that it would increase the income gap between rich and poor.
As if it were an axiom that this would be bad. It might be true that increased variation in income would be bad, but I don't see how we can say it's axiomatic.
Indeed, it may even be false, in industrial democracies. In a society of serfs and warlords, certainly, variation in income is a sign of an underlying problem. But serfdom is not the only cause of variation in income. A 747 pilot doesn't make 40 times as much as a checkout clerk because he is a warlord who somehow holds her in thrall. His skills are simply much more valuable.
I'd like to propose an alternative idea: that in a modern society, increasing variation in income is a sign of health. Technology seems to increase the variation in productivity at faster than linear rates. If we don't see corresponding variation in income, there are three possible explanations: (a) that technical innovation has stopped, (b) that the people who would create the most wealth aren't doing it, or (c) that they aren't getting paid for it.
I think we can safely say that (a) and (b) would be bad. If you disagree, try living for a year using only the resources available to the average Frankish nobleman in 800, and report back to us. (I'll be generous and not send you back to the stone age.)
The only option, if you're going to have an increasingly prosperous society without increasing variation in income, seems to be (c), that people will create a lot of wealth without being paid for it. That Jobs and Wozniak, for example, will cheerfully work 20-hour days to produce the Apple computer for a society that allows them, after taxes, to keep just enough of their income to match what they would have made working 9 to 5 at a big company.
Will people create wealth if they can't get paid for it? Only if it's fun. People will write operating systems for free. But they won't install them, or take support calls, or train customers to use them. And at least 90% of the work that even the highest tech companies do is of this second, unedifying kind.
All the unfun kinds of wealth creation slow dramatically in a society that confiscates private fortunes. We can confirm this empirically. Suppose you hear a strange noise that you think may be due to a nearby fan. You turn the fan off, and the noise stops. You turn the fan back on, and the noise starts again. Off, quiet. On, noise. In the absence of other information, it would seem the noise is caused by the fan.
At various times and places in history, whether you could accumulate a fortune by creating wealth has been turned on and off. Northern Italy in 800, off (warlords would steal it). Northern Italy in 1100, on. Central France in 1100, off (still feudal). England in 1800, on. England in 1974, off (98% tax on investment income). United States in 1974, on. We've even had a twin study: West Germany, on; East Germany, off. In every case, the creation of wealth seems to appear and disappear like the noise of a fan as you switch on and off the prospect of keeping it.
There is some momentum involved. It probably takes at least a generation to turn people into East Germans (luckily for England). But if it were merely a fan we were studying, without all the extra baggage that comes from the controversial topic of wealth, no one would have any doubt that the fan was causing the noise.
If you suppress variations in income, whether by stealing private fortunes, as feudal rulers used to do, or by taxing them away, as some modern governments have done, the result always seems to be the same.
Society as a whole ends up poorer.
If I had a choice of living in a society where I was materially much better off than I am now, but was among the poorest, or in one where I was the richest, but much worse off than I am now, I'd take the first option. If I had children, it would arguably be immoral not to. It's absolute poverty you want to avoid, not relative poverty. If, as the evidence so far implies, you have to have one or the other in your society, take relative poverty.
You need rich people in your society not so much because in spending their money they create jobs, but because of what they have to do to get rich. I'm not talking about the trickle-down effect here. I'm not saying that if you let Henry Ford get rich, he'll hire you as a waiter at his next party. I'm saying that he'll make you a tractor to replace your horse.
Notes
[1] Part of the reason this subject is so contentious is that some of those most vocal on the subject of wealth—university students, heirs, professors, politicians, and journalists—have the least experience creating it. (This phenomenon will be familiar to anyone who has overheard conversations about sports in a bar.)
Students are mostly still on the parental dole, and have not stopped to think about where that money comes from. Heirs will be on the parental dole for life. Professors and politicians live within socialist eddies of the economy, at one remove from the creation of wealth, and are paid a flat rate regardless of how hard they work. And journalists as part of their professional code segregate themselves from the revenue-collecting half of the businesses they work for (the ad sales department). Many of these people never come face to face with the fact that the money they receive represents wealth—wealth that, except in the case of journalists, someone else created earlier. They live in a world in which income is doled out by a central authority according to some abstract notion of fairness (or randomly, in the case of heirs), rather than given by other people in return for something they wanted, so it may seem to them unfair that things don't work the same in the rest of the economy.
(Some professors do create a great deal of wealth for society. But the money they're paid isn't a quid pro quo. It's more in the nature of an investment.)
[2] When one reads about the origins of the Fabian Society, it sounds like something cooked up by the high-minded Edwardian child-heroes of Edith Nesbit's The Wouldbegoods.
[3] According to a study by the Corporate Library, the median total compensation, including salary, bonus, stock grants, and the exercise of stock options, of S&P 500 CEOs in 2002 was $3.65 million. According to Sports Illustrated, the average NBA player's salary during the 2002-03 season was $4.54 million, and the average major league baseball player's salary at the start of the 2003 season was $2.56 million. According to the Bureau of Labor Statistics, the mean annual wage in the US in 2002 was $35,560.
[4] In the early empire the price of an ordinary adult slave seems to have been about 2,000 sestertii (e.g. Horace, Sat. ii.7.43). A servant girl cost 600 (Martial vi.66), while Columella (iii.3.8) says that a skilled vine-dresser was worth 8,000. A doctor, P. Decimus Eros Merula, paid 50,000 sestertii for his freedom (Dessau, Inscriptiones 7812). Seneca (Ep. xxvii.7) reports that one Calvisius Sabinus paid 100,000 sestertii apiece for slaves learned in the Greek classics. Pliny (Hist. Nat. vii.39) says that the highest price paid for a slave up to his time was 700,000 sestertii, for the linguist (and presumably teacher) Daphnis, but that this had since been exceeded by actors buying their own freedom.
Classical Athens saw a similar variation in prices. An ordinary laborer was worth about 125 to 150 drachmae. Xenophon (Mem. ii.5) mentions prices ranging from 50 to 6,000 drachmae (for the manager of a silver mine).
For more on the economics of ancient slavery see:
Jones, A. H. M., "Slavery in the Ancient World," Economic History Review, 2:9 (1956), 185-199, reprinted in Finley, M. I. (ed.), Slavery in Classical Antiquity, Heffer, 1964.
[5] Eratosthenes (276—195 BC) used shadow lengths in different cities to estimate the Earth's circumference. He was off by only about 2%.
[6] No, and Windows, respectively.
[7] One of the biggest divergences between the Daddy Model and reality is the valuation of hard work. In the Daddy Model, hard work is in itself deserving. In reality, wealth is measured by what one delivers, not how much effort it costs. If I paint someone's house, the owner shouldn't pay me extra for doing it with a toothbrush.
It will seem to someone still implicitly operating on the Daddy Model that it is unfair when someone works hard and doesn't get paid much. To help clarify the matter, get rid of everyone else and put our worker on a desert island, hunting and gathering fruit. If he's bad at it he'll work very hard and not end up with much food. Is this unfair? Who is being unfair to him?
[8] Part of the reason for the tenacity of the Daddy Model may be the dual meaning of "distribution." When economists talk about "distribution of income," they mean statistical distribution. But when you use the phrase frequently, you can't help associating it with the other sense of the word (as in e.g. "distribution of alms"), and thereby subconsciously seeing wealth as something that flows from some central tap. The word "regressive" as applied to tax rates has a similar effect, at least on me; how can anything regressive be good?
[9] "From the beginning of the reign Thomas Lord Roos was an assiduous courtier of the young Henry VIII and was soon to reap the rewards. In 1525 he was made a Knight of the Garter and given the Earldom of Rutland. In the thirties his support of the breach with Rome, his zeal in crushing the Pilgrimage of Grace, and his readiness to vote the death-penalty in the succession of spectacular treason trials that punctuated Henry's erratic matrimonial progress made him an obvious candidate for grants of monastic property."
Stone, Lawrence, Family and Fortune: Studies in Aristocratic Finance in the Sixteenth and Seventeenth Centuries, Oxford University Press, 1973, p. 166.
[10] There is archaeological evidence for large settlements earlier, but it's hard to say what was happening in them.
Hodges, Richard and David Whitehouse, Mohammed, Charlemagne and the Origins of Europe, Cornell University Press, 1983.
[11] William Cecil and his son Robert were each in turn the most powerful minister of the crown, and both used their position to amass fortunes among the largest of their times. Robert in particular took bribery to the point of treason. "As Secretary of State and the leading advisor to King James on foreign policy, [he] was a special recipient of favour, being offered large bribes by the Dutch not to make peace with Spain, and large bribes by Spain to make peace." (Stone, op. cit., p. 17.)
[12] Though Balzac made a lot of money from writing, he was notoriously improvident and was troubled by debts all his life.
[13] A Timex will gain or lose about .5 seconds per day. The most accurate mechanical watch, the Patek Philippe 10 Day Tourbillon, is rated at -1.5 to +2 seconds. Its retail price is about $220,000.
[14] If asked to choose which was more expensive, a well-preserved 1989 Lincoln Town Car ten-passenger limousine ($5,000) or a 2004 Mercedes S600 sedan ($122,000), the average Edwardian might well guess wrong.
[15] To say anything meaningful about income trends, you have to talk about real income, or income as measured in what it can buy. But the usual way of calculating real income ignores much of the growth in wealth over time, because it depends on a consumer price index created by bolting end to end a series of numbers that are only locally accurate, and that don't include the prices of new inventions until they become so common that their prices stabilize.
So while we might think it was very much better to live in a world with antibiotics or air travel or an electric power grid than without, real income statistics calculated in the usual way will prove to us that we are only slightly richer for having these things.
Another approach would be to ask, if you were going back to the year x in a time machine, how much would you have to spend on trade goods to make your fortune? For example, if you were going back to 1970 it would certainly be less than $500, because the processing power you can get for $500 today would have been worth at least $150 million in 1970. The function goes asymptotic fairly quickly, because for times over a hundred years or so you could get all you needed in present-day trash. In 1800 an empty plastic drink bottle with a screw top would have seemed a miracle of workmanship.
[16] Some will say this amounts to the same thing, because the rich have better opportunities for education. That's a valid point. It is still possible, to a degree, to buy your kids' way into top colleges by sending them to private schools that in effect hack the college admissions process.
According to a 2002 report by the National Center for Education Statistics, about 1.7% of American kids attend private, non-sectarian schools. At Princeton, 36% of the class of 2007 came from such schools. (Interestingly, the number at Harvard is significantly lower, about 28%.) Obviously this is a huge loophole. It does at least seem to be closing, not widening.
Perhaps the designers of admissions processes should take a lesson from the example of computer security, and instead of just assuming that their system can't be hacked, measure the degree to which it is."} {"title": "gh", "text": "July 2004
(This essay is derived from a talk at Oscon 2004.)
A few months ago I finished a new book, and in reviews I keep noticing words like "provocative'' and "controversial.'' To say nothing of "idiotic.''
I didn't mean to make the book controversial. I was trying to make it efficient. I didn't want to waste people's time telling them things they already knew. It's more efficient just to give them the diffs. But I suppose that's bound to yield an alarming book.
Edisons
There's no controversy about which idea is most controversial: the suggestion that variation in wealth might not be as big a problem as we think.
I didn't say in the book that variation in wealth was in itself a good thing.
I said in some situations it might be a sign of good things. A throbbing headache is not a good thing, but it can be a sign of a good thing-- for example, that you're recovering consciousness after being hit on the head.
Variation in wealth can be a sign of variation in productivity. (In a society of one, they're identical.) And that is almost certainly a good thing: if your society has no variation in productivity, it's probably not because everyone is Thomas Edison. It's probably because you have no Thomas Edisons.
In a low-tech society you don't see much variation in productivity. If you have a tribe of nomads collecting sticks for a fire, how much more productive is the best stick gatherer going to be than the worst? A factor of two? Whereas when you hand people a complex tool like a computer, the variation in what they can do with it is enormous.
That's not a new idea. Fred Brooks wrote about it in 1974, and the study he quoted was published in 1968. But I think he underestimated the variation between programmers. He wrote about productivity in lines of code: the best programmers can solve a given problem in a tenth the time. But what if the problem isn't given? In programming, as in many fields, the hard part isn't solving problems, but deciding what problems to solve. Imagination is hard to measure, but in practice it dominates the kind of productivity that's measured in lines of code.
Productivity varies in any field, but there are few in which it varies so much. The variation between programmers is so great that it becomes a difference in kind. I don't think this is something intrinsic to programming, though. In every field, technology magnifies differences in productivity. I think what's happening in programming is just that we have a lot of technological leverage. But in every field the lever is getting longer, so the variation we see is something that more and more fields will see as time goes on. And the success of companies, and countries, will depend increasingly on how they deal with it.
If variation in productivity increases with technology, then the contribution of the most productive individuals will not only be disproportionately large, but will actually grow with time. When you reach the point where 90% of a group's output is created by 1% of its members, you lose big if something (whether Viking raids, or central planning) drags their productivity down to the average.
If we want to get the most out of them, we need to understand these especially productive people. What motivates them? What do they need to do their jobs? How do you recognize them? How do you get them to come and work for you? And then of course there's the question, how do you become one?
More than Money
I know a handful of super-hackers, so I sat down and thought about what they have in common. Their defining quality is probably that they really love to program. Ordinary programmers write code to pay the bills. Great hackers think of it as something they do for fun, and which they're delighted to find people will pay them for.
Great programmers are sometimes said to be indifferent to money. This isn't quite true. It is true that all they really care about is doing interesting work.
But if you make enough money, you get to work on whatever you want, and for that reason hackers are attracted by the idea of making really large amounts of money. But as long as they still have to show up for work every day, they care more about what they do there than how much they get paid for it.
Economically, this is a fact of the greatest importance, because it means you don't have to pay great hackers anything like what they're worth. A great programmer might be ten or a hundred times as productive as an ordinary one, but he'll consider himself lucky to get paid three times as much. As I'll explain later, this is partly because great hackers don't know how good they are. But it's also because money is not the main thing they want.
What do hackers want? Like all craftsmen, hackers like good tools. In fact, that's an understatement. Good hackers find it unbearable to use bad tools. They'll simply refuse to work on projects with the wrong infrastructure.
At a startup I once worked for, one of the things pinned up on our bulletin board was an ad from IBM. It was a picture of an AS400, and the headline read, I think, "hackers despise it.'' [1]
When you decide what infrastructure to use for a project, you're not just making a technical decision. You're also making a social decision, and this may be the more important of the two. For example, if your company wants to write some software, it might seem a prudent choice to write it in Java. But when you choose a language, you're also choosing a community. The programmers you'll be able to hire to work on a Java project won't be as smart as the ones you could get to work on a project written in Python. And the quality of your hackers probably matters more than the language you choose. Though, frankly, the fact that good hackers prefer Python to Java should tell you something about the relative merits of those languages.
Business types prefer the most popular languages because they view languages as standards. They don't want to bet the company on Betamax. The thing about languages, though, is that they're not just standards. If you have to move bits over a network, by all means use TCP/IP. But a programming language isn't just a format. A programming language is a medium of expression.
I've read that Java has just overtaken Cobol as the most popular language. As a standard, you couldn't wish for more. But as a medium of expression, you could do a lot better. Of all the great programmers I can think of, I know of only one who would voluntarily program in Java. And of all the great programmers I can think of who don't work for Sun, on Java, I know of zero.
Great hackers also generally insist on using open source software. Not just because it's better, but because it gives them more control. Good hackers insist on control. This is part of what makes them good hackers: when something's broken, they need to fix it. You want them to feel this way about the software they're writing for you. You shouldn't be surprised when they feel the same way about the operating system.
A couple years ago a venture capitalist friend told me about a new startup he was involved with. It sounded promising. But the next time I talked to him, he said they'd decided to build their software on Windows NT, and had just hired a very experienced NT developer to be their chief technical officer. When I heard this, I thought, these guys are doomed.
One, the CTO couldn't be a first rate hacker, because to become an eminent NT developer he would have had to use NT voluntarily, multiple times, and I couldn't imagine a great hacker doing that; and two, even if he was good, he'd have a hard time hiring anyone good to work for him if the project had to be built on NT. [2]
The Final Frontier
After software, the most important tool to a hacker is probably his office. Big companies think the function of office space is to express rank. But hackers use their offices for more than that: they use their office as a place to think in. And if you're a technology company, their thoughts are your product. So making hackers work in a noisy, distracting environment is like having a paint factory where the air is full of soot.
The cartoon strip Dilbert has a lot to say about cubicles, and with good reason. All the hackers I know despise them. The mere prospect of being interrupted is enough to prevent hackers from working on hard problems. If you want to get real work done in an office with cubicles, you have two options: work at home, or come in early or late or on a weekend, when no one else is there. Don't companies realize this is a sign that something is broken? An office environment is supposed to be something that helps you work, not something you work despite.
Companies like Cisco are proud that everyone there has a cubicle, even the CEO. But they're not so advanced as they think; obviously they still view office space as a badge of rank. Note too that Cisco is famous for doing very little product development in house. They get new technology by buying the startups that created it-- where presumably the hackers did have somewhere quiet to work.
One big company that understands what hackers need is Microsoft. I once saw a recruiting ad for Microsoft with a big picture of a door. Work for us, the premise was, and we'll give you a place to work where you can actually get work done. And you know, Microsoft is remarkable among big companies in that they are able to develop software in house. Not well, perhaps, but well enough.
If companies want hackers to be productive, they should look at what they do at home. At home, hackers can arrange things themselves so they can get the most done. And when they work at home, hackers don't work in noisy, open spaces; they work in rooms with doors. They work in cosy, neighborhoody places with people around and somewhere to walk when they need to mull something over, instead of in glass boxes set in acres of parking lots. They have a sofa they can take a nap on when they feel tired, instead of sitting in a coma at their desk, pretending to work. There's no crew of people with vacuum cleaners that roars through every evening during the prime hacking hours. There are no meetings or, God forbid, corporate retreats or team-building exercises. And when you look at what they're doing on that computer, you'll find it reinforces what I said earlier about tools. They may have to use Java and Windows at work, but at home, where they can choose for themselves, you're more likely to find them using Perl and Linux.
Indeed, these statistics about Cobol or Java being the most popular language can be misleading. What we ought to look at, if we want to know what tools are best, is what hackers choose when they can choose freely-- that is, in projects of their own.
When you ask that question, you find that open source operating systems already have a dominant market share, and the number one language is probably Perl.
Interesting
Along with good tools, hackers want interesting projects. What makes a project interesting? Well, obviously overtly sexy applications like stealth planes or special effects software would be interesting to work on. But any application can be interesting if it poses novel technical challenges. So it's hard to predict which problems hackers will like, because some become interesting only when the people working on them discover a new kind of solution. Before ITA (who wrote the software inside Orbitz), the people working on airline fare searches probably thought it was one of the most boring applications imaginable. But ITA made it interesting by redefining the problem in a more ambitious way.
I think the same thing happened at Google. When Google was founded, the conventional wisdom among the so-called portals was that search was boring and unimportant. But the guys at Google didn't think search was boring, and that's why they do it so well.
This is an area where managers can make a difference. Like a parent saying to a child, I bet you can't clean up your whole room in ten minutes, a good manager can sometimes redefine a problem as a more interesting one. Steve Jobs seems to be particularly good at this, in part simply by having high standards. There were a lot of small, inexpensive computers before the Mac. He redefined the problem as: make one that's beautiful. And that probably drove the developers harder than any carrot or stick could.
They certainly delivered. When the Mac first appeared, you didn't even have to turn it on to know it would be good; you could tell from the case. A few weeks ago I was walking along the street in Cambridge, and in someone's trash I saw what appeared to be a Mac carrying case. I looked inside, and there was a Mac SE. I carried it home and plugged it in, and it booted. The happy Macintosh face, and then the finder. My God, it was so simple. It was just like ... Google.
Hackers like to work for people with high standards. But it's not enough just to be exacting. You have to insist on the right things. Which usually means that you have to be a hacker yourself. I've seen occasional articles about how to manage programmers. Really there should be two articles: one about what to do if you are yourself a programmer, and one about what to do if you're not. And the second could probably be condensed into two words: give up.
The problem is not so much the day to day management. Really good hackers are practically self-managing. The problem is, if you're not a hacker, you can't tell who the good hackers are. A similar problem explains why American cars are so ugly. I call it the design paradox. You might think that you could make your products beautiful just by hiring a great designer to design them. But if you yourself don't have good taste, how are you going to recognize a good designer? By definition you can't tell from his portfolio. And you can't go by the awards he's won or the jobs he's had, because in design, as in most fields, those tend to be driven by fashion and schmoozing, with actual ability a distant third. There's no way around it: you can't manage a process intended to produce beautiful things without knowing what beautiful is.
American\ncars are ugly because American car companies are run by people with\nbad taste.Many people in this country think of taste as something elusive,\nor even frivolous. It is neither. To drive design, a manager must\nbe the most demanding user of a company's products. And if you\nhave really good taste, you can, as Steve Jobs does, make satisfying\nyou the kind of problem that good people like to work on.Nasty Little ProblemsIt's pretty easy to say what kinds of problems are not interesting:\nthose where instead of solving a few big, clear, problems, you have\nto solve a lot of nasty little ones. One of the worst kinds of\nprojects is writing an interface to a piece of software that's\nfull of bugs. Another is when you have to customize\nsomething for an individual client's complex and ill-defined needs.\nTo hackers these kinds of projects are the death of a thousand\ncuts.The distinguishing feature of nasty little problems is that you\ndon't learn anything from them. Writing a compiler is interesting\nbecause it teaches you what a compiler is. But writing an interface\nto a buggy piece of software doesn't teach you anything, because the\nbugs are random. [3] So it's not just fastidiousness that makes good\nhackers avoid nasty little problems. It's more a question of\nself-preservation. Working on nasty little problems makes you\nstupid. Good hackers avoid it for the same reason models avoid\ncheeseburgers.Of course some problems inherently have this character. And because\nof supply and demand, they pay especially well. So a company that\nfound a way to get great hackers to work on tedious problems would\nbe very successful. How would you do it?One place this happens is in startups. At our startup we had \nRobert Morris working as a system administrator. That's like having the\nRolling Stones play at a bar mitzvah. You can't hire that kind of\ntalent. But people will do any amount of drudgery for companies\nof which they're the founders. [4]Bigger companies solve the problem by partitioning the company.\nThey get smart people to work for them by establishing a separate\nR&D department where employees don't have to work directly on\ncustomers' nasty little problems. [5] In this model, the research\ndepartment functions like a mine. They produce new ideas; maybe\nthe rest of the company will be able to use them.You may not have to go to this extreme. \nBottom-up programming\nsuggests another way to partition the company: have the smart people\nwork as toolmakers. If your company makes software to do x, have\none group that builds tools for writing software of that type, and\nanother that uses these tools to write the applications. This way\nyou might be able to get smart people to write 99% of your code,\nbut still keep them almost as insulated from users as they would\nbe in a traditional research department. The toolmakers would have\nusers, but they'd only be the company's own developers. [6]If Microsoft used this approach, their software wouldn't be so full\nof security holes, because the less smart people writing the actual\napplications wouldn't be doing low-level stuff like allocating\nmemory. Instead of writing Word directly in C, they'd be plugging\ntogether big Lego blocks of Word-language. (Duplo, I believe, is\nthe technical term.)ClumpingAlong with interesting problems, what good hackers like is other\ngood hackers. Great hackers tend to clump together-- sometimes\nspectacularly so, as at Xerox Parc. 
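A quick sketch before going on, in Common Lisp, of the toolmaker split described above. All the names here (doc, add-paragraph, make-letter) are invented for illustration; the point is only the shape: one layer owns the representation, the other just snaps blocks together.

    ;; Toolmaker layer: owns the representation and the
    ;; low-level details.
    (defstruct doc (paragraphs '()))

    (defun add-paragraph (doc text)
      (push text (doc-paragraphs doc))
      doc)

    (defun render-doc (doc)
      (format nil "~{~a~^~%~%~}" (reverse (doc-paragraphs doc))))

    ;; Application layer: only Lego blocks, no internals.
    (defun make-letter (to body)
      (let ((d (make-doc)))
        (add-paragraph d (format nil "Dear ~a," to))
        (add-paragraph d body)
        (render-doc d)))

The application group writes only in terms of the top layer, so they can't introduce the low-level bugs, and the toolmakers never have to talk to customers.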
So you won't attract good\nhackers in linear proportion to how good an environment you create\nfor them. The tendency to clump means it's more like the square\nof the environment. So it's winner take all. At any given time,\nthere are only about ten or twenty places where hackers most want to\nwork, and if you aren't one of them, you won't just have fewer\ngreat hackers, you'll have zero.Having great hackers is not, by itself, enough to make a company\nsuccessful. It works well for Google and ITA, which are two of\nthe hot spots right now, but it didn't help Thinking Machines or\nXerox. Sun had a good run for a while, but their business model\nis a down elevator. In that situation, even the best hackers can't\nsave you.I think, though, that all other things being equal, a company that\ncan attract great hackers will have a huge advantage. There are\npeople who would disagree with this. When we were making the rounds\nof venture capital firms in the 1990s, several told us that software\ncompanies didn't win by writing great software, but through brand,\nand dominating channels, and doing the right deals.They really seemed to believe this, and I think I know why. I\nthink what a lot of VCs are looking for, at least unconsciously,\nis the next Microsoft. And of course if Microsoft is your model,\nyou shouldn't be looking for companies that hope to win by writing\ngreat software. But VCs are mistaken to look for the next Microsoft,\nbecause no startup can be the next Microsoft unless some other\ncompany is prepared to bend over at just the right moment and be\nthe next IBM.It's a mistake to use Microsoft as a model, because their whole\nculture derives from that one lucky break. Microsoft is a bad data\npoint. If you throw them out, you find that good products do tend\nto win in the market. What VCs should be looking for is the next\nApple, or the next Google.I think Bill Gates knows this. What worries him about Google is\nnot the power of their brand, but the fact that they have\nbetter hackers. [7]\nRecognitionSo who are the great hackers? How do you know when you meet one?\nThat turns out to be very hard. Even hackers can't tell. I'm\npretty sure now that my friend Trevor Blackwell is a great hacker.\nYou may have read on Slashdot how he made his \nown Segway. The\nremarkable thing about this project was that he wrote all the\nsoftware in one day (in Python, incidentally).For Trevor, that's\npar for the course. But when I first met him, I thought he was a\ncomplete idiot. He was standing in Robert Morris's office babbling\nat him about something or other, and I remember standing behind\nhim making frantic gestures at Robert to shoo this nut out of his\noffice so we could go to lunch. Robert says he misjudged Trevor\nat first too. Apparently when Robert first met him, Trevor had\njust begun a new scheme that involved writing down everything about\nevery aspect of his life on a stack of index cards, which he carried\nwith him everywhere. He'd also just arrived from Canada, and had\na strong Canadian accent and a mullet.The problem is compounded by the fact that hackers, despite their\nreputation for social obliviousness, sometimes put a good deal of\neffort into seeming smart. When I was in grad school I used to\nhang around the MIT AI Lab occasionally. It was kind of intimidating\nat first. Everyone there spoke so fast. But after a while I\nlearned the trick of speaking fast. You don't have to think any\nfaster; just use twice as many words to say everything. 
With this amount of noise in the signal, it's hard to tell good\nhackers when you meet them. I can't tell, even now. You also\ncan't tell from their resumes. It seems like the only way to judge\na hacker is to work with him on something.And this is the reason that high-tech areas \nonly happen around universities. The active ingredient\nhere is not so much the professors as the students. Startups grow up\naround universities because universities bring together promising young\npeople and make them work on the same projects. The\nsmart ones learn who the other smart ones are, and together\nthey cook up new projects of their own.Because you can't tell a great hacker except by working with him,\nhackers themselves can't tell how good they are. This is true to\na degree in most fields. I've found that people who\nare great at something are not so much convinced of their own\ngreatness as mystified at why everyone else seems so incompetent.\nBut it's particularly hard for hackers to know how good they are,\nbecause it's hard to compare their work. This is easier in most\nother fields. In the hundred meters, you know in 10 seconds who's\nfastest. Even in math there seems to be a general consensus about\nwhich problems are hard to solve, and what constitutes a good\nsolution. But hacking is like writing. Who can say which of two\nnovels is better? Certainly not the authors.With hackers, at least, other hackers can tell. That's because,\nunlike novelists, hackers collaborate on projects. When you get\nto hit a few difficult problems over the net at someone, you learn\npretty quickly how hard they hit them back. But hackers can't\nwatch themselves at work. So if you ask a great hacker how good\nhe is, he's almost certain to reply, I don't know. He's not just\nbeing modest. He really doesn't know.And none of us know, except about people we've actually worked\nwith. Which puts us in a weird situation: we don't know who our\nheroes should be. The hackers who become famous tend to become\nfamous by random accidents of PR. Occasionally I need to give an\nexample of a great hacker, and I never know who to use. The first\nnames that come to mind always tend to be people I know personally,\nbut it seems lame to use them. So, I think, maybe I should say\nRichard Stallman, or Linus Torvalds, or Alan Kay, or someone famous\nlike that. But I have no idea if these guys are great hackers.\nI've never worked with them on anything.If there is a Michael Jordan of hacking, no one knows, including\nhim.CultivationFinally, the question the hackers have all been wondering about:\nhow do you become a great hacker? I don't know if it's possible\nto make yourself into one. But it's certainly possible to do things\nthat make you stupid, and if you can make yourself stupid, you\ncan probably make yourself smart too.The key to being a good hacker may be to work on what you like.\nWhen I think about the great hackers I know, one thing they have\nin common is the extreme \ndifficulty of making them work \non anything they\ndon't want to. I don't know if this is cause or effect; it may be\nboth.To do something well you have to love it. \nSo to the extent you\ncan preserve hacking as something you love, you're likely to do it\nwell. Try to keep the sense of wonder you had about programming at\nage 14. If you're worried that your current job is rotting your\nbrain, it probably is.The best hackers tend to be smart, of course, but that's true in\na lot of fields. 
Is there some quality that's unique to hackers?\nI asked some friends, and the number one thing they mentioned was\ncuriosity. \nI'd always supposed that all smart people were curious--\nthat curiosity was simply the first derivative of knowledge. But\napparently hackers are particularly curious, especially about how\nthings work. That makes sense, because programs are in effect\ngiant descriptions of how things work.Several friends mentioned hackers' ability to concentrate-- their\nability, as one put it, to \"tune out everything outside their own\nheads.\" I've certainly noticed this. And I've heard several \nhackers say that after drinking even half a beer they can't program at\nall. So maybe hacking does require some special ability to focus.\nPerhaps great hackers can load a large amount of context into their\nhead, so that when they look at a line of code, they see not just\nthat line but the whole program around it. John McPhee\nwrote that Bill Bradley's success as a basketball player was due\npartly to his extraordinary peripheral vision. \"Perfect\" eyesight\nmeans about 47 degrees of vertical peripheral vision. Bill Bradley\nhad 70; he could see the basket when he was looking at the floor.\nMaybe great hackers have some similar inborn ability. (I cheat by\nusing a very dense language, \nwhich shrinks the court.)This could explain the disconnect over cubicles. Maybe the people\nin charge of facilities, not having any concentration to shatter,\nhave no idea that working in a cubicle feels to a hacker like having\none's brain in a blender. (Whereas Bill, if the rumors of autism\nare true, knows all too well.)One difference I've noticed between great hackers and smart people\nin general is that hackers are more \npolitically incorrect. To the\nextent there is a secret handshake among good hackers, it's when they\nknow one another well enough to express opinions that would get\nthem stoned to death by the general public. And I can see why\npolitical incorrectness would be a useful quality in programming.\nPrograms are very complex and, at least in the hands of good\nprogrammers, very fluid. In such situations it's helpful to have\na habit of questioning assumptions.Can you cultivate these qualities? I don't know. But you can at\nleast not repress them. So here is my best shot at a recipe. If\nit is possible to make yourself into a great hacker, the way to do\nit may be to make the following deal with yourself: you never have\nto work on boring projects (unless your family will starve otherwise),\nand in return, you'll never allow yourself to do a half-assed job.\nAll the great hackers I know seem to have made that deal, though\nperhaps none of them had any choice in the matter.Notes\n[1] In fairness, I have to say that IBM makes decent hardware. I\nwrote this on an IBM laptop.[2] They did turn out to be doomed. They shut down a few months\nlater.[3] I think this is what people mean when they talk\nabout the \"meaning of life.\" On the face of it, this seems an \nodd idea. Life isn't an expression; how could it have meaning?\nBut it can have a quality that feels a lot like meaning. In a project\nlike a compiler, you have to solve a lot of problems, but the problems\nall fall into a pattern, as in a signal. Whereas when the problems\nyou have to solve are random, they seem like noise.\n[4] Einstein at one point worked designing refrigerators.
(He had equity.)[5] It's hard to say exactly what constitutes research in the\ncomputer world, but as a first approximation, it's software that\ndoesn't have users.I don't think it's publication that makes the best hackers want to work\nin research departments. I think it's mainly not having to have a\nthree hour meeting with a product manager about problems integrating\nthe Korean version of Word 13.27 with the talking paperclip.[6] Something similar has been happening for a long time in the\nconstruction industry. When you had a house built a couple hundred\nyears ago, the local builders built everything in it. But increasingly\nwhat builders do is assemble components designed and manufactured\nby someone else. This has, like the arrival of desktop publishing,\ngiven people the freedom to experiment in disastrous ways, but it\nis certainly more efficient.[7] Google is much more dangerous to Microsoft than Netscape was.\nProbably more dangerous than any other company has ever been. Not\nleast because they're determined to fight. On their job listing\npage, they say that one of their \"core values\" is \"Don't be evil.\"\nFrom a company selling soybean oil or mining equipment, such a\nstatement would merely be eccentric. But I think all of us in the\ncomputer world recognize who that is a declaration of war on.Thanks to Jessica Livingston, Robert Morris, and Sarah Harlin\nfor reading earlier versions of this talk."} {"title": "mod", "text": "December 2019There are two distinct ways to be politically moderate: on purpose\nand by accident. Intentional moderates are trimmers, deliberately\nchoosing a position mid-way between the extremes of right and left.\nAccidental moderates end up in the middle, on average, because they\nmake up their own minds about each question, and the far right and\nfar left are roughly equally wrong.You can distinguish intentional from accidental moderates by the\ndistribution of their opinions. If the far left opinion on some\nmatter is 0 and the far right opinion 100, an intentional moderate's\nopinion on every question will be near 50. Whereas an accidental\nmoderate's opinions will be scattered over a broad range, but will,\nlike those of the intentional moderate, average to about 50.Intentional moderates are similar to those on the far left and the\nfar right in that their opinions are, in a sense, not their own.\nThe defining quality of an ideologue, whether on the left or the\nright, is to acquire one's opinions in bulk. You don't get to pick\nand choose. Your opinions about taxation can be predicted from your\nopinions about sex. And although intentional moderates\nmight seem to be the opposite of ideologues, their beliefs (though\nin their case the word \"positions\" might be more accurate) are also\nacquired in bulk. If the median opinion shifts to the right or left,\nthe intentional moderate must shift with it. Otherwise they stop\nbeing moderate.Accidental moderates, on the other hand, not only choose their own\nanswers, but choose their own questions. They may not care at all\nabout questions that the left and right both think are terribly\nimportant.
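The distribution test described above is easy to make concrete. A toy sketch in Common Lisp, with invented numbers: both sets of opinions average to 50, but only the accidental moderate's are spread out.

    (defun mean (xs)
      (/ (reduce #'+ xs) (float (length xs))))

    (defun spread (xs)
      ;; average distance of each opinion from the midpoint, 50
      (mean (mapcar (lambda (x) (abs (- x 50))) xs)))

    (defvar *intentional* '(48 52 50 49 51 50)) ; every answer near 50
    (defvar *accidental*  '(5 95 20 90 45 45))  ; scattered answers

    (mean *intentional*)   ; => 50.0
    (mean *accidental*)    ; => 50.0
    (spread *intentional*) ; => 1.0
    (spread *accidental*)  ; => ~28.3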
So you can only even measure the politics of an accidental\nmoderate from the intersection of the questions they care about and\nthose the left and right care about, and this can\nsometimes be vanishingly small.It is not merely a manipulative rhetorical trick to say \"if you're\nnot with us, you're against us,\" but often simply false.Moderates are sometimes derided as cowards, particularly by \nthe extreme left. But while it may be accurate to call intentional\nmoderates cowards, openly being an accidental moderate requires the\nmost courage of all, because you get attacked from both right and\nleft, and you don't have the comfort of being an orthodox member\nof a large group to sustain you.Nearly all the most impressive people I know are accidental moderates.\nIf I knew a lot of professional athletes, or people in the entertainment\nbusiness, that might be different. Being on the far left or far\nright doesn't affect how fast you run or how well you sing. But\nsomeone who works with ideas has to be independent-minded to do it\nwell.Or more precisely, you have to be independent-minded about the ideas\nyou work with. You could be mindlessly doctrinaire in your politics\nand still be a good mathematician. In the 20th century, a lot of\nvery smart people were Marxists -- just no one who was smart about\nthe subjects Marxism involves. But if the ideas you use in your\nwork intersect with the politics of your time, you have two choices:\nbe an accidental moderate, or be mediocre.Notes[1] It's possible in theory for one side to be entirely right and\nthe other to be entirely wrong. Indeed, ideologues must always\nbelieve this is the case. But historically it rarely has been.[2] For some reason the far right tend to ignore moderates rather\nthan despise them as backsliders. I'm not sure why. Perhaps it\nmeans that the far right is less ideological than the far left. Or\nperhaps that they are more confident, or more resigned, or simply\nmore disorganized. I just don't know.[3] Having heretical opinions doesn't mean you have to express\nthem openly. It may be\neasier to have them if you don't.\nThanks to Austen Allred, Trevor Blackwell, Patrick Collison, Jessica Livingston,\nAmjad Masad, Ryan Petersen, and Harj Taggar for reading drafts of this."} {"title": "rootsoflisp", "text": "May 2001\n\n(I wrote this article to help myself understand exactly\nwhat McCarthy discovered. You don't need to know this stuff\nto program in Lisp, but it should be helpful to \nanyone who wants to\nunderstand the essence of Lisp -- both in the sense of its\norigins and its semantic core. The fact that it has such a core\nis one of Lisp's distinguishing features, and the reason why,\nunlike other languages, Lisp has dialects.)In 1960, John \nMcCarthy published a remarkable paper in\nwhich he did for programming something like what Euclid did for\ngeometry. He showed how, given a handful of simple\noperators and a notation for functions, you can\nbuild a whole programming language.\nHe called this language Lisp, for \"List Processing,\"\nbecause one of his key ideas was to use a simple\ndata structure called a list for both\ncode and data.It's worth understanding what McCarthy discovered, not\njust as a landmark in the history of computers, but as\na model for what programming is tending to become in\nour own time.
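It may help to see how little he needed. A sketch in Common Lisp: with just seven primitive operators -- quote, atom, eq, car, cdr, cons, and cond -- you can already start defining the rest of the language. The function below, which replaces one atom with another throughout an expression, is the classic sort of early definition; it uses nothing outside those seven plus defun itself.

    (defun subst. (x y z)
      ;; replace every occurrence of the atom y in z with x
      (cond ((atom z) (cond ((eq z y) x)
                            ('t z)))
            ('t (cons (subst. x y (car z))
                      (subst. x y (cdr z))))))

    (subst. 'm 'b '(a b (a b c) d)) ; => (A M (A M C) D)

(Quoting 't instead of writing a bare t is just a way of staying inside the seven operators.)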
It seems to me that there have been\ntwo really clean, consistent models of programming so\nfar: the C model and the Lisp model.\nThese two seem points of high ground, with swampy lowlands\nbetween them. As computers have grown more powerful,\nthe new languages being developed have been moving\nsteadily toward the Lisp model. A popular recipe\nfor new programming languages in the past 20 years \nhas been to take the C model of computing and add to\nit, piecemeal, parts taken from the Lisp model,\nlike runtime typing and garbage collection.In this article I'm going to try to explain in the\nsimplest possible terms what McCarthy discovered.\nThe point is not just to learn about an interesting\ntheoretical result someone figured out forty years ago,\nbut to show where languages are heading.\nThe unusual thing about Lisp -- in fact, the defining\nquality of Lisp -- is that it can be written in\nitself. To understand what McCarthy meant by this,\nwe're going to retrace his steps, with his mathematical\nnotation translated into running Common Lisp code."} {"title": "siliconvalley", "text": "May 2006(This essay is derived from a keynote at Xtech.)Could you reproduce Silicon Valley elsewhere, or is there something\nunique about it?It wouldn't be surprising if it were hard to reproduce in other\ncountries, because you couldn't reproduce it in most of the US\neither. What does it take to make a silicon valley even here?What it takes is the right people. If you could get the right ten\nthousand people to move from Silicon Valley to Buffalo, Buffalo\nwould become Silicon Valley. \n[1]That's a striking departure from the past. Up till a couple decades\nago, geography was destiny for cities. All great cities were located\non waterways, because cities made money by trade, and water was the\nonly economical way to ship.Now you could make a great city anywhere, if you could get the right\npeople to move there. So the question of how to make a silicon\nvalley becomes: who are the right people, and how do you get them\nto move?Two TypesI think you only need two kinds of people to create a technology\nhub: rich people and nerds. They're the limiting reagents in the\nreaction that produces startups, because they're the only ones\npresent when startups get started. Everyone else will move.Observation bears this out: within the US, towns have become startup\nhubs if and only if they have both rich people and nerds. Few\nstartups happen in Miami, for example, because although it's full\nof rich people, it has few nerds. It's not the kind of place nerds\nlike.Whereas Pittsburgh has the opposite problem: plenty of nerds, but\nno rich people. The top US Computer Science departments are said\nto be MIT, Stanford, Berkeley, and Carnegie-Mellon. MIT yielded\nRoute 128. Stanford and Berkeley yielded Silicon Valley. But\nCarnegie-Mellon? The record skips at that point. Lower down the\nlist, the University of Washington yielded a high-tech community\nin Seattle, and the University of Texas at Austin yielded one in\nAustin. But what happened in Pittsburgh? And in Ithaca, home of\nCornell, which is also high on the list?I grew up in Pittsburgh and went to college at Cornell, so I can\nanswer for both. The weather is terrible, particularly in winter,\nand there's no interesting old city to make up for it, as there is\nin Boston.
Rich people don't want to live in Pittsburgh or Ithaca.\nSo while there are plenty of hackers who could start startups,\nthere's no one to invest in them.Not BureaucratsDo you really need the rich people? Wouldn't it work to have the\ngovernment invest in the nerds? No, it would not. Startup investors\nare a distinct type of rich people. They tend to have a lot of\nexperience themselves in the technology business. This (a) helps\nthem pick the right startups, and (b) means they can supply advice\nand connections as well as money. And the fact that they have a\npersonal stake in the outcome makes them really pay attention.Bureaucrats by their nature are the exact opposite sort of people\nfrom startup investors. The idea of them making startup investments\nis comic. It would be like mathematicians running Vogue-- or\nperhaps more accurately, Vogue editors running a math journal.\n[2]Though indeed, most things bureaucrats do, they do badly. We just\ndon't notice usually, because they only have to compete against\nother bureaucrats. But as startup investors they'd have to compete\nagainst pros with a great deal more experience and motivation.Even corporations that have in-house VC groups generally forbid\nthem to make their own investment decisions. Most are only allowed\nto invest in deals where some reputable private VC firm is willing\nto act as lead investor.Not BuildingsIf you go to see Silicon Valley, what you'll see are buildings.\nBut it's the people that make it Silicon Valley, not the buildings.\nI read occasionally about attempts to set up \"technology\nparks\" in other places, as if the active ingredient of Silicon\nValley were the office space. An article about Sophia Antipolis\nbragged that companies there included Cisco, Compaq, IBM, NCR, and\nNortel. Don't the French realize these aren't startups?Building office buildings for technology companies won't get you a\nsilicon valley, because the key stage in the life of a startup\nhappens before they want that kind of space. The key stage is when\nthey're three guys operating out of an apartment. Wherever the\nstartup is when it gets funded, it will stay. The defining quality\nof Silicon Valley is not that Intel or Apple or Google have offices\nthere, but that they were started there.So if you want to reproduce Silicon Valley, what you need to reproduce\nis those two or three founders sitting around a kitchen table\ndeciding to start a company. And to reproduce that you need those\npeople.UniversitiesThe exciting thing is, all you need are the people. If you could\nattract a critical mass of nerds and investors to live somewhere,\nyou could reproduce Silicon Valley. And both groups are highly\nmobile. They'll go where life is good. So what makes a place good\nto them?What nerds like is other nerds. Smart people will go wherever other\nsmart people are. And in particular, to great universities. In\ntheory there could be other ways to attract them, but so far\nuniversities seem to be indispensable. Within the US, there are\nno technology hubs without first-rate universities-- or at least,\nfirst-rate computer science departments.So if you want to make a silicon valley, you not only need a\nuniversity, but one of the top handful in the world. It has to be\ngood enough to act as a magnet, drawing the best people from thousands\nof miles away. And that means it has to stand up to existing magnets\nlike MIT and Stanford.This sounds hard. Actually it might be easy. 
My professor friends,\nwhen they're deciding where they'd like to work, consider one thing\nabove all: the quality of the other faculty. What attracts professors\nis good colleagues. So if you managed to recruit, en masse, a\nsignificant number of the best young researchers, you could create\na first-rate university from nothing overnight. And you could do\nthat for surprisingly little. If you paid 200 people hiring bonuses\nof $3 million apiece, you could put together a faculty that would\nbear comparison with any in the world. And from that point the\nchain reaction would be self-sustaining. So whatever it costs to\nestablish a mediocre university, for an additional half billion or\nso you could have a great one. \n[3]PersonalityHowever, merely creating a new university would not be enough to\nstart a silicon valley. The university is just the seed. It has\nto be planted in the right soil, or it won't germinate. Plant it\nin the wrong place, and you just create Carnegie-Mellon.To spawn startups, your university has to be in a town that has\nattractions other than the university. It has to be a place where\ninvestors want to live, and students want to stay after they graduate.The two like much the same things, because most startup investors\nare nerds themselves. So what do nerds look for in a town? Their\ntastes aren't completely different from other people's, because a\nlot of the towns they like most in the US are also big tourist\ndestinations: San Francisco, Boston, Seattle. But their tastes\ncan't be quite mainstream either, because they dislike other big\ntourist destinations, like New York, Los Angeles, and Las Vegas.There has been a lot written lately about the \"creative class.\" The\nthesis seems to be that as wealth derives increasingly from ideas,\ncities will prosper only if they attract those who have them. That\nis certainly true; in fact it was the basis of Amsterdam's prosperity\n400 years ago.A lot of nerd tastes they share with the creative class in general.\nFor example, they like well-preserved old neighborhoods instead of\ncookie-cutter suburbs, and locally-owned shops and restaurants\ninstead of national chains. Like the rest of the creative class,\nthey want to live somewhere with personality.What exactly is personality? I think it's the feeling that each\nbuilding is the work of a distinct group of people. A town with\npersonality is one that doesn't feel mass-produced. So if you want\nto make a startup hub-- or any town to attract the \"creative class\"--\nyou probably have to ban large development projects.\nWhen a large tract has been developed by a single organization, you\ncan always tell. \n[4]Most towns with personality are old, but they don't have to be.\nOld towns have two advantages: they're denser, because they were\nlaid out before cars, and they're more varied, because they were\nbuilt one building at a time. You could have both now. Just have\nbuilding codes that ensure density, and ban large scale developments.A corollary is that you have to keep out the biggest developer of\nall: the government. A government that asks \"How can we build a\nsilicon valley?\" has probably ensured failure by the way they framed\nthe question. You don't build a silicon valley; you let one grow.NerdsIf you want to attract nerds, you need more than a town with\npersonality. You need a town with the right personality. Nerds\nare a distinct subset of the creative class, with different tastes\nfrom the rest. 
You can see this most clearly in New York, which\nattracts a lot of creative people, but few nerds. \n[5]What nerds like is the kind of town where people walk around smiling.\nThis excludes LA, where no one walks at all, and also New York,\nwhere people walk, but not smiling. When I was in grad school in\nBoston, a friend came to visit from New York. On the subway back\nfrom the airport she asked \"Why is everyone smiling?\" I looked and\nthey weren't smiling. They just looked like they were compared to\nthe facial expressions she was used to.If you've lived in New York, you know where these facial expressions\ncome from. It's the kind of place where your mind may be excited,\nbut your body knows it's having a bad time. People don't so much\nenjoy living there as endure it for the sake of the excitement.\nAnd if you like certain kinds of excitement, New York is incomparable.\nIt's a hub of glamour, a magnet for all the shorter half-life\nisotopes of style and fame.Nerds don't care about glamour, so to them the appeal of New York\nis a mystery. People who like New York will pay a fortune for a\nsmall, dark, noisy apartment in order to live in a town where the\ncool people are really cool. A nerd looks at that deal and sees\nonly: pay a fortune for a small, dark, noisy apartment.Nerds will pay a premium to live in a town where the smart people\nare really smart, but you don't have to pay as much for that. It's\nsupply and demand: glamour is popular, so you have to pay a lot for\nit.Most nerds like quieter pleasures. They like cafes instead of\nclubs; used bookshops instead of fashionable clothing shops; hiking\ninstead of dancing; sunlight instead of tall buildings. A nerd's\nidea of paradise is Berkeley or Boulder.YouthIt's the young nerds who start startups, so it's those specifically\nthe city has to appeal to. The startup hubs in the US are all\nyoung-feeling towns. This doesn't mean they have to be new.\nCambridge has the oldest town plan in America, but it feels young\nbecause it's full of students.What you can't have, if you want to create a silicon valley, is a\nlarge, existing population of stodgy people. It would be a waste\nof time to try to reverse the fortunes of a declining industrial town\nlike Detroit or Philadelphia by trying to encourage startups. Those\nplaces have too much momentum in the wrong direction. You're better\noff starting with a blank slate in the form of a small town. Or\nbetter still, if there's a town young people already flock to, that\none.The Bay Area was a magnet for the young and optimistic for decades\nbefore it was associated with technology. It was a place people\nwent in search of something new. And so it became synonymous with\nCalifornia nuttiness. There's still a lot of that there. If you\nwanted to start a new fad-- a new way to focus one's \"energy,\" for\nexample, or a new category of things not to eat-- the Bay Area would\nbe the place to do it. But a place that tolerates oddness in the\nsearch for the new is exactly what you want in a startup hub, because\neconomically that's what startups are. Most good startup ideas\nseem a little crazy; if they were obviously good ideas, someone\nwould have done them already.(How many people are going to want computers in their houses?\nWhat, another search engine?)That's the connection between technology and liberalism. Without\nexception the high-tech cities in the US are also the most liberal.\nBut it's not because liberals are smarter that this is so. 
It's\nbecause liberal cities tolerate odd ideas, and smart people by\ndefinition have odd ideas.Conversely, a town that gets praised for being \"solid\" or representing\n\"traditional values\" may be a fine place to live, but it's never\ngoing to succeed as a startup hub. The 2004 presidential election,\nthough a disaster in other respects, conveniently supplied us with\na county-by-county \nmap of such places. \n[6]To attract the young, a town must have an intact center. In most\nAmerican cities the center has been abandoned, and the growth, if\nany, is in the suburbs. Most American cities have been turned\ninside out. But none of the startup hubs has: not San Francisco,\nor Boston, or Seattle. They all have intact centers.\n[7]\nMy guess is that no city with a dead center could be turned into a\nstartup hub. Young people don't want to live in the suburbs.Within the US, the two cities I think could most easily be turned\ninto new silicon valleys are Boulder and Portland. Both have the\nkind of effervescent feel that attracts the young. They're each\nonly a great university short of becoming a silicon valley, if they\nwanted to.TimeA great university near an attractive town. Is that all it takes?\nThat was all it took to make the original Silicon Valley. Silicon\nValley traces its origins to William Shockley, one of the inventors\nof the transistor. He did the research that won him the Nobel Prize\nat Bell Labs, but when he started his own company in 1956 he moved\nto Palo Alto to do it. At the time that was an odd thing to do.\nWhy did he? Because he had grown up there and remembered how nice\nit was. Now Palo Alto is suburbia, but then it was a charming\ncollege town-- a charming college town with perfect weather and San\nFrancisco only an hour away.The companies that rule Silicon Valley now are all descended in\nvarious ways from Shockley Semiconductor. Shockley was a difficult\nman, and in 1957 his top people-- \"the traitorous eight\"-- left to\nstart a new company, Fairchild Semiconductor. Among them were\nGordon Moore and Robert Noyce, who went on to found Intel, and\nEugene Kleiner, who founded the VC firm Kleiner Perkins. Forty-two\nyears later, Kleiner Perkins funded Google, and the partner responsible\nfor the deal was John Doerr, who came to Silicon Valley in 1974 to\nwork for Intel.So although a lot of the newest companies in Silicon Valley don't\nmake anything out of silicon, there always seem to be multiple links\nback to Shockley. There's a lesson here: startups beget startups.\nPeople who work for startups start their own. People who get rich\nfrom startups fund new ones. I suspect this kind of organic growth\nis the only way to produce a startup hub, because it's the only way\nto grow the expertise you need.That has two important implications. The first is that you need\ntime to grow a silicon valley. The university you could create in\na couple years, but the startup community around it has to grow\norganically. The cycle time is limited by the time it takes a\ncompany to succeed, which probably averages about five years.The other implication of the organic growth hypothesis is that you\ncan't be somewhat of a startup hub. You either have a self-sustaining\nchain reaction, or not. Observation confirms this too: cities\neither have a startup scene, or they don't. There is no middle\nground. 
Chicago has the third largest metropolitan area in America.\nAs a source of startups it's negligible compared to Seattle, number 15.The good news is that the initial seed can be quite small. Shockley\nSemiconductor, though itself not very successful, was big enough.\nIt brought a critical mass of experts in an important new technology\ntogether in a place they liked enough to stay.CompetingOf course, a would-be silicon valley faces an obstacle the original\none didn't: it has to compete with Silicon Valley. Can that be\ndone? Probably.One of Silicon Valley's biggest advantages is its venture capital\nfirms. This was not a factor in Shockley's day, because VC funds\ndidn't exist. In fact, Shockley Semiconductor and Fairchild\nSemiconductor were not startups at all in our sense. They were\nsubsidiaries-- of Beckman Instruments and Fairchild Camera and\nInstrument respectively. Those companies were apparently willing\nto establish subsidiaries wherever the experts wanted to live.Venture investors, however, prefer to fund startups within an hour's\ndrive. For one, they're more likely to notice startups nearby.\nBut when they do notice startups in other towns they prefer them\nto move. They don't want to have to travel to attend board meetings,\nand in any case the odds of succeeding are higher in a startup hub.The centralizing effect of venture firms is a double one: they cause\nstartups to form around them, and those draw in more startups through\nacquisitions. And although the first may be weakening because it's\nnow so cheap to start some startups, the second seems as strong as ever.\nThree of the most admired\n\"Web 2.0\" companies were started outside the usual startup hubs,\nbut two of them have already been reeled in through acquisitions.Such centralizing forces make it harder for new silicon valleys to\nget started. But by no means impossible. Ultimately power rests\nwith the founders. A startup with the best people will beat one\nwith funding from famous VCs, and a startup that was sufficiently\nsuccessful would never have to move. So a town that\ncould exert enough pull over the right people could resist and\nperhaps even surpass Silicon Valley.For all its power, Silicon Valley has a great weakness: the paradise\nShockley found in 1956 is now one giant parking lot. San Francisco\nand Berkeley are great, but they're forty miles away. Silicon\nValley proper is soul-crushing suburban sprawl. It\nhas fabulous weather, which makes it significantly better than the\nsoul-crushing sprawl of most other American cities. But a competitor\nthat managed to avoid sprawl would have real leverage. All a city\nneeds is to be the kind of place the next traitorous eight look at\nand say \"I want to stay here,\" and that would be enough to get the\nchain reaction started.Notes[1]\nIt's interesting to consider how low this number could be\nmade. I suspect five hundred would be enough, even if they could\nbring no assets with them. Probably just thirty, if I could pick them, \nwould be enough to turn Buffalo into a significant startup hub.[2]\nBureaucrats manage to allocate research funding moderately\nwell, but only because (like an in-house VC fund) they outsource\nmost of the work of selection. A professor at a famous university\nwho is highly regarded by his peers will get funding, pretty much\nregardless of the proposal.
That wouldn't work for startups, whose\nfounders aren't sponsored by organizations, and are often unknowns.[3]\nYou'd have to do it all at once, or at least a whole department\nat a time, because people would be more likely to come if they\nknew their friends were. And you should probably start from scratch,\nrather than trying to upgrade an existing university, or much energy\nwould be lost in friction.[4]\nHypothesis: Any plan in which multiple independent buildings\nare gutted or demolished to be \"redeveloped\" as a single project\nis a net loss of personality for the city, with the exception of\nthe conversion of buildings not previously public, like warehouses.[5]\nA few startups get started in New York, but less\nthan a tenth as many per capita as in Boston, and mostly\nin less nerdy fields like finance and media.[6]\nSome blue counties are false positives (reflecting the\nremaining power of Democratic party machines), but there are no\nfalse negatives. You can safely write off all the red counties.[7]\nSome \"urban renewal\" experts took a shot at destroying Boston's\nin the 1960s, leaving the area around city hall a bleak wasteland,\nbut most neighborhoods successfully resisted them.Thanks to Chris Anderson, Trevor Blackwell, Marc Hedlund,\nJessica Livingston, Robert Morris, Greg Mcadoo, Fred Wilson,\nand Stephen Wolfram for\nreading drafts of this, and to Ed Dumbill for inviting me to speak.(The second part of this talk became Why Startups\nCondense in America.)"} {"title": "corpdev", "text": "January 2015Corporate Development, aka corp dev, is the group within companies\nthat buys other companies. If you're talking to someone from corp\ndev, that's why, whether you realize it yet or not.It's usually a mistake to talk to corp dev unless (a) you want to\nsell your company right now and (b) you're sufficiently likely to\nget an offer at an acceptable price. In practice that means startups\nshould only talk to corp dev when they're either doing really well\nor really badly. If you're doing really badly, meaning the company\nis about to die, you may as well talk to them, because you have\nnothing to lose. And if you're doing really well, you can safely\ntalk to them, because you both know the price will have to be high,\nand if they show the slightest sign of wasting your time, you'll\nbe confident enough to tell them to get lost.The danger is to companies in the middle. Particularly to young\ncompanies that are growing fast, but haven't been doing it for long\nenough to have grown big yet. It's usually a mistake for a promising\ncompany less than a year old even to talk to corp dev.But it's a mistake founders constantly make. When someone from\ncorp dev wants to meet, the founders tell themselves they should\nat least find out what they want. Besides, they don't want to\noffend Big Company by refusing to meet.Well, I'll tell you what they want. They want to talk about buying\nyou. That's what the title \"corp dev\" means. So before agreeing\nto meet with someone from corp dev, ask yourselves, \"Do we want to\nsell the company right now?\" And if the answer is no, tell them\n\"Sorry, but we're focusing on growing the company.\" They won't be\noffended. And certainly the founders of Big Company won't be\noffended. If anything they'll think more highly of you. You'll\nremind them of themselves. They didn't sell either; that's why\nthey're in a position now to buy other companies.\n[1]Most founders who get contacted by corp dev already know what it\nmeans.
And yet even when they know what corp dev does and know\nthey don't want to sell, they take the meeting. Why do they do it?\nThe same mix of denial and wishful thinking that underlies most\nmistakes founders make. It's flattering to talk to someone who wants\nto buy you. And who knows, maybe their offer will be surprisingly\nhigh. You should at least see what it is, right?No. If they were going to send you an offer immediately by email,\nsure, you might as well open it. But that is not how conversations\nwith corp dev work. If you get an offer at all, it will be at the\nend of a long and unbelievably distracting process. And if the\noffer is surprising, it will be surprisingly low.Distractions are the thing you can least afford in a startup. And\nconversations with corp dev are the worst sort of distraction,\nbecause as well as consuming your attention they undermine your\nmorale. One of the tricks to surviving a grueling process is not\nto stop and think how tired you are. Instead you get into a sort\nof flow. \n[2]\nImagine what it would do to you if at mile 20 of a\nmarathon, someone ran up beside you and said \"You must feel really\ntired. Would you like to stop and take a rest?\" Conversations\nwith corp dev are like that but worse, because the suggestion of\nstopping gets combined in your mind with the imaginary high price\nyou think they'll offer.And then you're really in trouble. If they can, corp dev people\nlike to turn the tables on you. They like to get you to the point\nwhere you're trying to convince them to buy instead of them trying\nto convince you to sell. And surprisingly often they succeed.This is a very slippery slope, greased with some of the most powerful\nforces that can work on founders' minds, and attended by an experienced\nprofessional whose full time job is to push you down it.Their tactics in pushing you down that slope are usually fairly\nbrutal. Corp dev people's whole job is to buy companies, and they\ndon't even get to choose which. The only way their performance is\nmeasured is by how cheaply they can buy you, and the more ambitious\nones will stop at nothing to achieve that. For example, they'll\nalmost always start with a lowball offer, just to see if you'll\ntake it. Even if you don't, a low initial offer will demoralize you\nand make you easier to manipulate.And that is the most innocent of their tactics. Just wait till\nyou've agreed on a price and think you have a done deal, and then\nthey come back and say their boss has vetoed the deal and won't do\nit for more than half the agreed upon price. Happens all the time.\nIf you think investors can behave badly, it's nothing compared to\nwhat corp dev people can do. Even corp dev people at companies\nthat are otherwise benevolent.I remember once complaining to a\nfriend at Google about some nasty trick their corp dev people had\npulled on a YC startup.\"What happened to Don't be Evil?\" I asked.\"I don't think corp dev got the memo,\" he replied.The tactics you encounter in M&A conversations can be like nothing\nyou've experienced in the otherwise comparatively \nupstanding world\nof Silicon Valley. It's as if a chunk of genetic material from the\nold-fashioned robber baron business world got incorporated into the\nstartup world.\n[3]The simplest way to protect yourself is to use the trick that John\nD. Rockefeller, whose grandfather was an alcoholic, used to protect\nhimself from becoming one. He once told a Sunday school class\n\n Boys, do you know why I never became a drunkard? 
Because I never\n took the first drink.\n\nDo you want to sell your company right now? Not eventually, right\nnow. If not, just don't take the first meeting. They won't be\noffended. And you in turn will be guaranteed to be spared one of\nthe worst experiences that can happen to a startup.If you do want to sell, there's another set of \ntechniques\n for doing\nthat. But the biggest mistake founders make in dealing with corp\ndev is not doing a bad job of talking to them when they're ready\nto, but talking to them before they are. So if you remember only\nthe title of this essay, you already know most of what you need to\nknow about M&A in the first year.Notes[1]\nI'm not saying you should never sell. I'm saying you should\nbe clear in your own mind about whether you want to sell or not,\nand not be led by manipulation or wishful thinking into trying to\nsell earlier than you otherwise would have.[2]\nIn a startup, as in most competitive sports, the task at hand\nalmost does this for you; you're too busy to feel tired. But when\nyou lose that protection, e.g. at the final whistle, the fatigue\nhits you like a wave. To talk to corp dev is to let yourself feel\nit mid-game.[3]\nTo be fair, the apparent misdeeds of corp dev people are magnified\nby the fact that they function as the face of a large organization\nthat often doesn't know its own mind. Acquirers can be surprisingly\nindecisive about acquisitions, and their flakiness is indistinguishable\nfrom dishonesty by the time it filters down to you.Thanks to Marc Andreessen, Jessica Livingston, Geoff\nRalston, and Qasar Younis for reading drafts of this."} {"title": "langdes", "text": "May 2001\n\n(These are some notes I made\nfor a panel discussion on programming language design\nat MIT on May 10, 2001.)1. Programming Languages Are for People.Programming languages\nare how people talk to computers. The computer would be just as\nhappy speaking any language that was unambiguous. The reason we\nhave high level languages is because people can't deal with\nmachine language. The point of programming\nlanguages is to prevent our poor frail human brains from being \noverwhelmed by a mass of detail.Architects know that some kinds of design problems are more personal\nthan others. One of the cleanest, most abstract design problems\nis designing bridges. There your job is largely a matter of spanning\na given distance with the least material. The other end of the\nspectrum is designing chairs. Chair designers have to spend their\ntime thinking about human butts.Software varies in the same way. Designing algorithms for routing\ndata through a network is a nice, abstract problem, like designing\nbridges. Whereas designing programming languages is like designing\nchairs: it's all about dealing with human weaknesses.Most of us hate to acknowledge this. Designing systems of great\nmathematical elegance sounds a lot more appealing to most of us\nthan pandering to human weaknesses. And there is a role for mathematical\nelegance: some kinds of elegance make programs easier to understand.\nBut elegance is not an end in itself.And when I say languages have to be designed to suit human weaknesses,\nI don't mean that languages have to be designed for bad programmers.\nIn fact I think you ought to design for the \nbest programmers, but\neven the best programmers have limitations. I don't think anyone\nwould like programming in a language where all the variables were\nthe letter x with integer subscripts.2. 
Design for Yourself and Your Friends.If you look at the history of programming languages, a lot of the best\nones were languages designed for their own authors to use, and a\nlot of the worst ones were designed for other people to use.When languages are designed for other people, it's always a specific\ngroup of other people: people not as smart as the language designer.\nSo you get a language that talks down to you. Cobol is the most\nextreme case, but a lot of languages are pervaded by this spirit.It has nothing to do with how abstract the language is. C is pretty\nlow-level, but it was designed for its authors to use, and that's\nwhy hackers like it.The argument for designing languages for bad programmers is that\nthere are more bad programmers than good programmers. That may be\nso. But those few good programmers write a disproportionately\nlarge percentage of the software.I'm interested in the question, how do you design a language that\nthe very best hackers will like? I happen to think this is\nidentical to the question, how do you design a good programming\nlanguage?, but even if it isn't, it is at least an interesting\nquestion.3. Give the Programmer as Much Control as Possible.Many languages\n(especially the ones designed for other people) have the attitude\nof a governess: they try to prevent you from\ndoing things that they think aren't good for you. I like the \nopposite approach: give the programmer as much\ncontrol as you can.When I first learned Lisp, what I liked most about it was\nthat it considered me an equal partner. In the other languages\nI had learned up till then, there was the language and there was my \nprogram, written in the language, and the two were very separate.\nBut in Lisp the functions and macros I wrote were just like those\nthat made up the language itself. I could rewrite the language\nif I wanted. It had the same appeal as open-source software.4. Aim for Brevity.Brevity is underestimated and even scorned.\nBut if you look into the hearts of hackers, you'll see that they\nreally love it. How many times have you heard hackers speak fondly\nof how in, say, APL, they could do amazing things with just a couple\nlines of code? I think anything that really smart people really\nlove is worth paying attention to.I think almost anything\nyou can do to make programs shorter is good. There should be lots\nof library functions; anything that can be implicit should be;\nthe syntax should be terse to a fault; even the names of things\nshould be short.And it's not only programs that should be short. The manual should\nbe thin as well. A good part of manuals is taken up with clarifications\nand reservations and warnings and special cases. If you force \nyourself to shorten the manual, in the best case you do it by fixing\nthe things in the language that required so much explanation.5. Admit What Hacking Is.A lot of people wish that hacking was\nmathematics, or at least something like a natural science. I think\nhacking is more like architecture. Architecture is\nrelated to physics, in the sense that architects have to design\nbuildings that don't fall down, but the actual goal of architects\nis to make great buildings, not to make discoveries about statics.What hackers like to do is make great programs.\nAnd I think, at least in our own minds, we have to remember that it's\nan admirable thing to write great programs, even when this work \ndoesn't translate easily into the conventional intellectual\ncurrency of research papers. 
Intellectually, it is just as\nworthwhile to design a language programmers will love as it is to design a\nhorrible one that embodies some idea you can publish a paper\nabout.1. How to Organize Big Libraries?Libraries are becoming an\nincreasingly important component of programming languages. They're\nalso getting bigger, and this can be dangerous. If it takes longer\nto find the library function that will do what you want than it\nwould take to write it yourself, then all that code is doing nothing\nbut make your manual thick. (The Symbolics manuals were a case in \npoint.) So I think we will have to work on ways to organize\nlibraries. The ideal would be to design them so that the programmer\ncould guess what library call would do the right thing.2. Are People Really Scared of Prefix Syntax?This is an open\nproblem in the sense that I have wondered about it for years and\nstill don't know the answer. Prefix syntax seems perfectly natural\nto me, except possibly for math. But it could be that a lot of \nLisp's unpopularity is simply due to having an unfamiliar syntax. \nWhether to do anything about it, if it is true, is another question. \n\n3. What Do You Need for Server-Based Software?\n\nI think a lot of the most exciting new applications that get written\nin the next twenty years will be Web-based applications, meaning\nprograms that sit on the server and talk to you through a Web\nbrowser. And to write these kinds of programs we may need some\nnew things.One thing we'll need is support for the new way that server-based \napps get released. Instead of having one or two big releases a\nyear, like desktop software, server-based apps get released as a\nseries of small changes. You may have as many as five or ten\nreleases a day. And as a rule everyone will always use the latest\nversion.You know how you can design programs to be debuggable?\nWell, server-based software likewise has to be designed to be\nchangeable. You have to be able to change it easily, or at least\nto know what is a small change and what is a momentous one.Another thing that might turn out to be useful for server based\nsoftware, surprisingly, is continuations. In Web-based software\nyou can use something like continuation-passing style to get the\neffect of subroutines in the inherently \nstateless world of a Web\nsession. Maybe it would be worthwhile having actual continuations,\nif it was not too expensive.4. What New Abstractions Are Left to Discover?I'm not sure how\nreasonable a hope this is, but one thing I would really love to \ndo, personally, is discover a new abstraction-- something that would\nmake as much of a difference as having first class functions or\nrecursion or even keyword parameters. This may be an impossible\ndream. These things don't get discovered that often. But I am always\nlooking.1. You Can Use Whatever Language You Want.Writing application\nprograms used to mean writing desktop software. And in desktop\nsoftware there is a big bias toward writing the application in the\nsame language as the operating system. And so ten years ago,\nwriting software pretty much meant writing software in C.\nEventually a tradition evolved:\napplication programs must not be written in unusual languages. \nAnd this tradition had so long to develop that nontechnical people\nlike managers and venture capitalists also learned it.Server-based software blows away this whole model. With server-based\nsoftware you can use any language you want. 
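As an aside, the continuation point above is easy to sketch. In Common Lisp, a closure stored per session gives you a poor man's continuation; everything here is invented for illustration (show-page is a stand-in for whatever actually renders a response, and a real server would need an HTTP layer around all of it):

    ;; Each "page" stashes a closure to be run when the user's
    ;; next request arrives, giving subroutine-like flow on top
    ;; of stateless HTTP.
    (defvar *continuations* (make-hash-table :test #'equal))

    (defun show-page (text)            ; stand-in for real rendering
      (format t "~&~a~%" text))

    (defun ask (session question then)
      ;; render question now; run THEN on the next request
      (setf (gethash session *continuations*) then)
      (show-page question))

    (defun reply (session answer)      ; entry point for the next request
      (funcall (gethash session *continuations*) answer))

    (defun signup (session)
      (ask session "Your name?"
           (lambda (name)
             (ask session "Your email?"
                  (lambda (email)
                    (show-page
                     (format nil "Welcome, ~a <~a>." name email)))))))

Calling (signup "s1"), then (reply "s1" "Ada"), then (reply "s1" "ada@example.com") walks through the three pages as if signup were one ordinary subroutine, even though each step is a separate stateless request. With true first-class continuations the stashing would be implicit. And notice that all the brains stay on the server, in whatever language you picked.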
4. What New Abstractions Are Left to Discover?I'm not sure how\nreasonable a hope this is, but one thing I would really love to \ndo, personally, is discover a new abstraction-- something that would\nmake as much of a difference as having first class functions or\nrecursion or even keyword parameters. This may be an impossible\ndream. These things don't get discovered that often. But I am always\nlooking.1. You Can Use Whatever Language You Want.Writing application\nprograms used to mean writing desktop software. And in desktop\nsoftware there is a big bias toward writing the application in the\nsame language as the operating system. And so ten years ago,\nwriting software pretty much meant writing software in C.\nEventually a tradition evolved:\napplication programs must not be written in unusual languages. \nAnd this tradition had so long to develop that nontechnical people\nlike managers and venture capitalists also learned it.Server-based software blows away this whole model. With server-based\nsoftware you can use any language you want. Almost nobody understands\nthis yet (especially not managers and venture capitalists).\nA few hackers understand it, and that's why we even hear\nabout new, indy languages like Perl and Python. We're not hearing\nabout Perl and Python because people are using them to write Windows\napps.What this means for us, as people interested in designing programming\nlanguages, is that there is now potentially an actual audience for\nour work.2. Speed Comes from Profilers.Language designers, or at least\nlanguage implementors, like to write compilers that generate fast\ncode. But I don't think this is what makes languages fast for users.\nKnuth pointed out long ago that speed only matters in a few critical\nbottlenecks. And anyone who's tried it knows that you can't guess\nwhere these bottlenecks are. Profilers are the answer.Language designers are solving the wrong problem. Users don't need\nbenchmarks to run fast. What they need is a language that can show\nthem what parts of their own programs need to be rewritten. That's\nwhere speed comes from in practice. So maybe it would be a net \nwin if language implementors took half the time they would\nhave spent doing compiler optimizations and spent it writing a\ngood profiler instead.3. You Need an Application to Drive the Design of a Language.This may not be an absolute rule, but it seems like the best languages\nall evolved together with some application they were being used to\nwrite. C was written by people who needed it for systems programming.\nLisp was developed partly to do symbolic differentiation, and\nMcCarthy was so eager to get started that he was writing differentiation\nprograms even in the first paper on Lisp, in 1960.It's especially good if your application solves some new problem.\nThat will tend to drive your language to have new features that \nprogrammers need. I personally am interested in writing\na language that will be good for writing server-based applications.[During the panel, Guy Steele also made this point, with the\nadditional suggestion that the application should not consist of\nwriting the compiler for your language, unless your language\nhappens to be intended for writing compilers.]4. A Language Has to Be Good for Writing Throwaway Programs.You know what a throwaway program is: something you write quickly for\nsome limited task. I think if you looked around you'd find that \na lot of big, serious programs started as throwaway programs. I\nwould not be surprised if most programs started as throwaway\nprograms. And so if you want to make a language that's good for\nwriting software in general, it has to be good for writing throwaway\nprograms, because that is the larval stage of most software.5. Syntax Is Connected to Semantics.It's traditional to think of\nsyntax and semantics as being completely separate. This will\nsound shocking, but it may be that they aren't.\nI think that what you want in your language may be related\nto how you express it.I was talking recently to Robert Morris, and he pointed out that\noperator overloading is a bigger win in languages with infix\nsyntax. In a language with prefix syntax, any function you define\nis effectively an operator. If you want to define a plus for a\nnew type of number you've made up, you can just define a new function\nto add them. If you do that in a language with infix syntax,\nthere's a big difference in appearance between the use of an\noverloaded operator and a function call.
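Here is the point in miniature, in Common Lisp, with an invented interval type standing in for the new kind of number:

    ;; A new numeric type, and a plus for it.
    (defstruct interval lo hi)

    (defun interval+ (a b)
      (make-interval :lo (+ (interval-lo a) (interval-lo b))
                     :hi (+ (interval-hi a) (interval-hi b))))

    ;; With prefix syntax the built-in plus and the user-defined one
    ;; have exactly the same shape at the call site:
    (+ 1 2)
    (interval+ (make-interval :lo 0 :hi 1)
               (make-interval :lo 2 :hi 3))

In an infix language, 1 + 2 and interval_add(a, b) look nothing alike, which is why such languages need special overloading machinery to close the gap.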
1. New Programming Languages.Back in the 1970s\nit was fashionable to design new programming languages. Recently\nit hasn't been. But I think server-based software will make new \nlanguages fashionable again. With server-based software, you can\nuse any language you want, so if someone does design a language that\nactually seems better than others that are available, there will be\npeople who take a risk and use it.2. Time-Sharing.Richard Kelsey gave this as an idea whose time\nhas come again in the last panel, and I completely agree with him.\nMy guess (and Microsoft's guess, it seems) is that much computing\nwill move from the desktop onto remote servers. In other words, \ntime-sharing is back. And I think there will need to be support\nfor it at the language level. For example, I know that Richard\nand Jonathan Rees have done a lot of work implementing process \nscheduling within Scheme 48.3. Efficiency.Recently it was starting to seem that computers\nwere finally fast enough. More and more we were starting to hear\nabout byte code, which implies to me at least that we feel we have\ncycles to spare. But I don't think we will, with server-based\nsoftware. Someone is going to have to pay for the servers that\nthe software runs on, and the number of users they can support per\nmachine will be the divisor of their capital cost.So I think efficiency will matter, at least in computational\nbottlenecks. It will be especially important to do i/o fast,\nbecause server-based applications do a lot of i/o.It may turn out that byte code is not a win, in the end. Sun and\nMicrosoft seem to be facing off in a kind of a battle of the byte\ncodes at the moment. But they're doing it because byte code is a\nconvenient place to insert themselves into the process, not because\nbyte code is in itself a good idea. It may turn out that this\nwhole battleground gets bypassed. That would be kind of amusing.1. Clients.This is just a guess, but my guess is that\nthe winning model for most applications will be purely server-based.\nDesigning software that works on the assumption that everyone will \nhave your client is like designing a society on the assumption that\neveryone will just be honest. It would certainly be convenient, but\nyou have to assume it will never happen.I think there will be a proliferation of devices that have some\nkind of Web access, and all you'll be able to assume about them is\nthat they can support simple html and forms. Will you have a\nbrowser on your cell phone? Will there be a phone in your palm \npilot? Will your blackberry get a bigger screen? Will you be able\nto browse the Web on your gameboy? Your watch? I don't know. \nAnd I don't have to know if I bet on\neverything just being on the server. It's\njust so much more robust to have all the \nbrains on the server.2. Object-Oriented Programming.I realize this is a\ncontroversial one, but I don't think object-oriented programming\nis such a big deal. I think it is a fine model for certain kinds\nof applications that need that specific kind of data structure, \nlike window systems, simulations, and cad programs.
But I don't\nsee why it ought to be the model for all programming.I think part of the reason people in big companies like object-oriented\nprogramming is because it yields a lot of what looks like work.\nSomething that might naturally be represented as, say, a list of\nintegers, can now be represented as a class with all kinds of\nscaffolding and hustle and bustle.Another attraction of\nobject-oriented programming is that methods give you some of the\neffect of first class functions. But this is old news to Lisp\nprogrammers. When you have actual first class functions, you can\njust use them in whatever way is appropriate to the task at hand,\ninstead of forcing everything into a mold of classes and methods.
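For instance, in Common Lisp a one-off comparison or transformation is just a function you pass on the spot; there is no class to declare:

    ;; Pass functions where a class-based language would demand a
    ;; Comparator object or a one-method interface.
    (sort (list 3 1 2) #'<)                         ; => (1 2 3)
    (sort (list "bb" "a" "ccc") #'< :key #'length)  ; sort by length
    (mapcar (lambda (x) (* x x)) '(1 2 3))          ; => (1 4 9)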
What this means for language design, I think, is that you shouldn't\nbuild object-oriented programming in too deeply. Maybe the\nanswer is to offer more general, underlying stuff, and let people design\nwhatever object systems they want as libraries.3. Design by Committee.Having your language designed by a committee is a big pitfall, \nand not just for the reasons everyone knows about. Everyone\nknows that committees tend to yield lumpy, inconsistent designs. \nBut I think a greater danger is that they won't take risks.\nWhen one person is in charge he can take risks\nthat a committee would never agree on.Is it necessary to take risks to design a good language though?\nMany people might suspect\nthat language design is something where you should stick fairly\nclose to the conventional wisdom. I bet this isn't true.\nIn everything else people do, reward is proportionate to risk.\nWhy should language design be any different?"} {"title": "laundry", "text": "October 2004\nAs E. B. White said, \"good writing is rewriting.\" I didn't\nrealize this when I was in school. In writing, as in math and \nscience, they only show you the finished product.\nYou don't see all the false starts. This gives students a\nmisleading view of how things get made.Part of the reason it happens is that writers don't want \npeople to see their mistakes. But I'm willing to let people\nsee an early draft if it will show how much you have\nto rewrite to beat an essay into shape.Below is the oldest version I can find of\nThe Age of the Essay \n(probably the second or third day), with\ntext that ultimately survived in \nred and text that later\ngot deleted in gray.\nThere seem to be several categories of cuts: things I got wrong,\nthings that seem like bragging, flames,\ndigressions, stretches of awkward prose, and unnecessary words.I discarded more from the beginning. That's\nnot surprising; it takes a while to hit your stride. There\nare more digressions at the start, because I'm not sure where\nI'm heading.The amount of cutting is about average. I probably write\nthree to four words for every one that appears in the final\nversion of an essay.(Before anyone gets mad at me for opinions expressed here, remember\nthat anything you see here that's not in the final version is obviously\nsomething I chose not to publish, often because I disagree\nwith it.)\nRecently a friend said that what he liked about\nmy essays was that they weren't written the way\nwe'd been taught to write essays in school. You\nremember: topic sentence, introductory paragraph,\nsupporting paragraphs, conclusion. It hadn't\noccurred to me till then that those horrible things\nwe had to write in school were even connected to\nwhat I was doing now. But sure enough, I thought,\nthey did call them \"essays,\" didn't they?Well, they're not. Those things you have to write\nin school are not only not essays, they're one of the\nmost pointless of all the pointless hoops you have\nto jump through in school. And I worry that they\nnot only teach students the wrong things about writing,\nbut put them off writing entirely.So I'm going to give the other side of the story: what\nan essay really is, and how you write one. Or at least,\nhow I write one. Students be forewarned: if you actually write\nthe kind of essay I describe, you'll probably get bad\ngrades. But knowing how it's really done should\nat least help you to understand the feeling of futility\nyou have when you're writing the things they tell you to.\nThe most obvious difference between real essays and\nthe things one has to write in school is that real\nessays are not exclusively about English literature.\nIt's a fine thing for schools to\n\nteach students how to\nwrite. But for some bizarre reason (actually, a very specific bizarre\nreason that I'll explain in a moment),\n\nthe teaching of\nwriting has gotten mixed together with the study\nof literature. And so all over the country, students are\nwriting not about how a baseball team with a small budget \nmight compete with the Yankees, or the role of color in\nfashion, or what constitutes a good dessert, but about\nsymbolism in Dickens.With obvious \nresults. Only a few people really\n\ncare about\nsymbolism in Dickens. The teacher doesn't.\nThe students don't. Most of the people who've had to write PhD\ndissertations about Dickens don't. And certainly\n\nDickens himself would be more interested in an essay\nabout color or baseball.How did things get this way? To answer that we have to go back\nalmost a thousand years. Between about 500 and 1000, life was\nnot very good in Europe. The term \"dark ages\" is presently\nout of fashion as too judgemental (the period wasn't dark; \nit was just different), but if this label didn't already\nexist, it would seem an inspired metaphor. What little\noriginal thought there was took place in lulls between\nconstant wars and had something of the character of\nthe thoughts of parents with a new baby.\nThe most amusing thing written during this\nperiod, Liudprand of Cremona's Embassy to Constantinople, is,\nI suspect, mostly inadvertently so.Around 1000 Europe began to catch its breath.\nAnd once they\nhad the luxury of curiosity, one of the first things they discovered\nwas what we call \"the classics.\"\nImagine if we were visited \nby aliens. If they could even get here they'd presumably know a\nfew things we don't. Immediately Alien Studies would become\nthe most dynamic field of scholarship: instead of painstakingly\ndiscovering things for ourselves, we could simply suck up\neverything they'd discovered. So it was in Europe in 1200.\nWhen classical texts began to circulate in Europe, they contained\nnot just new answers, but new questions. (If anyone proved\na theorem in christian Europe before 1200, for example, there\nis no record of it.)For a couple centuries, some of the most important work\nbeing done was intellectual archaeology. Those were also\nthe centuries during which schools were first established.\nAnd since reading ancient texts was the essence of what\nscholars did then, it became the basis of the curriculum.By 1700, someone who wanted to learn about\nphysics didn't need to start by mastering Greek in order to read Aristotle. But schools\nchange slower than scholarship: the study of\nancient texts\nhad such prestige that it remained the backbone of \neducation\nuntil the late 19th century. By then it was merely a tradition.\nIt did serve some purposes: reading a foreign language was difficult,\nand thus taught discipline, or at least, kept students busy;\nit introduced students to\ncultures quite different from their own; and its very uselessness\nmade it function (like white gloves) as a social bulwark.\nBut it certainly wasn't\ntrue, and hadn't been true for centuries, that students were\nserving apprenticeships in the hottest area of scholarship.Classical scholarship had also changed. In the early era, philology\nactually mattered. The texts that filtered into Europe were\nall corrupted to some degree by the errors of translators and\ncopyists. Scholars had to figure out what Aristotle said\nbefore they could figure out what he meant. But by the modern\nera such questions were answered as well as they were ever\ngoing to be. And so the study of ancient texts became less\nabout ancientness and more about texts.The time was then ripe for the question: if the study of\nancient texts is a valid field for scholarship, why not modern\ntexts? The answer, of course, is that the raison d'etre\nof classical scholarship was a kind of intellectual archaeology that\ndoes not need to be done in the case of contemporary authors.\nBut for obvious reasons no one wanted to give that answer.\nThe archaeological work being mostly done, it implied that\nthe people studying the classics were, if not wasting their\ntime, at least working on problems of minor importance.And so began the study of modern literature. There was some\ninitial resistance, but it didn't last long.\nThe limiting\nreagent in the growth of university departments is what\nparents will let undergraduates study. If parents will let\ntheir children major in x, the rest follows straightforwardly.\nThere will be jobs teaching x, and professors to fill them.\nThe professors will establish scholarly journals and publish\none another's papers. Universities with x departments will\nsubscribe to the journals. Graduate students who want jobs\nas professors of x will write dissertations about it. It may\ntake a good long while for the more prestigious universities\nto cave in and establish departments in cheesier xes, but\nat the other end of the scale there are so many universities\ncompeting to attract students that the mere establishment of\na discipline requires little more than the desire to do it.High schools imitate universities.\nAnd so once university\nEnglish departments were established in the late nineteenth century,\nthe 'riting component of the 3 Rs \nwas morphed into English.\nWith the bizarre consequence that high school students now\nhad to write about English literature-- to write, without\neven realizing it, imitations of whatever\nEnglish professors had been publishing in their journals a\nfew decades before. It's no wonder if this seems to the\nstudent a pointless exercise, because we're now three steps\nremoved from real work: the students are imitating English\nprofessors, who are imitating classical scholars, who are\nmerely the inheritors of a tradition growing out of what\nwas, 700 years ago, fascinating and urgently needed work.Perhaps high schools should drop English and just teach writing.\nThe valuable part of English classes is learning to write, and\nthat could be taught better by itself. Students learn better\nwhen they're interested in what they're doing, and it's hard\nto imagine a topic less interesting than symbolism in Dickens.\nMost of the people who write about that sort of thing professionally\nare not really interested in it. (Though indeed, it's been a\nwhile since they were writing about symbolism; now they're\nwriting about gender.)I have no illusions about how eagerly this suggestion will \nbe adopted. Public schools probably couldn't stop teaching\nEnglish even if they wanted to; they're probably required to by\nlaw. But here's a related suggestion that goes with the grain\ninstead of against it: that universities establish a\nwriting major. Many of the students who now major in English\nwould major in writing if they could, and most would\nbe better off.It will be argued that it is a good thing for students to be\nexposed to their literary heritage. Certainly. But is that\nmore important than that they learn to write well? And are\nEnglish classes even the place to do it? After all,\nthe average public high school student gets zero exposure to \nhis artistic heritage. No disaster results.\nThe people who are interested in art learn about it for\nthemselves, and those who aren't don't. I find that American\nadults are no better or worse informed about literature than\nart, despite the fact that they spent years studying literature\nin high school and no time at all studying art. Which presumably\nmeans that what they're taught in school is rounding error \ncompared to what they pick up on their own.Indeed, English classes may even be harmful. In my case they\nwere effectively aversion therapy. Want to make someone dislike\na book? Force him to read it and write an essay about it.\nAnd make the topic so intellectually bogus that you\ncould not, if asked, explain why one ought to write about it.\nI love to read more than anything, but by the end of high school\nI never read the books we were assigned. I was so disgusted with\nwhat we were doing that it became a point of honor\nwith me to write nonsense at least as good as the other students'\nwithout having more than glanced over the book to learn the names\nof the characters and a few random events in it.I hoped this might be fixed in college, but I found the same\nproblem there. It was not the teachers. It was English. \nWe were supposed to read novels and write essays about them.\nAbout what, and why? That no one seemed to be able to explain.\nEventually by trial and error I found that what the teacher \nwanted us to do was pretend that the story had really taken\nplace, and to analyze based on what the characters said and did (the\nsubtler clues, the better) what their motives must have been.\nOne got extra credit for motives having to do with class,\nas I suspect one must now for those involving gender and \nsexuality. I learned how to churn out such stuff well enough\nto get an A, but I never took another English class.And the books we did these disgusting things to, like those\nwe mishandled in high school, I find still have black marks\nagainst them in my mind. The one saving grace was that \nEnglish courses tend to favor pompous, dull writers like\nHenry James, who deserve black marks against their names anyway.\nOne of the principles the IRS uses in deciding whether to\nallow deductions is that, if something is fun, it isn't work.\nFields that are intellectually unsure of themselves rely on\na similar principle. Reading P.G. Wodehouse or Evelyn Waugh or\nRaymond Chandler is too obviously pleasing to seem like\nserious work, as reading Shakespeare would have been before \nEnglish evolved enough to make it an effort to understand him. [sh]\nAnd so good writers (just you wait and see who's still in\nprint in 300 years) are less likely to have readers turned \nagainst them by clumsy, self-appointed tour guides.\nThe other big difference between a real essay and the \nthings\nthey make you write in school is that a real essay doesn't \ntake a position and then defend it. That principle,\nlike the idea that we ought to be writing about literature, \nturns out to be another intellectual hangover of long\nforgotten origins. It's often mistakenly believed that\nmedieval universities were mostly seminaries. In fact they\nwere more law schools. And at least in our tradition\nlawyers are advocates: they are\ntrained to be able to\ntake\neither side of an argument and make as good a case for it \nas they can. Whether or not this is a good idea (in the case of prosecutors,\nit probably isn't), it tended to pervade\nthe atmosphere of\nearly universities. After the lecture the most common form\nof discussion was the disputation. This idea\nis at least\nnominally preserved in our present-day thesis defense-- indeed,\nin the very word thesis. Most people treat the words \nthesis\nand dissertation as interchangeable, but originally, at least,\na thesis was a position one took and the dissertation was\nthe argument by which one defended it.I'm not complaining that we blur these two words together.\nAs far as I'm concerned, the sooner we lose the original\nsense of the word thesis, the better. For many, perhaps most, \ngraduate students, it is stuffing a square peg into a round\nhole to try to recast one's work as a single thesis. And\nas for the disputation, that seems clearly a net lose.\nArguing two sides of a case may be a necessary evil in a\nlegal dispute, but it's not the best way to get at the truth,\nas I think lawyers would be the first to admit.\nAnd yet this principle is built into the very structure of \nthe essays\nthey teach you to write in high school. The topic\nsentence is your thesis, chosen in advance, the supporting \nparagraphs the blows you strike in the conflict, and the\nconclusion--- uh, what is the conclusion? I was never sure \nabout that in high school. If your thesis was well expressed,\nwhat need was there to restate it? In theory it seemed that\nthe conclusion of a really good essay ought not to need to \nsay any more than QED.\nBut when you understand the origins\nof this sort of \"essay\", you can see where the\nconclusion comes from. It's the concluding remarks to the \njury.\nWhat other alternative is there? To answer that\nwe have to\nreach back into history again, though this time not so far.\nTo Michel de Montaigne, inventor of the essay.\nHe was\ndoing something quite different from what a\nlawyer does,\nand\nthe difference is embodied in the name. Essayer is the French\nverb meaning \"to try\" (the cousin of our word assay),\n\nand an \"essai\" is an effort.\nAn essay is something you\nwrite in order\nto figure something out.Figure out what? You don't know yet. And so you can't begin with a\nthesis, because you don't have one, and may never have \none. An essay doesn't begin with a statement, but with a \nquestion. In a real essay, you don't take a position and\ndefend it.
You see a door that's ajar, and you open it and\nwalk in to see what's inside.If all you want to do is figure things out, why do you need\nto write anything, though? Why not just sit and think? Well,\nthere precisely is Montaigne's great discovery. Expressing\nideas helps to form them. Indeed, helps is far too weak a\nword. 90%\nof what ends up in my essays was stuff\nI only\nthought of when I sat down to write them. That's why I\nwrite them.So there's another difference between essays and\nthe things\nyou have to write in school. In school\n\nyou are, in theory,\nexplaining yourself to someone else. In the best case---if\nyou're really organized---you're just writing it down.\nIn a real essay you're writing for yourself. You're\nthinking out loud.But not quite. Just as inviting people over forces you to\nclean up your apartment, writing something that you know\n\nother people will read forces you to think well. So it\ndoes matter to have an audience. The things I've written\njust for myself are no good. Indeed, they're bad in\na particular way:\nthey tend to peter out. When I run into\ndifficulties, I notice that I\ntend to conclude with a few vague\nquestions and then drift off to get a cup of tea.This seems a common problem.\nIt's practically the standard\nending in blog entries--- with the addition of a \"heh\" or an \nemoticon, prompted by the all too accurate sense that\nsomething is missing.And indeed, a lot of\npublished essays peter out in this\nsame way.\nParticularly the sort written by the staff writers of newsmagazines. Outside writers tend to supply\neditorials of the defend-a-position variety, which\nmake a beeline toward a rousing (and\nforeordained) conclusion. But the staff writers feel\nobliged to write something more\nbalanced, which in\npractice ends up meaning blurry.\nSince they're\nwriting for a popular magazine, they start with the\nmost radioactively controversial questions, from which\n(because they're writing for a popular magazine)\nthey then proceed to recoil from\nin terror.\nGay marriage, for or\nagainst? This group says one thing. That group says\nanother. One thing is certain: the question is a\ncomplex one. (But don't get mad at us. We didn't\ndraw any conclusions.)Questions aren't enough. An essay has to come up with answers.\nThey don't always, of course. Sometimes you start with a \npromising question and get nowhere. But those you don't\npublish. Those are like experiments that get inconclusive\nresults. Something you publish ought to tell the reader \nsomething he didn't already know.\nBut what you tell him doesn't matter, so long as \nit's interesting. I'm sometimes accused of meandering.\nIn defend-a-position writing that would be a flaw.\nThere you're not concerned with truth. You already\nknow where you're going, and you want to go straight there,\nblustering through obstacles, and hand-waving\nyour way across swampy ground. But that's not what\nyou're trying to do in an essay. An essay is supposed to\nbe a search for truth. It would be suspicious if it didn't\nmeander.The Meander is a river in Asia Minor (aka\nTurkey).\nAs you might expect, it winds all over the place.\nBut does it\ndo this out of frivolity? Quite the opposite.\nLike all rivers, it's rigorously following the laws of physics.\nThe path it has discovered,\nwinding as it is, represents\nthe most economical route to the sea.The river's algorithm is simple. 
At each step, flow down.\nFor the essayist this translates to: flow interesting.\nOf all the places to go next, choose\nwhichever seems\nmost interesting.I'm pushing this metaphor a bit. An essayist\ncan't have\nquite as little foresight as a river. In fact what you do\n(or what I do) is somewhere between a river and a Roman\nroad-builder. I have a general idea of the direction\nI want to go in, and\nI choose the next topic with that in mind. This essay is\nabout writing, so I do occasionally yank it back in that\ndirection, but it is not at all the sort of essay I\nthought I was going to write about writing.Note too that hill-climbing (which is what this algorithm is\ncalled) can get you in trouble.\nSometimes, just\nlike a river,\nyou\nrun up against a blank wall. What\nI do then is just \nwhat the river does: backtrack.\nAt one point in this essay\nI found that after following a certain thread I ran out\nof ideas. I had to go back n\nparagraphs and start over\nin another direction. For illustrative purposes I've left\nthe abandoned branch as a footnote.\nErr on the side of the river. An essay is not a reference\nwork. It's not something you read looking for a specific\nanswer, and feel cheated if you don't find it. I'd much\nrather read an essay that went off in an unexpected but\ninteresting direction than one that plodded dutifully along\na prescribed course.So what's interesting? For me, interesting means surprise.\nDesign, as Matz\nhas said, should follow the principle of\nleast surprise.\nA button that looks like it will make a\nmachine stop should make it stop, not speed up. Essays\nshould do the opposite. Essays should aim for maximum\nsurprise.I was afraid of flying for a long time and could only travel\nvicariously. When friends came back from faraway places,\nit wasn't just out of politeness that I asked them about\ntheir trip.\nI really wanted to know. And I found that\nthe best way to get information out of them was to ask\nwhat surprised them. How was the place different from what\nthey expected? This is an extremely useful question.\nYou can ask it of even\nthe most unobservant people, and it will\nextract information they didn't even know they were\nrecording. Indeed, you can ask it in real time. Now when I go somewhere\nnew, I make a note of what surprises me about it. Sometimes I\neven make a conscious effort to visualize the place beforehand,\nso I'll have a detailed image to diff with reality.\nSurprises are facts\nyou didn't already \nknow.\nBut they're\nmore than that. They're facts\nthat contradict things you\nthought you knew. And so they're the most valuable sort of\nfact you can get. They're like a food that's not merely\nhealthy, but counteracts the unhealthy effects of things\nyou've already eaten.\nHow do you find surprises? Well, therein lies half\nthe work of essay writing. (The other half is expressing\nyourself well.) You can at least\nuse yourself as a\nproxy for the reader. You should only write about things\nyou've thought about a lot. And anything you come across\nthat surprises you, who've thought about the topic a lot,\nwill probably surprise most readers.For example, in a recent essay I pointed out that because\nyou can only judge computer programmers by working with\nthem, no one knows in programming who the heroes should\nbe.\nI\ncertainly\ndidn't realize this when I started writing\nthe \nessay, and even now I find it kind of weird. That's\nwhat you're looking for.So if you want to write essays, you need two ingredients:\nyou need\na few topics that you think about a lot, and you\nneed some ability to ferret out the unexpected.What should you think about? My guess is that it\ndoesn't matter. Almost everything is\ninteresting if you get deeply\nenough into it. The one possible exception\nis\nthings\nlike working in fast food, which\nhave deliberately had all\nthe variation sucked out of them.\nIn retrospect, was there\nanything interesting about working in Baskin-Robbins?\nWell, it was interesting to notice\nhow important color was\nto the customers. Kids a certain age would point into\nthe case and say that they wanted yellow. Did they want\nFrench Vanilla or Lemon? They would just look at you\nblankly. They wanted yellow. And then there was the\nmystery of why the perennial favorite Pralines n' Cream\nwas so appealing. I'm inclined now to\nthink it was the salt.\nAnd the mystery of why Passion Fruit tasted so disgusting.\nPeople would order it because of the name, and were always\ndisappointed. It should have been called In-sink-erator\nFruit.\nAnd there was\nthe difference in the way fathers and\nmothers bought ice cream for their kids.\nFathers tended to\nadopt the attitude of\nbenevolent kings bestowing largesse,\nand mothers that of\nharried bureaucrats,\ngiving in to\npressure against their better judgement.\nSo, yes, there does seem to be material, even in\nfast food.What about the other half, ferreting out the unexpected?\nThat may require some natural ability. I've noticed for\na long time that I'm pathologically observant. ....[That was as far as I'd gotten at the time.]Notes[sh] In Shakespeare's own time, serious writing meant theological\ndiscourses, not the bawdy plays acted over on the other \nside of the river among the bear gardens and whorehouses.The other extreme, the work that seems formidable from the moment\nit's created (indeed, is deliberately intended to be)\nis represented by Milton. Like the Aeneid, Paradise Lost is a\nrock imitating a butterfly that happened to get fossilized.\nEven Samuel Johnson seems to have balked at this, on the one \nhand paying Milton the compliment of an extensive biography,\nand on the other writing of Paradise Lost that \"none who read it\never wished it longer.\""} {"title": "love", "text": "January 2006To do something well you have to like it. That idea is not exactly\nnovel. We've got it down to four words: \"Do what you love.\" But\nit's not enough just to tell people that. Doing what you love is\ncomplicated.The very idea is foreign to what most of us learn as kids. When I\nwas a kid, it seemed as if work and fun were opposites by definition.\nLife had two states: some of the time adults were making you do\nthings, and that was called work; the rest of the time you could\ndo what you wanted, and that was called playing. Occasionally the\nthings adults made you do were fun, just as, occasionally, playing\nwasn't\u2014for example, if you fell and hurt yourself. But except\nfor these few anomalous cases, work was pretty much defined as\nnot-fun.And it did not seem to be an accident. School, it was implied, was\ntedious because it was preparation for grownup work.The world then was divided into two groups, grownups and kids.\nGrownups, like some kind of cursed race, had to work.
Kids didn't,\nbut they did have to go to school, which was a dilute version of\nwork meant to prepare us for the real thing. Much as we disliked\nschool, the grownups all agreed that grownup work was worse, and\nthat we had it easy.Teachers in particular all seemed to believe implicitly that work\nwas not fun. Which is not surprising: work wasn't fun for most of\nthem. Why did we have to memorize state capitals instead of playing\ndodgeball? For the same reason they had to watch over a bunch of\nkids instead of lying on a beach. You couldn't just do what you\nwanted.I'm not saying we should let little kids do whatever they want.\nThey may have to be made to work on certain things. But if we make\nkids work on dull stuff, it might be wise to tell them that tediousness\nis not the defining quality of work, and indeed that the reason\nthey have to work on dull stuff now is so they can work on more\ninteresting stuff later.\n[1]Once, when I was about 9 or 10, my father told me I could be whatever\nI wanted when I grew up, so long as I enjoyed it. I remember that\nprecisely because it seemed so anomalous. It was like being told\nto use dry water. Whatever I thought he meant, I didn't think he\nmeant work could literally be fun\u2014fun like playing. It\ntook me years to grasp that.JobsBy high school, the prospect of an actual job was on the horizon.\nAdults would sometimes come to speak to us about their work, or we\nwould go to see them at work. It was always understood that they\nenjoyed what they did. In retrospect I think one may have: the\nprivate jet pilot. But I don't think the bank manager really did.The main reason they all acted as if they enjoyed their work was\npresumably the upper-middle class convention that you're supposed\nto. It would not merely be bad for your career to say that you\ndespised your job, but a social faux-pas.Why is it conventional to pretend to like what you do? The first\nsentence of this essay explains that. If you have to like something\nto do it well, then the most successful people will all like what\nthey do. That's where the upper-middle class tradition comes from.\nJust as houses all over America are full of \nchairs\nthat are, without\nthe owners even knowing it, nth-degree imitations of chairs designed\n250 years ago for French kings, conventional attitudes about work\nare, without the owners even knowing it, nth-degree imitations of\nthe attitudes of people who've done great things.What a recipe for alienation. By the time they reach an age to\nthink about what they'd like to do, most kids have been thoroughly\nmisled about the idea of loving one's work. School has trained\nthem to regard work as an unpleasant duty. Having a job is said\nto be even more onerous than schoolwork. And yet all the adults\nclaim to like what they do. You can't blame kids for thinking \"I\nam not like these people; I am not suited to this world.\"Actually they've been told three lies: the stuff they've been taught\nto regard as work in school is not real work; grownup work is not\n(necessarily) worse than schoolwork; and many of the adults around\nthem are lying when they say they like what they do.The most dangerous liars can be the kids' own parents. If you take\na boring job to give your family a high standard of living, as so\nmany people do, you risk infecting your kids with the idea that\nwork is boring. \n[2]\nMaybe it would be better for kids in this one\ncase if parents were not so unselfish. 
A parent who set an example\nof loving their work might help their kids more than an expensive\nhouse.\n[3]It was not till I was in college that the idea of work finally broke\nfree from the idea of making a living. Then the important question\nbecame not how to make money, but what to work on. Ideally these\ncoincided, but some spectacular boundary cases (like Einstein in\nthe patent office) proved they weren't identical.The definition of work was now to make some original contribution\nto the world, and in the process not to starve. But after the habit\nof so many years my idea of work still included a large component\nof pain. Work still seemed to require discipline, because only\nhard problems yielded grand results, and hard problems couldn't\nliterally be fun. Surely one had to force oneself to work on them.If you think something's supposed to hurt, you're less likely to\nnotice if you're doing it wrong. That about sums up my experience\nof graduate school.BoundsHow much are you supposed to like what you do? Unless you\nknow that, you don't know when to stop searching. And if, like most\npeople, you underestimate it, you'll tend to stop searching too\nearly. You'll end up doing something chosen for you by your parents,\nor the desire to make money, or prestige\u2014or sheer inertia.Here's an upper bound: Do what you love doesn't mean, do what you\nwould like to do most this second. Even Einstein probably\nhad moments when he wanted to have a cup of coffee, but told himself\nhe ought to finish what he was working on first.It used to perplex me when I read about people who liked what they\ndid so much that there was nothing they'd rather do. There didn't\nseem to be any sort of work I liked that much. If I had a\nchoice of (a) spending the next hour working on something or (b)\nbe teleported to Rome and spend the next hour wandering about, was\nthere any sort of work I'd prefer? Honestly, no.But the fact is, almost anyone would rather, at any given moment,\nfloat about in the Caribbean, or have sex, or eat some delicious\nfood, than work on hard problems. The rule about doing what you\nlove assumes a certain length of time. It doesn't mean, do what\nwill make you happiest this second, but what will make you happiest\nover some longer period, like a week or a month.Unproductive pleasures pall eventually. After a while you get tired\nof lying on the beach. If you want to stay happy, you have to do\nsomething.As a lower bound, you have to like your work more than any unproductive\npleasure. You have to like what you do enough that the concept of\n\"spare time\" seems mistaken. Which is not to say you have to spend\nall your time working. You can only work so much before you get\ntired and start to screw up. Then you want to do something else\u2014even something mindless. But you don't regard this time as the\nprize and the time you spend working as the pain you endure to earn\nit.I put the lower bound there for practical reasons. If your work\nis not your favorite thing to do, you'll have terrible problems\nwith procrastination. You'll have to force yourself to work, and\nwhen you resort to that the results are distinctly inferior.To be happy I think you have to be doing something you not only\nenjoy, but admire. You have to be able to say, at the end, wow,\nthat's pretty cool. This doesn't mean you have to make something.\nIf you learn how to hang glide, or to speak a foreign language\nfluently, that will be enough to make you say, for a while at least,\nwow, that's pretty cool.
What there has to be is a test.So one thing that falls just short of the standard, I think, is\nreading books. Except for some books in math and the hard sciences,\nthere's no test of how well you've read a book, and that's why\nmerely reading books doesn't quite feel like work. You have to do\nsomething with what you've read to feel productive.I think the best test is one Gino Lee taught me: to try to do things\nthat would make your friends say wow. But it probably wouldn't\nstart to work properly till about age 22, because most people haven't\nhad a big enough sample to pick friends from before then.SirensWhat you should not do, I think, is worry about the opinion of\nanyone beyond your friends. You shouldn't worry about prestige.\nPrestige is the opinion of the rest of the world. When you can ask\nthe opinions of people whose judgement you respect, what does it\nadd to consider the opinions of people you don't even know? \n[4]This is easy advice to give. It's hard to follow, especially when\nyou're young. \n[5]\nPrestige is like a powerful magnet that warps\neven your beliefs about what you enjoy. It causes you to work not\non what you like, but what you'd like to like.That's what leads people to try to write novels, for example. They\nlike reading novels. They notice that people who write them win\nNobel prizes. What could be more wonderful, they think, than to\nbe a novelist? But liking the idea of being a novelist is not\nenough; you have to like the actual work of novel-writing if you're\ngoing to be good at it; you have to like making up elaborate lies.Prestige is just fossilized inspiration. If you do anything well\nenough, you'll make it prestigious. Plenty of things we now\nconsider prestigious were anything but at first. Jazz comes to\nmind\u2014though almost any established art form would do. So just\ndo what you like, and let prestige take care of itself.Prestige is especially dangerous to the ambitious. If you want to\nmake ambitious people waste their time on errands, the way to do\nit is to bait the hook with prestige. That's the recipe for getting\npeople to give talks, write forewords, serve on committees, be\ndepartment heads, and so on. It might be a good rule simply to\navoid any prestigious task. If it didn't suck, they wouldn't have\nhad to make it prestigious.Similarly, if you admire two kinds of work equally, but one is more\nprestigious, you should probably choose the other. Your opinions\nabout what's admirable are always going to be slightly influenced\nby prestige, so if the two seem equal to you, you probably have\nmore genuine admiration for the less prestigious one.The other big force leading people astray is money. Money by itself\nis not that dangerous. When something pays well but is regarded\nwith contempt, like telemarketing, or prostitution, or personal\ninjury litigation, ambitious people aren't tempted by it. That\nkind of work ends up being done by people who are \"just trying to\nmake a living.\" (Tip: avoid any field whose practitioners say\nthis.) The danger is when money is combined with prestige, as in,\nsay, corporate law, or medicine. A comparatively safe and prosperous\ncareer with some automatic baseline prestige is dangerously tempting\nto someone young, who hasn't thought much about what they really\nlike.The test of whether people love what they do is whether they'd do\nit even if they weren't paid for it\u2014even if they had to work at\nanother job to make a living. 
How many corporate lawyers would do\ntheir current work if they had to do it for free, in their spare\ntime, and take day jobs as waiters to support themselves?This test is especially helpful in deciding between different kinds\nof academic work, because fields vary greatly in this respect. Most\ngood mathematicians would work on math even if there were no jobs\nas math professors, whereas in the departments at the other end of\nthe spectrum, the availability of teaching jobs is the driver:\npeople would rather be English professors than work in ad agencies,\nand publishing papers is the way you compete for such jobs. Math\nwould happen without math departments, but it is the existence of\nEnglish majors, and therefore jobs teaching them, that calls into\nbeing all those thousands of dreary papers about gender and identity\nin the novels of Conrad. No one does \nthat \nkind of thing for fun.The advice of parents will tend to err on the side of money. It\nseems safe to say there are more undergrads who want to be novelists\nand whose parents want them to be doctors than who want to be doctors\nand whose parents want them to be novelists. The kids think their\nparents are \"materialistic.\" Not necessarily. All parents tend to\nbe more conservative for their kids than they would for themselves,\nsimply because, as parents, they share risks more than rewards. If\nyour eight year old son decides to climb a tall tree, or your teenage\ndaughter decides to date the local bad boy, you won't get a share\nin the excitement, but if your son falls, or your daughter gets\npregnant, you'll have to deal with the consequences.DisciplineWith such powerful forces leading us astray, it's not surprising\nwe find it so hard to discover what we like to work on. Most people\nare doomed in childhood by accepting the axiom that work = pain.\nThose who escape this are nearly all lured onto the rocks by prestige\nor money. How many even discover something they love to work on?\nA few hundred thousand, perhaps, out of billions.It's hard to find work you love; it must be, if so few do. So don't\nunderestimate this task. And don't feel bad if you haven't succeeded\nyet. In fact, if you admit to yourself that you're discontented,\nyou're a step ahead of most people, who are still in denial. If\nyou're surrounded by colleagues who claim to enjoy work that you\nfind contemptible, odds are they're lying to themselves. Not\nnecessarily, but probably.Although doing great work takes less discipline than people think\u2014because the way to do great work is to find something you like so\nmuch that you don't have to force yourself to do it\u2014finding\nwork you love does usually require discipline. Some people are\nlucky enough to know what they want to do when they're 12, and just\nglide along as if they were on railroad tracks. But this seems the\nexception. More often people who do great things have careers with\nthe trajectory of a ping-pong ball. They go to school to study A,\ndrop out and get a job doing B, and then become famous for C after\ntaking it up on the side.Sometimes jumping from one sort of work to another is a sign of\nenergy, and sometimes it's a sign of laziness. Are you dropping\nout, or boldly carving a new path? You often can't tell yourself.\nPlenty of people who will later do great things seem to be disappointments\nearly on, when they're trying to find their niche.Is there some test you can use to keep yourself honest? 
One is to\ntry to do a good job at whatever you're doing, even if you don't\nlike it. Then at least you'll know you're not using dissatisfaction\nas an excuse for being lazy. Perhaps more importantly, you'll get\ninto the habit of doing things well.Another test you can use is: always produce. For example, if you\nhave a day job you don't take seriously because you plan to be a\nnovelist, are you producing? Are you writing pages of fiction,\nhowever bad? As long as you're producing, you'll know you're not\nmerely using the hazy vision of the grand novel you plan to write\none day as an opiate. The view of it will be obstructed by the all\ntoo palpably flawed one you're actually writing.\"Always produce\" is also a heuristic for finding the work you love.\nIf you subject yourself to that constraint, it will automatically\npush you away from things you think you're supposed to work on,\ntoward things you actually like. \"Always produce\" will discover\nyour life's work the way water, with the aid of gravity, finds the\nhole in your roof.Of course, figuring out what you like to work on doesn't mean you\nget to work on it. That's a separate question. And if you're\nambitious you have to keep them separate: you have to make a conscious\neffort to keep your ideas about what you want from being contaminated\nby what seems possible. \n[6]It's painful to keep them apart, because it's painful to observe\nthe gap between them. So most people pre-emptively lower their\nexpectations. For example, if you asked random people on the street\nif they'd like to be able to draw like Leonardo, you'd find most\nwould say something like \"Oh, I can't draw.\" This is more a statement\nof intention than fact; it means, I'm not going to try. Because\nthe fact is, if you took a random person off the street and somehow\ngot them to work as hard as they possibly could at drawing for the\nnext twenty years, they'd get surprisingly far. But it would require\na great moral effort; it would mean staring failure in the eye every\nday for years. And so to protect themselves people say \"I can't.\"Another related line you often hear is that not everyone can do\nwork they love\u2014that someone has to do the unpleasant jobs. Really?\nHow do you make them? In the US the only mechanism for forcing\npeople to do unpleasant jobs is the draft, and that hasn't been\ninvoked for over 30 years. All we can do is encourage people to\ndo unpleasant work, with money and prestige.If there's something people still won't do, it seems as if society\njust has to make do without. That's what happened with domestic\nservants. For millennia that was the canonical example of a job\n\"someone had to do.\" And yet in the mid twentieth century servants\npractically disappeared in rich countries, and the rich have just\nhad to do without.So while there may be some things someone has to do, there's a good\nchance anyone saying that about any particular job is mistaken.\nMost unpleasant jobs would either get automated or go undone if no\none were willing to do them.Two RoutesThere's another sense of \"not everyone can do work they love\"\nthat's all too true, however. One has to make a living, and it's\nhard to get paid for doing work you love. 
There are two routes to\nthat destination:\n\n The organic route: as you become more eminent, gradually to\n increase the parts of your job that you like at the expense of\n those you don't.The two-job route: to work at things you don't like to get money\n to work on things you do.\n\nThe organic route is more common. It happens naturally to anyone\nwho does good work. A young architect has to take whatever work\nhe can get, but if he does well he'll gradually be in a position\nto pick and choose among projects. The disadvantage of this route\nis that it's slow and uncertain. Even tenure is not real freedom.The two-job route has several variants depending on how long you\nwork for money at a time. At one extreme is the \"day job,\" where\nyou work regular hours at one job to make money, and work on what\nyou love in your spare time. At the other extreme you work at\nsomething till you make enough not to \nhave to work for money again.The two-job route is less common than the organic route, because\nit requires a deliberate choice. It's also more dangerous. Life\ntends to get more expensive as you get older, so it's easy to get\nsucked into working longer than you expected at the money job.\nWorse still, anything you work on changes you. If you work too\nlong on tedious stuff, it will rot your brain. And the best paying\njobs are most dangerous, because they require your full attention.The advantage of the two-job route is that it lets you jump over\nobstacles. The landscape of possible jobs isn't flat; there are\nwalls of varying heights between different kinds of work. \n[7]\nThe trick of maximizing the parts of your job that you like can get you\nfrom architecture to product design, but not, probably, to music.\nIf you make money doing one thing and then work on another, you\nhave more freedom of choice.Which route should you take? That depends on how sure you are of\nwhat you want to do, how good you are at taking orders, how much\nrisk you can stand, and the odds that anyone will pay (in your\nlifetime) for what you want to do. If you're sure of the general\narea you want to work in and it's something people are likely to\npay you for, then you should probably take the organic route. But\nif you don't know what you want to work on, or don't like to take\norders, you may want to take the two-job route, if you can stand\nthe risk.Don't decide too soon. Kids who know early what they want to do\nseem impressive, as if they got the answer to some math question\nbefore the other kids. They have an answer, certainly, but odds\nare it's wrong.A friend of mine who is a quite successful doctor complains constantly\nabout her job. When people applying to medical school ask her for\nadvice, she wants to shake them and yell \"Don't do it!\" (But she\nnever does.) How did she get into this fix? In high school she\nalready wanted to be a doctor. And she is so ambitious and determined\nthat she overcame every obstacle along the way\u2014including,\nunfortunately, not liking it.Now she has a life chosen for her by a high-school kid.When you're young, you're given the impression that you'll get\nenough information to make each choice before you need to make it.\nBut this is certainly not so with work. When you're deciding what\nto do, you have to operate on ridiculously incomplete information.\nEven in college you get little idea what various types of work are\nlike. 
At best you may have a couple internships, but not all jobs\noffer internships, and those that do don't teach you much more about\nthe work than being a batboy teaches you about playing baseball.In the design of lives, as in the design of most other things, you\nget better results if you use flexible media. So unless you're\nfairly sure what you want to do, your best bet may be to choose a\ntype of work that could turn into either an organic or two-job\ncareer. That was probably part of the reason I chose computers.\nYou can be a professor, or make a lot of money, or morph it into\nany number of other kinds of work.It's also wise, early on, to seek jobs that let you do many different\nthings, so you can learn faster what various kinds of work are like.\nConversely, the extreme version of the two-job route is dangerous\nbecause it teaches you so little about what you like. If you work\nhard at being a bond trader for ten years, thinking that you'll\nquit and write novels when you have enough money, what happens when\nyou quit and then discover that you don't actually like writing\nnovels?Most people would say, I'd take that problem. Give me a million\ndollars and I'll figure out what to do. But it's harder than it\nlooks. Constraints give your life shape. Remove them and most\npeople have no idea what to do: look at what happens to those who\nwin lotteries or inherit money. Much as everyone thinks they want\nfinancial security, the happiest people are not those who have it,\nbut those who like what they do. So a plan that promises freedom\nat the expense of knowing what to do with it may not be as good as\nit seems.Whichever route you take, expect a struggle. Finding work you love\nis very difficult. Most people fail. Even if you succeed, it's\nrare to be free to work on what you want till your thirties or\nforties. But if you have the destination in sight you'll be more\nlikely to arrive at it. If you know you can love work, you're in\nthe home stretch, and if you know what work you love, you're\npractically there.Notes[1]\nCurrently we do the opposite: when we make kids do boring work,\nlike arithmetic drills, instead of admitting frankly that it's\nboring, we try to disguise it with superficial decorations.[2]\nOne father told me about a related phenomenon: he found himself\nconcealing from his family how much he liked his work. When he\nwanted to go to work on a saturday, he found it easier to say that\nit was because he \"had to\" for some reason, rather than admitting\nhe preferred to work than stay home with them.[3]\nSomething similar happens with suburbs. Parents move to suburbs\nto raise their kids in a safe environment, but suburbs are so dull\nand artificial that by the time they're fifteen the kids are convinced\nthe whole world is boring.[4]\nI'm not saying friends should be the only audience for your\nwork. The more people you can help, the better. But friends should\nbe your compass.[5]\nDonald Hall said young would-be poets were mistaken to be so\nobsessed with being published. But you can imagine what it would\ndo for a 24 year old to get a poem published in The New Yorker.\nNow to people he meets at parties he's a real poet. Actually he's\nno better or worse than he was before, but to a clueless audience\nlike that, the approval of an official authority makes all the\ndifference. So it's a harder problem than Hall realizes. 
The\nreason the young care so much about prestige is that the people\nthey want to impress are not very discerning.[6]\nThis is isomorphic to the principle that you should prevent\nyour beliefs about how things are from being contaminated by how\nyou wish they were. Most people let them mix pretty promiscuously.\nThe continuing popularity of religion is the most visible index of\nthat.[7]\nA more accurate metaphor would be to say that the graph of jobs\nis not very well connected.Thanks to Trevor Blackwell, Dan Friedman, Sarah Harlin,\nJessica Livingston, Jackie McDonough, Robert Morris, Peter Norvig, \nDavid Sloo, and Aaron Swartz\nfor reading drafts of this."} {"title": "nft", "text": "May 2021Noora Health, a nonprofit I've \nsupported for years, just launched\na new NFT. It has a dramatic name, Save Thousands of Lives,\nbecause that's what the proceeds will do.Noora has been saving lives for 7 years. They run programs in\nhospitals in South Asia to teach new mothers how to take care of\ntheir babies once they get home. They're in 165 hospitals now. And\nbecause they know the numbers before and after they start at a new\nhospital, they can measure the impact they have. It is massive.\nFor every 1000 live births, they save 9 babies.This number comes from a study\nof 133,733 families at 28 different\nhospitals that Noora conducted in collaboration with the Better\nBirth team at Ariadne Labs, a joint center for health systems\ninnovation at Brigham and Women's Hospital and Harvard T.H. Chan\nSchool of Public Health.Noora is so effective that even if you measure their costs in the\nmost conservative way, by dividing their entire budget by the number\nof lives saved, the cost of saving a life is the lowest I've seen.\n$1,235.For this NFT, they're going to issue a public report tracking how\nthis specific tranche of money is spent, and estimating the number\nof lives saved as a result.NFTs are a new territory, and this way of using them is especially\nnew, but I'm excited about its potential. And I'm excited to see\nwhat happens with this particular auction, because unlike an NFT\nrepresenting something that has already happened,\nthis NFT gets better as the price gets higher.The reserve price was about $2.5 million, because that's what it\ntakes for the name to be accurate: that's what it costs to save\n2000 lives. But the higher the price of this NFT goes, the more\nlives will be saved. What a sentence to be able to write."} {"title": "startuplessons", "text": "April 2006(This essay is derived from a talk at the 2006 \nStartup School.)The startups we've funded so far are pretty quick, but they seem\nquicker to learn some lessons than others. I think it's because\nsome things about startups are kind of counterintuitive.We've now \ninvested \nin enough companies that I've learned a trick\nfor determining which points are the counterintuitive ones:\nthey're the ones I have to keep repeating.So I'm going to number these points, and maybe with future startups\nI'll be able to pull off a form of Huffman coding. I'll make them\nall read this, and then instead of nagging them in detail, I'll\njust be able to say: number four!\n1. Release Early.The thing I probably repeat most is this recipe for a startup: get\na version 1 out fast, then improve it based on users' reactions.By "release early" I don't mean you should release something full\nof bugs, but that you should release something minimal. 
Users hate\nbugs, but they don't seem to mind a minimal version 1, if there's\nmore coming soon.There are several reasons it pays to get version 1 done fast. One\nis that this is simply the right way to write software, whether for\na startup or not. I've been repeating that since 1993, and I haven't seen much since to\ncontradict it. I've seen a lot of startups die because they were\ntoo slow to release stuff, and none because they were too quick.\n[1]One of the things that will surprise you if you build something\npopular is that you won't know your users. Reddit now has almost half a million\nunique visitors a month. Who are all those people? They have no\nidea. No web startup does. And since you don't know your users,\nit's dangerous to guess what they'll like. Better to release\nsomething and let them tell you.Wufoo took this to heart and released\ntheir form-builder before the underlying database. You can't even\ndrive the thing yet, but 83,000 people came to sit in the driver's\nseat and hold the steering wheel. And Wufoo got valuable feedback\nfrom it: Linux users complained they used too much Flash, so they\nrewrote their software not to. If they'd waited to release everything\nat once, they wouldn't have discovered this problem till it was\nmore deeply wired in.Even if you had no users, it would still be important to release\nquickly, because for a startup the initial release acts as a shakedown\ncruise. If anything major is broken-- if the idea's no good,\nfor example, or the founders hate one another-- the stress of getting\nthat first version out will expose it. And if you have such problems\nyou want to find them early.Perhaps the most important reason to release early, though, is that\nit makes you work harder. When you're working on something that\nisn't released, problems are intriguing. In something that's out\nthere, problems are alarming. There is a lot more urgency once you\nrelease. And I think that's precisely why people put it off. They\nknow they'll have to work a lot harder once they do. \n[2]\n2. Keep Pumping Out Features.Of course, \"release early\" has a second component, without which\nit would be bad advice. If you're going to start with something\nthat doesn't do much, you better improve it fast.What I find myself repeating is \"pump out features.\" And this rule\nisn't just for the initial stages. This is something all startups\nshould do for as long as they want to be considered startups.I don't mean, of course, that you should make your application ever\nmore complex. By \"feature\" I mean one unit of hacking-- one quantum\nof making users' lives better.As with exercise, improvements beget improvements. If you run every\nday, you'll probably feel like running tomorrow. But if you skip\nrunning for a couple weeks, it will be an effort to drag yourself\nout. So it is with hacking: the more ideas you implement, the more\nideas you'll have. You should make your system better at least in\nsome small way every day or two.This is not just a good way to get development done; it is also a\nform of marketing. Users love a site that's constantly improving.\nIn fact, users expect a site to improve. Imagine if you visited a\nsite that seemed very good, and then returned two months later and\nnot one thing had changed. Wouldn't it start to seem lame? 
\n[3]They'll like you even better when you improve in response to their\ncomments, because customers are used to companies ignoring them.\nIf you're the rare exception-- a company that actually listens--\nyou'll generate fanatical loyalty. You won't need to advertise,\nbecause your users will do it for you.This seems obvious too, so why do I have to keep repeating it? I\nthink the problem here is that people get used to how things are.\nOnce a product gets past the stage where it has glaring flaws, you\nstart to get used to it, and gradually whatever features it happens\nto have become its identity. For example, I doubt many people at\nYahoo (or Google for that matter) realized how much better web mail\ncould be till Paul Buchheit showed them.I think the solution is to assume that anything you've made is far\nshort of what it could be. Force yourself, as a sort of intellectual\nexercise, to keep thinking of improvements. Ok, sure, what you\nhave is perfect. But if you had to change something, what would\nit be?If your product seems finished, there are two possible explanations:\n(a) it is finished, or (b) you lack imagination. Experience suggests\n(b) is a thousand times more likely.\n3. Make Users Happy.Improving constantly is an instance of a more general rule: make\nusers happy. One thing all startups have in common is that they\ncan't force anyone to do anything. They can't force anyone to use\ntheir software, and they can't force anyone to do deals with them.\nA startup has to sing for its supper. That's why the successful\nones make great things. They have to, or die.When you're running a startup you feel like a little bit of debris\nblown about by powerful winds. The most powerful wind is users.\nThey can either catch you and loft you up into the sky, as they did\nwith Google, or leave you flat on the pavement, as they do with\nmost startups. Users are a fickle wind, but more powerful than any\nother. If they take you up, no competitor can keep you down.As a little piece of debris, the rational thing for you to do is\nnot to lie flat, but to curl yourself into a shape the wind will\ncatch.I like the wind metaphor because it reminds you how impersonal the\nstream of traffic is. The vast majority of people who visit your\nsite will be casual visitors. It's them you have to design your\nsite for. The people who really care will find what they want by\nthemselves.The median visitor will arrive with their finger poised on the Back\nbutton. Think about your own experience: most links you\nfollow lead to something lame. Anyone who has used the web for\nmore than a couple weeks has been trained to click on Back after\nfollowing a link. So your site has to say \"Wait! Don't click on\nBack. This site isn't lame. Look at this, for example.\"There are two things you have to do to make people pause. The most\nimportant is to explain, as concisely as possible, what the hell\nyour site is about. How often have you visited a site that seemed\nto assume you already knew what they did? For example, the corporate\nsite that says the\ncompany makes\n\n enterprise content management solutions for business that enable\n organizations to unify people, content and processes to minimize\n business risk, accelerate time-to-value and sustain lower total\n cost of ownership.\n\nAn established company may get away with such an opaque description,\nbut no startup can. A startup\nshould be able to explain in one or two sentences exactly what it\ndoes. \n[4]\nAnd not just to users. 
You need this for everyone:\ninvestors, acquirers, partners, reporters, potential employees, and\neven current employees. You probably shouldn't even start a company\nto do something that can't be described compellingly in one or two\nsentences.The other thing I repeat is to give people everything you've got,\nright away. If you have something impressive, try to put it on the\nfront page, because that's the only one most visitors will see.\nThough indeed there's a paradox here: the more you push the good\nstuff toward the front, the more likely visitors are to explore\nfurther. \n[5]In the best case these two suggestions get combined: you tell\nvisitors what your site is about by showing them. One of the\nstandard pieces of advice in fiction writing is \"show, don't tell.\"\nDon't say that a character's angry; have him grind his teeth, or\nbreak his pencil in half. Nothing will explain what your site does\nso well as using it.The industry term here is \"conversion.\" The job of your site is\nto convert casual visitors into users-- whatever your definition\nof a user is. You can measure this in your growth rate. Either\nyour site is catching on, or it isn't, and you must know which. If\nyou have decent growth, you'll win in the end, no matter how obscure\nyou are now. And if you don't, you need to fix something.\n4. Fear the Right Things.Another thing I find myself saying a lot is \"don't worry.\" Actually,\nit's more often \"don't worry about this; worry about that instead.\"\nStartups are right to be paranoid, but they sometimes fear the wrong\nthings.Most visible disasters are not so alarming as they seem. Disasters\nare normal in a startup: a founder quits, you discover a patent\nthat covers what you're doing, your servers keep crashing, you run\ninto an insoluble technical problem, you have to change your name,\na deal falls through-- these are all par for the course. They won't\nkill you unless you let them.Nor will most competitors. A lot of startups worry \"what if Google\nbuilds something like us?\" Actually big companies are not the ones\nyou have to worry about-- not even Google. The people at Google\nare smart, but no smarter than you; they're not as motivated, because\nGoogle is not going to go out of business if this one product fails;\nand even at Google they have a lot of bureaucracy to slow them down.What you should fear, as a startup, is not the established players,\nbut other startups you don't know exist yet. They're way more\ndangerous than Google because, like you, they're cornered animals.Looking just at existing competitors can give you a false sense of\nsecurity. You should compete against what someone else could be\ndoing, not just what you can see people doing. A corollary is that\nyou shouldn't relax just because you have no visible competitors\nyet. No matter what your idea, there's someone else out there\nworking on the same thing.That's the downside of it being easier to start a startup: more people\nare doing it. But I disagree with Caterina Fake when she says that\nmakes this a bad time to start a startup. More people are starting\nstartups, but not as many more as could. Most college graduates\nstill think they have to get a job. The average person can't ignore\nsomething that's been beaten into their head since they were three\njust because serving web pages recently got a lot cheaper.And in any case, competitors are not the biggest threat. Way more\nstartups hose themselves than get crushed by competitors. 
There\nare a lot of ways to do it, but the three main ones are internal\ndisputes, inertia, and ignoring users. Each is, by itself, enough\nto kill you. But if I had to pick the worst, it would be ignoring\nusers. If you want a recipe for a startup that's going to die,\nhere it is: a couple of founders who have some great idea they know\neveryone is going to love, and that's what they're going to build,\nno matter what.Almost everyone's initial plan is broken. If companies stuck to\ntheir initial plans, Microsoft would be selling programming languages,\nand Apple would be selling printed circuit boards. In both cases\ntheir customers told them what their business should be-- and they\nwere smart enough to listen.As Richard Feynman said, the imagination of nature is greater than\nthe imagination of man. You'll find more interesting things by\nlooking at the world than you could ever produce just by thinking.\nThis principle is very powerful. It's why the best abstract painting\nstill falls short of Leonardo, for example. And it applies to\nstartups too. No idea for a product could ever be so clever as the\nones you can discover by smashing a beam of prototypes into a beam\nof users.\n5. Commitment Is a Self-Fulfilling Prophecy.I now have enough experience with startups to be able to say what\nthe most important quality is in a startup founder, and it's not\nwhat you might think. The most important quality in a startup\nfounder is determination. Not intelligence-- determination.This is a little depressing. I'd like to believe Viaweb succeeded\nbecause we were smart, not merely determined. A lot of people in\nthe startup world want to believe that. Not just founders, but\ninvestors too. They like the idea of inhabiting a world ruled by\nintelligence. And you can tell they really believe this, because\nit affects their investment decisions.Time after time VCs invest in startups founded by eminent professors.\nThis may work in biotech, where a lot of startups simply commercialize\nexisting research, but in software you want to invest in students,\nnot professors. Microsoft, Yahoo, and Google were all founded by\npeople who dropped out of school to do it. What students lack in\nexperience they more than make up in dedication.Of course, if you want to get rich, it's not enough merely to be\ndetermined. You have to be smart too, right? I'd like to think\nso, but I've had an experience that convinced me otherwise: I spent\nseveral years living in New York.You can lose quite a lot in the brains department and it won't kill\nyou. But lose even a little bit in the commitment department, and\nthat will kill you very rapidly.Running a startup is like walking on your hands: it's possible, but\nit requires extraordinary effort. If an ordinary employee were\nasked to do the things a startup founder has to, he'd be very\nindignant. Imagine if you were hired at some big company, and in\naddition to writing software ten times faster than you'd ever had\nto before, they expected you to answer support calls, administer\nthe servers, design the web site, cold-call customers, find the\ncompany office space, and go out and get everyone lunch.And to do all this not in the calm, womb-like atmosphere of a big\ncompany, but against a backdrop of constant disasters. That's the\npart that really demands determination. In a startup, there's\nalways some disaster happening. 
So if you're the least bit inclined\nto find an excuse to quit, there's always one right there.But if you lack commitment, chances are it will have been hurting\nyou long before you actually quit. Everyone who deals with startups\nknows how important commitment is, so if they sense you're ambivalent,\nthey won't give you much attention. If you lack commitment, you'll\njust find that for some mysterious reason good things happen to\nyour competitors but not to you. If you lack commitment, it will\nseem to you that you're unlucky.Whereas if you're determined to stick around, people will pay\nattention to you, because odds are they'll have to deal with you\nlater. You're a local, not just a tourist, so everyone has to come\nto terms with you.At Y Combinator we sometimes mistakenly fund teams who have the\nattitude that they're going to give this startup thing a shot for\nthree months, and if something great happens, they'll stick with\nit-- \"something great\" meaning either that someone wants to buy\nthem or invest millions of dollars in them. But if this is your\nattitude, \"something great\" is very unlikely to happen to you,\nbecause both acquirers and investors judge you by your level of\ncommitment.If an acquirer thinks you're going to stick around no matter what,\nthey'll be more likely to buy you, because if they don't and you\nstick around, you'll probably grow, your price will go up, and\nthey'll be left wishing they'd bought you earlier. Ditto for\ninvestors. What really motivates investors, even big VCs, is not\nthe hope of good returns, but the fear of missing out. \n[6]\nSo if\nyou make it clear you're going to succeed no matter what, and the only\nreason you need them is to make it happen a little faster, you're\nmuch more likely to get money.You can't fake this. The only way to convince everyone that you're\nready to fight to the death is actually to be ready to.You have to be the right kind of determined, though. I carefully\nchose the word determined rather than stubborn, because stubbornness\nis a disastrous quality in a startup. You have to be determined,\nbut flexible, like a running back. A successful running back doesn't\njust put his head down and try to run through people. He improvises:\nif someone appears in front of him, he runs around them; if someone\ntries to grab him, he spins out of their grip; he'll even run in\nthe wrong direction briefly if that will help. The one thing he'll\nnever do is stand still. \n[7]\n6. There Is Always Room.I was talking recently to a startup founder about whether it might\nbe good to add a social component to their software. He said he\ndidn't think so, because the whole social thing was tapped out.\nReally? So in a hundred years the only social networking sites\nwill be the Facebook, MySpace, Flickr, and Del.icio.us? Not likely.There is always room for new stuff. At every point in history,\neven the darkest bits of the dark ages, people were discovering\nthings that made everyone say \"why didn't anyone think of that\nbefore?\" We know this continued to be true up till 2004, when the\nFacebook was founded-- though strictly speaking someone else did\nthink of that.The reason we don't see the opportunities all around us is that we\nadjust to however things are, and assume that's how things have to\nbe. For example, it would seem crazy to most people to try to make\na better search engine than Google. Surely that field, at least,\nis tapped out. Really? 
In a hundred years-- or even twenty-- are\npeople still going to search for information using something like\nthe current Google? Even Google probably doesn't think that.In particular, I don't think there's any limit to the number of\nstartups. Sometimes you hear people saying \"All these guys starting\nstartups now are going to be disappointed. How many little startups\nare Google and Yahoo going to buy, after all?\" That sounds cleverly\nskeptical, but I can prove it's mistaken. No one proposes that\nthere's some limit to the number of people who can be employed in\nan economy consisting of big, slow-moving companies with a couple\nthousand people each. Why should there be any limit to the number\nwho could be employed by small, fast-moving companies with ten each?\nIt seems to me the only limit would be the number of people who\nwant to work that hard.The limit on the number of startups is not the number that can get\nacquired by Google and Yahoo-- though it seems even that should\nbe unlimited, if the startups were actually worth buying-- but the\namount of wealth that can be created. And I don't think there's\nany limit on that, except cosmological ones.So for all practical purposes, there is no limit to the number of\nstartups. Startups make wealth, which means they make things people\nwant, and if there's a limit on the number of things people want,\nwe are nowhere near it. I still don't even have a flying car.\n7. Don't Get Your Hopes Up.This is another one I've been repeating since long before Y Combinator.\nIt was practically the corporate motto at Viaweb.Startup founders are naturally optimistic. They wouldn't do it\notherwise. But you should treat your optimism the way you'd treat\nthe core of a nuclear reactor: as a source of power that's also\nvery dangerous. You have to build a shield around it, or it will\nfry you.The shielding of a reactor is not uniform; the reactor would be\nuseless if it were. It's pierced in a few places to let pipes in.\nAn optimism shield has to be pierced too. I think the place to\ndraw the line is between what you expect of yourself, and what you\nexpect of other people. It's ok to be optimistic about what you\ncan do, but assume the worst about machines and other people.This is particularly necessary in a startup, because you tend to\nbe pushing the limits of whatever you're doing. So things don't\nhappen in the smooth, predictable way they do in the rest of the\nworld. Things change suddenly, and usually for the worse.Shielding your optimism is nowhere more important than with deals.\nIf your startup is doing a deal, just assume it's not going to\nhappen. The VCs who say they're going to invest in you aren't.\nThe company that says they're going to buy you isn't. The big\ncustomer who wants to use your system in their whole company won't.\nThen if things work out you can be pleasantly surprised.The reason I warn startups not to get their hopes up is not to save\nthem from being disappointed when things fall through. It's\nfor a more practical reason: to prevent them from leaning their\ncompany against something that's going to fall over, taking them\nwith it.For example, if someone says they want to invest in you, there's a\nnatural tendency to stop looking for other investors. That's why\npeople proposing deals seem so positive: they want you to\nstop looking. And you want to stop too, because doing deals is a\npain. Raising money, in particular, is a huge time sink. 
So you\nhave to consciously force yourself to keep looking.Even if you ultimately do the first deal, it will be to your advantage\nto have kept looking, because you'll get better terms. Deals are\ndynamic; unless you're negotiating with someone unusually honest,\nthere's not a single point where you shake hands and the deal's\ndone. There are usually a lot of subsidiary questions to be cleared\nup after the handshake, and if the other side senses weakness-- if\nthey sense you need this deal-- they will be very tempted to screw\nyou in the details.VCs and corp dev guys are professional negotiators. They're trained\nto take advantage of weakness. \n[8]\nSo while they're often nice\nguys, they just can't help it. And as pros they do this more than\nyou. So don't even try to bluff them. The only way a startup can\nhave any leverage in a deal is genuinely not to need it. And if\nyou don't believe in a deal, you'll be less likely to depend on it.So I want to plant a hypnotic suggestion in your heads: when you\nhear someone say the words \"we want to invest in you\" or \"we want\nto acquire you,\" I want the following phrase to appear automatically\nin your head: don't get your hopes up. Just continue running\nyour company as if this deal didn't exist. Nothing is more likely\nto make it close.The way to succeed in a startup is to focus on the goal of getting\nlots of users, and keep walking swiftly toward it while investors\nand acquirers scurry alongside trying to wave money in your face.\nSpeed, not MoneyThe way I've described it, starting a startup sounds pretty stressful.\nIt is. When I talk to the founders of the companies we've funded,\nthey all say the same thing: I knew it would be hard, but I didn't\nrealize it would be this hard.So why do it? It would be worth enduring a lot of pain and stress\nto do something grand or heroic, but just to make money? Is making\nmoney really that important?No, not really. It seems ridiculous to me when people take business\ntoo seriously. I regard making money as a boring errand to be got\nout of the way as soon as possible. There is nothing grand or\nheroic about starting a startup per se.So why do I spend so much time thinking about startups? I'll tell\nyou why. Economically, a startup is best seen not as a way to get\nrich, but as a way to work faster. You have to make a living, and\na startup is a way to get that done quickly, instead of letting it\ndrag on through your whole life.\n[9]We take it for granted most of the time, but human life is fairly\nmiraculous. It is also palpably short. You're given this marvellous\nthing, and then poof, it's taken away. You can see why people\ninvent gods to explain it. But even to people who don't believe\nin gods, life commands respect. There are times in most of our\nlives when the days go by in a blur, and almost everyone has a\nsense, when this happens, of wasting something precious. As Ben\nFranklin said, if you love life, don't waste time, because time is\nwhat life is made of.So no, there's nothing particularly grand about making money. That's\nnot what makes startups worth the trouble. What's important about\nstartups is the speed. 
By compressing the dull but necessary task\nof making a living into the smallest possible time, you show respect\nfor life, and there is something grand about that.Notes[1]\nStartups can die from releasing something full of bugs, and not\nfixing them fast enough, but I don't know of any that died from\nreleasing something stable but minimal very early, then promptly\nimproving it.[2]\nI know this is why I haven't released Arc. The moment I do,\nI'll have people nagging me for features.[3]\nA web site is different from a book or movie or desktop application\nin this respect. Users judge a site not as a single snapshot, but\nas an animation with multiple frames. Of the two, I'd say the rate of\nimprovement is more important to users than where you currently\nare.[4]\nIt should not always tell this to users, however. For example,\nMySpace is basically a replacement mall for mallrats. But it was\nwiser for them, initially, to pretend that the site was about bands.[5]\nSimilarly, don't make users register to try your site. Maybe\nwhat you have is so valuable that visitors should gladly register\nto get at it. But they've been trained to expect the opposite.\nMost of the things they've tried on the web have sucked-- and\nprobably especially those that made them register.[6]\nVCs have rational reasons for behaving this way. They don't\nmake their money (if they make money) off their median investments.\nIn a typical fund, half the companies fail, most of the rest generate\nmediocre returns, and one or two \"make the fund\" by succeeding\nspectacularly. So if they miss just a few of the most promising\nopportunities, it could hose the whole fund.[7]\nThe attitude of a running back doesn't translate to soccer.\nThough it looks great when a forward dribbles past multiple defenders,\na player who persists in trying such things will do worse in the\nlong term than one who passes.[8]\nThe reason Y Combinator never negotiates valuations\nis that we're not professional negotiators, and don't want to turn\ninto them.[9]\nThere are two ways to do \nwork you love: (a) to make money, then work\non what you love, or (b) to get a job where you get paid to work on\nstuff you love. In practice the first phases of both\nconsist mostly of unedifying schleps, and in (b) the second phase is less\nsecure.Thanks to Sam Altman, Trevor Blackwell, Beau Hartshorne, Jessica \nLivingston, and Robert Morris for reading drafts of this."} {"title": "gba", "text": "April 2004To the popular press, \"hacker\" means someone who breaks\ninto computers. Among programmers it means a good programmer.\nBut the two meanings are connected. To programmers,\n\"hacker\" connotes mastery in the most literal sense: someone\nwho can make a computer do what he wants\u2014whether the computer\nwants to or not.To add to the confusion, the noun \"hack\" also has two senses. It can\nbe either a compliment or an insult. It's called a hack when\nyou do something in an ugly way. But when you do something\nso clever that you somehow beat the system, that's also\ncalled a hack. The word is used more often in the former than\nthe latter sense, probably because ugly solutions are more\ncommon than brilliant ones.Believe it or not, the two senses of \"hack\" are also\nconnected. Ugly and imaginative solutions have something in\ncommon: they both break the rules. 
And there is a gradual\ncontinuum between rule breaking that's merely ugly (using\nduct tape to attach something to your bike) and rule breaking\nthat is brilliantly imaginative (discarding Euclidean space).Hacking predates computers. When he\nwas working on the Manhattan Project, Richard Feynman used to\namuse himself by breaking into safes containing secret documents.\nThis tradition continues today.\nWhen we were in grad school, a hacker friend of mine who spent too much\ntime around MIT had\nhis own lock picking kit.\n(He now runs a hedge fund, a not unrelated enterprise.)It is sometimes hard to explain to authorities why one would\nwant to do such things.\nAnother friend of mine once got in trouble with the government for\nbreaking into computers. This had only recently been declared\na crime, and the FBI found that their usual investigative\ntechnique didn't work. Police investigation apparently begins with\na motive. The usual motives are few: drugs, money, sex,\nrevenge. Intellectual curiosity was not one of the motives on\nthe FBI's list. Indeed, the whole concept seemed foreign to\nthem.Those in authority tend to be annoyed by hackers'\ngeneral attitude of disobedience. But that disobedience is\na byproduct of the qualities that make them good programmers.\nThey may laugh at the CEO when he talks in generic corporate\nnewspeech, but they also laugh at someone who tells them\na certain problem can't be solved.\nSuppress one, and you suppress the other.This attitude is sometimes affected. Sometimes young programmers\nnotice the eccentricities of eminent hackers and decide to\nadopt some of their own in order to seem smarter.\nThe fake version is not merely\nannoying; the prickly attitude of these posers\ncan actually slow the process of innovation.But even factoring in their annoying eccentricities,\nthe disobedient attitude of hackers is a net win. I wish its\nadvantages were better understood.For example, I suspect people in Hollywood are\nsimply mystified by\nhackers' attitudes toward copyrights. They are a perennial\ntopic of heated discussion on Slashdot.\nBut why should people who program computers\nbe so concerned about copyrights, of all things?Partly because some companies use mechanisms to prevent\ncopying. Show any hacker a lock and his first thought is\nhow to pick it. But there is a deeper reason that\nhackers are alarmed by measures like copyrights and patents.\nThey see increasingly aggressive measures to protect\n\"intellectual property\"\nas a threat to the intellectual\nfreedom they need to do their job.\nAnd they are right.It is by poking about inside current technology that\nhackers get ideas for the next generation. No thanks,\nintellectual homeowners may say, we don't need any\noutside help. But they're wrong.\nThe next generation of computer technology has\noften\u2014perhaps more often than not\u2014been developed by outsiders.In 1977 there was no doubt some group within IBM developing\nwhat they expected to be\nthe next generation of business computer. They were mistaken.\nThe next generation of business computer was\nbeing developed on entirely different lines by two long-haired\nguys called Steve in a garage in Los Altos. At about the\nsame time, the powers that be\nwere cooperating to develop the\nofficial next generation operating system, Multics.\nBut two guys who thought Multics excessively complex went off\nand wrote their own. 
They gave it a name that\nwas a joking reference to Multics: Unix.The latest intellectual property laws impose\nunprecedented restrictions on the sort of poking around that\nleads to new ideas. In the past, a competitor might use patents\nto prevent you from selling a copy of something they\nmade, but they couldn't prevent you from\ntaking one apart to see how it worked. The latest\nlaws make this a crime. How are we\nto develop new technology if we can't study current\ntechnology to figure out how to improve it?Ironically, hackers have brought this on themselves.\nComputers are responsible for the problem. The control systems\ninside machines used to be physical: gears and levers and cams.\nIncreasingly, the brains (and thus the value) of products is\nin software. And by this I mean software in the general sense:\ni.e. data. A song on an LP is physically stamped into the\nplastic. A song on an iPod's disk is merely stored on it.Data is by definition easy to copy. And the Internet\nmakes copies easy to distribute. So it is no wonder\ncompanies are afraid. But, as so often happens, fear has\nclouded their judgement. The government has responded\nwith draconian laws to protect intellectual property.\nThey probably mean well. But\nthey may not realize that such laws will do more harm\nthan good.Why are programmers so violently opposed to these laws?\nIf I were a legislator, I'd be interested in this\nmystery\u2014for the same reason that, if I were a farmer and suddenly\nheard a lot of squawking coming from my hen house one night,\nI'd want to go out and investigate. Hackers are not stupid,\nand unanimity is very rare in this world.\nSo if they're all squawking, \nperhaps there is something amiss.Could it be that such laws, though intended to protect America,\nwill actually harm it? Think about it. There is something\nvery American about Feynman breaking into safes during\nthe Manhattan Project. It's hard to imagine the authorities\nhaving a sense of humor about such things over\nin Germany at that time. Maybe it's not a coincidence.Hackers are unruly. That is the essence of hacking. And it\nis also the essence of Americanness. It is no accident\nthat Silicon Valley\nis in America, and not France, or Germany,\nor England, or Japan. In those countries, people color inside\nthe lines.I lived for a while in Florence. But after I'd been there\na few months I realized that what I'd been unconsciously hoping\nto find there was back in the place I'd just left.\nThe reason Florence is famous is that in 1450, it was New York.\nIn 1450 it was filled with the kind of turbulent and ambitious\npeople you find now in America. (So I went back to America.)It is greatly to America's advantage that it is\na congenial atmosphere for the right sort of unruliness\u2014that\nit is a home not just for the smart, but for smart-alecks.\nAnd hackers are invariably smart-alecks. If we had a national\nholiday, it would be April 1st. It says a great deal about\nour work that we use the same word for a brilliant or a\nhorribly cheesy solution. When we cook one up we're not\nalways 100% sure which kind it is. But as long as it has\nthe right sort of wrongness, that's a promising sign.\nIt's odd that people\nthink of programming as precise and methodical. Computers\nare precise and methodical. Hacking is something you do\nwith a gleeful laugh.In our world some of the most characteristic solutions\nare not far removed from practical\njokes. 
IBM was no doubt rather surprised by the consequences\nof the licensing deal for DOS, just as the hypothetical\n\"adversary\" must be when Michael Rabin solves a problem by\nredefining it as one that's easier to solve.Smart-alecks have to develop a keen sense of how much they\ncan get away with. And lately hackers \nhave sensed a change\nin the atmosphere.\nLately hackerliness seems rather frowned upon.To hackers the recent contraction in civil liberties seems\nespecially ominous. That must also mystify outsiders. \nWhy should we care especially about civil\nliberties? Why programmers, more than\ndentists or salesmen or landscapers?Let me put the case in terms a government official would appreciate.\nCivil liberties are not just an ornament, or a quaint\nAmerican tradition. Civil liberties make countries rich.\nIf you made a graph of\nGNP per capita vs. civil liberties, you'd notice a definite\ntrend. Could civil liberties really be a cause, rather\nthan just an effect? I think so. I think a society in which\npeople can do and say what they want will also tend to\nbe one in which the most efficient solutions win, rather than\nthose sponsored by the most influential people.\nAuthoritarian countries become corrupt;\ncorrupt countries become poor; and poor countries are weak. \nIt seems to me there is\na Laffer curve for government power, just as for\ntax revenues. At least, it seems likely enough that it\nwould be stupid to try the experiment and find out. Unlike\nhigh tax rates, you can't repeal totalitarianism if it\nturns out to be a mistake.This is why hackers worry. The government spying on people doesn't\nliterally make programmers write worse code. It just leads\neventually to a world in which bad ideas win. And because\nthis is so important to hackers, they're especially sensitive\nto it. They can sense totalitarianism approaching from a\ndistance, as animals can sense an approaching \nthunderstorm.It would be ironic if, as hackers fear, recent measures\nintended to protect national security and intellectual property\nturned out to be a missile aimed right at what makes \nAmerica successful. But it would not be the first time that\nmeasures taken in an atmosphere of panic had\nthe opposite of the intended effect.There is such a thing as Americanness.\nThere's nothing like living abroad to teach you that. \nAnd if you want to know whether something will nurture or squash\nthis quality, it would be hard to find a better focus\ngroup than hackers, because they come closest of any group\nI know to embodying it. Closer, probably, than\nthe men running our government,\nwho for all their talk of patriotism\nremind me more of Richelieu or Mazarin\nthan Thomas Jefferson or George Washington.When you read what the founding fathers had to say for\nthemselves, they sound more like hackers.\n\"The spirit of resistance to government,\"\nJefferson wrote, \"is so valuable on certain occasions, that I wish\nit always to be kept alive.\"Imagine an American president saying that today.\nLike the remarks of an outspoken old grandmother, the sayings of\nthe founding fathers have embarrassed generations of\ntheir less confident successors. They remind us where we come from.\nThey remind us that it is the people who break rules that are\nthe source of America's wealth and power.Those in a position to impose rules naturally want them to be\nobeyed. But be careful what you ask for. 
You might get it.Thanks to Ken Anderson, Trevor Blackwell, Daniel Giffin, \nSarah Harlin, Shiro Kawai, Jessica Livingston, Matz, \nJackie McDonough, Robert Morris, Eric Raymond, Guido van Rossum,\nDavid Weinberger, and\nSteven Wolfram for reading drafts of this essay.\n(The image shows Steves Jobs and Wozniak \nwith a \"blue box.\"\nPhoto by Margret Wozniak. Reproduced by permission of Steve\nWozniak.)"} {"title": "island", "text": "July 2006I've discovered a handy test for figuring out what you're addicted\nto. Imagine you were going to spend the weekend at a friend's house\non a little island off the coast of Maine. There are no shops on\nthe island and you won't be able to leave while you're there. Also,\nyou've never been to this house before, so you can't assume it will\nhave more than any house might.What, besides clothes and toiletries, do you make a point of packing?\nThat's what you're addicted to. For example, if you find yourself\npacking a bottle of vodka (just in case), you may want to stop and\nthink about that.For me the list is four things: books, earplugs, a notebook, and a\npen.There are other things I might bring if I thought of it, like music,\nor tea, but I can live without them. I'm not so addicted to caffeine\nthat I wouldn't risk the house not having any tea, just for a\nweekend.Quiet is another matter. I realize it seems a bit eccentric to\ntake earplugs on a trip to an island off the coast of Maine. If\nanywhere should be quiet, that should. But what if the person in\nthe next room snored? What if there was a kid playing basketball?\n(Thump, thump, thump... thump.) Why risk it? Earplugs are small.Sometimes I can think with noise. If I already have momentum on\nsome project, I can work in noisy places. I can edit an essay or\ndebug code in an airport. But airports are not so bad: most of the\nnoise is whitish. I couldn't work with the sound of a sitcom coming\nthrough the wall, or a car in the street playing thump-thump music.And of course there's another kind of thinking, when you're starting\nsomething new, that requires complete quiet. You never\nknow when this will strike. It's just as well to carry plugs.The notebook and pen are professional equipment, as it were. Though\nactually there is something druglike about them, in the sense that\ntheir main purpose is to make me feel better. I hardly ever go\nback and read stuff I write down in notebooks. It's just that if\nI can't write things down, worrying about remembering one idea gets\nin the way of having the next. Pen and paper wick ideas.The best notebooks I've found are made by a company called Miquelrius.\nI use their smallest size, which is about 2.5 x 4 in.\nThe secret to writing on such\nnarrow pages is to break words only when you run out of space, like\na Latin inscription. I use the cheapest plastic Bic ballpoints,\npartly because their gluey ink doesn't seep through pages, and\npartly so I don't worry about losing them.I only started carrying a notebook about three years ago. Before\nthat I used whatever scraps of paper I could find. But the problem\nwith scraps of paper is that they're not ordered. In a notebook\nyou can guess what a scribble means by looking at the pages\naround it. In the scrap era I was constantly finding notes I'd\nwritten years before that might say something I needed to remember,\nif I could only figure out what.As for books, I know the house would probably have something to\nread. 
On the average trip I bring four books and only read one of\nthem, because I find new books to read en route. Really bringing\nbooks is insurance.I realize this dependence on books is not entirely good\u2014that what\nI need them for is distraction. The books I bring on trips are\noften quite virtuous, the sort of stuff that might be assigned\nreading in a college class. But I know my motives aren't virtuous.\nI bring books because if the world gets boring I need to be able\nto slip into another distilled by some writer. It's like eating\njam when you know you should be eating fruit.There is a point where I'll do without books. I was walking in\nsome steep mountains once, and decided I'd rather just think, if I\nwas bored, rather than carry a single unnecessary ounce. It wasn't\nso bad. I found I could entertain myself by having ideas instead\nof reading other people's. If you stop eating jam, fruit starts\nto taste better.So maybe I'll try not bringing books on some future trip. They're\ngoing to have to pry the plugs out of my cold, dead ears, however."} {"title": "vcsqueeze", "text": "November 2005In the next few years, venture capital funds will find themselves\nsqueezed from four directions. They're already stuck with a seller's\nmarket, because of the huge amounts they raised at the end of the\nBubble and still haven't invested. This by itself is not the end\nof the world. In fact, it's just a more extreme version of the\nnorm\nin the VC business: too much money chasing too few deals.Unfortunately, those few deals now want less and less money, because\nit's getting so cheap to start a startup. The four causes: open\nsource, which makes software free; Moore's law, which makes hardware\ngeometrically closer to free; the Web, which makes promotion free\nif you're good; and better languages, which make development a lot\ncheaper.When we started our startup in 1995, the first three were our biggest\nexpenses. We had to pay $5000 for the Netscape Commerce Server,\nthe only software that then supported secure http connections. We\npaid $3000 for a server with a 90 MHz processor and 32 meg of\nmemory. And we paid a PR firm about $30,000 to promote our launch.Now you could get all three for nothing. You can get the software\nfor free; people throw away computers more powerful than our first\nserver; and if you make something good you can generate ten times\nas much traffic by word of mouth online than our first PR firm got\nthrough the print media.And of course another big change for the average startup is that\nprogramming languages have improved-- or rather, the median language has. At most startups ten years\nago, software development meant ten programmers writing code in\nC++. Now the same work might be done by one or two using Python\nor Ruby.During the Bubble, a lot of people predicted that startups would\noutsource their development to India. I think a better model for\nthe future is David Heinemeier Hansson, who outsourced his development\nto a more powerful language instead. A lot of well-known applications\nare now, like BaseCamp, written by just one programmer. And one\nguy is more than 10x cheaper than ten, because (a) he won't waste\nany time in meetings, and (b) since he's probably a founder, he can\npay himself nothing.Because starting a startup is so cheap, venture capitalists now\noften want to give startups more money than the startups want to\ntake. VCs like to invest several million at a time. 
But as one\nVC told me after a startup he funded would only take about half a\nmillion, \"I don't know what we're going to do. Maybe we'll just\nhave to give some of it back.\" Meaning give some of the fund back\nto the institutional investors who supplied it, because it wasn't\ngoing to be possible to invest it all.Into this already bad situation comes the third problem: Sarbanes-Oxley.\nSarbanes-Oxley is a law, passed after the Bubble, that drastically\nincreases the regulatory burden on public companies. And in addition\nto the cost of compliance, which is at least two million dollars a\nyear, the law introduces frightening legal exposure for corporate\nofficers. An experienced CFO I know said flatly: \"I would not\nwant to be CFO of a public company now.\"You might think that responsible corporate governance is an area\nwhere you can't go too far. But you can go too far in any law, and\nthis remark convinced me that Sarbanes-Oxley must have. This CFO\nis both the smartest and the most upstanding money guy I know. If\nSarbanes-Oxley deters people like him from being CFOs of public \ncompanies, that's proof enough that it's broken.Largely because of Sarbanes-Oxley, few startups go public now. For\nall practical purposes, succeeding now equals getting bought. Which\nmeans VCs are now in the business of finding promising little 2-3\nman startups and pumping them up into companies that cost $100\nmillion to acquire. They didn't mean to be in this business; it's\njust what their business has evolved into.Hence the fourth problem: the acquirers have begun to realize they\ncan buy wholesale. Why should they wait for VCs to make the startups\nthey want more expensive? Most of what the VCs add, acquirers don't\nwant anyway. The acquirers already have brand recognition and HR\ndepartments. What they really want is the software and the developers,\nand that's what the startup is in the early phase: concentrated\nsoftware and developers.Google, typically, seems to have been the first to figure this out.\n\"Bring us your startups early,\" said Google's speaker at the Startup School. They're quite\nexplicit about it: they like to acquire startups at just the point\nwhere they would do a Series A round. (The Series A round is the\nfirst round of real VC funding; it usually happens in the first\nyear.) It is a brilliant strategy, and one that other big technology\ncompanies will no doubt try to duplicate. Unless they want to have \nstill more of their lunch eaten by Google.Of course, Google has an advantage in buying startups: a lot of the\npeople there are rich, or expect to be when their options vest.\nOrdinary employees find it very hard to recommend an acquisition;\nit's just too annoying to see a bunch of twenty year olds get rich\nwhen you're still working for salary. Even if it's the right thing \nfor your company to do.The Solution(s)Bad as things look now, there is a way for VCs to save themselves.\nThey need to do two things, one of which won't surprise them, and \nanother that will seem an anathema.Let's start with the obvious one: lobby to get Sarbanes-Oxley \nloosened. This law was created to prevent future Enrons, not to\ndestroy the IPO market. Since the IPO market was practically dead\nwhen it passed, few saw what bad effects it would have. But now \nthat technology has recovered from the last bust, we can see clearly\nwhat a bottleneck Sarbanes-Oxley has become.Startups are fragile plants\u2014seedlings, in fact. 
These seedlings\nare worth protecting, because they grow into the trees of the\neconomy. Much of the economy's growth is their growth. I think\nmost politicians realize that. But they don't realize just how \nfragile startups are, and how easily they can become collateral\ndamage of laws meant to fix some other problem.Still more dangerously, when you destroy startups, they make very\nlittle noise. If you step on the toes of the coal industry, you'll\nhear about it. But if you inadvertently squash the startup industry,\nall that happens is that the founders of the next Google stay in \ngrad school instead of starting a company.My second suggestion will seem shocking to VCs: let founders cash \nout partially in the Series A round. At the moment, when VCs invest\nin a startup, all the stock they get is newly issued and all the \nmoney goes to the company. They could buy some stock directly from\nthe founders as well.Most VCs have an almost religious rule against doing this. They\ndon't want founders to get a penny till the company is sold or goes\npublic. VCs are obsessed with control, and they worry that they'll\nhave less leverage over the founders if the founders have any money.This is a dumb plan. In fact, letting the founders sell a little stock\nearly would generally be better for the company, because it would\ncause the founders' attitudes toward risk to be aligned with the\nVCs'. As things currently work, their attitudes toward risk tend\nto be diametrically opposed: the founders, who have nothing, would\nprefer a 100% chance of $1 million to a 20% chance of $10 million,\nwhile the VCs can afford to be "rational" and prefer the latter.Whatever they say, the reason founders are selling their companies\nearly instead of doing Series A rounds is that they get paid up\nfront. That first million is just worth so much more than the\nsubsequent ones. If founders could sell a little stock early,\nthey'd be happy to take VC money and bet the rest on a bigger\noutcome.So why not let the founders have that first million, or at least\nhalf million? The VCs would get the same number of shares for the \nmoney. So what if some of the money would go to the \nfounders instead of the company?Some VCs will say this is\nunthinkable\u2014that they want all their money to be put to work\ngrowing the company. But the fact is, the huge size of current VC\ninvestments is dictated by the structure\nof VC funds, not the needs of startups. Often as not these large \ninvestments go to work destroying the company rather than growing\nit.The angel investors who funded our startup let the founders sell\nsome stock directly to them, and it was a good deal for everyone. \nThe angels made a huge return on that investment, so they're happy.\nAnd for us founders it blunted the terrifying all-or-nothingness\nof a startup, which in its raw form is more a distraction than a\nmotivator.If VCs are frightened at the idea of letting founders partially\ncash out, let me tell them something still more frightening: you\nare now competing directly with Google.\nThanks to Trevor Blackwell, Sarah Harlin, Jessica\nLivingston, and Robert Morris for reading drafts of this."} {"title": "wisdom", "text": "February 2007A few days ago I finally figured out something I've wondered about\nfor 25 years: the relationship between wisdom and intelligence.\nAnyone can see they're not the same by the number of people who are\nsmart, but not very wise. And yet intelligence and wisdom do seem\nrelated. How?What is wisdom? 
I'd say it's knowing what to do in a lot of\nsituations. I'm not trying to make a deep point here about the\ntrue nature of wisdom, just to figure out how we use the word. A\nwise person is someone who usually knows the right thing to do.And yet isn't being smart also knowing what to do in certain\nsituations? For example, knowing what to do when the teacher tells\nyour elementary school class to add all the numbers from 1 to 100?\n[1]Some say wisdom and intelligence apply to different types of\nproblems\u2014wisdom to human problems and intelligence to abstract\nones. But that isn't true. Some wisdom has nothing to do with\npeople: for example, the wisdom of the engineer who knows certain\nstructures are less prone to failure than others. And certainly\nsmart people can find clever solutions to human problems as well\nas abstract ones. \n[2]Another popular explanation is that wisdom comes from experience\nwhile intelligence is innate. But people are not simply wise in\nproportion to how much experience they have. Other things must\ncontribute to wisdom besides experience, and some may be innate: a\nreflective disposition, for example.Neither of the conventional explanations of the difference between\nwisdom and intelligence stands up to scrutiny. So what is the\ndifference? If we look at how people use the words \"wise\" and\n\"smart,\" what they seem to mean is different shapes of performance.Curve\"Wise\" and \"smart\" are both ways of saying someone knows what to\ndo. The difference is that \"wise\" means one has a high average\noutcome across all situations, and \"smart\" means one does spectacularly\nwell in a few. That is, if you had a graph in which the x axis\nrepresented situations and the y axis the outcome, the graph of the\nwise person would be high overall, and the graph of the smart person\nwould have high peaks.The distinction is similar to the rule that one should judge talent\nat its best and character at its worst. Except you judge intelligence\nat its best, and wisdom by its average. That's how the two are\nrelated: they're the two different senses in which the same curve\ncan be high.So a wise person knows what to do in most situations, while a smart\nperson knows what to do in situations where few others could. We\nneed to add one more qualification: we should ignore cases where\nsomeone knows what to do because they have inside information. \n[3]\nBut aside from that, I don't think we can get much more specific\nwithout starting to be mistaken.Nor do we need to. Simple as it is, this explanation predicts, or\nat least accords with, both of the conventional stories about the\ndistinction between wisdom and intelligence. Human problems are\nthe most common type, so being good at solving those is key in\nachieving a high average outcome. And it seems natural that a\nhigh average outcome depends mostly on experience, but that dramatic\npeaks can only be achieved by people with certain rare, innate\nqualities; nearly anyone can learn to be a good swimmer, but to be\nan Olympic swimmer you need a certain body type.This explanation also suggests why wisdom is such an elusive concept:\nthere's no such thing. \"Wise\" means something\u2014that one is\non average good at making the right choice. But giving the name\n\"wisdom\" to the supposed quality that enables one to do that doesn't\nmean such a thing exists. To the extent \"wisdom\" means anything,\nit refers to a grab-bag of qualities as various as self-discipline,\nexperience, and empathy. 
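A worked version of the schoolroom example above, on the assumption (mine; the essay's note [1] is not visible here) that it alludes to the pairing trick attributed to the young Gauss: pair the numbers from the two ends, $1+100$, $2+99$, and so on, giving fifty pairs that each sum to $101$:

\[
1 + 2 + \cdots + 100 = \frac{100 \times 101}{2} = 5050.
\]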
\n[4]Likewise, though \"intelligent\" means something, we're asking for\ntrouble if we insist on looking for a single thing called \"intelligence.\"\nAnd whatever its components, they're not all innate. We use the\nword \"intelligent\" as an indication of ability: a smart person can\ngrasp things few others could. It does seem likely there's some\ninborn predisposition to intelligence (and wisdom too), but this\npredisposition is not itself intelligence.One reason we tend to think of intelligence as inborn is that people\ntrying to measure it have concentrated on the aspects of it that\nare most measurable. A quality that's inborn will obviously be\nmore convenient to work with than one that's influenced by experience,\nand thus might vary in the course of a study. The problem comes\nwhen we drag the word \"intelligence\" over onto what they're measuring.\nIf they're measuring something inborn, they can't be measuring\nintelligence. Three year olds aren't smart. When we describe one\nas smart, it's shorthand for \"smarter than other three year olds.\"SplitPerhaps it's a technicality to point out that a predisposition to\nintelligence is not the same as intelligence. But it's an important\ntechnicality, because it reminds us that we can become smarter,\njust as we can become wiser.The alarming thing is that we may have to choose between the two.If wisdom and intelligence are the average and peaks of the same\ncurve, then they converge as the number of points on the curve\ndecreases. If there's just one point, they're identical: the average\nand maximum are the same. But as the number of points increases,\nwisdom and intelligence diverge. And historically the number of\npoints on the curve seems to have been increasing: our ability is\ntested in an ever wider range of situations.In the time of Confucius and Socrates, people seem to have regarded\nwisdom, learning, and intelligence as more closely related than we\ndo. Distinguishing between \"wise\" and \"smart\" is a modern habit.\n[5]\nAnd the reason we do is that they've been diverging. As knowledge\ngets more specialized, there are more points on the curve, and the\ndistinction between the spikes and the average becomes sharper,\nlike a digital image rendered with more pixels.One consequence is that some old recipes may have become obsolete.\nAt the very least we have to go back and figure out if they were\nreally recipes for wisdom or intelligence. But the really striking\nchange, as intelligence and wisdom drift apart, is that we may have\nto decide which we prefer. We may not be able to optimize for both\nsimultaneously.Society seems to have voted for intelligence. We no longer admire\nthe sage\u2014not the way people did two thousand years ago. Now\nwe admire the genius. Because in fact the distinction we began\nwith has a rather brutal converse: just as you can be smart without\nbeing very wise, you can be wise without being very smart. That\ndoesn't sound especially admirable. That gets you James Bond, who\nknows what to do in a lot of situations, but has to rely on Q for\nthe ones involving math.Intelligence and wisdom are obviously not mutually exclusive. In\nfact, a high average may help support high peaks. But there are\nreasons to believe that at some point you have to choose between\nthem. One is the example of very smart people, who are so often\nunwise that in popular culture this now seems to be regarded as the\nrule rather than the exception. 
Perhaps the absent-minded professor\nis wise in his way, or wiser than he seems, but he's not wise in\nthe way Confucius or Socrates wanted people to be. \n[6]NewFor both Confucius and Socrates, wisdom, virtue, and happiness were\nnecessarily related. The wise man was someone who knew what the\nright choice was and always made it; to be the right choice, it had\nto be morally right; he was therefore always happy, knowing he'd\ndone the best he could. I can't think of many ancient philosophers\nwho would have disagreed with that, so far as it goes.\"The superior man is always happy; the small man sad,\" said Confucius.\n[7]Whereas a few years ago I read an interview with a mathematician\nwho said that most nights he went to bed discontented, feeling he\nhadn't made enough progress. \n[8]\nThe Chinese and Greek words we\ntranslate as \"happy\" didn't mean exactly what we do by it, but\nthere's enough overlap that this remark contradicts them.Is the mathematician a small man because he's discontented? No;\nhe's just doing a kind of work that wasn't very common in Confucius's\nday.Human knowledge seems to grow fractally. Time after time, something\nthat seemed a small and uninteresting area\u2014experimental error,\neven\u2014turns out, when examined up close, to have as much in\nit as all knowledge up to that point. Several of the fractal buds\nthat have exploded since ancient times involve inventing and\ndiscovering new things. Math, for example, used to be something a\nhandful of people did part-time. Now it's the career of thousands.\nAnd in work that involves making new things, some old rules don't\napply.Recently I've spent some time advising people, and there I find the\nancient rule still works: try to understand the situation as well\nas you can, give the best advice you can based on your experience,\nand then don't worry about it, knowing you did all you could. But\nI don't have anything like this serenity when I'm writing an essay.\nThen I'm worried. What if I run out of ideas? And when I'm writing,\nfour nights out of five I go to bed discontented, feeling I didn't\nget enough done.Advising people and writing are fundamentally different types of\nwork. When people come to you with a problem and you have to figure\nout the right thing to do, you don't (usually) have to invent\nanything. You just weigh the alternatives and try to judge which\nis the prudent choice. But prudence can't tell me what sentence\nto write next. The search space is too big.Someone like a judge or a military officer can in much of his work\nbe guided by duty, but duty is no guide in making things. Makers\ndepend on something more precarious: inspiration. And like most\npeople who lead a precarious existence, they tend to be worried,\nnot contented. In that respect they're more like the small man of\nConfucius's day, always one bad harvest (or ruler) away from\nstarvation. Except instead of being at the mercy of weather and\nofficials, they're at the mercy of their own imagination.LimitsTo me it was a relief just to realize it might be ok to be discontented.\nThe idea that a successful person should be happy has thousands of\nyears of momentum behind it. If I was any good, why didn't I have\nthe easy confidence winners are supposed to have? But that, I now\nbelieve, is like a runner asking \"If I'm such a good athlete, why\ndo I feel so tired?\" Good runners still get tired; they just get\ntired at higher speeds.People whose work is to invent or discover things are in the same\nposition as the runner. 
There's no way for them to do the best\nthey can, because there's no limit to what they could do. The\nclosest you can come is to compare yourself to other people. But\nthe better you do, the less this matters. An undergrad who gets\nsomething published feels like a star. But for someone at the top\nof the field, what's the test of doing well? Runners can at least\ncompare themselves to others doing exactly the same thing; if you\nwin an Olympic gold medal, you can be fairly content, even if you\nthink you could have run a bit faster. But what is a novelist to\ndo?Whereas if you're doing the kind of work in which problems are\npresented to you and you have to choose between several alternatives,\nthere's an upper bound on your performance: choosing the best every\ntime. In ancient societies, nearly all work seems to have been of\nthis type. The peasant had to decide whether a garment was worth\nmending, and the king whether or not to invade his neighbor, but\nneither was expected to invent anything. In principle they could\nhave; the king could have invented firearms, then invaded his\nneighbor. But in practice innovations were so rare that they weren't\nexpected of you, any more than goalkeepers are expected to score\ngoals. \n[9]\nIn practice, it seemed as if there was a correct decision\nin every situation, and if you made it you'd done your job perfectly,\njust as a goalkeeper who prevents the other team from scoring is\nconsidered to have played a perfect game.In this world, wisdom seemed paramount. \n[10]\nEven now, most people\ndo work in which problems are put before them and they have to\nchoose the best alternative. But as knowledge has grown more\nspecialized, there are more and more types of work in which people\nhave to make up new things, and in which performance is therefore\nunbounded. Intelligence has become increasingly important relative\nto wisdom because there is more room for spikes.RecipesAnother sign we may have to choose between intelligence and wisdom\nis how different their recipes are. Wisdom seems to come largely\nfrom curing childish qualities, and intelligence largely from\ncultivating them.Recipes for wisdom, particularly ancient ones, tend to have a\nremedial character. To achieve wisdom one must cut away all the\ndebris that fills one's head on emergence from childhood, leaving\nonly the important stuff. Both self-control and experience have\nthis effect: to eliminate the random biases that come from your own\nnature and from the circumstances of your upbringing respectively.\nThat's not all wisdom is, but it's a large part of it. Much of\nwhat's in the sage's head is also in the head of every twelve year\nold. The difference is that in the head of the twelve year old\nit's mixed together with a lot of random junk.The path to intelligence seems to be through working on hard problems.\nYou develop intelligence as you might develop muscles, through\nexercise. But there can't be too much compulsion here. No amount\nof discipline can replace genuine curiosity. So cultivating\nintelligence seems to be a matter of identifying some bias in one's\ncharacter\u2014some tendency to be interested in certain types of\nthings\u2014and nurturing it. Instead of obliterating your\nidiosyncrasies in an effort to make yourself a neutral vessel for\nthe truth, you select one and try to grow it from a seedling into\na tree.The wise are all much alike in their wisdom, but very smart people\ntend to be smart in distinctive ways.Most of our educational traditions aim at wisdom. 
So perhaps one\nreason schools work badly is that they're trying to make intelligence\nusing recipes for wisdom. Most recipes for wisdom have an element\nof subjection. At the very least, you're supposed to do what the\nteacher says. The more extreme recipes aim to break down your\nindividuality the way basic training does. But that's not the route\nto intelligence. Whereas wisdom comes through humility, it may\nactually help, in cultivating intelligence, to have a mistakenly\nhigh opinion of your abilities, because that encourages you to keep\nworking. Ideally till you realize how mistaken you were.(The reason it's hard to learn new skills late in life is not just\nthat one's brain is less malleable. Another probably even worse\nobstacle is that one has higher standards.)I realize we're on dangerous ground here. I'm not proposing the\nprimary goal of education should be to increase students' \"self-esteem.\"\nThat just breeds laziness. And in any case, it doesn't really fool\nthe kids, not the smart ones. They can tell at a young age that a\ncontest where everyone wins is a fraud.A teacher has to walk a narrow path: you want to encourage kids to\ncome up with things on their own, but you can't simply applaud\neverything they produce. You have to be a good audience: appreciative,\nbut not too easily impressed. And that's a lot of work. You have\nto have a good enough grasp of kids' capacities at different ages\nto know when to be surprised.That's the opposite of traditional recipes for education. Traditionally\nthe student is the audience, not the teacher; the student's job is\nnot to invent, but to absorb some prescribed body of material. (The\nuse of the term \"recitation\" for sections in some colleges is a\nfossil of this.) The problem with these old traditions is that\nthey're too much influenced by recipes for wisdom.DifferentI deliberately gave this essay a provocative title; of course it's\nworth being wise. But I think it's important to understand the\nrelationship between intelligence and wisdom, and particularly what\nseems to be the growing gap between them. That way we can avoid\napplying rules and standards to intelligence that are really meant\nfor wisdom. These two senses of \"knowing what to do\" are more\ndifferent than most people realize. The path to wisdom is through\ndiscipline, and the path to intelligence through carefully selected\nself-indulgence. Wisdom is universal, and intelligence idiosyncratic.\nAnd while wisdom yields calmness, intelligence much of the time\nleads to discontentment.That's particularly worth remembering. A physicist friend recently\ntold me half his department was on Prozac. Perhaps if we acknowledge\nthat some amount of frustration is inevitable in certain kinds\nof work, we can mitigate its effects. Perhaps we can box it up and\nput it away some of the time, instead of letting it flow together\nwith everyday sadness to produce what seems an alarmingly large\npool. At the very least, we can avoid being discontented about\nbeing discontented.If you feel exhausted, it's not necessarily because there's something\nwrong with you. Maybe you're just running fast.Notes[1]\nGauss was supposedly asked this when he was 10. 
Instead of\nlaboriously adding together the numbers like the other students,\nhe saw that they consisted of 50 pairs that each summed to 101 (100\n+ 1, 99 + 2, etc), and that he could just multiply 101 by 50 to get\nthe answer, 5050.[2]\nA variant is that intelligence is the ability to solve problems,\nand wisdom the judgement to know how to use those solutions. But\nwhile this is certainly an important relationship between wisdom\nand intelligence, it's not the distinction between them. Wisdom\nis useful in solving problems too, and intelligence can help in\ndeciding what to do with the solutions.[3]\nIn judging both intelligence and wisdom we have to factor out\nsome knowledge. People who know the combination of a safe will be\nbetter at opening it than people who don't, but no one would say\nthat was a test of intelligence or wisdom.But knowledge overlaps with wisdom and probably also intelligence.\nA knowledge of human nature is certainly part of wisdom. So where\ndo we draw the line?Perhaps the solution is to discount knowledge that at some point\nhas a sharp drop in utility. For example, understanding French\nwill help you in a large number of situations, but its value drops\nsharply as soon as no one else involved knows French. Whereas the\nvalue of understanding vanity would decline more gradually.The knowledge whose utility drops sharply is the kind that has\nlittle relation to other knowledge. This includes mere conventions,\nlike languages and safe combinations, and also what we'd call\n\"random\" facts, like movie stars' birthdays, or how to distinguish\n1956 from 1957 Studebakers.[4]\nPeople seeking some single thing called \"wisdom\" have been\nfooled by grammar. Wisdom is just knowing the right thing to do,\nand there are a hundred and one different qualities that help in\nthat. Some, like selflessness, might come from meditating in an\nempty room, and others, like a knowledge of human nature, might\ncome from going to drunken parties.Perhaps realizing this will help dispel the cloud of semi-sacred\nmystery that surrounds wisdom in so many people's eyes. The mystery\ncomes mostly from looking for something that doesn't exist. And\nthe reason there have historically been so many different schools\nof thought about how to achieve wisdom is that they've focused on\ndifferent components of it.When I use the word \"wisdom\" in this essay, I mean no more than\nwhatever collection of qualities helps people make the right choice\nin a wide variety of situations.[5]\nEven in English, our sense of the word \"intelligence\" is\nsurprisingly recent. Predecessors like \"understanding\" seem to\nhave had a broader meaning.[6]\nThere is of course some uncertainty about how closely the remarks\nattributed to Confucius and Socrates resemble their actual opinions.\nI'm using these names as we use the name \"Homer,\" to mean the\nhypothetical people who said the things attributed to them.[7]\nAnalects VII:36, Fung trans.Some translators use \"calm\" instead of \"happy.\" One source of\ndifficulty here is that present-day English speakers have a different\nidea of happiness from many older societies. Every language probably\nhas a word meaning \"how one feels when things are going well,\" but\ndifferent cultures react differently when things go well. We react\nlike children, with smiles and laughter. But in a more reserved\nsociety, or in one where life was tougher, the reaction might be a\nquiet contentment.[8]\nIt may have been Andrew Wiles, but I'm not sure. 
If anyone\nremembers such an interview, I'd appreciate hearing from you.[9]\nConfucius claimed proudly that he had never invented\nanything\u2014that he had simply passed on an accurate account of\nancient traditions. [Analects VII:1] It's hard for us now to\nappreciate how important a duty it must have been in preliterate\nsocieties to remember and pass on the group's accumulated knowledge.\nEven in Confucius's time it still seems to have been the first duty\nof the scholar.[10]\nThe bias toward wisdom in ancient philosophy may be exaggerated\nby the fact that, in both Greece and China, many of the first\nphilosophers (including Confucius and Plato) saw themselves as\nteachers of administrators, and so thought disproportionately about\nsuch matters. The few people who did invent things, like storytellers,\nmust have seemed an outlying data point that could be ignored.Thanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston,\nand Robert Morris for reading drafts of this."} {"title": "worked", "text": "February 2021Before college the two main things I worked on, outside of school,\nwere writing and programming. I didn't write essays. I wrote what\nbeginning writers were supposed to write then, and probably still\nare: short stories. My stories were awful. They had hardly any plot,\njust characters with strong feelings, which I imagined made them\ndeep.The first programs I tried writing were on the IBM 1401 that our\nschool district used for what was then called "data processing."\nThis was in 9th grade, so I was 13 or 14. The school district's\n1401 happened to be in the basement of our junior high school, and\nmy friend Rich Draves and I got permission to use it. It was like\na mini Bond villain's lair down there, with all these alien-looking\nmachines \u2014 CPU, disk drives, printer, card reader \u2014 sitting up\non a raised floor under bright fluorescent lights.The language we used was an early version of Fortran. You had to\ntype programs on punch cards, then stack them in the card reader\nand press a button to load the program into memory and run it. The\nresult would ordinarily be to print something on the spectacularly\nloud printer.I was puzzled by the 1401. I couldn't figure out what to do with\nit. And in retrospect there's not much I could have done with it.\nThe only form of input to programs was data stored on punched cards,\nand I didn't have any data stored on punched cards. The only other\noption was to do things that didn't rely on any input, like calculate\napproximations of pi, but I didn't know enough math to do anything\ninteresting of that type. So I'm not surprised I can't remember any\nprograms I wrote, because they can't have done much. My clearest\nmemory is of the moment I learned it was possible for programs not\nto terminate, when one of mine didn't. On a machine without\ntime-sharing, this was a social as well as a technical error, as\nthe data center manager's expression made clear.With microcomputers, everything changed. Now you could have a\ncomputer sitting right in front of you, on a desk, that could respond\nto your keystrokes as it was running instead of just churning through\na stack of punch cards and then stopping. \n[1]The first of my friends to get a microcomputer built it himself.\nIt was sold as a kit by Heathkit.
I remember vividly how impressed\nand envious I felt watching him sitting in front of it, typing\nprograms right into the computer.Computers were expensive in those days and it took me years of\nnagging before I convinced my father to buy one, a TRS-80, in about\n1980. The gold standard then was the Apple II, but a TRS-80 was\ngood enough. This was when I really started programming. I wrote\nsimple games, a program to predict how high my model rockets would\nfly, and a word processor that my father used to write at least one\nbook. There was only room in memory for about 2 pages of text, so\nhe'd write 2 pages at a time and then print them out, but it was a\nlot better than a typewriter.Though I liked programming, I didn't plan to study it in college.\nIn college I was going to study philosophy, which sounded much more\npowerful. It seemed, to my naive high school self, to be the study\nof the ultimate truths, compared to which the things studied in\nother fields would be mere domain knowledge. What I discovered when\nI got to college was that the other fields took up so much of the\nspace of ideas that there wasn't much left for these supposed\nultimate truths. All that seemed left for philosophy were edge cases\nthat people in other fields felt could safely be ignored.I couldn't have put this into words when I was 18. All I knew at\nthe time was that I kept taking philosophy courses and they kept\nbeing boring. So I decided to switch to AI.AI was in the air in the mid 1980s, but there were two things\nespecially that made me want to work on it: a novel by Heinlein\ncalled The Moon is a Harsh Mistress, which featured an intelligent\ncomputer called Mike, and a PBS documentary that showed Terry\nWinograd using SHRDLU. I haven't tried rereading The Moon is a Harsh\nMistress, so I don't know how well it has aged, but when I read it\nI was drawn entirely into its world. It seemed only a matter of\ntime before we'd have Mike, and when I saw Winograd using SHRDLU,\nit seemed like that time would be a few years at most. All you had\nto do was teach SHRDLU more words.There weren't any classes in AI at Cornell then, not even graduate\nclasses, so I started trying to teach myself. Which meant learning\nLisp, since in those days Lisp was regarded as the language of AI.\nThe commonly used programming languages then were pretty primitive,\nand programmers' ideas correspondingly so. The default language at\nCornell was a Pascal-like language called PL/I, and the situation\nwas similar elsewhere. Learning Lisp expanded my concept of a program\nso fast that it was years before I started to have a sense of where\nthe new limits were. This was more like it; this was what I had\nexpected college to do. It wasn't happening in a class, like it was\nsupposed to, but that was ok. For the next couple years I was on a\nroll. I knew what I was going to do.For my undergraduate thesis, I reverse-engineered SHRDLU. My God\ndid I love working on that program. It was a pleasing bit of code,\nbut what made it even more exciting was my belief \u2014 hard to imagine\nnow, but not unique in 1985 \u2014 that it was already climbing the\nlower slopes of intelligence.I had gotten into a program at Cornell that didn't make you choose\na major. You could take whatever classes you liked, and choose\nwhatever you liked to put on your degree. I of course chose "Artificial\nIntelligence." When I got the actual physical diploma, I was dismayed\nto find that the quotes had been included, which made them read as\nscare-quotes.
At the time this bothered me, but now it seems amusingly\naccurate, for reasons I was about to discover.I applied to 3 grad schools: MIT and Yale, which were renowned for\nAI at the time, and Harvard, which I'd visited because Rich Draves\nwent there, and was also home to Bill Woods, who'd invented the\ntype of parser I used in my SHRDLU clone. Only Harvard accepted me,\nso that was where I went.I don't remember the moment it happened, or if there even was a\nspecific moment, but during the first year of grad school I realized\nthat AI, as practiced at the time, was a hoax. By which I mean the\nsort of AI in which a program that's told "the dog is sitting on\nthe chair" translates this into some formal representation and adds\nit to the list of things it knows.What these programs really showed was that there's a subset of\nnatural language that's a formal language. But a very proper subset.\nIt was clear that there was an unbridgeable gap between what they\ncould do and actually understanding natural language. It was not,\nin fact, simply a matter of teaching SHRDLU more words. That whole\nway of doing AI, with explicit data structures representing concepts,\nwas not going to work. Its brokenness did, as so often happens,\ngenerate a lot of opportunities to write papers about various\nband-aids that could be applied to it, but it was never going to\nget us Mike.So I looked around to see what I could salvage from the wreckage\nof my plans, and there was Lisp. I knew from experience that Lisp\nwas interesting for its own sake and not just for its association\nwith AI, even though that was the main reason people cared about\nit at the time. So I decided to focus on Lisp. In fact, I decided\nto write a book about Lisp hacking. It's scary to think how little\nI knew about Lisp hacking when I started writing that book. But\nthere's nothing like writing a book about something to help you\nlearn it. The book, On Lisp, wasn't published till 1993, but I wrote\nmuch of it in grad school.Computer Science is an uneasy alliance between two halves, theory\nand systems. The theory people prove things, and the systems people\nbuild things. I wanted to build things. I had plenty of respect for\ntheory \u2014 indeed, a sneaking suspicion that it was the more admirable\nof the two halves \u2014 but building things seemed so much more exciting.The problem with systems work, though, was that it didn't last.\nAny program you wrote today, no matter how good, would be obsolete\nin a couple decades at best. People might mention your software in\nfootnotes, but no one would actually use it. And indeed, it would\nseem very feeble work. Only people with a sense of the history of\nthe field would even realize that, in its time, it had been good.There were some surplus Xerox Dandelions floating around the computer\nlab at one point. Anyone who wanted one to play around with could\nhave one. I was briefly tempted, but they were so slow by present\nstandards; what was the point? No one else wanted one either, so\noff they went. That was what happened to systems work.I wanted not just to build things, but to build things that would\nlast.In this dissatisfied state I went in 1988 to visit Rich Draves at\nCMU, where he was in grad school. One day I went to visit the\nCarnegie Institute, where I'd spent a lot of time as a kid. While\nlooking at a painting there I realized something that might seem\nobvious, but was a big surprise to me. There, right on the wall,\nwas something you could make that would last.
Paintings didn't\nbecome obsolete. Some of the best ones were hundreds of years old.And moreover this was something you could make a living doing. Not\nas easily as you could by writing software, of course, but I thought\nif you were really industrious and lived really cheaply, it had to\nbe possible to make enough to survive. And as an artist you could\nbe truly independent. You wouldn't have a boss, or even need to get\nresearch funding.I had always liked looking at paintings. Could I make them? I had\nno idea. I'd never imagined it was even possible. I knew intellectually\nthat people made art \u2014 that it didn't just appear spontaneously\n\u2014 but it was as if the people who made it were a different species.\nThey either lived long ago or were mysterious geniuses doing strange\nthings in profiles in Life magazine. The idea of actually being\nable to make art, to put that verb before that noun, seemed almost\nmiraculous.That fall I started taking art classes at Harvard. Grad students\ncould take classes in any department, and my advisor, Tom Cheatham,\nwas very easy going. If he even knew about the strange classes I\nwas taking, he never said anything.So now I was in a PhD program in computer science, yet planning to\nbe an artist, yet also genuinely in love with Lisp hacking and\nworking away at On Lisp. In other words, like many a grad student,\nI was working energetically on multiple projects that were not my\nthesis.I didn't see a way out of this situation. I didn't want to drop out\nof grad school, but how else was I going to get out? I remember\nwhen my friend Robert Morris got kicked out of Cornell for writing\nthe internet worm of 1988, I was envious that he'd found such a\nspectacular way to get out of grad school.Then one day in April 1990 a crack appeared in the wall. I ran into\nprofessor Cheatham and he asked if I was far enough along to graduate\nthat June. I didn't have a word of my dissertation written, but in\nwhat must have been the quickest bit of thinking in my life, I\ndecided to take a shot at writing one in the 5 weeks or so that\nremained before the deadline, reusing parts of On Lisp where I\ncould, and I was able to respond, with no perceptible delay, "Yes,\nI think so. I'll give you something to read in a few days."I picked applications of continuations as the topic. In retrospect\nI should have written about macros and embedded languages. There's\na whole world there that's barely been explored. But all I wanted\nwas to get out of grad school, and my rapidly written dissertation\nsufficed, just barely.Meanwhile I was applying to art schools. I applied to two: RISD in\nthe US, and the Accademia di Belli Arti in Florence, which, because\nit was the oldest art school, I imagined would be good. RISD accepted\nme, and I never heard back from the Accademia, so off to Providence\nI went.I'd applied for the BFA program at RISD, which meant in effect that\nI had to go to college again. This was not as strange as it sounds,\nbecause I was only 25, and art schools are full of people of different\nages. RISD counted me as a transfer sophomore and said I had to do\nthe foundation that summer. The foundation means the classes that\neveryone has to take in fundamental subjects like drawing, color,\nand design.Toward the end of the summer I got a big surprise: a letter from\nthe Accademia, which had been delayed because they'd sent it to\nCambridge England instead of Cambridge Massachusetts, inviting me\nto take the entrance exam in Florence that fall.
This was now only\nweeks away. My nice landlady let me leave my stuff in her attic. I\nhad some money saved from consulting work I'd done in grad school;\nthere was probably enough to last a year if I lived cheaply. Now\nall I had to do was learn Italian.Only stranieri (foreigners) had to take this entrance exam. In\nretrospect it may well have been a way of excluding them, because\nthere were so many stranieri attracted by the idea of studying\nart in Florence that the Italian students would otherwise have been\noutnumbered. I was in decent shape at painting and drawing from the\nRISD foundation that summer, but I still don't know how I managed\nto pass the written exam. I remember that I answered the essay\nquestion by writing about Cezanne, and that I cranked up the\nintellectual level as high as I could to make the most of my limited\nvocabulary. \n[2]I'm only up to age 25 and already there are such conspicuous patterns.\nHere I was, yet again about to attend some august institution in\nthe hopes of learning about some prestigious subject, and yet again\nabout to be disappointed. The students and faculty in the painting\ndepartment at the Accademia were the nicest people you could imagine,\nbut they had long since arrived at an arrangement whereby the\nstudents wouldn't require the faculty to teach anything, and in\nreturn the faculty wouldn't require the students to learn anything.\nAnd at the same time all involved would adhere outwardly to the\nconventions of a 19th century atelier. We actually had one of those\nlittle stoves, fed with kindling, that you see in 19th century\nstudio paintings, and a nude model sitting as close to it as possible\nwithout getting burned. Except hardly anyone else painted her besides\nme. The rest of the students spent their time chatting or occasionally\ntrying to imitate things they'd seen in American art magazines.Our model turned out to live just down the street from me. She made\na living from a combination of modelling and making fakes for a\nlocal antique dealer. She'd copy an obscure old painting out of a\nbook, and then he'd take the copy and maltreat it to make it look\nold. \n[3]While I was a student at the Accademia I started painting still\nlives in my bedroom at night. These paintings were tiny, because\nthe room was, and because I painted them on leftover scraps of\ncanvas, which was all I could afford at the time. Painting still\nlives is different from painting people, because the subject, as\nits name suggests, can't move. People can't sit for more than about\n15 minutes at a time, and when they do they don't sit very still.\nSo the traditional m.o. for painting people is to know how to paint\na generic person, which you then modify to match the specific person\nyou're painting. Whereas a still life you can, if you want, copy\npixel by pixel from what you're seeing. You don't want to stop\nthere, of course, or you get merely photographic accuracy, and what\nmakes a still life interesting is that it's been through a head.\nYou want to emphasize the visual cues that tell you, for example,\nthat the reason the color changes suddenly at a certain point is\nthat it's the edge of an object. By subtly emphasizing such things\nyou can make paintings that are more realistic than photographs not\njust in some metaphorical sense, but in the strict information-theoretic\nsense. \n[4]I liked painting still lives because I was curious about what I was\nseeing. In everyday life, we aren't consciously aware of much we're\nseeing. 
Most visual perception is handled by low-level processes\nthat merely tell your brain \"that's a water droplet\" without telling\nyou details like where the lightest and darkest points are, or\n\"that's a bush\" without telling you the shape and position of every\nleaf. This is a feature of brains, not a bug. In everyday life it\nwould be distracting to notice every leaf on every bush. But when\nyou have to paint something, you have to look more closely, and\nwhen you do there's a lot to see. You can still be noticing new\nthings after days of trying to paint something people usually take\nfor granted, just as you can after\ndays of trying to write an essay about something people usually\ntake for granted.This is not the only way to paint. I'm not 100% sure it's even a\ngood way to paint. But it seemed a good enough bet to be worth\ntrying.Our teacher, professor Ulivi, was a nice guy. He could see I worked\nhard, and gave me a good grade, which he wrote down in a sort of\npassport each student had. But the Accademia wasn't teaching me\nanything except Italian, and my money was running out, so at the\nend of the first year I went back to the US.I wanted to go back to RISD, but I was now broke and RISD was very\nexpensive, so I decided to get a job for a year and then return to\nRISD the next fall. I got one at a company called Interleaf, which\nmade software for creating documents. You mean like Microsoft Word?\nExactly. That was how I learned that low end software tends to eat\nhigh end software. But Interleaf still had a few years to live yet.\n[5]Interleaf had done something pretty bold. Inspired by Emacs, they'd\nadded a scripting language, and even made the scripting language a\ndialect of Lisp. Now they wanted a Lisp hacker to write things in\nit. This was the closest thing I've had to a normal job, and I\nhereby apologize to my boss and coworkers, because I was a bad\nemployee. Their Lisp was the thinnest icing on a giant C cake, and\nsince I didn't know C and didn't want to learn it, I never understood\nmost of the software. Plus I was terribly irresponsible. This was\nback when a programming job meant showing up every day during certain\nworking hours. That seemed unnatural to me, and on this point the\nrest of the world is coming around to my way of thinking, but at\nthe time it caused a lot of friction. Toward the end of the year I\nspent much of my time surreptitiously working on On Lisp, which I\nhad by this time gotten a contract to publish.The good part was that I got paid huge amounts of money, especially\nby art student standards. In Florence, after paying my part of the\nrent, my budget for everything else had been $7 a day. Now I was\ngetting paid more than 4 times that every hour, even when I was\njust sitting in a meeting. By living cheaply I not only managed to\nsave enough to go back to RISD, but also paid off my college loans.I learned some useful things at Interleaf, though they were mostly\nabout what not to do. 
I learned that it's better for technology\ncompanies to be run by product people than sales people (though\nsales is a real skill and people who are good at it are really good\nat it), that it leads to bugs when code is edited by too many people,\nthat cheap office space is no bargain if it's depressing, that\nplanned meetings are inferior to corridor conversations, that big,\nbureaucratic customers are a dangerous source of money, and that\nthere's not much overlap between conventional office hours and the\noptimal time for hacking, or conventional offices and the optimal\nplace for it.But the most important thing I learned, and which I used in both\nViaweb and Y Combinator, is that the low end eats the high end:\nthat it's good to be the \"entry level\" option, even though that\nwill be less prestigious, because if you're not, someone else will\nbe, and will squash you against the ceiling. Which in turn means\nthat prestige is a danger sign.When I left to go back to RISD the next fall, I arranged to do\nfreelance work for the group that did projects for customers, and\nthis was how I survived for the next several years. When I came\nback to visit for a project later on, someone told me about a new\nthing called HTML, which was, as he described it, a derivative of\nSGML. Markup language enthusiasts were an occupational hazard at\nInterleaf and I ignored him, but this HTML thing later became a big\npart of my life.In the fall of 1992 I moved back to Providence to continue at RISD.\nThe foundation had merely been intro stuff, and the Accademia had\nbeen a (very civilized) joke. Now I was going to see what real art\nschool was like. But alas it was more like the Accademia than not.\nBetter organized, certainly, and a lot more expensive, but it was\nnow becoming clear that art school did not bear the same relationship\nto art that medical school bore to medicine. At least not the\npainting department. The textile department, which my next door\nneighbor belonged to, seemed to be pretty rigorous. No doubt\nillustration and architecture were too. But painting was post-rigorous.\nPainting students were supposed to express themselves, which to the\nmore worldly ones meant to try to cook up some sort of distinctive\nsignature style.A signature style is the visual equivalent of what in show business\nis known as a \"schtick\": something that immediately identifies the\nwork as yours and no one else's. For example, when you see a painting\nthat looks like a certain kind of cartoon, you know it's by Roy\nLichtenstein. So if you see a big painting of this type hanging in\nthe apartment of a hedge fund manager, you know he paid millions\nof dollars for it. That's not always why artists have a signature\nstyle, but it's usually why buyers pay a lot for such work.\n[6]There were plenty of earnest students too: kids who \"could draw\"\nin high school, and now had come to what was supposed to be the\nbest art school in the country, to learn to draw even better. They\ntended to be confused and demoralized by what they found at RISD,\nbut they kept going, because painting was what they did. I was not\none of the kids who could draw in high school, but at RISD I was\ndefinitely closer to their tribe than the tribe of signature style\nseekers.I learned a lot in the color class I took at RISD, but otherwise I\nwas basically teaching myself to paint, and I could do that for\nfree. So in 1993 I dropped out. I hung around Providence for a bit,\nand then my college friend Nancy Parmet did me a big favor. 
A\nrent-controlled apartment in a building her mother owned in New\nYork was becoming vacant. Did I want it? It wasn't much more than\nmy current place, and New York was supposed to be where the artists\nwere. So yes, I wanted it!\n[7]Asterix comics begin by zooming in on a tiny corner of Roman Gaul\nthat turns out not to be controlled by the Romans. You can do\nsomething similar on a map of New York City: if you zoom in on the\nUpper East Side, there's a tiny corner that's not rich, or at least\nwasn't in 1993. It's called Yorkville, and that was my new home.\nNow I was a New York artist \u2014 in the strictly technical sense of\nmaking paintings and living in New York.I was nervous about money, because I could sense that Interleaf was\non the way down. Freelance Lisp hacking work was very rare, and I\ndidn't want to have to program in another language, which in those\ndays would have meant C++ if I was lucky. So with my unerring nose\nfor financial opportunity, I decided to write another book on Lisp.\nThis would be a popular book, the sort of book that could be used\nas a textbook. I imagined myself living frugally off the royalties\nand spending all my time painting. (The painting on the cover of\nthis book, ANSI Common Lisp, is one that I painted around this\ntime.)The best thing about New York for me was the presence of Idelle and\nJulian Weber. Idelle Weber was a painter, one of the early\nphotorealists, and I'd taken her painting class at Harvard. I've\nnever known a teacher more beloved by her students. Large numbers\nof former students kept in touch with her, including me. After I\nmoved to New York I became her de facto studio assistant.She liked to paint on big, square canvases, 4 to 5 feet on a side.\nOne day in late 1994 as I was stretching one of these monsters there\nwas something on the radio about a famous fund manager. He wasn't\nthat much older than me, and was super rich. The thought suddenly\noccurred to me: why don't I become rich? Then I'll be able to work\non whatever I want.Meanwhile I'd been hearing more and more about this new thing called\nthe World Wide Web. Robert Morris showed it to me when I visited\nhim in Cambridge, where he was now in grad school at Harvard. It\nseemed to me that the web would be a big deal. I'd seen what graphical\nuser interfaces had done for the popularity of microcomputers. It\nseemed like the web would do the same for the internet.If I wanted to get rich, here was the next train leaving the station.\nI was right about that part. What I got wrong was the idea. I decided\nwe should start a company to put art galleries online. I can't\nhonestly say, after reading so many Y Combinator applications, that\nthis was the worst startup idea ever, but it was up there. Art\ngalleries didn't want to be online, and still don't, not the fancy\nones. That's not how they sell. I wrote some software to generate\nweb sites for galleries, and Robert wrote some to resize images and\nset up an http server to serve the pages. Then we tried to sign up\ngalleries. To call this a difficult sale would be an understatement.\nIt was difficult to give away. A few galleries let us make sites\nfor them for free, but none paid us.Then some online stores started to appear, and I realized that\nexcept for the order buttons they were identical to the sites we'd\nbeen generating for galleries.
This impressive-sounding thing called\nan "internet storefront" was something we already knew how to build.So in the summer of 1995, after I submitted the camera-ready copy\nof ANSI Common Lisp to the publishers, we started trying to write\nsoftware to build online stores. At first this was going to be\nnormal desktop software, which in those days meant Windows software.\nThat was an alarming prospect, because neither of us knew how to\nwrite Windows software or wanted to learn. We lived in the Unix\nworld. But we decided we'd at least try writing a prototype store\nbuilder on Unix. Robert wrote a shopping cart, and I wrote a new\nsite generator for stores \u2014 in Lisp, of course.We were working out of Robert's apartment in Cambridge. His roommate\nwas away for big chunks of time, during which I got to sleep in his\nroom. For some reason there was no bed frame or sheets, just a\nmattress on the floor. One morning as I was lying on this mattress\nI had an idea that made me sit up like a capital L. What if we ran\nthe software on the server, and let users control it by clicking\non links? Then we'd never have to write anything to run on users'\ncomputers. We could generate the sites on the same server we'd serve\nthem from. Users wouldn't need anything more than a browser.This kind of software, known as a web app, is common now, but at\nthe time it wasn't clear that it was even possible. To find out,\nwe decided to try making a version of our store builder that you\ncould control through the browser. A couple days later, on August\n12, we had one that worked. The UI was horrible, but it proved you\ncould build a whole store through the browser, without any client\nsoftware or typing anything into the command line on the server.Now we felt like we were really onto something. I had visions of a\nwhole new generation of software working this way. You wouldn't\nneed versions, or ports, or any of that crap. At Interleaf there\nhad been a whole group called Release Engineering that seemed to\nbe at least as big as the group that actually wrote the software.\nNow you could just update the software right on the server.We started a new company we called Viaweb, after the fact that our\nsoftware worked via the web, and we got $10,000 in seed funding\nfrom Idelle's husband Julian. In return for that and doing the\ninitial legal work and giving us business advice, we gave him 10%\nof the company. Ten years later this deal became the model for Y\nCombinator's. We knew founders needed something like this, because\nwe'd needed it ourselves.At this stage I had a negative net worth, because the thousand\ndollars or so I had in the bank was more than counterbalanced by\nwhat I owed the government in taxes. (Had I diligently set aside\nthe proper proportion of the money I'd made consulting for Interleaf?\nNo, I had not.) So although Robert had his graduate student stipend,\nI needed that seed funding to live on.We originally hoped to launch in September, but we got more ambitious\nabout the software as we worked on it. Eventually we managed to\nbuild a WYSIWYG site builder, in the sense that as you were creating\npages, they looked exactly like the static ones that would be\ngenerated later, except that instead of leading to static pages,\nthe links all referred to closures stored in a hash table on the\nserver.
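That closure trick deserves a quick sketch. What follows is a minimal illustration in Common Lisp, not Viaweb's actual code; the names, the URL scheme, and the absence of any locking or expiry are all simplifying assumptions:

    ;; A hash table on the server maps tokens to closures. Generating
    ;; a link stores a closure; clicking the link runs it.
    (defvar *handlers* (make-hash-table :test #'equal))
    (defvar *handler-count* 0)

    (defun link-to (closure)
      "Store CLOSURE and return a URL that will invoke it when clicked."
      (let ((token (format nil "h~d" (incf *handler-count*))))
        (setf (gethash token *handlers*) closure)
        (format nil "/x?k=~a" token)))

    (defun handle-click (token)
      "Look up the closure a clicked link refers to and call it."
      (let ((handler (gethash token *handlers*)))
        (if handler (funcall handler) "Link expired.")))

The point is that every link on a generated page can capture arbitrary server-side state, as in (link-to (lambda () (edit-page store page))) for some hypothetical edit-page, so the site builder could behave like a running program rather than a set of static pages.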
It helped to have studied art, because the main goal of an online\nstore builder is to make users look legit, and the key to looking\nlegit is high production values.
If you get page layouts and fonts\nand colors right, you can make a guy running a store out of his\nbedroom look more legit than a big company.(If you're curious why my site looks so old-fashioned, it's because\nit's still made with this software. It may look clunky today, but\nin 1996 it was the last word in slick.)In September, Robert rebelled. "We've been working on this for a\nmonth," he said, "and it's still not done." This is funny in\nretrospect, because he would still be working on it almost 3 years\nlater. But I decided it might be prudent to recruit more programmers,\nand I asked Robert who else in grad school with him was really good.\nHe recommended Trevor Blackwell, which surprised me at first, because\nat that point I knew Trevor mainly for his plan to reduce everything\nin his life to a stack of notecards, which he carried around with\nhim. But Rtm was right, as usual. Trevor turned out to be a\nfrighteningly effective hacker.It was a lot of fun working with Robert and Trevor. They're the two\nmost independent-minded people \nI know, and in completely different\nways. If you could see inside Rtm's brain it would look like a\ncolonial New England church, and if you could see inside Trevor's\nit would look like the worst excesses of Austrian Rococo.We opened for business, with 6 stores, in January 1996. It was just\nas well we waited a few months, because although we worried we were\nlate, we were actually almost fatally early. There was a lot of\ntalk in the press then about ecommerce, but not many people actually\nwanted online stores.\n[8]There were three main parts to the software: the editor, which\npeople used to build sites and which I wrote, the shopping cart,\nwhich Robert wrote, and the manager, which kept track of orders and\nstatistics, and which Trevor wrote. In its time, the editor was one\nof the best general-purpose site builders. I kept the code tight\nand didn't have to integrate with any other software except Robert's\nand Trevor's, so it was quite fun to work on. If all I'd had to do\nwas work on this software, the next 3 years would have been the\neasiest of my life. Unfortunately I had to do a lot more, all of\nit stuff I was worse at than programming, and the next 3 years were\ninstead the most stressful.There were a lot of startups making ecommerce software in the second\nhalf of the 90s. We were determined to be the Microsoft Word, not\nthe Interleaf. Which meant being easy to use and inexpensive. It\nwas lucky for us that we were poor, because that caused us to make\nViaweb even more inexpensive than we realized. We charged $100 a\nmonth for a small store and $300 a month for a big one. This low\nprice was a big attraction, and a constant thorn in the sides of\ncompetitors, but it wasn't because of some clever insight that we\nset the price low. We had no idea what businesses paid for things.\n$300 a month seemed like a lot of money to us.We did a lot of things right by accident like that. For example,\nwe did what's now called "doing things that \ndon't scale," although\nat the time we would have described it as "being so lame that we're\ndriven to the most desperate measures to get users." The most common\nof which was building stores for them. This seemed particularly\nhumiliating, since the whole raison d'etre of our software was that\npeople could use it to make their own stores. But anything to get\nusers.We learned a lot more about retail than we wanted to know.
For\nexample, that if you could only have a small image of a man's shirt\n(and all images were small then by present standards), it was better\nto have a closeup of the collar than a picture of the whole shirt.\nThe reason I remember learning this was that it meant I had to\nrescan about 30 images of men's shirts. My first set of scans were\nso beautiful too.Though this felt wrong, it was exactly the right thing to be doing.\nBuilding stores for users taught us about retail, and about how it\nfelt to use our software. I was initially both mystified and repelled\nby "business" and thought we needed a "business person" to be in\ncharge of it, but once we started to get users, I was converted,\nin much the same way I was converted to \nfatherhood once I had kids.\nWhatever users wanted, I was all theirs. Maybe one day we'd have\nso many users that I couldn't scan their images for them, but in\nthe meantime there was nothing more important to do.Another thing I didn't get at the time is that \ngrowth rate is the\nultimate test of a startup. Our growth rate was fine. We had about\n70 stores at the end of 1996 and about 500 at the end of 1997. I\nmistakenly thought the thing that mattered was the absolute number\nof users. And that is the thing that matters in the sense that\nthat's how much money you're making, and if you're not making enough,\nyou might go out of business. But in the long term the growth rate\ntakes care of the absolute number. If we'd been a startup I was\nadvising at Y Combinator, I would have said: Stop being so stressed\nout, because you're doing fine. You're growing 7x a year. Just don't\nhire too many more people and you'll soon be profitable, and then\nyou'll control your own destiny.
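The 7x figure is just the ratio of those two store counts, and compounding makes it vivid. A rough check, using only the numbers given above:

    ;; 70 stores at the end of 1996, 500 at the end of 1997:
    (/ 500.0 70)              ; => 7.14...  roughly 7x in a year
    (expt (/ 500.0 70) 1/12)  ; => 1.178... about 18% growth per month

Anything compounding at that rate multiplies roughly 50x in two years, which is why the growth rate, not the absolute number of users, is the thing to watch.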
Alas I hired lots more people, partly because our investors wanted\nme to, and partly because that's what startups did during the\nInternet Bubble. A company with just a handful of employees would\nhave seemed amateurish. So we didn't reach breakeven until about\nwhen Yahoo bought us in the summer of 1998. Which in turn meant we\nwere at the mercy of investors for the entire life of the company.\nAnd since both we and our investors were noobs at startups, the\nresult was a mess even by startup standards.It was a huge relief when Yahoo bought us. In principle our Viaweb\nstock was valuable. It was a share in a business that was profitable\nand growing rapidly. But it didn't feel very valuable to me; I had\nno idea how to value a business, but I was all too keenly aware of\nthe near-death experiences we seemed to have every few months. Nor\nhad I changed my grad student lifestyle significantly since we\nstarted. So when Yahoo bought us it felt like going from rags to\nriches. Since we were going to California, I bought a car, a yellow\n1998 VW GTI. I remember thinking that its leather seats alone were\nby far the most luxurious thing I owned.The next year, from the summer of 1998 to the summer of 1999, must\nhave been the least productive of my life. I didn't realize it at\nthe time, but I was worn out from the effort and stress of running\nViaweb. For a while after I got to California I tried to continue\nmy usual m.o. of programming till 3 in the morning, but fatigue\ncombined with Yahoo's prematurely aged\nculture and grim cube farm\nin Santa Clara gradually dragged me down. After a few months it\nfelt disconcertingly like working at Interleaf.Yahoo had given us a lot of options when they bought us.
At the\ntime I thought Yahoo was so overvalued that they'd never be worth\nanything, but to my astonishment the stock went up 5x in the next\nyear. I hung on till the first chunk of options vested, then in the\nsummer of 1999 I left. It had been so long since I'd painted anything\nthat I'd half forgotten why I was doing this. My brain had been\nentirely full of software and men's shirts for 4 years. But I had\ndone this to get rich so I could paint, I reminded myself, and now\nI was rich, so I should go paint.When I said I was leaving, my boss at Yahoo had a long conversation\nwith me about my plans. I told him all about the kinds of pictures\nI wanted to paint. At the time I was touched that he took such an\ninterest in me. Now I realize it was because he thought I was lying.\nMy options at that point were worth about $2 million a month. If I\nwas leaving that kind of money on the table, it could only be to\ngo and start some new startup, and if I did, I might take people\nwith me. This was the height of the Internet Bubble, and Yahoo was\nground zero of it. My boss was at that moment a billionaire. Leaving\nthen to start a new startup must have seemed to him an insanely,\nand yet also plausibly, ambitious plan.But I really was quitting to paint, and I started immediately.\nThere was no time to lose. I'd already burned 4 years getting rich.\nNow when I talk to founders who are leaving after selling their\ncompanies, my advice is always the same: take a vacation. That's\nwhat I should have done, just gone off somewhere and done nothing\nfor a month or two, but the idea never occurred to me.So I tried to paint, but I just didn't seem to have any energy or\nambition. Part of the problem was that I didn't know many people\nin California. I'd compounded this problem by buying a house up in\nthe Santa Cruz Mountains, with a beautiful view but miles from\nanywhere. I stuck it out for a few more months, then in desperation\nI went back to New York, where unless you understand about rent\ncontrol you'll be surprised to hear I still had my apartment, sealed\nup like a tomb of my old life. Idelle was in New York at least, and\nthere were other people trying to paint there, even though I didn't\nknow any of them.When I got back to New York I resumed my old life, except now I was\nrich. It was as weird as it sounds. I resumed all my old patterns,\nexcept now there were doors where there hadn't been. Now when I was\ntired of walking, all I had to do was raise my hand, and (unless\nit was raining) a taxi would stop to pick me up. Now when I walked\npast charming little restaurants I could go in and order lunch. It\nwas exciting for a while. Painting started to go better. I experimented\nwith a new kind of still life where I'd paint one painting in the\nold way, then photograph it and print it, blown up, on canvas, and\nthen use that as the underpainting for a second still life, painted\nfrom the same objects (which hopefully hadn't rotted yet).Meanwhile I looked for an apartment to buy. Now I could actually\nchoose what neighborhood to live in. Where, I asked myself and\nvarious real estate agents, is the Cambridge of New York? Aided by\noccasional visits to actual Cambridge, I gradually realized there\nwasn't one. Huh.Around this time, in the spring of 2000, I had an idea. It was clear\nfrom our experience with Viaweb that web apps were the future. Why\nnot build a web app for making web apps?
Why not let people edit\ncode on our server through the browser, and then host the resulting\napplications for them?\n[9]\nYou could run all sorts of services\non the servers that these applications could use just by making an\nAPI call: making and receiving phone calls, manipulating images,\ntaking credit card payments, etc.I got so excited about this idea that I couldn't think about anything\nelse. It seemed obvious that this was the future. I didn't particularly\nwant to start another company, but it was clear that this idea would\nhave to be embodied as one, so I decided to move to Cambridge and\nstart it. I hoped to lure Robert into working on it with me, but\nthere I ran into a hitch. Robert was now a postdoc at MIT, and\nthough he'd made a lot of money the last time I'd lured him into\nworking on one of my schemes, it had also been a huge time sink.\nSo while he agreed that it sounded like a plausible idea, he firmly\nrefused to work on it.Hmph. Well, I'd do it myself then. I recruited Dan Giffin, who had\nworked for Viaweb, and two undergrads who wanted summer jobs, and\nwe got to work trying to build what it's now clear is about twenty\ncompanies and several open source projects worth of software. The\nlanguage for defining applications would of course be a dialect of\nLisp. But I wasn't so naive as to assume I could spring an overt\nLisp on a general audience; we'd hide the parentheses, like Dylan\ndid.By then there was a name for the kind of company Viaweb was, an\n\"application service provider,\" or ASP. This name didn't last long\nbefore it was replaced by \"software as a service,\" but it was current\nfor long enough that I named this new company after it: it was going\nto be called Aspra.I started working on the application builder, Dan worked on network\ninfrastructure, and the two undergrads worked on the first two\nservices (images and phone calls). But about halfway through the\nsummer I realized I really didn't want to run a company \u2014 especially\nnot a big one, which it was looking like this would have to be. I'd\nonly started Viaweb because I needed the money. Now that I didn't\nneed money anymore, why was I doing this? If this vision had to be\nrealized as a company, then screw the vision. I'd build a subset\nthat could be done as an open source project.Much to my surprise, the time I spent working on this stuff was not\nwasted after all. After we started Y Combinator, I would often\nencounter startups working on parts of this new architecture, and\nit was very useful to have spent so much time thinking about it and\neven trying to write some of it.The subset I would build as an open source project was the new Lisp,\nwhose parentheses I now wouldn't even have to hide. A lot of Lisp\nhackers dream of building a new Lisp, partly because one of the\ndistinctive features of the language is that it has dialects, and\npartly, I think, because we have in our minds a Platonic form of\nLisp that all existing dialects fall short of. I certainly did. So\nat the end of the summer Dan and I switched to working on this new\ndialect of Lisp, which I called Arc, in a house I bought in Cambridge.The following spring, lightning struck. I was invited to give a\ntalk at a Lisp conference, so I gave one about how we'd used Lisp\nat Viaweb. Afterward I put a postscript file of this talk online,\non paulgraham.com, which I'd created years before using Viaweb but\nhad never used for anything. In one day it got 30,000 page views.\nWhat on earth had happened?
The referring urls showed that someone\nhad posted it on Slashdot.\n[10]Wow, I thought, there's an audience. If I write something and put\nit on the web, anyone can read it. That may seem obvious now, but\nit was surprising then. In the print era there was a narrow channel\nto readers, guarded by fierce monsters known as editors. The only\nway to get an audience for anything you wrote was to get it published\nas a book, or in a newspaper or magazine. Now anyone could publish\nanything.This had been possible in principle since 1993, but not many people\nhad realized it yet. I had been intimately involved with building\nthe infrastructure of the web for most of that time, and a writer\nas well, and it had taken me 8 years to realize it. Even then it\ntook me several years to understand the implications. It meant there\nwould be a whole new generation of \nessays.\n[11]In the print era, the channel for publishing essays had been\nvanishingly small. Except for a few officially anointed thinkers\nwho went to the right parties in New York, the only people allowed\nto publish essays were specialists writing about their specialties.\nThere were so many essays that had never been written, because there\nhad been no way to publish them. Now they could be, and I was going\nto write them.\n[12]I've worked on several different things, but to the extent there\nwas a turning point where I figured out what to work on, it was\nwhen I started publishing essays online. From then on I knew that\nwhatever else I did, I'd always write essays too.I knew that online essays would be a \nmarginal medium at first.\nSocially they'd seem more like rants posted by nutjobs on their\nGeoCities sites than the genteel and beautifully typeset compositions\npublished in The New Yorker. But by this point I knew enough to\nfind that encouraging instead of discouraging.One of the most conspicuous patterns I've noticed in my life is how\nwell it has worked, for me at least, to work on things that weren't\nprestigious. Still life has always been the least prestigious form\nof painting. Viaweb and Y Combinator both seemed lame when we started\nthem. I still get the glassy eye from strangers when they ask what\nI'm writing, and I explain that it's an essay I'm going to publish\non my web site. Even Lisp, though prestigious intellectually in\nsomething like the way Latin is, also seems about as hip.It's not that unprestigious types of work are good per se. But when\nyou find yourself drawn to some kind of work despite its current\nlack of prestige, it's a sign both that there's something real to\nbe discovered there, and that you have the right kind of motives.\nImpure motives are a big danger for the ambitious. If anything is\ngoing to lead you astray, it will be the desire to impress people.\nSo while working on things that aren't prestigious doesn't guarantee\nyou're on the right track, it at least guarantees you're not on the\nmost common type of wrong one.Over the next several years I wrote lots of essays about all kinds\nof different topics. O'Reilly reprinted a collection of them as a\nbook, called Hackers & Painters after one of the essays in it. I\nalso worked on spam filters, and did some more painting. I used to\nhave dinners for a group of friends every thursday night, which\ntaught me how to cook for groups. And I bought another building in\nCambridge, a former candy factory (and later, twas said, porn\nstudio), to use as an office.One night in October 2003 there was a big party at my house. 
It was\na clever idea of my friend Maria Daniels, who was one of the thursday\ndiners. Three separate hosts would all invite their friends to one\nparty. So for every guest, two thirds of the other guests would be\npeople they didn't know but would probably like. One of the guests\nwas someone I didn't know but would turn out to like a lot: a woman\ncalled Jessica Livingston. A couple days later I asked her out.Jessica was in charge of marketing at a Boston investment bank.\nThis bank thought it understood startups, but over the next year,\nas she met friends of mine from the startup world, she was surprised\nhow different reality was. And how colorful their stories were. So\nshe decided to compile a book of \ninterviews with startup founders.When the bank had financial problems and she had to fire half her\nstaff, she started looking for a new job. In early 2005 she interviewed\nfor a marketing job at a Boston VC firm. It took them weeks to make\nup their minds, and during this time I started telling her about\nall the things that needed to be fixed about venture capital. They\nshould make a larger number of smaller investments instead of a\nhandful of giant ones, they should be funding younger, more technical\nfounders instead of MBAs, they should let the founders remain as\nCEO, and so on.One of my tricks for writing essays had always been to give talks.\nThe prospect of having to stand up in front of a group of people\nand tell them something that won't waste their time is a great\nspur to the imagination. When the Harvard Computer Society, the\nundergrad computer club, asked me to give a talk, I decided I would\ntell them how to start a startup. Maybe they'd be able to avoid the\nworst of the mistakes we'd made.So I gave this talk, in the course of which I told them that the\nbest sources of seed funding were successful startup founders,\nbecause then they'd be sources of advice too. Whereupon it seemed\nthey were all looking expectantly at me. Horrified at the prospect\nof having my inbox flooded by business plans (if I'd only known),\nI blurted out \"But not me!\" and went on with the talk. But afterward\nit occurred to me that I should really stop procrastinating about\nangel investing. I'd been meaning to since Yahoo bought us, and now\nit was 7 years later and I still hadn't done one angel investment.Meanwhile I had been scheming with Robert and Trevor about projects\nwe could work on together. I missed working with them, and it seemed\nlike there had to be something we could collaborate on.As Jessica and I were walking home from dinner on March 11, at the\ncorner of Garden and Walker streets, these three threads converged.\nScrew the VCs who were taking so long to make up their minds. We'd\nstart our own investment firm and actually implement the ideas we'd\nbeen talking about. I'd fund it, and Jessica could quit her job and\nwork for it, and we'd get Robert and Trevor as partners too.\n[13]Once again, ignorance worked in our favor. We had no idea how to\nbe angel investors, and in Boston in 2005 there were no Ron Conways\nto learn from. So we just made what seemed like the obvious choices,\nand some of the things we did turned out to be novel.There are multiple components to Y Combinator, and we didn't figure\nthem all out at once. The part we got first was to be an angel firm.\nIn those days, those two words didn't go together. 
There were VC\nfirms, which were organized companies with people whose job it was\nto make investments, but they only did big, million dollar investments.\nAnd there were angels, who did smaller investments, but these were\nindividuals who were usually focused on other things and made\ninvestments on the side. And neither of them helped founders enough\nin the beginning. We knew how helpless founders were in some respects,\nbecause we remembered how helpless we'd been. For example, one thing\nJulian had done for us that seemed to us like magic was to get us\nset up as a company. We were fine writing fairly difficult software,\nbut actually getting incorporated, with bylaws and stock and all\nthat stuff, how on earth did you do that? Our plan was not only to\nmake seed investments, but to do for startups everything Julian had\ndone for us.YC was not organized as a fund. It was cheap enough to run that we\nfunded it with our own money. That went right by 99% of readers,\nbut professional investors are thinking \"Wow, that means they got\nall the returns.\" But once again, this was not due to any particular\ninsight on our part. We didn't know how VC firms were organized.\nIt never occurred to us to try to raise a fund, and if it had, we\nwouldn't have known where to start.\n[14]The most distinctive thing about YC is the batch model: to fund a\nbunch of startups all at once, twice a year, and then to spend three\nmonths focusing intensively on trying to help them. That part we\ndiscovered by accident, not merely implicitly but explicitly due\nto our ignorance about investing. We needed to get experience as\ninvestors. What better way, we thought, than to fund a whole bunch\nof startups at once? We knew undergrads got temporary jobs at tech\ncompanies during the summer. Why not organize a summer program where\nthey'd start startups instead? We wouldn't feel guilty for being\nin a sense fake investors, because they would in a similar sense\nbe fake founders. So while we probably wouldn't make much money out\nof it, we'd at least get to practice being investors on them, and\nthey for their part would probably have a more interesting summer\nthan they would working at Microsoft.We'd use the building I owned in Cambridge as our headquarters.\nWe'd all have dinner there once a week \u2014 on tuesdays, since I was\nalready cooking for the thursday diners on thursdays \u2014 and after\ndinner we'd bring in experts on startups to give talks.We knew undergrads were deciding then about summer jobs, so in a\nmatter of days we cooked up something we called the Summer Founders\nProgram, and I posted an \nannouncement \non my site, inviting undergrads\nto apply. I had never imagined that writing essays would be a way\nto get \"deal flow,\" as investors call it, but it turned out to be\nthe perfect source.\n[15]\nWe got 225 applications for the Summer\nFounders Program, and we were surprised to find that a lot of them\nwere from people who'd already graduated, or were about to that\nspring. Already this SFP thing was starting to feel more serious\nthan we'd intended.We invited about 20 of the 225 groups to interview in person, and\nfrom those we picked 8 to fund. They were an impressive group.
That\nfirst batch included reddit, Justin Kan and Emmett Shear, who went\non to found Twitch, Aaron Swartz, who had already helped write the\nRSS spec and would a few years later become a martyr for open access,\nand Sam Altman, who would later become the second president of YC.\nI don't think it was entirely luck that the first batch was so good.\nYou had to be pretty bold to sign up for a weird thing like the\nSummer Founders Program instead of a summer job at a legit place\nlike Microsoft or Goldman Sachs.The deal for startups was based on a combination of the deal we did\nwith Julian ($10k for 10%) and what Robert said MIT grad students\ngot for the summer ($6k). We invested $6k per founder, which in the\ntypical two-founder case was $12k, in return for 6%. That had to\nbe fair, because it was twice as good as the deal we ourselves had\ntaken. Plus that first summer, which was really hot, Jessica brought\nthe founders free air conditioners.\n[16]Fairly quickly I realized that we had stumbled upon the way to scale\nstartup funding. Funding startups in batches was more convenient\nfor us, because it meant we could do things for a lot of startups\nat once, but being part of a batch was better for the startups too.\nIt solved one of the biggest problems faced by founders: the\nisolation. Now you not only had colleagues, but colleagues who\nunderstood the problems you were facing and could tell you how they\nwere solving them.As YC grew, we started to notice other advantages of scale. The\nalumni became a tight community, dedicated to helping one another,\nand especially the current batch, whose shoes they remembered being\nin. We also noticed that the startups were becoming one another's\ncustomers. We used to refer jokingly to the \"YC GDP,\" but as YC\ngrows this becomes less and less of a joke. Now lots of startups\nget their initial set of customers almost entirely from among their\nbatchmates.I had not originally intended YC to be a full-time job. I was going\nto do three things: hack, write essays, and work on YC. As YC grew,\nand I grew more excited about it, it started to take up a lot more\nthan a third of my attention. But for the first few years I was\nstill able to work on other things.In the summer of 2006, Robert and I started working on a new version\nof Arc. This one was reasonably fast, because it was compiled into\nScheme. To test this new Arc, I wrote Hacker News in it. It was\noriginally meant to be a news aggregator for startup founders and\nwas called Startup News, but after a few months I got tired of\nreading about nothing but startups. Plus it wasn't startup founders\nwe wanted to reach. It was future startup founders. So I changed\nthe name to Hacker News and the topic to whatever engaged one's\nintellectual curiosity.HN was no doubt good for YC, but it was also by far the biggest\nsource of stress for me. If all I'd had to do was select and help\nfounders, life would have been so easy. And that implies that HN\nwas a mistake. Surely the biggest source of stress in one's work\nshould at least be something close to the core of the work. Whereas\nI was like someone who was in pain while running a marathon not\nfrom the exertion of running, but because I had a blister from an\nill-fitting shoe. When I was dealing with some urgent problem during\nYC, there was about a 60% chance it had to do with HN, and a 40%\nchance it had to do with everything else combined.\n[17]As well as HN, I wrote all of YC's internal software in Arc.
But\nwhile I continued to work a good deal in Arc, I gradually stopped\nworking on Arc, partly because I didn't have time to, and partly\nbecause it was a lot less attractive to mess around with the language\nnow that we had all this infrastructure depending on it. So now my\nthree projects were reduced to two: writing essays and working on\nYC.YC was different from other kinds of work I've done. Instead of\ndeciding for myself what to work on, the problems came to me. Every\n6 months there was a new batch of startups, and their problems,\nwhatever they were, became our problems. It was very engaging work,\nbecause their problems were quite varied, and the good founders\nwere very effective. If you were trying to learn the most you could\nabout startups in the shortest possible time, you couldn't have\npicked a better way to do it.There were parts of the job I didn't like. Disputes between cofounders,\nfiguring out when people were lying to us, fighting with people who\nmaltreated the startups, and so on. But I worked hard even at the\nparts I didn't like. I was haunted by something Kevin Hale once\nsaid about companies: \"No one works harder than the boss.\" He meant\nit both descriptively and prescriptively, and it was the second\npart that scared me. I wanted YC to be good, so if how hard I worked\nset the upper bound on how hard everyone else worked, I'd better\nwork very hard.One day in 2010, when he was visiting California for interviews,\nRobert Morris did something astonishing: he offered me unsolicited\nadvice. I can only remember him doing that once before. One day at\nViaweb, when I was bent over double from a kidney stone, he suggested\nthat it would be a good idea for him to take me to the hospital.\nThat was what it took for Rtm to offer unsolicited advice. So I\nremember his exact words very clearly. \"You know,\" he said, \"you\nshould make sure Y Combinator isn't the last cool thing you do.\"At the time I didn't understand what he meant, but gradually it\ndawned on me that he was saying I should quit. This seemed strange\nadvice, because YC was doing great. But if there was one thing rarer\nthan Rtm offering advice, it was Rtm being wrong. So this set me\nthinking. It was true that on my current trajectory, YC would be\nthe last thing I did, because it was only taking up more of my\nattention. It had already eaten Arc, and was in the process of\neating essays too. Either YC was my life's work or I'd have to leave\neventually. And it wasn't, so I would.In the summer of 2012 my mother had a stroke, and the cause turned\nout to be a blood clot caused by colon cancer. The stroke destroyed\nher balance, and she was put in a nursing home, but she really\nwanted to get out of it and back to her house, and my sister and I\nwere determined to help her do it. I used to fly up to Oregon to\nvisit her regularly, and I had a lot of time to think on those\nflights. On one of them I realized I was ready to hand YC over to\nsomeone else.I asked Jessica if she wanted to be president, but she didn't, so\nwe decided we'd try to recruit Sam Altman. We talked to Robert and\nTrevor and we agreed to make it a complete changing of the guard.\nUp till that point YC had been controlled by the original LLC we\nfour had started. But we wanted YC to last for a long time, and to\ndo that it couldn't be controlled by the founders. So if Sam said\nyes, we'd let him reorganize YC. 
Robert and I would retire, and\nJessica and Trevor would become ordinary partners.When we asked Sam if he wanted to be president of YC, initially he\nsaid no. He wanted to start a startup to make nuclear reactors.\nBut I kept at it, and in October 2013 he finally agreed. We decided\nhe'd take over starting with the winter 2014 batch. For the rest\nof 2013 I left running YC more and more to Sam, partly so he could\nlearn the job, and partly because I was focused on my mother, whose\ncancer had returned.She died on January 15, 2014. We knew this was coming, but it was\nstill hard when it did.I kept working on YC till March, to help get that batch of startups\nthrough Demo Day, then I checked out pretty completely. (I still\ntalk to alumni and to new startups working on things I'm interested\nin, but that only takes a few hours a week.)What should I do next? Rtm's advice hadn't included anything about\nthat. I wanted to do something completely different, so I decided\nI'd paint. I wanted to see how good I could get if I really focused\non it. So the day after I stopped working on YC, I started painting.\nI was rusty and it took a while to get back into shape, but it was\nat least completely engaging.\n[18]I spent most of the rest of 2014 painting. I'd never been able to\nwork so uninterruptedly before, and I got to be better than I had\nbeen. Not good enough, but better. Then in November, right in the\nmiddle of a painting, I ran out of steam. Up till that point I'd\nalways been curious to see how the painting I was working on would\nturn out, but suddenly finishing this one seemed like a chore. So\nI stopped working on it and cleaned my brushes and haven't painted\nsince. So far anyway.I realize that sounds rather wimpy. But attention is a zero sum\ngame. If you can choose what to work on, and you choose a project\nthat's not the best one (or at least a good one) for you, then it's\ngetting in the way of another project that is. And at 50 there was\nsome opportunity cost to screwing around.I started writing essays again, and wrote a bunch of new ones over\nthe next few months. I even wrote a couple that \nweren't about\nstartups. Then in March 2015 I started working on Lisp again.The distinctive thing about Lisp is that its core is a language\ndefined by writing an interpreter in itself. It wasn't originally\nintended as a programming language in the ordinary sense. It was\nmeant to be a formal model of computation, an alternative to the\nTuring machine. If you want to write an interpreter for a language\nin itself, what's the minimum set of predefined operators you need?\nThe Lisp that John McCarthy invented, or more accurately discovered,\nis an answer to that question.\n[19]McCarthy didn't realize this Lisp could even be used to program\ncomputers till his grad student Steve Russell suggested it. Russell\ntranslated McCarthy's interpreter into IBM 704 machine language,\nand from that point Lisp started also to be a programming language\nin the ordinary sense. But its origins as a model of computation\ngave it a power and elegance that other languages couldn't match.\nIt was this that attracted me in college, though I didn't understand\nwhy at the time.McCarthy's 1960 Lisp did nothing more than interpret Lisp expressions.\nIt was missing a lot of things you'd want in a programming language.\nSo these had to be added, and when they were, they weren't defined\nusing McCarthy's original axiomatic approach. That wouldn't have\nbeen feasible at the time. 
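To make the axiomatic core concrete: the 1960 paper builds the whole
language out of seven primitive operators (quote, atom, eq, car, cdr,
cons, and cond) plus a notation for functions. The sketch below is the
spirit of McCarthy's eval in modern Common Lisp, not his exact code: it
leans on the host's assoc, pairlis, and append rather than defining
them axiomatically, and it omits label, his mechanism for recursive
definitions.

(defun eval. (e a)
  (cond ((atom e) (cdr (assoc e a)))   ; a variable: look it up in the environment a
        ((atom (car e))                ; a primitive, or a name bound in a
         (cond ((eq (car e) 'quote) (cadr e))
               ((eq (car e) 'atom) (atom (eval. (cadr e) a)))
               ((eq (car e) 'eq) (eq (eval. (cadr e) a) (eval. (caddr e) a)))
               ((eq (car e) 'car) (car (eval. (cadr e) a)))
               ((eq (car e) 'cdr) (cdr (eval. (cadr e) a)))
               ((eq (car e) 'cons) (cons (eval. (cadr e) a) (eval. (caddr e) a)))
               ((eq (car e) 'cond) (evcon. (cdr e) a))
               (t (eval. (cons (cdr (assoc (car e) a)) (cdr e)) a))))
        ((eq (caar e) 'lambda)         ; ((lambda (params) body) args...)
         (eval. (caddar e)
                (append (pairlis (cadar e) (evlis. (cdr e) a)) a)))))

(defun evcon. (c a)                    ; evaluate cond clauses until a test succeeds
  (if (eval. (caar c) a)
      (eval. (cadar c) a)
      (evcon. (cdr c) a)))

(defun evlis. (m a)                    ; evaluate a list of argument expressions
  (if (null m)
      nil
      (cons (eval. (car m) a) (evlis. (cdr m) a))))

Given that, (eval. '((lambda (x y) (cons x (cdr y))) 'a '(b c)) nil)
returns (A C). Nearly everything else in the paper is bookkeeping.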
McCarthy tested his interpreter by\nhand-simulating the execution of programs. But it was already getting\nclose to the limit of interpreters you could test that way \u0097 indeed,\nthere was a bug in it that McCarthy had overlooked. To test a more\ncomplicated interpreter, you'd have had to run it, and computers\nthen weren't powerful enough.Now they are, though. Now you could continue using McCarthy's\naxiomatic approach till you'd defined a complete programming language.\nAnd as long as every change you made to McCarthy's Lisp was a\ndiscoveredness-preserving transformation, you could, in principle,\nend up with a complete language that had this quality. Harder to\ndo than to talk about, of course, but if it was possible in principle,\nwhy not try? So I decided to take a shot at it. It took 4 years,\nfrom March 26, 2015 to October 12, 2019. It was fortunate that I\nhad a precisely defined goal, or it would have been hard to keep\nat it for so long.I wrote this new Lisp, called Bel, \nin itself in Arc. That may sound\nlike a contradiction, but it's an indication of the sort of trickery\nI had to engage in to make this work. By means of an egregious\ncollection of hacks I managed to make something close enough to an\ninterpreter written in itself that could actually run. Not fast,\nbut fast enough to test.I had to ban myself from writing essays during most of this time,\nor I'd never have finished. In late 2015 I spent 3 months writing\nessays, and when I went back to working on Bel I could barely\nunderstand the code. Not so much because it was badly written as\nbecause the problem is so convoluted. When you're working on an\ninterpreter written in itself, it's hard to keep track of what's\nhappening at what level, and errors can be practically encrypted\nby the time you get them.So I said no more essays till Bel was done. But I told few people\nabout Bel while I was working on it. So for years it must have\nseemed that I was doing nothing, when in fact I was working harder\nthan I'd ever worked on anything. Occasionally after wrestling for\nhours with some gruesome bug I'd check Twitter or HN and see someone\nasking \"Does Paul Graham still code?\"Working on Bel was hard but satisfying. I worked on it so intensively\nthat at any given time I had a decent chunk of the code in my head\nand could write more there. I remember taking the boys to the\ncoast on a sunny day in 2015 and figuring out how to deal with some\nproblem involving continuations while I watched them play in the\ntide pools. It felt like I was doing life right. I remember that\nbecause I was slightly dismayed at how novel it felt. The good news\nis that I had more moments like this over the next few years.In the summer of 2016 we moved to England. We wanted our kids to\nsee what it was like living in another country, and since I was a\nBritish citizen by birth, that seemed the obvious choice. We only\nmeant to stay for a year, but we liked it so much that we still\nlive there. So most of Bel was written in England.In the fall of 2019, Bel was finally finished. Like McCarthy's\noriginal Lisp, it's a spec rather than an implementation, although\nlike McCarthy's Lisp it's a spec expressed as code.Now that I could write essays again, I wrote a bunch about topics\nI'd had stacked up. I kept writing essays through 2020, but I also\nstarted to think about other things I could work on. How should I\nchoose what to do? Well, how had I chosen what to work on in the\npast? 
I wrote an essay for myself to answer that question, and I\nwas surprised how long and messy the answer turned out to be. If\nthis surprised me, who'd lived it, then I thought perhaps it would\nbe interesting to other people, and encouraging to those with\nsimilarly messy lives. So I wrote a more detailed version for others\nto read, and this is the last sentence of it.\nNotes[1]\nMy experience skipped a step in the evolution of computers:\ntime-sharing machines with interactive OSes. I went straight from\nbatch processing to microcomputers, which made microcomputers seem\nall the more exciting.[2]\nItalian words for abstract concepts can nearly always be\npredicted from their English cognates (except for occasional traps\nlike polluzione). It's the everyday words that differ. So if you\nstring together a lot of abstract concepts with a few simple verbs,\nyou can make a little Italian go a long way.[3]\nI lived at Piazza San Felice 4, so my walk to the Accademia\nwent straight down the spine of old Florence: past the Pitti, across\nthe bridge, past Orsanmichele, between the Duomo and the Baptistery,\nand then up Via Ricasoli to Piazza San Marco. I saw Florence at\nstreet level in every possible condition, from empty dark winter\nevenings to sweltering summer days when the streets were packed with\ntourists.[4]\nYou can of course paint people like still lives if you want\nto, and they're willing. That sort of portrait is arguably the apex\nof still life painting, though the long sitting does tend to produce\npained expressions in the sitters.[5]\nInterleaf was one of many companies that had smart people and\nbuilt impressive technology, and yet got crushed by Moore's Law.\nIn the 1990s the exponential growth in the power of commodity (i.e.\nIntel) processors rolled up high-end, special-purpose hardware and\nsoftware companies like a bulldozer.[6]\nThe signature style seekers at RISD weren't specifically\nmercenary. In the art world, money and coolness are tightly coupled.\nAnything expensive comes to be seen as cool, and anything seen as\ncool will soon become equally expensive.[7]\nTechnically the apartment wasn't rent-controlled but\nrent-stabilized, but this is a refinement only New Yorkers would\nknow or care about. The point is that it was really cheap, less\nthan half market price.[8]\nMost software you can launch as soon as it's done. But when\nthe software is an online store builder and you're hosting the\nstores, if you don't have any users yet, that fact will be painfully\nobvious. So before we could launch publicly we had to launch\nprivately, in the sense of recruiting an initial set of users and\nmaking sure they had decent-looking stores.[9]\nWe'd had a code editor in Viaweb for users to define their\nown page styles. They didn't know it, but they were editing Lisp\nexpressions underneath. But this wasn't an app editor, because the\ncode ran when the merchants' sites were generated, not when shoppers\nvisited them.[10]\nThis was the first instance of what is now a familiar experience,\nand so was what happened next, when I read the comments and found\nthey were full of angry people. How could I claim that Lisp was\nbetter than other languages? Weren't they all Turing complete?\nPeople who see the responses to essays I write sometimes tell me\nhow sorry they feel for me, but I'm not exaggerating when I reply\nthat it has always been like this, since the very beginning. It\ncomes with the territory. 
An essay must tell readers things they\ndon't already know, and some \npeople dislike being told such things.[11]\nPeople put plenty of stuff on the internet in the 90s of\ncourse, but putting something online is not the same as publishing\nit online. Publishing online means you treat the online version as\nthe (or at least a) primary version.[12]\nThere is a general lesson here that our experience with Y\nCombinator also teaches: Customs continue to constrain you long\nafter the restrictions that caused them have disappeared. Customary\nVC practice had once, like the customs about publishing essays,\nbeen based on real constraints. Startups had once been much more\nexpensive to start, and proportionally rare. Now they could be cheap\nand common, but the VCs' customs still reflected the old world,\njust as customs about writing essays still reflected the constraints\nof the print era.Which in turn implies that people who are independent-minded (i.e.\nless influenced by custom) will have an advantage in fields affected\nby rapid change (where customs are more likely to be obsolete).Here's an interesting point, though: you can't always predict which\nfields will be affected by rapid change. Obviously software and\nventure capital will be, but who would have predicted that essay\nwriting would be?[13]\nY Combinator was not the original name. At first we were\ncalled Cambridge Seed. But we didn't want a regional name, in case\nsomeone copied us in Silicon Valley, so we renamed ourselves after\none of the coolest tricks in the lambda calculus, the Y combinator.I picked orange as our color partly because it's the warmest, and\npartly because no VC used it. In 2005 all the VCs used staid colors\nlike maroon, navy blue, and forest green, because they were trying\nto appeal to LPs, not founders. The YC logo itself is an inside\njoke: the Viaweb logo had been a white V on a red circle, so I made\nthe YC logo a white Y on an orange square.[14]\nYC did become a fund for a couple years starting in 2009,\nbecause it was getting so big I could no longer afford to fund it\npersonally. But after Heroku got bought we had enough money to go\nback to being self-funded.[15]\nI've never liked the term \"deal flow,\" because it implies\nthat the number of new startups at any given time is fixed. This\nis not only false, but it's the purpose of YC to falsify it, by\ncausing startups to be founded that would not otherwise have existed.[16]\nShe reports that they were all different shapes and sizes,\nbecause there was a run on air conditioners and she had to get\nwhatever she could, but that they were all heavier than she could\ncarry now.[17]\nAnother problem with HN was a bizarre edge case that occurs\nwhen you both write essays and run a forum. When you run a forum,\nyou're assumed to see if not every conversation, at least every\nconversation involving you. And when you write essays, people post\nhighly imaginative misinterpretations of them on forums. Individually\nthese two phenomena are tedious but bearable, but the combination\nis disastrous. You actually have to respond to the misinterpretations,\nbecause the assumption that you're present in the conversation means\nthat not responding to any sufficiently upvoted misinterpretation\nreads as a tacit admission that it's correct. But that in turn\nencourages more; anyone who wants to pick a fight with you senses\nthat now is their chance.[18]\nThe worst thing about leaving YC was not working with Jessica\nanymore. 
We'd been working on YC almost the whole time we'd known\neach other, and we'd neither tried nor wanted to separate it from\nour personal lives, so leaving was like pulling up a deeply rooted\ntree.[19]\nOne way to get more precise about the concept of invented vs\ndiscovered is to talk about space aliens. Any sufficiently advanced\nalien civilization would certainly know about the Pythagorean\ntheorem, for example. I believe, though with less certainty, that\nthey would also know about the Lisp in McCarthy's 1960 paper.But if so there's no reason to suppose that this is the limit of\nthe language that might be known to them. Presumably aliens need\nnumbers and errors and I/O too. So it seems likely there exists at\nleast one path out of McCarthy's Lisp along which discoveredness\nis preserved.Thanks to Trevor Blackwell, John Collison, Patrick Collison, Daniel\nGackle, Ralph Hazell, Jessica Livingston, Robert Morris, and Harj\nTaggar for reading drafts of this."} {"title": "popular", "text": "May 2001(This article was written as a kind of business plan for a\nnew language.\nSo it is missing (because it takes for granted) the most important\nfeature of a good programming language: very powerful abstractions.)A friend of mine once told an eminent operating systems\nexpert that he wanted to design a really good\nprogramming language. The expert told him that it would be a\nwaste of time, that programming languages don't become popular\nor unpopular based on their merits, and so no matter how\ngood his language was, no one would use it. At least, that\nwas what had happened to the language he had designed.What does make a language popular? Do popular\nlanguages deserve their popularity? Is it worth trying to\ndefine a good programming language? How would you do it?I think the answers to these questions can be found by looking \nat hackers, and learning what they want. Programming\nlanguages are for hackers, and a programming language\nis good as a programming language (rather than, say, an\nexercise in denotational semantics or compiler design)\nif and only if hackers like it.1 The Mechanics of PopularityIt's true, certainly, that most people don't choose programming\nlanguages simply based on their merits. Most programmers are told\nwhat language to use by someone else. And yet I think the effect\nof such external factors on the popularity of programming languages\nis not as great as it's sometimes thought to be. I think a bigger\nproblem is that a hacker's idea of a good programming language is\nnot the same as most language designers'.Between the two, the hacker's opinion is the one that matters.\nProgramming languages are not theorems. They're tools, designed\nfor people, and they have to be designed to suit human strengths\nand weaknesses as much as shoes have to be designed for human feet.\nIf a shoe pinches when you put it on, it's a bad shoe, however\nelegant it may be as a piece of sculpture.It may be that the majority of programmers can't tell a good language\nfrom a bad one. But that's no different with any other tool. It\ndoesn't mean that it's a waste of time to try designing a good\nlanguage. Expert hackers \ncan tell a good language when they see\none, and they'll use it. Expert hackers are a tiny minority,\nadmittedly, but that tiny minority write all the good software,\nand their influence is such that the rest of the programmers will\ntend to use whatever language they use. 
Often, indeed, it is not\nmerely influence but command: often the expert hackers are the very\npeople who, as their bosses or faculty advisors, tell the other\nprogrammers what language to use.The opinion of expert hackers is not the only force that determines\nthe relative popularity of programming languages \u2014 legacy software\n(Cobol) and hype (Ada, Java) also play a role \u2014 but I think it is\nthe most powerful force over the long term. Given an initial critical\nmass and enough time, a programming language probably becomes about\nas popular as it deserves to be. And popularity further separates\ngood languages from bad ones, because feedback from real live users\nalways leads to improvements. Look at how much any popular language\nhas changed during its life. Perl and Fortran are extreme cases,\nbut even Lisp has changed a lot. Lisp 1.5 didn't have macros, for\nexample; these evolved later, after hackers at MIT had spent a\ncouple years using Lisp to write real programs. [1]So whether or not a language has to be good to be popular, I think\na language has to be popular to be good. And it has to stay popular\nto stay good. The state of the art in programming languages doesn't\nstand still. And yet the Lisps we have today are still pretty much\nwhat they had at MIT in the mid-1980s, because that's the last time\nLisp had a sufficiently large and demanding user base.Of course, hackers have to know about a language before they can\nuse it. How are they to hear? From other hackers. But there has to\nbe some initial group of hackers using the language for others even\nto hear about it. I wonder how large this group has to be; how many\nusers make a critical mass? Off the top of my head, I'd say twenty.\nIf a language had twenty separate users, meaning twenty users who\ndecided on their own to use it, I'd consider it to be real.Getting there can't be easy. I would not be surprised if it is\nharder to get from zero to twenty than from twenty to a thousand.\nThe best way to get those initial twenty users is probably to use\na trojan horse: to give people an application they want, which\nhappens to be written in the new language.2 External FactorsLet's start by acknowledging one external factor that does affect\nthe popularity of a programming language. To become popular, a\nprogramming language has to be the scripting language of a popular\nsystem. Fortran and Cobol were the scripting languages of early\nIBM mainframes. C was the scripting language of Unix, and so, later,\nwas Perl. Tcl is the scripting language of Tk. Java and Javascript\nare intended to be the scripting languages of web browsers.Lisp is not a massively popular language because it is not the\nscripting language of a massively popular system. What popularity\nit retains dates back to the 1960s and 1970s, when it was the\nscripting language of MIT. A lot of the great programmers of the\nday were associated with MIT at some point. And in the early 1970s,\nbefore C, MIT's dialect of Lisp, called MacLisp, was one of the\nonly programming languages a serious hacker would want to use.Today Lisp is the scripting language of two moderately popular\nsystems, Emacs and Autocad, and for that reason I suspect that most\nof the Lisp programming done today is done in Emacs Lisp or AutoLisp.Programming languages don't exist in isolation. To hack is a\ntransitive verb \u2014 hackers are usually hacking something \u2014 and in\npractice languages are judged relative to whatever they're used to\nhack. 
So if you want to design a popular language, you either have\nto supply more than a language, or you have to design your language\nto replace the scripting language of some existing system.Common Lisp is unpopular partly because it's an orphan. It did\noriginally come with a system to hack: the Lisp Machine. But Lisp\nMachines (along with parallel computers) were steamrollered by the\nincreasing power of general purpose processors in the 1980s. Common\nLisp might have remained popular if it had been a good scripting\nlanguage for Unix. It is, alas, an atrociously bad one.One way to describe this situation is to say that a language isn't\njudged on its own merits. Another view is that a programming language\nreally isn't a programming language unless it's also the scripting\nlanguage of something. This only seems unfair if it comes as a\nsurprise. I think it's no more unfair than expecting a programming\nlanguage to have, say, an implementation. It's just part of what\na programming language is.A programming language does need a good implementation, of course,\nand this must be free. Companies will pay for software, but individual\nhackers won't, and it's the hackers you need to attract.A language also needs to have a book about it. The book should be\nthin, well-written, and full of good examples. K&R is the ideal\nhere. At the moment I'd almost say that a language has to have a\nbook published by O'Reilly. That's becoming the test of mattering\nto hackers.There should be online documentation as well. In fact, the book\ncan start as online documentation. But I don't think that physical\nbooks are outmoded yet. Their format is convenient, and the de\nfacto censorship imposed by publishers is a useful if imperfect\nfilter. Bookstores are one of the most important places for learning\nabout new languages.3 BrevityGiven that you can supply the three things any language needs \u2014 a\nfree implementation, a book, and something to hack \u2014 how do you\nmake a language that hackers will like?One thing hackers like is brevity. Hackers are lazy, in the same\nway that mathematicians and modernist architects are lazy: they\nhate anything extraneous. It would not be far from the truth to\nsay that a hacker about to write a program decides what language\nto use, at least subconsciously, based on the total number of\ncharacters he'll have to type. If this isn't precisely how hackers\nthink, a language designer would do well to act as if it were.It is a mistake to try to baby the user with long-winded expressions\nthat are meant to resemble English. Cobol is notorious for this\nflaw. A hacker would consider being asked to write\n\nadd x to y giving z\n\ninstead of\n\nz = x+y\n\nas something between an insult to his intelligence and a sin against\nGod.It has sometimes been said that Lisp should use first and rest\ninstead of car and cdr, because it would make programs easier to\nread. Maybe for the first couple hours. But a hacker can learn\nquickly enough that car means the first element of a list and cdr\nmeans the rest. Using first and rest means 50% more typing. And\nthey are also different lengths, meaning that the arguments won't\nline up when they're called, as car and cdr often are, in successive\nlines. I've found that it matters a lot how code lines up on the\npage. I can barely read Lisp code when it is set in a variable-width\nfont, and friends say this is true for other languages too.
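For example, compare

(car x)
(cdr x)

with

(first x)
(rest x)

In the first pair the arguments sit in the same column; in the second
they don't. Brevity is one place where strongly typed languages lose.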
All other\nthings being equal, no one wants to begin a program with a bunch\nof declarations. Anything that can be implicit, should be.The individual tokens should be short as well. Perl and Common Lisp\noccupy opposite poles on this question. Perl programs can be almost\ncryptically dense, while the names of built-in Common Lisp operators\nare comically long. The designers of Common Lisp probably expected\nusers to have text editors that would type these long names for\nthem. But the cost of a long name is not just the cost of typing\nit. There is also the cost of reading it, and the cost of the space\nit takes up on your screen.4 HackabilityThere is one thing more important than brevity to a hacker: being\nable to do what you want. In the history of programming languages\na surprising amount of effort has gone into preventing programmers\nfrom doing things considered to be improper. This is a dangerously\npresumptuous plan. How can the language designer know what the\nprogrammer is going to need to do? I think language designers would\ndo better to consider their target user to be a genius who will\nneed to do things they never anticipated, rather than a bumbler\nwho needs to be protected from himself. The bumbler will shoot\nhimself in the foot anyway. You may save him from referring to\nvariables in another package, but you can't save him from writing\na badly designed program to solve the wrong problem, and taking\nforever to do it.Good programmers often want to do dangerous and unsavory things.\nBy unsavory I mean things that go behind whatever semantic facade\nthe language is trying to present: getting hold of the internal\nrepresentation of some high-level abstraction, for example. Hackers\nlike to hack, and hacking means getting inside things and second\nguessing the original designer.Let yourself be second guessed. When you make any tool, people use\nit in ways you didn't intend, and this is especially true of a\nhighly articulated tool like a programming language. Many a hacker\nwill want to tweak your semantic model in a way that you never\nimagined. I say, let them; give the programmer access to as much\ninternal stuff as you can without endangering runtime systems like\nthe garbage collector.In Common Lisp I have often wanted to iterate through the fields\nof a struct \u2014 to comb out references to a deleted object, for example,\nor find fields that are uninitialized. I know the structs are just\nvectors underneath. And yet I can't write a general purpose function\nthat I can call on any struct. I can only access the fields by\nname, because that's what a struct is supposed to mean.A hacker may only want to subvert the intended model of things once\nor twice in a big program. But what a difference it makes to be\nable to. And it may be more than a question of just solving a\nproblem. There is a kind of pleasure here too. Hackers share the\nsurgeon's secret pleasure in poking about in gross innards, the\nteenager's secret pleasure in popping zits. [2] For boys, at least,\ncertain kinds of horrors are fascinating. Maxim magazine publishes\nan annual volume of photographs, containing a mix of pin-ups and\ngrisly accidents. They know their audience.Historically, Lisp has been good at letting hackers have their way.\nThe political correctness of Common Lisp is an aberration. Early\nLisps let you get your hands on everything. A good deal of that\nspirit is, fortunately, preserved in macros. 
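If you've never seen one, here is a sketch of a classic macro: a while
loop for Common Lisp, which doesn't have one under that name.

;; (while test forms...) expands into a do loop that runs
;; the forms until test returns false.
(defmacro while (test &body body)
  `(do () ((not ,test)) ,@body))

A call like (while (hungry) (eat)), where hungry and eat are whatever
functions you please, becomes (do () ((not (hungry))) (eat)) before the
compiler ever sees it.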
What a wonderful thing,\nto be able to make arbitrary transformations on the source code.Classic macros are a real hacker's tool \u2014 simple, powerful, and\ndangerous. It's so easy to understand what they do: you call a\nfunction on the macro's arguments, and whatever it returns gets\ninserted in place of the macro call. Hygienic macros embody the\nopposite principle. They try to protect you from understanding what\nthey're doing. I have never heard hygienic macros explained in one\nsentence. And they are a classic example of the dangers of deciding\nwhat programmers are allowed to want. Hygienic macros are intended\nto protect me from variable capture, among other things, but variable\ncapture is exactly what I want in some macros.A really good language should be both clean and dirty: cleanly\ndesigned, with a small core of well understood and highly orthogonal\noperators, but dirty in the sense that it lets hackers have their\nway with it. C is like this. So were the early Lisps. A real hacker's\nlanguage will always have a slightly raffish character.A good programming language should have features that make the kind\nof people who use the phrase \"software engineering\" shake their\nheads disapprovingly. At the other end of the continuum are languages\nlike Ada and Pascal, models of propriety that are good for teaching\nand not much else.5 Throwaway ProgramsTo be attractive to hackers, a language must be good for writing\nthe kinds of programs they want to write. And that means, perhaps\nsurprisingly, that it has to be good for writing throwaway programs.A throwaway program is a program you write quickly for some limited\ntask: a program to automate some system administration task, or\ngenerate test data for a simulation, or convert data from one format\nto another. The surprising thing about throwaway programs is that,\nlike the \"temporary\" buildings built at so many American universities\nduring World War II, they often don't get thrown away. Many evolve\ninto real programs, with real features and real users.I have a hunch that the best big programs begin life this way,\nrather than being designed big from the start, like the Hoover Dam.\nIt's terrifying to build something big from scratch. When people\ntake on a project that's too big, they become overwhelmed. The\nproject either gets bogged down, or the result is sterile and\nwooden: a shopping mall rather than a real downtown, Brasilia rather\nthan Rome, Ada rather than C.Another way to get a big program is to start with a throwaway\nprogram and keep improving it. This approach is less daunting, and\nthe design of the program benefits from evolution. I think, if one\nlooked, that this would turn out to be the way most big programs\nwere developed. And those that did evolve this way are probably\nstill written in whatever language they were first written in,\nbecause it's rare for a program to be ported, except for political\nreasons. And so, paradoxically, if you want to make a language that\nis used for big systems, you have to make it good for writing\nthrowaway programs, because that's where big systems come from.Perl is a striking example of this idea. It was not only designed\nfor writing throwaway programs, but was pretty much a throwaway\nprogram itself. Perl began life as a collection of utilities for\ngenerating reports, and only evolved into a programming language\nas the throwaway programs people wrote in it grew larger. 
It was\nnot until Perl 5 (if then) that the language was suitable for\nwriting serious programs, and yet it was already massively popular.What makes a language good for throwaway programs? To start with,\nit must be readily available. A throwaway program is something that\nyou expect to write in an hour. So the language probably must\nalready be installed on the computer you're using. It can't be\nsomething you have to install before you use it. It has to be there.\nC was there because it came with the operating system. Perl was\nthere because it was originally a tool for system administrators,\nand yours had already installed it.Being available means more than being installed, though. An\ninteractive language, with a command-line interface, is more\navailable than one that you have to compile and run separately. A\npopular programming language should be interactive, and start up\nfast.Another thing you want in a throwaway program is brevity. Brevity\nis always attractive to hackers, and never more so than in a program\nthey expect to turn out in an hour.6 LibrariesOf course the ultimate in brevity is to have the program already\nwritten for you, and merely to call it. And this brings us to what\nI think will be an increasingly important feature of programming\nlanguages: library functions. Perl wins because it has large\nlibraries for manipulating strings. This class of library functions\nis especially important for throwaway programs, which are often\noriginally written for converting or extracting data. Many Perl\nprograms probably begin as just a couple library calls stuck\ntogether.I think a lot of the advances that happen in programming languages\nin the next fifty years will have to do with library functions. I\nthink future programming languages will have libraries that are as\ncarefully designed as the core language. Programming language design\nwill not be about whether to make your language strongly or weakly\ntyped, or object oriented, or functional, or whatever, but about\nhow to design great libraries. The kind of language designers who\nlike to think about how to design type systems may shudder at this.\nIt's almost like writing applications! Too bad. Languages are for\nprogrammers, and libraries are what programmers need.It's hard to design good libraries. It's not simply a matter of\nwriting a lot of code. Once the libraries get too big, it can\nsometimes take longer to find the function you need than to write\nthe code yourself. Libraries need to be designed using a small set\nof orthogonal operators, just like the core language. It ought to\nbe possible for the programmer to guess what library call will do\nwhat he needs.Libraries are one place Common Lisp falls short. There are only\nrudimentary libraries for manipulating strings, and almost none\nfor talking to the operating system. For historical reasons, Common\nLisp tries to pretend that the OS doesn't exist. And because you\ncan't talk to the OS, you're unlikely to be able to write a serious\nprogram using only the built-in operators in Common Lisp. You have\nto use some implementation-specific hacks as well, and in practice\nthese tend not to give you everything you want. Hackers would think\na lot more highly of Lisp if Common Lisp had powerful string\nlibraries and good OS support.7 SyntaxCould a language with Lisp's syntax, or more precisely, lack of\nsyntax, ever become popular? I don't know the answer to this\nquestion. I do think that syntax is not the main reason Lisp isn't\ncurrently popular.
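(For anyone who has never seen it: where most languages write a*x + b,
Lisp writes (+ (* a x) b). The operator comes first, and every
expression gets its own parentheses.)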
Common Lisp has worse problems than unfamiliar\nsyntax. I know several programmers who are comfortable with prefix\nsyntax and yet use Perl by default, because it has powerful string\nlibraries and can talk to the os.There are two possible problems with prefix notation: that it is\nunfamiliar to programmers, and that it is not dense enough. The\nconventional wisdom in the Lisp world is that the first problem is\nthe real one. I'm not so sure. Yes, prefix notation makes ordinary\nprogrammers panic. But I don't think ordinary programmers' opinions\nmatter. Languages become popular or unpopular based on what expert\nhackers think of them, and I think expert hackers might be able to\ndeal with prefix notation. Perl syntax can be pretty incomprehensible,\nbut that has not stood in the way of Perl's popularity. If anything\nit may have helped foster a Perl cult.A more serious problem is the diffuseness of prefix notation. For\nexpert hackers, that really is a problem. No one wants to write\n(aref a x y) when they could write a[x,y].In this particular case there is a way to finesse our way out of\nthe problem. If we treat data structures as if they were functions\non indexes, we could write (a x y) instead, which is even shorter\nthan the Perl form. Similar tricks may shorten other types of\nexpressions.We can get rid of (or make optional) a lot of parentheses by making\nindentation significant. That's how programmers read code anyway:\nwhen indentation says one thing and delimiters say another, we go\nby the indentation. Treating indentation as significant would\neliminate this common source of bugs as well as making programs\nshorter.Sometimes infix syntax is easier to read. This is especially true\nfor math expressions. I've used Lisp my whole programming life and\nI still don't find prefix math expressions natural. And yet it is\nconvenient, especially when you're generating code, to have operators\nthat take any number of arguments. So if we do have infix syntax,\nit should probably be implemented as some kind of read-macro.I don't think we should be religiously opposed to introducing syntax\ninto Lisp, as long as it translates in a well-understood way into\nunderlying s-expressions. There is already a good deal of syntax\nin Lisp. It's not necessarily bad to introduce more, as long as no\none is forced to use it. In Common Lisp, some delimiters are reserved\nfor the language, suggesting that at least some of the designers\nintended to have more syntax in the future.One of the most egregiously unlispy pieces of syntax in Common Lisp\noccurs in format strings; format is a language in its own right,\nand that language is not Lisp. If there were a plan for introducing\nmore syntax into Lisp, format specifiers might be able to be included\nin it. It would be a good thing if macros could generate format\nspecifiers the way they generate any other kind of code.An eminent Lisp hacker told me that his copy of CLTL falls open to\nthe section on format. Mine too. This probably indicates room for\nimprovement. It may also mean that programs do a lot of I/O.
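To give a taste of that language within a language: the standard
Common Lisp call below prints the elements of a list separated by
commas. The ~{ and ~} directives iterate over the list, ~a prints each
element, and ~^ stops before emitting a trailing separator.

(format t \"~{~a~^, ~}\" '(1 2 3))   ; prints 1, 2, 3

Useful, but there is nothing Lisplike about it.
8 EfficiencyA good language, as everyone knows, should generate fast code. But\nin practice I don't think fast code comes primarily from things\nyou do in the design of the language. As Knuth pointed out long\nago, speed only matters in certain critical bottlenecks.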
And as\nmany programmers have observed since, one is very often mistaken\nabout where these bottlenecks are.So, in practice, the way to get fast code is to have a very good\nprofiler, rather than by, say, making the language strongly typed.\nYou don't need to know the type of every argument in every call in\nthe program. You do need to be able to declare the types of arguments\nin the bottlenecks. And even more, you need to be able to find out\nwhere the bottlenecks are.One complaint people have had with Lisp is that it's hard to tell\nwhat's expensive. This might be true. It might also be inevitable,\nif you want to have a very abstract language. And in any case I\nthink good profiling would go a long way toward fixing the problem:\nyou'd soon learn what was expensive.Part of the problem here is social. Language designers like to\nwrite fast compilers. That's how they measure their skill. They\nthink of the profiler as an add-on, at best. But in practice a good\nprofiler may do more to improve the speed of actual programs written\nin the language than a compiler that generates fast code. Here,\nagain, language designers are somewhat out of touch with their\nusers. They do a really good job of solving slightly the wrong\nproblem.It might be a good idea to have an active profiler \u2014 to push\nperformance data to the programmer instead of waiting for him to\ncome asking for it. For example, the editor could display bottlenecks\nin red when the programmer edits the source code. Another approach\nwould be to somehow represent what's happening in running programs.\nThis would be an especially big win in server-based applications,\nwhere you have lots of running programs to look at. An active\nprofiler could show graphically what's happening in memory as a\nprogram's running, or even make sounds that tell what's happening.Sound is a good cue to problems. In one place I worked, we had a\nbig board of dials showing what was happening to our web servers.\nThe hands were moved by little servomotors that made a slight noise\nwhen they turned. I couldn't see the board from my desk, but I\nfound that I could tell immediately, by the sound, when there was\na problem with a server.It might even be possible to write a profiler that would automatically\ndetect inefficient algorithms. I would not be surprised if certain\npatterns of memory access turned out to be sure signs of bad\nalgorithms. If there were a little guy running around inside the\ncomputer executing our programs, he would probably have as long\nand plaintive a tale to tell about his job as a federal government\nemployee. I often have a feeling that I'm sending the processor on\na lot of wild goose chases, but I've never had a good way to look\nat what it's doing.A number of Lisps now compile into byte code, which is then executed\nby an interpreter. This is usually done to make the implementation\neasier to port, but it could be a useful language feature. It might\nbe a good idea to make the byte code an official part of the\nlanguage, and to allow programmers to use inline byte code in\nbottlenecks. Then such optimizations would be portable too.The nature of speed, as perceived by the end-user, may be changing.\nWith the rise of server-based applications, more and more programs\nmay turn out to be i/o-bound. 
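Even now this takes little ceremony. In SBCL, for example (these\ncalls are implementation-specific; other Lisps have their own\nequivalents), you can wrap the functions you suspect, say the\nsketches above:\n\n  (sb-profile:profile matching-lines dot)\n  ;; exercise the program, then:\n  (sb-profile:report)\n\nAnd if the report shows the time going to the functions doing i/o\nrather than to computation, that is worth knowing too. 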
It will be worth making i/o fast.\nThe language can help with straightforward measures like simple,\nfast, formatted output functions, and also with deep structural\nchanges like caching and persistent objects.Users are interested in response time. But another kind of efficiency\nwill be increasingly important: the number of simultaneous users\nyou can support per processor. Many of the interesting applications\nwritten in the near future will be server-based, and the number of\nusers per server is the critical question for anyone hosting such\napplications. In the capital cost of a business offering a server-based\napplication, this is the divisor.For years, efficiency hasn't mattered much in most end-user\napplications. Developers have been able to assume that each user\nwould have an increasingly powerful processor sitting on their\ndesk. And by Parkinson's Law, software has expanded to use the\nresources available. That will change with server-based applications.\nIn that world, the hardware and software will be supplied together.\nFor companies that offer server-based applications, it will make\na very big difference to the bottom line how many users they can\nsupport per server.In some applications, the processor will be the limiting factor,\nand execution speed will be the most important thing to optimize.\nBut often memory will be the limit; the number of simultaneous\nusers will be determined by the amount of memory you need for each\nuser's data. The language can help here too. Good support for\nthreads will enable all the users to share a single heap. It may\nalso help to have persistent objects and/or language level support\nfor lazy loading.9 TimeThe last ingredient a popular language needs is time. No one wants\nto write programs in a language that might go away, as so many\nprogramming languages do. So most hackers will tend to wait until\na language has been around for a couple years before even considering\nusing it.Inventors of wonderful new things are often surprised to discover\nthis, but you need time to get any message through to people. A\nfriend of mine rarely does anything the first time someone asks\nhim. He knows that people sometimes ask for things that they turn\nout not to want. To avoid wasting his time, he waits till the third\nor fourth time he's asked to do something; by then, whoever's asking\nhim may be fairly annoyed, but at least they probably really do\nwant whatever they're asking for.Most people have learned to do a similar sort of filtering on new\nthings they hear about. They don't even start paying attention\nuntil they've heard about something ten times. They're perfectly\njustified: the majority of hot new whatevers do turn out to be a\nwaste of time, and eventually go away. By delaying learning VRML,\nI avoided having to learn it at all.So anyone who invents something new has to expect to keep repeating\ntheir message for years before people will start to get it. We\nwrote what was, as far as I know, the first web-server based\napplication, and it took us years to get it through to people that\nit didn't have to be downloaded. It wasn't that they were stupid.\nThey just had us tuned out.The good news is, simple repetition solves the problem. All you\nhave to do is keep telling your story, and eventually people will\nstart to hear. 
It's not when people notice you're there that they\npay attention; it's when they notice you're still there.It's just as well that it usually takes a while to gain momentum.\nMost technologies evolve a good deal even after they're first\nlaunched \u2014 programming languages especially. Nothing could be better,\nfor a new technology, than a few years of being used only by a small\nnumber of early adopters. Early adopters are sophisticated and\ndemanding, and quickly flush out whatever flaws remain in your\ntechnology. When you only have a few users you can be in close\ncontact with all of them. And early adopters are forgiving when\nyou improve your system, even if this causes some breakage.There are two ways new technology gets introduced: the organic\ngrowth method, and the big bang method. The organic growth method\nis exemplified by the classic seat-of-the-pants underfunded garage\nstartup. A couple guys, working in obscurity, develop some new\ntechnology. They launch it with no marketing and initially have\nonly a few (fanatically devoted) users. They continue to improve\nthe technology, and meanwhile their user base grows by word of\nmouth. Before they know it, they're big.The other approach, the big bang method, is exemplified by the\nVC-backed, heavily marketed startup. They rush to develop a product,\nlaunch it with great publicity, and immediately (they hope) have\na large user base.Generally, the garage guys envy the big bang guys. The big bang\nguys are smooth and confident and respected by the VCs. They can\nafford the best of everything, and the PR campaign surrounding the\nlaunch has the side effect of making them celebrities. The organic\ngrowth guys, sitting in their garage, feel poor and unloved. And\nyet I think they are often mistaken to feel sorry for themselves.\nOrganic growth seems to yield better technology and richer founders\nthan the big bang method. If you look at the dominant technologies\ntoday, you'll find that most of them grew organically.This pattern doesn't only apply to companies. You see it in sponsored\nresearch too. Multics and Common Lisp were big-bang projects, and\nUnix and MacLisp were organic growth projects.10 Redesign\"The best writing is rewriting,\" wrote E. B. White. Every good\nwriter knows this, and it's true for software too. The most important\npart of design is redesign. Programming languages, especially,\ndon't get redesigned enough.To write good software you must simultaneously keep two opposing\nideas in your head. You need the young hacker's naive faith in\nhis abilities, and at the same time the veteran's skepticism. You\nhave to be able to think \nhow hard can it be? with one half of\nyour brain while thinking \nit will never work with the other.The trick is to realize that there's no real contradiction here.\nYou want to be optimistic and skeptical about two different things.\nYou have to be optimistic about the possibility of solving the\nproblem, but skeptical about the value of whatever solution you've\ngot so far.People who do good work often think that whatever they're working\non is no good. Others see what they've done and are full of wonder,\nbut the creator is full of worry. This pattern is no coincidence:\nit is the worry that made the work good.If you can keep hope and worry balanced, they will drive a project\nforward the same way your two legs drive a bicycle forward. 
In the\nfirst phase of the two-cycle innovation engine, you work furiously\non some problem, inspired by your confidence that you'll be able\nto solve it. In the second phase, you look at what you've done in\nthe cold light of morning, and see all its flaws very clearly. But\nas long as your critical spirit doesn't outweigh your hope, you'll\nbe able to look at your admittedly incomplete system, and think,\nhow hard can it be to get the rest of the way?, thereby continuing\nthe cycle.It's tricky to keep the two forces balanced. In young hackers,\noptimism predominates. They produce something, are convinced it's\ngreat, and never improve it. In old hackers, skepticism predominates,\nand they won't even dare to take on ambitious projects.Anything you can do to keep the redesign cycle going is good. Prose\ncan be rewritten over and over until you're happy with it. But\nsoftware, as a rule, doesn't get redesigned enough. Prose has\nreaders, but software has users. If a writer rewrites an essay,\npeople who read the old version are unlikely to complain that their\nthoughts have been broken by some newly introduced incompatibility.Users are a double-edged sword. They can help you improve your\nlanguage, but they can also deter you from improving it. So choose\nyour users carefully, and be slow to grow their number. Having\nusers is like optimization: the wise course is to delay it. Also,\nas a general rule, you can at any given time get away with changing\nmore than you think. Introducing change is like pulling off a\nbandage: the pain is a memory almost as soon as you feel it.Everyone knows that it's not a good idea to have a language designed\nby a committee. Committees yield bad design. But I think the worst\ndanger of committees is that they interfere with redesign. It is\nso much work to introduce changes that no one wants to bother.\nWhatever a committee decides tends to stay that way, even if most\nof the members don't like it.Even a committee of two gets in the way of redesign. This happens\nparticularly in the interfaces between pieces of software written\nby two different people. To change the interface both have to agree\nto change it at once. And so interfaces tend not to change at all,\nwhich is a problem because they tend to be one of the most ad hoc\nparts of any system.One solution here might be to design systems so that interfaces\nare horizontal instead of vertical \u2014 so that modules are always\nvertically stacked strata of abstraction. Then the interface will\ntend to be owned by one of them. The lower of two levels will either\nbe a language in which the upper is written, in which case the\nlower level will own the interface, or it will be a slave, in which\ncase the interface can be dictated by the upper level.11 LispWhat all this implies is that there is hope for a new Lisp. There\nis hope for any language that gives hackers what they want, including\nLisp. I think we may have made a mistake in thinking that hackers\nare turned off by Lisp's strangeness. This comforting illusion may\nhave prevented us from seeing the real problem with Lisp, or at\nleast Common Lisp, which is that it sucks for doing what hackers\nwant to do. A hacker's language needs powerful libraries and\nsomething to hack. Common Lisp has neither. A hacker's language is\nterse and hackable. Common Lisp is not.The good news is, it's not Lisp that sucks, but Common Lisp. If we\ncan develop a new Lisp that is a real hacker's language, I think\nhackers will use it. 
They will use whatever language does the job.\nAll we have to do is make sure this new Lisp does some important\njob better than other languages.History offers some encouragement. Over time, successive new\nprogramming languages have taken more and more features from Lisp.\nThere is no longer much left to copy before the language you've\nmade is Lisp. The latest hot language, Python, is a watered-down\nLisp with infix syntax and no macros. A new Lisp would be a natural\nstep in this progression.I sometimes think that it would be a good marketing trick to call\nit an improved version of Python. That sounds hipper than Lisp. To\nmany people, Lisp is a slow AI language with a lot of parentheses.\nFritz Kunze's official biography carefully avoids mentioning the\nL-word. But my guess is that we shouldn't be afraid to call the\nnew Lisp Lisp. Lisp still has a lot of latent respect among the\nvery best hackers \u2014 the ones who took 6.001 and understood it, for\nexample. And those are the users you need to win.In \"How to Become a Hacker,\" Eric Raymond describes Lisp as something\nlike Latin or Greek \u2014 a language you should learn as an intellectual\nexercise, even though you won't actually use it:\n\n Lisp is worth learning for the profound enlightenment experience\n you will have when you finally get it; that experience will make\n you a better programmer for the rest of your days, even if you\n never actually use Lisp itself a lot.\n\nIf I didn't know Lisp, reading this would set me asking questions.\nA language that would make me a better programmer, if it means\nanything at all, means a language that would be better for programming.\nAnd that is in fact the implication of what Eric is saying.As long as that idea is still floating around, I think hackers will\nbe receptive enough to a new Lisp, even if it is called Lisp. But\nthis Lisp must be a hacker's language, like the classic Lisps of\nthe 1970s. It must be terse, simple, and hackable. And it must have\npowerful libraries for doing what hackers want to do now.In the matter of libraries I think there is room to beat languages\nlike Perl and Python at their own game. A lot of the new applications\nthat will need to be written in the coming years will be \nserver-based\napplications. There's no reason a new Lisp shouldn't have string\nlibraries as good as Perl, and if this new Lisp also had powerful\nlibraries for server-based applications, it could be very popular.\nReal hackers won't turn up their noses at a new tool that will let\nthem solve hard problems with a few library calls. Remember, hackers\nare lazy.It could be an even bigger win to have core language support for\nserver-based applications. For example, explicit support for programs\nwith multiple users, or data ownership at the level of type tags.Server-based applications also give us the answer to the question\nof what this new Lisp will be used to hack. It would not hurt to\nmake Lisp better as a scripting language for Unix. (It would be\nhard to make it worse.) But I think there are areas where existing\nlanguages would be easier to beat. I think it might be better to\nfollow the model of Tcl, and supply the Lisp together with a complete\nsystem for supporting server-based applications. Lisp is a natural\nfit for server-based applications. Lexical closures provide a way\nto get the effect of subroutines when the ui is just a series of\nweb pages. S-expressions map nicely onto html, and macros are good\nat generating it. 
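Here is the closure trick reduced to a sketch; the names are\ninvented, and a real server would also expire old entries:\n\n  (defvar *continuations* (make-hash-table))\n  (defvar *counter* 0)\n\n  (defun remember (k)\n    ;; store a closure, return an id to embed in the next page\n    (setf (gethash (incf *counter*) *continuations*) k)\n    *counter*)\n\n  (defun resume (id &rest args)\n    ;; called when the request for that page comes back\n    (apply (gethash id *continuations*) args))\n\nWhen the server sends a form, it calls remember on a closure that\nknows everything about the session so far; when the form comes back,\nresume picks up the computation where it left off, as if the\nintervening page had been an ordinary subroutine call. 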
There need to be better tools for writing\nserver-based applications, and there needs to be a new Lisp, and\nthe two would work very well together.12 The Dream LanguageBy way of summary, let's try describing the hacker's dream language.\nThe dream language is \nbeautiful, clean, and terse. It has an\ninteractive toplevel that starts up fast. You can write programs\nto solve common problems with very little code. Nearly all the\ncode in any program you write is code that's specific to your\napplication. Everything else has been done for you.The syntax of the language is brief to a fault. You never have to\ntype an unnecessary character, or even to use the shift key much.Using big abstractions you can write the first version of a program\nvery quickly. Later, when you want to optimize, there's a really\ngood profiler that tells you where to focus your attention. You\ncan make inner loops blindingly fast, even writing inline byte code\nif you need to.There are lots of good examples to learn from, and the language is\nintuitive enough that you can learn how to use it from examples in\na couple minutes. You don't need to look in the manual much. The\nmanual is thin, and has few warnings and qualifications.The language has a small core, and powerful, highly orthogonal\nlibraries that are as carefully designed as the core language. The\nlibraries all work well together; everything in the language fits\ntogether like the parts in a fine camera. Nothing is deprecated,\nor retained for compatibility. The source code of all the libraries\nis readily available. It's easy to talk to the operating system\nand to applications written in other languages.The language is built in layers. The higher-level abstractions are\nbuilt in a very transparent way out of lower-level abstractions,\nwhich you can get hold of if you want.Nothing is hidden from you that doesn't absolutely have to be. The\nlanguage offers abstractions only as a way of saving you work,\nrather than as a way of telling you what to do. In fact, the language\nencourages you to be an equal participant in its design. You can\nchange everything about it, including even its syntax, and anything\nyou write has, as much as possible, the same status as what comes\npredefined.Notes[1] Macros very close to the modern idea were proposed by Timothy\nHart in 1964, two years after Lisp 1.5 was released. What was\nmissing, initially, were ways to avoid variable capture and multiple\nevaluation; Hart's examples are subject to both.[2] In When the Air Hits Your Brain, neurosurgeon Frank Vertosick\nrecounts a conversation in which his chief resident, Gary, talks\nabout the difference between surgeons and internists (\"fleas\"):\n\n Gary and I ordered a large pizza and found an open booth. The\n chief lit a cigarette. \"Look at those goddamn fleas, jabbering\n about some disease they'll see once in their lifetimes. That's\n the trouble with fleas, they only like the bizarre stuff. They\n hate their bread and butter cases. That's the difference between\n us and the fucking fleas. See, we love big juicy lumbar disc\n herniations, but they hate hypertension....\"\n\nIt's hard to think of a lumbar disc herniation as juicy (except\nliterally). And yet I think I know what they mean. I've often had\na juicy bug to track down. Someone who's not a programmer would\nfind it hard to imagine that there could be pleasure in a bug.\nSurely it's better if everything just works. 
In one way, it is.\nAnd yet there is undeniably a grim satisfaction in hunting down\ncertain sorts of bugs."} {"title": "pow", "text": "January 2017People who are powerful but uncharismatic will tend to be disliked.\nTheir power makes them a target for criticism that they don't have\nthe charisma to disarm. That was Hillary Clinton's problem. It also\ntends to be a problem for any CEO who is more of a builder than a\nschmoozer. And yet the builder-type CEO is (like Hillary) probably\nthe best person for the job.I don't think there is any solution to this problem. It's human\nnature. The best we can do is to recognize that it's happening, and\nto understand that being a magnet for criticism is sometimes a sign\nnot that someone is the wrong person for a job, but that they're\nthe right one."} {"title": "submarine", "text": "April 2005\"Suits make a corporate comeback,\" says the New\nYork Times. Why does this sound familiar? Maybe because\nthe suit was also back in February,\n\nSeptember\n2004, June\n2004, March\n2004, September\n2003, \n\nNovember\n2002, \nApril 2002,\nand February\n2002.\n\nWhy do the media keep running stories saying suits are back? Because\nPR firms tell \nthem to. One of the most surprising things I discovered\nduring my brief business career was the existence of the PR industry,\nlurking like a huge, quiet submarine beneath the news. Of the\nstories you read in traditional media that aren't about politics,\ncrimes, or disasters, more than half probably come from PR firms.I know because I spent years hunting such \"press hits.\" Our startup spent\nits entire marketing budget on PR: at a time when we were assembling\nour own computers to save money, we were paying a PR firm $16,000\na month. And they were worth it. PR is the news equivalent of\nsearch engine optimization; instead of buying ads, which readers\nignore, you get yourself inserted directly into the stories. [1]Our PR firm\nwas one of the best in the business. In 18 months, they got press\nhits in over 60 different publications. \nAnd we weren't the only ones they did great things for. \nIn 1997 I got a call from another\nstartup founder considering hiring them to promote his company. I\ntold him they were PR gods, worth every penny of their outrageous \nfees. But I remember thinking his company's name was odd.\nWhy call an auction site \"eBay\"?\nSymbiosisPR is not dishonest. Not quite. In fact, the reason the best PR\nfirms are so effective is precisely that they aren't dishonest.\nThey give reporters genuinely valuable information. A good PR firm\nwon't bug reporters just because the client tells them to; they've\nworked hard to build their credibility with reporters, and they\ndon't want to destroy it by feeding them mere propaganda.If anyone is dishonest, it's the reporters. The main reason PR \nfirms exist is that reporters are lazy. Or, to put it more nicely,\noverworked. Really they ought to be out there digging up stories\nfor themselves. But it's so tempting to sit in their offices and\nlet PR firms bring the stories to them. After all, they know good\nPR firms won't lie to them.A good flatterer doesn't lie, but tells his victim selective truths\n(what a nice color your eyes are). Good PR firms use the same\nstrategy: they give reporters stories that are true, but whose truth\nfavors their clients.For example, our PR firm often pitched stories about how the Web \nlet small merchants compete with big ones. 
This was perfectly true.\nBut the reason reporters ended up writing stories about this\nparticular truth, rather than some other one, was that small merchants\nwere our target market, and we were paying the piper.Different publications vary greatly in their reliance on PR firms.\nAt the bottom of the heap are the trade press, who make most of\ntheir money from advertising and would give the magazines away for\nfree if advertisers would let them. [2] The average\ntrade publication is a bunch of ads, glued together by just enough\narticles to make it look like a magazine. They're so desperate for\n\"content\" that some will print your press releases almost verbatim,\nif you take the trouble to write them to read like articles.At the other extreme are publications like the New York Times\nand the Wall Street Journal. Their reporters do go out and\nfind their own stories, at least some of the time. They'll listen \nto PR firms, but briefly and skeptically. We managed to get press \nhits in almost every publication we wanted, but we never managed \nto crack the print edition of the Times. [3]The weak point of the top reporters is not laziness, but vanity.\nYou don't pitch stories to them. You have to approach them as if\nyou were a specimen under their all-seeing microscope, and make it\nseem as if the story you want them to run is something they thought \nof themselves.Our greatest PR coup was a two-part one. We estimated, based on\nsome fairly informal math, that there were about 5000 stores on the\nWeb. We got one paper to print this number, which seemed neutral \nenough. But once this \"fact\" was out there in print, we could quote\nit to other publications, and claim that with 1000 users we had 20%\nof the online store market.This was roughly true. We really did have the biggest share of the\nonline store market, and 5000 was our best guess at its size. But\nthe way the story appeared in the press sounded a lot more definite.Reporters like definitive statements. For example, many of the\nstories about Jeremy Jaynes's conviction say that he was one of the\n10 worst spammers. This \"fact\" originated in Spamhaus's ROKSO list,\nwhich I think even Spamhaus would admit is a rough guess at the top\nspammers. The first stories about Jaynes cited this source, but\nnow it's simply repeated as if it were part of the indictment. \n[4]All you can say with certainty about Jaynes is that he was a fairly\nbig spammer. But reporters don't want to print vague stuff like\n\"fairly big.\" They want statements with punch, like \"top ten.\" And\nPR firms give them what they want.\nWearing suits, we're told, will make us \n3.6\npercent more productive.BuzzWhere the work of PR firms really does get deliberately misleading is in\nthe generation of \"buzz.\" They usually feed the same story to \nseveral different publications at once. And when readers see similar\nstories in multiple places, they think there is some important trend\nafoot. Which is exactly what they're supposed to think.When Windows 95 was launched, people waited outside stores\nat midnight to buy the first copies. None of them would have been\nthere without PR firms, who generated such a buzz in\nthe news media that it became self-reinforcing, like a nuclear chain\nreaction.I doubt PR firms realize it yet, but the Web makes it possible to \ntrack them at work. If you search for the obvious phrases, you\nturn up several efforts over the years to place stories about the \nreturn of the suit. 
For example, the Reuters article \n\nthat got picked up by USA\nToday in September 2004. \"The suit is back,\" it begins.Trend articles like this are almost always the work of\nPR firms. Once you know how to read them, it's straightforward to\nfigure out who the client is. With trend stories, PR firms usually\nline up one or more \"experts\" to talk about the industry generally. \nIn this case we get three: the NPD Group, the creative director of\nGQ, and a research director at Smith Barney. [5] When\nyou get to the end of the experts, look for the client. And bingo, \nthere it is: The Men's Wearhouse.Not surprising, considering The Men's Wearhouse was at that moment \nrunning ads saying \"The Suit is Back.\" Talk about a successful\npress hit-- a wire service article whose first sentence is your own\nad copy.The secret to finding other press hits from a given pitch\nis to realize that they all started from the same document back at\nthe PR firm. Search for a few key phrases and the names of the\nclients and the experts, and you'll turn up other variants of this \nstory.Casual\nfridays are out and dress codes are in writes Diane E. Lewis\nin The Boston Globe. In a remarkable coincidence, Ms. Lewis's\nindustry contacts also include the creative director of GQ.Ripped jeans and T-shirts are out, writes Mary Kathleen Flynn in\nUS News & World Report. And she too knows the \ncreative director of GQ.Men's suits\nare back writes Nicole Ford in Sexbuzz.Com (\"the ultimate men's\nentertainment magazine\").Dressing\ndown loses appeal as men suit up at the office writes Tenisha\nMercer of The Detroit News.\nNow that so many news articles are online, I suspect you could find\na similar pattern for most trend stories placed by PR firms. I\npropose we call this new sport \"PR diving,\" and I'm sure there are\nfar more striking examples out there than this clump of five stories.OnlineAfter spending years chasing them, it's now second nature\nto me to recognize press hits for what they are. But before we\nhired a PR firm I had no idea where articles in the mainstream media\ncame from. I could tell a lot of them were crap, but I didn't\nrealize why.Remember the exercises in critical reading you did in school, where\nyou had to look at a piece of writing and step back and ask whether\nthe author was telling the whole truth? If you really want to be\na critical reader, it turns out you have to step back one step\nfurther, and ask not just whether the author is telling the truth,\nbut why he's writing about this subject at all.Online, the answer tends to be a lot simpler. Most people who\npublish online write what they write for the simple reason that\nthey want to. You\ncan't see the fingerprints of PR firms all over the articles, as\nyou can in so many print publications-- which is one of the reasons,\nthough they may not consciously realize it, that readers trust\nbloggers more than Business Week.I was talking recently to a friend who works for a\nbig newspaper. He thought the print media were in serious trouble,\nand that they were still mostly in denial about it. \"They think\nthe decline is cyclic,\" he said. \"Actually it's structural.\"In other words, the readers are leaving, and they're not coming\nback.\nWhy? 
I think the main reason is that the writing online is more honest.\nImagine how incongruous the New York Times article about\nsuits would sound if you read it in a blog:\n The urge to look corporate-- sleek, commanding,\n prudent, yet with just a touch of hubris on your well-cut sleeve--\n is an unexpected development in a time of business disgrace.\n \nThe problem\nwith this article is not just that it originated in a PR firm.\nThe whole tone is bogus. This is the tone of someone writing down\nto their audience.Whatever its flaws, the writing you find online\nis authentic. It's not mystery meat cooked up\nout of scraps of pitch letters and press releases, and pressed into \nmolds of zippy\njournalese. It's people writing what they think.I didn't realize, till there was an alternative, just how artificial\nmost of the writing in the mainstream media was. I'm not saying\nI used to believe what I read in Time and Newsweek. Since high\nschool, at least, I've thought of magazines like that more as\nguides to what ordinary people were being\ntold to think than as \nsources of information. But I didn't realize till the last \nfew years that writing for publication didn't have to mean writing\nthat way. I didn't realize you could write as candidly and\ninformally as you would if you were writing to a friend.Readers aren't the only ones who've noticed the\nchange. The PR industry has too.\nA hilarious article\non the site of the PR Society of America gets to the heart of the \nmatter:\n Bloggers are sensitive about becoming mouthpieces\n for other organizations and companies, which is the reason they\n began blogging in the first place. \nPR people fear bloggers for the same reason readers\nlike them. And that means there may be a struggle ahead. As\nthis new kind of writing draws readers away from traditional media, we\nshould be prepared for whatever PR mutates into to compensate. \nWhen I think \nhow hard PR firms work to score press hits in the traditional \nmedia, I can't imagine they'll work any less hard to feed stories\nto bloggers, if they can figure out how.\nNotes[1] PR has at least \none beneficial feature: it favors small companies. If PR didn't \nwork, the only alternative would be to advertise, and only big\ncompanies can afford that.[2] Advertisers pay \nless for ads in free publications, because they assume readers \nignore something they get for free. This is why so many trade\npublications nominally have a cover price and yet give away free\nsubscriptions with such abandon.[3] Different sections\nof the Times vary so much in their standards that they're\npractically different papers. Whoever fed the style section reporter\nthis story about suits coming back would have been sent packing by\nthe regular news reporters.[4] The most striking\nexample I know of this type is the \"fact\" that the Internet worm \nof 1988 infected 6000 computers. I was there when it was cooked up,\nand this was the recipe: someone guessed that there were about\n60,000 computers attached to the Internet, and that the worm might\nhave infected ten percent of them.Actually no one knows how many computers the worm infected, because\nthe remedy was to reboot them, and this destroyed all traces. But\npeople like numbers. And so this one is now replicated\nall over the Internet, like a little worm of its own.[5] Not all were\nnecessarily supplied by the PR firm. 
Reporters sometimes call a few\nadditional sources on their own, like someone adding a few fresh \nvegetables to a can of soup.\nThanks to Ingrid Basset, Trevor Blackwell, Sarah Harlin, Jessica \nLivingston, Jackie McDonough, Robert Morris, and Aaron Swartz (who\nalso found the PRSA article) for reading drafts of this.Correction: Earlier versions used a recent\nBusiness Week article mentioning del.icio.us as an example\nof a press hit, but Joshua Schachter tells me \nit was spontaneous."} {"title": "sun", "text": "September 2017The most valuable insights are both general and surprising. \nF\u00a0=\u00a0ma for example. But general and surprising is a hard\ncombination to achieve. That territory tends to be picked\nclean, precisely because those insights are so valuable.Ordinarily, the best that people can do is one without the\nother: either surprising without being general (e.g.\ngossip), or general without being surprising (e.g.\nplatitudes).Where things get interesting is the moderately valuable\ninsights. You get those from small additions of whichever\nquality was missing. The more common case is a small\naddition of generality: a piece of gossip that's more than\njust gossip, because it teaches something interesting about\nthe world. But another less common approach is to focus on\nthe most general ideas and see if you can find something new\nto say about them. Because these start out so general, you\nonly need a small delta of novelty to produce a useful\ninsight.A small delta of novelty is all you'll be able to get most\nof the time. Which means if you take this route, your ideas\nwill seem a lot like ones that already exist. Sometimes\nyou'll find you've merely rediscovered an idea that did\nalready exist. But don't be discouraged. Remember the huge\nmultiplier that kicks in when you do manage to think of\nsomething even a little new.Corollary: the more general the ideas you're talking about,\nthe less you should worry about repeating yourself. If you\nwrite enough, it's inevitable you will. Your brain is much\nthe same from year to year and so are the stimuli that hit\nit. I feel slightly bad when I find I've said something\nclose to what I've said before, as if I were plagiarizing\nmyself. But rationally one shouldn't. You won't say\nsomething exactly the same way the second time, and that\nvariation increases the chance you'll get that tiny but\ncritical delta of novelty.And of course, ideas beget ideas. (That sounds \nfamiliar.)\nAn idea with a small amount of novelty could lead to one\nwith more. But only if you keep going. So it's doubly\nimportant not to let yourself be discouraged by people who\nsay there's not much new about something you've discovered.\n\"Not much new\" is a real achievement when you're talking\nabout the most general ideas. It's not true that there's nothing new under the sun. There\nare some domains where there's almost nothing new. But\nthere's a big difference between nothing and almost nothing,\nwhen it's multiplied by the area under the sun.\nThanks to Sam Altman, Patrick Collison, and Jessica\nLivingston for reading drafts of this."} {"title": "weird", "text": "August 2021When people say that in their experience all programming languages\nare basically equivalent, they're making a statement not about\nlanguages but about the kind of programming they've done.99.5% of programming consists of gluing together calls to library\nfunctions. All popular languages are equally good at this. 
So one\ncan easily spend one's whole career operating in the intersection\nof popular programming languages.But the other .5% of programming is disproportionately interesting.\nIf you want to learn what it consists of, the weirdness of weird\nlanguages is a good clue to follow.Weird languages aren't weird by accident. Not the good ones, at\nleast. The weirdness of the good ones usually implies the existence\nof some form of programming that's not just the usual gluing together\nof library calls.A concrete example: Lisp macros. Lisp macros seem weird even to\nmany Lisp programmers. They're not only not in the intersection of\npopular languages, but by their nature would be hard to implement\nproperly in a language without turning it into a dialect of\nLisp. And macros are definitely evidence of techniques that go\nbeyond glue programming. For example, solving problems by first\nwriting a language for problems of that type, and then writing\nyour specific application in it. Nor is this all you can do with\nmacros; it's just one region in a space of program-manipulating\ntechniques that even now is far from fully explored.So if you want to expand your concept of what programming can be,\none way to do it is by learning weird languages. Pick a language\nthat most programmers consider weird but whose median user is smart,\nand then focus on the differences between this language and the\nintersection of popular languages. What can you say in this language\nthat would be impossibly inconvenient to say in others? In the\nprocess of learning how to say things you couldn't previously say,\nyou'll probably be learning how to think things you couldn't\npreviously think.\nThanks to Trevor Blackwell, Patrick Collison, Daniel Gackle, Amjad\nMasad, and Robert Morris for reading drafts of this.\n"} {"title": "diff", "text": "December 2001 (rev. May 2002)\n\n(This article came about in response to some questions on\nthe LL1 mailing list. It is now\nincorporated in Revenge of the Nerds.)When McCarthy designed Lisp in the late 1950s, it was\na radical departure from existing languages,\nthe most important of which was Fortran.Lisp embodied nine new ideas:\n1. Conditionals. A conditional is an if-then-else\nconstruct. We take these for granted now. They were \ninvented\nby McCarthy in the course of developing Lisp. \n(Fortran at that time only had a conditional\ngoto, closely based on the branch instruction in the \nunderlying hardware.) McCarthy, who was on the Algol committee, got\nconditionals into Algol, whence they spread to most other\nlanguages.2. A function type. In Lisp, functions are first class \nobjects-- they're a data type just like integers, strings,\netc, and have a literal representation, can be stored in variables,\ncan be passed as arguments, and so on.3. Recursion. Recursion existed as a mathematical concept\nbefore Lisp of course, but Lisp was the first programming language to support\nit. (It's arguably implicit in making functions first class\nobjects.)4. A new concept of variables. In Lisp, all variables\nare effectively pointers. Values are what\nhave types, not variables, and assigning or binding\nvariables means copying pointers, not what they point to.5. Garbage-collection.6. Programs composed of expressions. Lisp programs are \ntrees of expressions, each of which returns a value. \n(In some Lisps expressions\ncan return multiple values.) 
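(In Common Lisp, for example:\n\n  (multiple-value-bind (quotient remainder)\n      (floor 17 5)\n    (list quotient remainder))\n\nfloor hands back 3 and 2 at once, and the binding form catches\nboth.) 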
This is in contrast to Fortran\nand most succeeding languages, which distinguish between\nexpressions and statements.It was natural to have this\ndistinction in Fortran because (not surprisingly in a language\nwhere the input format was punched cards) the language was\nline-oriented. You could not nest statements. And\nso while you needed expressions for math to work, there was\nno point in making anything else return a value, because\nthere could not be anything waiting for it.This limitation\nwent away with the arrival of block-structured languages,\nbut by then it was too late. The distinction between\nexpressions and statements was entrenched. It spread from \nFortran into Algol and thence to both their descendants.When a language is made entirely of expressions, you can\ncompose expressions however you want. You can say either\n(using Arc syntax)\n\n  (if foo (= x 1) (= x 2))\n\nor\n\n  (= x (if foo 1 2))\n\n7. A symbol type. Symbols differ from strings in that\nyou can test equality by comparing a pointer.8. A notation for code using trees of symbols.9. The whole language always available. \nThere is\nno real distinction between read-time, compile-time, and runtime.\nYou can compile or run code while reading, read or run code\nwhile compiling, and read or compile code at runtime.Running code at read-time lets users reprogram Lisp's syntax;\nrunning code at compile-time is the basis of macros; compiling\nat runtime is the basis of Lisp's use as an extension\nlanguage in programs like Emacs; and reading at runtime\nenables programs to communicate using s-expressions, an\nidea recently reinvented as XML.\nWhen Lisp was first invented, all these ideas were far\nremoved from ordinary programming practice, which was\ndictated largely by the hardware available in the late 1950s.Over time, the default language, embodied\nin a succession of popular languages, has\ngradually evolved toward Lisp. 1-5 are now widespread.\n6 is starting to appear in the mainstream.\nPython has a form of 7, though there doesn't seem to be\nany syntax for it. \n8, which (with 9) is what makes Lisp macros\npossible, is so far still unique to Lisp,\nperhaps because (a) it requires those parens, or something \njust as bad, and (b) if you add that final increment of power, \nyou can no \nlonger claim to have invented a new language, but only\nto have designed a new dialect of Lisp ; -)Though useful to present-day programmers, it's\nstrange to describe Lisp in terms of its\nvariation from the random expedients other languages\nadopted. That was not, probably, how McCarthy\nthought of it. Lisp wasn't designed to fix the mistakes\nin Fortran; it came about more as the byproduct of an\nattempt to axiomatize computation."} {"title": "hubs", "text": "October 2011If you look at a list of US cities sorted by population, the number\nof successful startups per capita varies by orders of magnitude.\nSomehow it's as if most places were sprayed with startupicide.I wondered about this for years. I could see the average town was\nlike a roach motel for startup ambitions: smart, ambitious people\nwent in, but no startups came out. But I was never able to figure\nout exactly what happened inside the motel\u2014exactly what was\nkilling all the potential startups.\n[1]A couple weeks ago I finally figured it out. I was framing the\nquestion wrong. The problem is not that most towns kill startups.\nIt's that death is the default for startups,\nand most towns don't save them. 
Instead of thinking of most places\nas being sprayed with startupicide, it's more accurate to think of\nstartups as all being poisoned, and a few places being sprayed with\nthe antidote.Startups in other places are just doing what startups naturally do:\nfail. The real question is, what's saving startups in places\nlike Silicon Valley?\n[2]EnvironmentI think there are two components to the antidote: being in a place\nwhere startups are the cool thing to do, and chance meetings with\npeople who can help you. And what drives them both is the number\nof startup people around you.The first component is particularly helpful in the first stage of\na startup's life, when you go from merely having an interest in\nstarting a company to actually doing it. It's quite a leap to start\na startup. It's an unusual thing to do. But in Silicon Valley it\nseems normal.\n[3]In most places, if you start a startup, people treat you as if\nyou're unemployed. People in the Valley aren't automatically\nimpressed with you just because you're starting a company, but they\npay attention. Anyone who's been here any amount of time knows not\nto default to skepticism, no matter how inexperienced you seem or\nhow unpromising your idea sounds at first, because they've all seen\ninexperienced founders with unpromising sounding ideas who a few\nyears later were billionaires.Having people around you care about what you're doing is an\nextraordinarily powerful force. Even the\nmost willful people are susceptible to it. About a year after we\nstarted Y Combinator I said something to a partner at a well known\nVC firm that gave him the (mistaken) impression I was considering\nstarting another startup. He responded so eagerly that for about\nhalf a second I found myself considering doing it.In most other cities, the prospect of starting a startup just doesn't\nseem real. In the Valley it's not only real but fashionable. That\nno doubt causes a lot of people to start startups who shouldn't.\nBut I think that's ok. Few people are suited to running a startup,\nand it's very hard to predict beforehand which are (as I know all\ntoo well from being in the business of trying to predict beforehand),\nso lots of people starting startups who shouldn't is probably the\noptimal state of affairs. As long as you're at a point in your\nlife when you can bear the risk of failure, the best way to find\nout if you're suited to running a startup is to try\nit.ChanceThe second component of the antidote is chance meetings with people\nwho can help you. This force works in both phases: both in the\ntransition from the desire to start a startup to starting one, and\nthe transition from starting a company to succeeding. The power\nof chance meetings is more variable than people around you caring\nabout startups, which is like a sort of background radiation that\naffects everyone equally, but at its strongest it is far stronger.Chance meetings produce miracles to compensate for the disasters\nthat characteristically befall startups. In the Valley, terrible\nthings happen to startups all the time, just like they do to startups\neverywhere. The reason startups are more likely to make it here\nis that great things happen to them too. In the Valley, lightning\nhas a sign bit.For example, you start a site for college students and you decide\nto move to the Valley for the summer to work on it. 
And then on a\nrandom suburban street in Palo Alto you happen to run into Sean\nParker, who understands the domain really well because he started\na similar startup himself, and also knows all the investors. And\nmoreover has advanced views, for 2004, on founders retaining control of their companies.You can't say precisely what the miracle will be, or even for sure\nthat one will happen. The best one can say is: if you're in a\nstartup hub, unexpected good things will probably happen to you,\nespecially if you deserve them.I bet this is true even for startups we fund. Even with us working\nto make things happen for them on purpose rather than by accident,\nthe frequency of helpful chance meetings in the Valley is so high\nthat it's still a significant increment on what we can deliver.Chance meetings play a role like the role relaxation plays in having\nideas. Most people have had the experience of working hard on some\nproblem, not being able to solve it, giving up and going to bed,\nand then thinking of the answer in the shower in the morning. What\nmakes the answer appear is letting your thoughts drift a bit\u2014and thus drift off the wrong\npath you'd been pursuing last night and onto the right one adjacent\nto it.Chance meetings let your acquaintance drift in the same way taking\na shower lets your thoughts drift. The critical thing in both cases\nis that they drift just the right amount. The meeting between Larry\nPage and Sergey Brin was a good example. They let their acquaintance\ndrift, but only a little; they were both meeting someone they had\na lot in common with.For Larry Page the most important component of the antidote was\nSergey Brin, and vice versa. The antidote is \npeople. It's not the\nphysical infrastructure of Silicon Valley that makes it work, or\nthe weather, or anything like that. Those helped get it started,\nbut now that the reaction is self-sustaining what drives it is the\npeople.Many observers have noticed that one of the most distinctive things\nabout startup hubs is the degree to which people help one another\nout, with no expectation of getting anything in return. I'm not\nsure why this is so. Perhaps it's because startups are less of a\nzero sum game than most types of business; they are rarely killed\nby competitors. Or perhaps it's because so many startup founders\nhave backgrounds in the sciences, where collaboration is encouraged.A large part of YC's function is to accelerate that process. We're\na sort of Valley within the Valley, where the density of people\nworking on startups and their willingness to help one another are\nboth artificially amplified.NumbersBoth components of the antidote\u2014an environment that encourages\nstartups, and chance meetings with people who help you\u2014are\ndriven by the same underlying cause: the number of startup people\naround you. To make a startup hub, you need a lot of people\ninterested in startups.There are three reasons. The first, obviously, is that if you don't\nhave enough density, the chance meetings don't happen.\n[4]\nThe second is that different startups need such different things, so\nyou need a lot of people to supply each startup with what they need\nmost. Sean Parker was exactly what Facebook needed in 2004. Another\nstartup might have needed a database guy, or someone with connections\nin the movie business.This is one of the reasons we fund such a large number of companies,\nincidentally. 
The bigger the community, the greater the chance it\nwill contain the person who has that one thing you need most.The third reason you need a lot of people to make a startup hub is\nthat once you have enough people interested in the same problem,\nthey start to set the social norms. And it is a particularly\nvaluable thing when the atmosphere around you encourages you to do\nsomething that would otherwise seem too ambitious. In most places\nthe atmosphere pulls you back toward the mean.I flew into the Bay Area a few days ago. I notice this every time\nI fly over the Valley: somehow you can sense something is going on. \nObviously you can sense prosperity in how well kept a\nplace looks. But there are different kinds of prosperity. Silicon\nValley doesn't look like Boston, or New York, or LA, or DC. I tried\nasking myself what word I'd use to describe the feeling the Valley\nradiated, and the word that came to mind was optimism.Notes[1]\nI'm not saying it's impossible to succeed in a city with few\nother startups, just harder. If you're sufficiently good at\ngenerating your own morale, you can survive without external\nencouragement. Wufoo was based in Tampa and they succeeded. But\nthe Wufoos are exceptionally disciplined.[2]\nIncidentally, this phenomenon is not limited to startups. Most\nunusual ambitions fail, unless the person who has them manages to\nfind the right sort of community.[3]\nStarting a company is common, but starting a startup is rare.\nI've talked about the distinction between the two elsewhere, but\nessentially a startup is a new business designed for scale. Most\nnew businesses are service businesses and except in rare cases those\ndon't scale.[4]\nAs I was writing this, I had a demonstration of the density of\nstartup people in the Valley. Jessica and I bicycled to University\nAve in Palo Alto to have lunch at the fabulous Oren's Hummus. As\nwe walked in, we met Charlie Cheever sitting near the door. Selina\nTobaccowala stopped to say hello on her way out. Then Josh Wilson\ncame in to pick up a take out order. After lunch we went to get\nfrozen yogurt. On the way we met Rajat Suri. When we got to the\nyogurt place, we found Dave Shen there, and as we walked out we ran\ninto Yuri Sagalov. We walked with him for a block or so and we ran\ninto Muzzammil Zaveri, and then a block later we met Aydin Senkut.\nThis is everyday life in Palo Alto. I wasn't trying to meet people;\nI was just having lunch. And I'm sure for every startup founder\nor investor I saw that I knew, there were 5 more I didn't. If Ron\nConway had been with us he would have met 30 people he knew.Thanks to Sam Altman, Paul Buchheit, Jessica Livingston, and\nHarj Taggar for reading drafts of this."} {"title": "iflisp", "text": "May 2003If Lisp is so great, why don't more people use it? I was \nasked this question by a student in the audience at a \ntalk I gave recently. Not for the first time, either.In languages, as in so many things, there's not much \ncorrelation between popularity and quality. Why does \nJohn Grisham (King of Torts sales rank, 44) outsell\nJane Austen (Pride and Prejudice sales rank, 6191)?\nWould even Grisham claim that it's because he's a better\nwriter?Here's the first sentence of Pride and Prejudice:\n\nIt is a truth universally acknowledged, that a single man \nin possession of a good fortune must be in want of a\nwife.\n\n\"It is a truth universally acknowledged?\" Long words for\nthe first sentence of a love story.Like Jane Austen, Lisp looks hard. 
Its syntax, or lack\nof syntax, makes it look completely unlike \nthe languages\nmost people are used to. Before I learned Lisp, I was afraid\nof it too. I recently came across a notebook from 1983\nin which I'd written:\n\nI suppose I should learn Lisp, but it seems so foreign.\n\nFortunately, I was 19 at the time and not too resistant to learning\nnew things. I was so ignorant that learning\nalmost anything meant learning new things.People frightened by Lisp make up other reasons for not\nusing it. The standard\nexcuse, back when C was the default language, was that Lisp\nwas too slow. Now that Lisp dialects are among\nthe faster\nlanguages available, that excuse has gone away.\nNow the standard excuse is openly circular: that other languages\nare more popular.(Beware of such reasoning. It gets you Windows.)Popularity is always self-perpetuating, but it's especially\nso in programming languages. More libraries\nget written for popular languages, which makes them still\nmore popular. Programs often have to work with existing programs,\nand this is easier if they're written in the same language,\nso languages spread from program to program like a virus.\nAnd managers prefer popular languages, because they give them \nmore leverage over developers, who can more easily be replaced.Indeed, if programming languages were all more or less equivalent,\nthere would be little justification for using any but the most\npopular. But they aren't all equivalent, not by a long\nshot. And that's why less popular languages, like Jane Austen's \nnovels, continue to survive at all. When everyone else is reading \nthe latest John Grisham novel, there will always be a few people \nreading Jane Austen instead."} {"title": "know", "text": "December 2014I've read Villehardouin's chronicle of the Fourth Crusade at least\ntwo times, maybe three. And yet if I had to write down everything\nI remember from it, I doubt it would amount to much more than a\npage. Multiply this times several hundred, and I get an uneasy\nfeeling when I look at my bookshelves. What use is it to read all\nthese books if I remember so little from them?A few months ago, as I was reading Constance Reid's excellent\nbiography of Hilbert, I figured out if not the answer to this\nquestion, at least something that made me feel better about it.\nShe writes:\n\n Hilbert had no patience with mathematical lectures which filled\n the students with facts but did not teach them how to frame a\n problem and solve it. He often used to tell them that \"a perfect\n formulation of a problem is already half its solution.\"\n\nThat has always seemed to me an important point, and I was even\nmore convinced of it after hearing it confirmed by Hilbert.But how had I come to believe in this idea in the first place? A\ncombination of my own experience and other things I'd read. None\nof which I could at that moment remember! And eventually I'd forget\nthat Hilbert had confirmed it too. But my increased belief in the\nimportance of this idea would remain something I'd learned from\nthis book, even after I'd forgotten I'd learned it.Reading and experience train your model of the world. And even if\nyou forget the experience or what you read, its effect on your model\nof the world persists. Your mind is like a compiled program you've\nlost the source of. 
It works, but you don't know why.The place to look for what I learned from Villehardouin's chronicle\nis not what I remember from it, but my mental models of the crusades,\nVenice, medieval culture, siege warfare, and so on. Which doesn't\nmean I couldn't have read more attentively, but at least the harvest\nof reading is not so miserably small as it might seem.This is one of those things that seem obvious in retrospect. But\nit was a surprise to me and presumably would be to anyone else who\nfelt uneasy about (apparently) forgetting so much they'd read.Realizing it does more than make you feel a little better about\nforgetting, though. There are specific implications.For example, reading and experience are usually \"compiled\" at the\ntime they happen, using the state of your brain at that time. The\nsame book would get compiled differently at different points in\nyour life. Which means it is very much worth reading important\nbooks multiple times. I always used to feel some misgivings about\nrereading books. I unconsciously lumped reading together with work\nlike carpentry, where having to do something again is a sign you\ndid it wrong the first time. Whereas now the phrase \"already read\"\nseems almost ill-formed.Intriguingly, this implication isn't limited to books. Technology\nwill increasingly make it possible to relive our experiences. When\npeople do that today it's usually to enjoy them again (e.g. when\nlooking at pictures of a trip) or to find the origin of some bug in\ntheir compiled code (e.g. when Stephen Fry succeeded in remembering\nthe childhood trauma that prevented him from singing). But as\ntechnologies for recording and playing back your life improve, it\nmay become common for people to relive experiences without any goal\nin mind, simply to learn from them again as one might when rereading\na book.Eventually we may be able not just to play back experiences but\nalso to index and even edit them. So although not knowing how you\nknow things may seem part of being human, it may not be.\nThanks to Sam Altman, Jessica Livingston, and Robert Morris for reading \ndrafts of this."} {"title": "rss", "text": "Aaron Swartz created a scraped\nfeed\nof the essays page."} {"title": "todo", "text": "April 2012A palliative care nurse called Bronnie Ware made a list of the\nbiggest regrets\nof the dying. Her list seems plausible. I could see\nmyself \u2014 can see myself \u2014 making at least 4 of these\n5 mistakes.If you had to compress them into a single piece of advice, it might\nbe: don't be a cog. The 5 regrets paint a portrait of post-industrial\nman, who shrinks himself into a shape that fits his circumstances,\nthen turns dutifully till he stops.The alarming thing is, the mistakes that produce these regrets are\nall errors of omission. You forget your dreams, ignore your family,\nsuppress your feelings, neglect your friends, and forget to be\nhappy. Errors of omission are a particularly dangerous type of\nmistake, because you make them by default.I would like to avoid making these mistakes. But how do you avoid\nmistakes you make by default? Ideally you transform your life so\nit has other defaults. But it may not be possible to do that\ncompletely. As long as these mistakes happen by default, you probably\nhave to be reminded not to make them. 
So I inverted the 5 regrets,\nyielding a list of 5 commands\n\n Don't ignore your dreams; don't work too much; say what you\n think; cultivate friendships; be happy.\n\nwhich I then put at the top of the file I use as a todo list."} {"title": "vb", "text": "January 2016Life is short, as everyone knows. When I was a kid I used to wonder\nabout this. Is life actually short, or are we really complaining\nabout its finiteness? Would we be just as likely to feel life was\nshort if we lived 10 times as long?Since there didn't seem any way to answer this question, I stopped\nwondering about it. Then I had kids. That gave me a way to answer\nthe question, and the answer is that life actually is short.Having kids showed me how to convert a continuous quantity, time,\ninto discrete quantities. You only get 52 weekends with your 2 year\nold. If Christmas-as-magic lasts from say ages 3 to 10, you only\nget to watch your child experience it 8 times. And while it's\nimpossible to say what is a lot or a little of a continuous quantity\nlike time, 8 is not a lot of something. If you had a handful of 8\npeanuts, or a shelf of 8 books to choose from, the quantity would\ndefinitely seem limited, no matter what your lifespan was.Ok, so life actually is short. Does it make any difference to know\nthat?It has for me. It means arguments of the form \"Life is too short\nfor x\" have great force. It's not just a figure of speech to say\nthat life is too short for something. It's not just a synonym for\nannoying. If you find yourself thinking that life is too short for\nsomething, you should try to eliminate it if you can.When I ask myself what I've found life is too short for, the word\nthat pops into my head is \"bullshit.\" I realize that answer is\nsomewhat tautological. It's almost the definition of bullshit that\nit's the stuff that life is too short for. And yet bullshit does\nhave a distinctive character. There's something fake about it.\nIt's the junk food of experience.\n[1]If you ask yourself what you spend your time on that's bullshit,\nyou probably already know the answer. Unnecessary meetings, pointless\ndisputes, bureaucracy, posturing, dealing with other people's\nmistakes, traffic jams, addictive but unrewarding pastimes.There are two ways this kind of thing gets into your life: it's\neither forced on you, or it tricks you. To some extent you have to\nput up with the bullshit forced on you by circumstances. You need\nto make money, and making money consists mostly of errands. Indeed,\nthe law of supply and demand insures that: the more rewarding some\nkind of work is, the cheaper people will do it. It may be that\nless bullshit is forced on you than you think, though. There has\nalways been a stream of people who opt out of the default grind and\ngo live somewhere where opportunities are fewer in the conventional\nsense, but life feels more authentic. This could become more common.You can do it on a smaller scale without moving. The amount of\ntime you have to spend on bullshit varies between employers. Most\nlarge organizations (and many small ones) are steeped in it. But\nif you consciously prioritize bullshit avoidance over other factors\nlike money and prestige, you can probably find employers that will\nwaste less of your time.If you're a freelancer or a small company, you can do this at the\nlevel of individual customers. 
If you fire or avoid toxic customers,\nyou can decrease the amount of bullshit in your life by more than\nyou decrease your income.But while some amount of bullshit is inevitably forced on you, the\nbullshit that sneaks into your life by tricking you is no one's\nfault but your own. And yet the bullshit you choose may be harder\nto eliminate than the bullshit that's forced on you. Things that\nlure you into wasting your time have to be really good at\ntricking you. An example that will be familiar to a lot of people\nis arguing online. When someone\ncontradicts you, they're in a sense attacking you. Sometimes pretty\novertly. Your instinct when attacked is to defend yourself. But\nlike a lot of instincts, this one wasn't designed for the world we\nnow live in. Counterintuitive as it feels, it's better most of\nthe time not to defend yourself. Otherwise these people are literally\ntaking your life.\n[2]Arguing online is only incidentally addictive. There are more\ndangerous things than that. As I've written before, one byproduct\nof technical progress is that things we like tend to become more\naddictive. Which means we will increasingly have to make a conscious\neffort to avoid addictions \u2014 to stand outside ourselves and ask \"is\nthis how I want to be spending my time?\"As well as avoiding bullshit, one should actively seek out things\nthat matter. But different things matter to different people, and\nmost have to learn what matters to them. A few are lucky and realize\nearly on that they love math or taking care of animals or writing,\nand then figure out a way to spend a lot of time doing it. But\nmost people start out with a life that's a mix of things that\nmatter and things that don't, and only gradually learn to distinguish\nbetween them.For the young especially, much of this confusion is induced by the\nartificial situations they find themselves in. In middle school and\nhigh school, what the other kids think of you seems the most important\nthing in the world. But when you ask adults what they got wrong\nat that age, nearly all say they cared too much what other kids\nthought of them.One heuristic for distinguishing stuff that matters is to ask\nyourself whether you'll care about it in the future. Fake stuff\nthat matters usually has a sharp peak of seeming to matter. That's\nhow it tricks you. The area under the curve is small, but its shape\njabs into your consciousness like a pin.The things that matter aren't necessarily the ones people would\ncall \"important.\" Having coffee with a friend matters. You won't\nfeel later like that was a waste of time.One great thing about having small children is that they make you\nspend time on things that matter: them. They grab your sleeve as\nyou're staring at your phone and say \"will you play with me?\" And\nodds are that is in fact the bullshit-minimizing option.If life is short, we should expect its shortness to take us by\nsurprise. And that is just what tends to happen. You take things\nfor granted, and then they're gone. You think you can always write\nthat book, or climb that mountain, or whatever, and then you realize\nthe window has closed. The saddest windows close when other people\ndie. Their lives are short too. After my mother died, I wished I'd\nspent more time with her. I lived as if she'd always be there.\nAnd in her typical quiet way she encouraged that illusion. But an\nillusion it was. 
I think a lot of people make the same mistake I\ndid.The usual way to avoid being taken by surprise by something is to\nbe consciously aware of it. Back when life was more precarious,\npeople used to be aware of death to a degree that would now seem a\nbit morbid. I'm not sure why, but it doesn't seem the right answer\nto be constantly reminding oneself of the grim reaper hovering at\neveryone's shoulder. Perhaps a better solution is to look at the\nproblem from the other end. Cultivate a habit of impatience about\nthe things you most want to do. Don't wait before climbing that\nmountain or writing that book or visiting your mother. You don't\nneed to be constantly reminding yourself why you shouldn't wait.\nJust don't wait.I can think of two more things one does when one doesn't have much\nof something: try to get more of it, and savor what one has. Both\nmake sense here.How you live affects how long you live. Most people could do better.\nMe among them.But you can probably get even more effect by paying closer attention\nto the time you have. It's easy to let the days rush by. The\n\"flow\" that imaginative people love so much has a darker cousin\nthat prevents you from pausing to savor life amid the daily slurry\nof errands and alarms. One of the most striking things I've read\nwas not in a book, but the title of one: James Salter's Burning\nthe Days.It is possible to slow time somewhat. I've gotten better at it.\nKids help. When you have small children, there are a lot of moments\nso perfect that you can't help noticing.It does help too to feel that you've squeezed everything out of\nsome experience. The reason I'm sad about my mother is not just\nthat I miss her but that I think of all the things we could have\ndone that we didn't. My oldest son will be 7 soon. And while I\nmiss the 3 year old version of him, I at least don't have any regrets\nover what might have been. We had the best time a daddy and a 3\nyear old ever had.Relentlessly prune bullshit, don't wait to do things that matter,\nand savor the time you have. That's what you do when life is short.Notes[1]\nAt first I didn't like it that the word that came to mind was\none that had other meanings. But then I realized the other meanings\nare fairly closely related. Bullshit in the sense of things you\nwaste your time on is a lot like intellectual bullshit.[2]\nI chose this example deliberately as a note to self. I get\nattacked a lot online. People tell the craziest lies about me.\nAnd I have so far done a pretty mediocre job of suppressing the\nnatural human inclination to say \"Hey, that's not true!\"Thanks to Jessica Livingston and Geoff Ralston for reading drafts\nof this."} {"title": "web20", "text": "November 2005Does \"Web 2.0\" mean anything? Till recently I thought it didn't,\nbut the truth turns out to be more complicated. Originally, yes,\nit was meaningless. Now it seems to have acquired a meaning. And\nyet those who dislike the term are probably right, because if it\nmeans what I think it does, we don't need it.I first heard the phrase \"Web 2.0\" in the name of the Web 2.0\nconference in 2004. At the time it was supposed to mean using \"the\nweb as a platform,\" which I took to refer to web-based applications.\n[1]So I was surprised at a conference this summer when Tim O'Reilly\nled a session intended to figure out a definition of \"Web 2.0.\"\nDidn't it already mean using the web as a platform? 
And if it\ndidn't already mean something, why did we need the phrase at all?OriginsTim says the phrase \"Web 2.0\" first\narose in \"a brainstorming session between\nO'Reilly and Medialive International.\" What is Medialive International?\n\"Producers of technology tradeshows and conferences,\" according to\ntheir site. So presumably that's what this brainstorming session\nwas about. O'Reilly wanted to organize a conference about the web,\nand they were wondering what to call it.I don't think there was any deliberate plan to suggest there was a\nnew version of the web. They just wanted to make the point\nthat the web mattered again. It was a kind of semantic deficit\nspending: they knew new things were coming, and the \"2.0\" referred\nto whatever those might turn out to be.And they were right. New things were coming. But the new version\nnumber led to some awkwardness in the short term. In the process\nof developing the pitch for the first conference, someone must have\ndecided they'd better take a stab at explaining what that \"2.0\"\nreferred to. Whatever it meant, \"the web as a platform\" was at\nleast not too constricting.The story about \"Web 2.0\" meaning the web as a platform didn't live\nmuch past the first conference. By the second conference, what\n\"Web 2.0\" seemed to mean was something about democracy. At least,\nit did when people wrote about it online. The conference itself\ndidn't seem very grassroots. It cost $2800, so the only people who\ncould afford to go were VCs and people from big companies.And yet, oddly enough, Ryan Singel's article\nabout the conference in Wired News spoke of \"throngs of\ngeeks.\" When a friend of mine asked Ryan about this, it was news\nto him. He said he'd originally written something like \"throngs\nof VCs and biz dev guys\" but had later shortened it just to \"throngs,\"\nand that this must have in turn been expanded by the editors into\n\"throngs of geeks.\" After all, a Web 2.0 conference would presumably\nbe full of geeks, right?Well, no. There were about 7. Even Tim O'Reilly was wearing a \nsuit, a sight so alien I couldn't parse it at first. I saw\nhim walk by and said to one of the O'Reilly people \"that guy looks\njust like Tim.\"\"Oh, that's Tim. He bought a suit.\"\nI ran after him, and sure enough, it was. He explained that he'd\njust bought it in Thailand.The 2005 Web 2.0 conference reminded me of Internet trade shows\nduring the Bubble, full of prowling VCs looking for the next hot\nstartup. There was that same odd atmosphere created by a large \nnumber of people determined not to miss out. Miss out on what?\nThey didn't know. Whatever was going to happen\u2014whatever Web 2.0\nturned out to be.I wouldn't quite call it \"Bubble 2.0\" just because VCs are eager\nto invest again. The Internet is a genuinely big deal. The bust\nwas as much an overreaction as\nthe boom. It's to be expected that once we started to pull out of\nthe bust, there would be a lot of growth in this area, just as there\nwas in the industries that spiked the sharpest before the Depression.The reason this won't turn into a second Bubble is that the IPO\nmarket is gone. Venture investors\nare driven by exit strategies. The reason they were funding all \nthose laughable startups during the late 90s was that they hoped\nto sell them to gullible retail investors; they hoped to be laughing\nall the way to the bank. Now that route is closed. Now the default\nexit strategy is to get bought, and acquirers are less prone to\nirrational exuberance than IPO investors. 
The closest you'll get \nto Bubble valuations is Rupert Murdoch paying $580 million for \nMyspace. That's only off by a factor of 10 or so.1. AjaxDoes \"Web 2.0\" mean anything more than the name of a conference\nyet? I don't like to admit it, but it's starting to. When people\nsay \"Web 2.0\" now, I have some idea what they mean. And the fact\nthat I both despise the phrase and understand it is the surest proof\nthat it has started to mean something.One ingredient of its meaning is certainly Ajax, which I can still\nonly just bear to use without scare quotes. Basically, what \"Ajax\"\nmeans is \"Javascript now works.\" And that in turn means that\nweb-based applications can now be made to work much more like desktop\nones.As you read this, a whole new generation\nof software is being written to take advantage of Ajax. There\nhasn't been such a wave of new applications since microcomputers\nfirst appeared. Even Microsoft sees it, but it's too late for them\nto do anything more than leak \"internal\" \ndocuments designed to give the impression they're on top of this\nnew trend.In fact the new generation of software is being written way too\nfast for Microsoft even to channel it, let alone write their own\nin house. Their only hope now is to buy all the best Ajax startups\nbefore Google does. And even that's going to be hard, because\nGoogle has as big a head start in buying microstartups as it did\nin search a few years ago. After all, Google Maps, the canonical\nAjax application, was the result of a startup they bought.So ironically the original description of the Web 2.0 conference\nturned out to be partially right: web-based applications are a big\ncomponent of Web 2.0. But I'm convinced they got this right by \naccident. The Ajax boom didn't start till early 2005, when Google\nMaps appeared and the term \"Ajax\" was coined.2. DemocracyThe second big element of Web 2.0 is democracy. We now have several\nexamples to prove that amateurs can \nsurpass professionals, when they have the right kind of system to \nchannel their efforts. Wikipedia\nmay be the most famous. Experts have given Wikipedia middling\nreviews, but they miss the critical point: it's good enough. And \nit's free, which means people actually read it. On the web, articles\nyou have to pay for might as well not exist. Even if you were \nwilling to pay to read them yourself, you can't link to them. \nThey're not part of the conversation.Another place democracy seems to win is in deciding what counts as\nnews. I never look at any news site now except Reddit.\n[2]\n I know if something major\nhappens, or someone writes a particularly interesting article, it \nwill show up there. Why bother checking the front page of any\nspecific paper or magazine? Reddit's like an RSS feed for the whole\nweb, with a filter for quality. Similar sites include Digg, a technology news site that's\nrapidly approaching Slashdot in popularity, and del.icio.us, the collaborative\nbookmarking network that set off the \"tagging\" movement. And whereas\nWikipedia's main appeal is that it's good enough and free, these\nsites suggest that voters do a significantly better job than human\neditors.The most dramatic example of Web 2.0 democracy is not in the selection\nof ideas, but their production. \nI've noticed for a while that the stuff I read on individual people's\nsites is as good as or better than the stuff I read in newspapers\nand magazines. 
And now I have independent evidence: the top links\non Reddit are generally links to individual people's sites rather \nthan to magazine articles or news stories.My experience of writing\nfor magazines suggests an explanation. Editors. They control the\ntopics you can write about, and they can generally rewrite whatever\nyou produce. The result is to damp extremes. Editing yields 95th\npercentile writing\u201495% of articles are improved by it, but 5% are\ndragged down. 5% of the time you get \"throngs of geeks.\"On the web, people can publish whatever they want. Nearly all of\nit falls short of the editor-damped writing in print publications.\nBut the pool of writers is very, very large. If it's large enough,\nthe lack of damping means the best writing online should surpass \nthe best in print.\n[3] \nAnd now that the web has evolved mechanisms\nfor selecting good stuff, the web wins net. Selection beats damping,\nfor the same reason market economies beat centrally planned ones.Even the startups are different this time around. They are to the \nstartups of the Bubble what bloggers are to the print media. During\nthe Bubble, a startup meant a company headed by an MBA that was \nblowing through several million dollars of VC money to \"get big\nfast\" in the most literal sense. Now it means a smaller, younger, more technical group that just \ndecided to make something great. They'll decide later if they want \nto raise VC-scale funding, and if they take it, they'll take it on\ntheir terms.3. Don't Maltreat UsersI think everyone would agree that democracy and Ajax are elements\nof \"Web 2.0.\" I also see a third: not to maltreat users. During\nthe Bubble a lot of popular sites were quite high-handed with users.\nAnd not just in obvious ways, like making them register, or subjecting\nthem to annoying ads. The very design of the average site in the \nlate 90s was an abuse. Many of the most popular sites were loaded\nwith obtrusive branding that made them slow to load and sent the\nuser the message: this is our site, not yours. (There's a physical\nanalog in the Intel and Microsoft stickers that come on some\nlaptops.)I think the root of the problem was that sites felt they were giving\nsomething away for free, and till recently a company giving anything\naway for free could be pretty high-handed about it. Sometimes it\nreached the point of economic sadism: site owners assumed that the\nmore pain they caused the user, the more benefit it must be to them. \nThe most dramatic remnant of this model may be at salon.com, where \nyou can read the beginning of a story, but to get the rest you have to\nsit through a movie.At Y Combinator we advise all the startups we fund never to lord\nit over users. Never make users register, unless you need to in\norder to store something for them. If you do make users register, \nnever make them wait for a confirmation link in an email; in fact,\ndon't even ask for their email address unless you need it for some\nreason. Don't ask them any unnecessary questions. Never send them\nemail unless they explicitly ask for it. Never frame pages you\nlink to, or open them in new windows. If you have a free version \nand a pay version, don't make the free version too restricted. And\nif you find yourself asking \"should we allow users to do x?\" just \nanswer \"yes\" whenever you're unsure. Err on the side of generosity.In How to Start a Startup I advised startups\nnever to let anyone fly under them, meaning never to let any other\ncompany offer a cheaper, easier solution. 
Another way to fly low \nis to give users more power. Let users do what they want. If you \ndon't and a competitor does, you're in trouble.iTunes is Web 2.0ish in this sense. Finally you can buy individual\nsongs instead of having to buy whole albums. The recording industry\nhated the idea and resisted it as long as possible. But it was\nobvious what users wanted, so Apple flew under the labels.\n[4]\nThough really it might be better to describe iTunes as Web 1.5. \nWeb 2.0 applied to music would probably mean individual bands giving\naway DRMless songs for free.The ultimate way to be nice to users is to give them something for\nfree that competitors charge for. During the 90s a lot of people \nprobably thought we'd have some working system for micropayments \nby now. In fact things have gone in the other direction. The most \nsuccessful sites are the ones that figure out new ways to give stuff\naway for free. Craigslist has largely destroyed the classified ad\nsites of the 90s, and OkCupid looks likely to do the same to the\nprevious generation of dating sites.Serving web pages is very, very cheap. If you can make even a \nfraction of a cent per page view, you can make a profit. And\ntechnology for targeting ads continues to improve. I wouldn't be\nsurprised if ten years from now eBay had been supplanted by an \nad-supported freeBay (or, more likely, gBay).Odd as it might sound, we tell startups that they should try to\nmake as little money as possible. If you can figure out a way to\nturn a billion dollar industry into a fifty million dollar industry,\nso much the better, if all fifty million go to you. Though indeed,\nmaking things cheaper often turns out to generate more money in the\nend, just as automating things often turns out to generate more\njobs.The ultimate target is Microsoft. What a bang that balloon is going\nto make when someone pops it by offering a free web-based alternative \nto MS Office.\n[5]\nWho will? Google? They seem to be taking their\ntime. I suspect the pin will be wielded by a couple of 20 year old\nhackers who are too naive to be intimidated by the idea. (How hard\ncan it be?)The Common ThreadAjax, democracy, and not dissing users. What do they all have in \ncommon? I didn't realize they had anything in common till recently,\nwhich is one of the reasons I disliked the term \"Web 2.0\" so much.\nIt seemed that it was being used as a label for whatever happened\nto be new\u2014that it didn't predict anything.But there is a common thread. Web 2.0 means using the web the way\nit's meant to be used. The \"trends\" we're seeing now are simply\nthe inherent nature of the web emerging from under the broken models\nthat got imposed on it during the Bubble.I realized this when I read an interview with\nJoe Kraus, the co-founder of Excite.\n[6]\n\n Excite really never got the business model right at all. We fell \n into the classic problem of how when a new medium comes out it\n adopts the practices, the content, the business models of the old\n medium\u2014which fails, and then the more appropriate models get\n figured out.\n\nIt may have seemed as if not much was happening during the years\nafter the Bubble burst. But in retrospect, something was happening:\nthe web was finding its natural angle of repose. The democracy \ncomponent, for example\u2014that's not an innovation, in the sense of\nsomething someone made happen. That's what the web naturally tends\nto produce.Ditto for the idea of delivering desktop-like applications over the\nweb. 
That idea is almost as old as the web. But the first time \naround it was co-opted by Sun, and we got Java applets. Java has\nsince been remade into a generic replacement for C++, but in 1996\nthe story about Java was that it represented a new model of software.\nInstead of desktop applications, you'd run Java \"applets\" delivered\nfrom a server.This plan collapsed under its own weight. Microsoft helped kill it,\nbut it would have died anyway. There was no uptake among hackers.\nWhen you find PR firms promoting\nsomething as the next development platform, you can be sure it's\nnot. If it were, you wouldn't need PR firms to tell you, because \nhackers would already be writing stuff on top of it, the way sites \nlike Busmonster used Google Maps as a\nplatform before Google even meant it to be one.The proof that Ajax is the next hot platform is that thousands of \nhackers have spontaneously started building things on top\nof it. Mikey likes it.There's another thing all three components of Web 2.0 have in common.\nHere's a clue. Suppose you approached investors with the following\nidea for a Web 2.0 startup:\n\n Sites like del.icio.us and flickr allow users to \"tag\" content\n with descriptive tokens. But there is also a huge source of\n implicit tags that they ignore: the text within web links.\n Moreover, these links represent a social network connecting the \n individuals and organizations who created the pages, and by using\n graph theory we can compute from this network an estimate of the\n reputation of each member. We plan to mine the web for these \n implicit tags, and use them together with the reputation hierarchy\n they embody to enhance web searches.\n\nHow long do you think it would take them on average to realize that\nit was a description of Google?Google was a pioneer in all three components of Web 2.0: their core\nbusiness sounds crushingly hip when described in Web 2.0 terms, \n\"Don't maltreat users\" is a subset of \"Don't be evil,\" and of course\nGoogle set off the whole Ajax boom with Google Maps.Web 2.0 means using the web as it was meant to be used, and Google\ndoes. That's their secret. They're sailing with the wind, instead of sitting \nbecalmed praying for a business model, like the print media, or \ntrying to tack upwind by suing their customers, like Microsoft and \nthe record labels.\n[7]Google doesn't try to force things to happen their way. They try \nto figure out what's going to happen, and arrange to be standing \nthere when it does. That's the way to approach technology\u2014and \nas business includes an ever larger technological component, the\nright way to do business.The fact that Google is a \"Web 2.0\" company shows that, while\nmeaningful, the term is also rather bogus. It's like the word\n\"allopathic.\" It just means doing things right, and it's a bad \nsign when you have a special word for that.\nNotes[1]\nFrom the conference\nsite, June 2004: \"While the first wave of the Web was closely \ntied to the browser, the second wave extends applications across \nthe web and enables a new generation of services and business\nopportunities.\" To the extent this means anything, it seems to be\nabout \nweb-based applications.[2]\nDisclosure: Reddit was funded by \nY Combinator. But although\nI started using it out of loyalty to the home team, I've become a\ngenuine addict. While we're at it, I'm also an investor in\n!MSFT, having sold all my shares earlier this year.[3]\nI'm not against editing. 
I spend more time editing than\nwriting, and I have a group of picky friends who proofread almost\neverything I write. What I dislike is editing done after the fact \nby someone else.[4]\nObvious is an understatement. Users had been climbing in through \nthe window for years before Apple finally moved the door.[5]\nHint: the way to create a web-based alternative to Office may\nnot be to write every component yourself, but to establish a protocol\nfor web-based apps to share a virtual home directory spread across\nmultiple servers. Or it may be to write it all yourself.[6]\nIn Jessica Livingston's\nFounders at\nWork.[7]\nMicrosoft didn't sue their customers directly, but they seem \nto have done all they could to help SCO sue them.Thanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston, Peter\nNorvig, Aaron Swartz, and Jeff Weiner for reading drafts of this, and to the\nguys at O'Reilly and Adaptive Path for answering my questions."} {"title": "addiction", "text": "July 2010What hard liquor, cigarettes, heroin, and crack have in common is\nthat they're all more concentrated forms of less addictive predecessors.\nMost if not all the things we describe as addictive are. And the\nscary thing is, the process that created them is accelerating.We wouldn't want to stop it. It's the same process that cures\ndiseases: technological progress. Technological progress means\nmaking things do more of what we want. When the thing we want is\nsomething we want to want, we consider technological progress good.\nIf some new technique makes solar cells x% more efficient, that\nseems strictly better. When progress concentrates something we\ndon't want to want\u2014when it transforms opium into heroin\u2014it seems\nbad. But it's the same process at work.\n[1]No one doubts this process is accelerating, which means increasing\nnumbers of things we like will be transformed into things we like\ntoo much.\n[2]As far as I know there's no word for something we like too much.\nThe closest is the colloquial sense of \"addictive.\" That usage has\nbecome increasingly common during my lifetime. And it's clear why:\nthere are an increasing number of things we need it for. At the\nextreme end of the spectrum are crack and meth. Food has been\ntransformed by a combination of factory farming and innovations in\nfood processing into something with way more immediate bang for the\nbuck, and you can see the results in any town in America. Checkers\nand solitaire have been replaced by World of Warcraft and FarmVille.\nTV has become much more engaging, and even so it can't compete with Facebook.The world is more addictive than it was 40 years ago. And unless\nthe forms of technological progress that produced these things are\nsubject to different laws than technological progress in general,\nthe world will get more addictive in the next 40 years than it did\nin the last 40.The next 40 years will bring us some wonderful things. I don't\nmean to imply they're all to be avoided. Alcohol is a dangerous\ndrug, but I'd rather live in a world with wine than one without.\nMost people can coexist with alcohol; but you have to be careful.\nMore things we like will mean more things we have to be careful\nabout.Most people won't, unfortunately. Which means that as the world\nbecomes more addictive, the two senses in which one can live a\nnormal life will be driven ever further apart. One sense of \"normal\"\nis statistically normal: what everyone else does. 
The other is the\nsense we mean when we talk about the normal operating range of a\npiece of machinery: what works best.These two senses are already quite far apart. Already someone\ntrying to live well would seem eccentrically abstemious in most of\nthe US. That phenomenon is only going to become more pronounced.\nYou can probably take it as a rule of thumb from now on that if\npeople don't think you're weird, you're living badly.Societies eventually develop antibodies to addictive new things.\nI've seen that happen with cigarettes. When cigarettes first\nappeared, they spread the way an infectious disease spreads through\na previously isolated population. Smoking rapidly became a\n(statistically) normal thing. There were ashtrays everywhere. We\nhad ashtrays in our house when I was a kid, even though neither of\nmy parents smoked. You had to for guests.As knowledge spread about the dangers of smoking, customs changed.\nIn the last 20 years, smoking has been transformed from something\nthat seemed totally normal into a rather seedy habit: from something\nmovie stars did in publicity shots to something small huddles of\naddicts do outside the doors of office buildings. A lot of the\nchange was due to legislation, of course, but the legislation\ncouldn't have happened if customs hadn't already changed.It took a while though\u2014on the order of 100 years. And unless the\nrate at which social antibodies evolve can increase to match the\naccelerating rate at which technological progress throws off new\naddictions, we'll be increasingly unable to rely on customs to\nprotect us.\n[3]\nUnless we want to be canaries in the coal mine\nof each new addiction\u2014the people whose sad example becomes a\nlesson to future generations\u2014we'll have to figure out for ourselves\nwhat to avoid and how. It will actually become a reasonable strategy\n(or a more reasonable strategy) to suspect \neverything new.In fact, even that won't be enough. We'll have to worry not just\nabout new things, but also about existing things becoming more\naddictive. That's what bit me. I've avoided most addictions, but\nthe Internet got me because it became addictive while I was using\nit.\n[4]Most people I know have problems with Internet addiction. We're\nall trying to figure out our own customs for getting free of it.\nThat's why I don't have an iPhone, for example; the last thing I\nwant is for the Internet to follow me out into the world.\n[5]\nMy latest trick is taking long hikes. I used to think running was a\nbetter form of exercise than hiking because it took less time. Now\nthe slowness of hiking seems an advantage, because the longer I\nspend on the trail, the longer I have to think without interruption.Sounds pretty eccentric, doesn't it? It always will when you're\ntrying to solve problems where there are no customs yet to guide\nyou. Maybe I can't plead Occam's razor; maybe I'm simply eccentric.\nBut if I'm right about the acceleration of addictiveness, then this\nkind of lonely squirming to avoid it will increasingly be the fate\nof anyone who wants to get things done. We'll increasingly be\ndefined by what we say no to.\nNotes[1]\nCould you restrict technological progress to areas where you\nwanted it? Only in a limited way, without becoming a police state.\nAnd even then your restrictions would have undesirable side effects.\n\"Good\" and \"bad\" technological progress aren't sharply differentiated,\nso you'd find you couldn't slow the latter without also slowing the\nformer. 
And in any case, as Prohibition and the \"war on drugs\"\nshow, bans often do more harm than good.[2]\nTechnology has always been accelerating. By Paleolithic\nstandards, technology evolved at a blistering pace in the Neolithic\nperiod.[3]\nUnless we mass produce social customs. I suspect the recent\nresurgence of evangelical Christianity in the US is partly a reaction\nto drugs. In desperation people reach for the sledgehammer; if\ntheir kids won't listen to them, maybe they'll listen to God. But\nthat solution has broader consequences than just getting kids to\nsay no to drugs. You end up saying no to \nscience as well.\nI worry we may be heading for a future in which only a few people\nplot their own itinerary through no-land, while everyone else books\na package tour. Or worse still, has one booked for them by the\ngovernment.[4]\nPeople commonly use the word \"procrastination\" to describe\nwhat they do on the Internet. It seems to me too mild to describe\nwhat's happening as merely not-doing-work. We don't call it\nprocrastination when someone gets drunk instead of working.[5]\nSeveral people have told me they like the iPad because it\nlets them bring the Internet into situations where a laptop would\nbe too conspicuous. In other words, it's a hip flask. (This is\ntrue of the iPhone too, of course, but this advantage isn't as\nobvious because it reads as a phone, and everyone's used to those.)Thanks to Sam Altman, Patrick Collison, Jessica Livingston, and\nRobert Morris for reading drafts of this."} {"title": "philosophy", "text": "September 2007In high school I decided I was going to study philosophy in college.\nI had several motives, some more honorable than others. One of the\nless honorable was to shock people. College was regarded as job\ntraining where I grew up, so studying philosophy seemed an impressively\nimpractical thing to do. Sort of like slashing holes in your clothes\nor putting a safety pin through your ear, which were other forms\nof impressive impracticality then just coming into fashion.But I had some more honest motives as well. I thought studying\nphilosophy would be a shortcut straight to wisdom. All the people\nmajoring in other things would just end up with a bunch of domain\nknowledge. I would be learning what was really what.I'd tried to read a few philosophy books. Not recent ones; you\nwouldn't find those in our high school library. But I tried to\nread Plato and Aristotle. I doubt I believed I understood them,\nbut they sounded like they were talking about something important.\nI assumed I'd learn what in college.The summer before senior year I took some college classes. I learned\na lot in the calculus class, but I didn't learn much in Philosophy\n101. And yet my plan to study philosophy remained intact. It was\nmy fault I hadn't learned anything. I hadn't read the books we\nwere assigned carefully enough. I'd give Berkeley's Principles\nof Human Knowledge another shot in college. Anything so admired\nand so difficult to read must have something in it, if one could\nonly figure out what.Twenty-six years later, I still don't understand Berkeley. I have\na nice edition of his collected works. Will I ever read it? Seems\nunlikely.The difference between then and now is that now I understand why\nBerkeley is probably not worth trying to understand. I think I see\nnow what went wrong with philosophy, and how we might fix it.WordsI did end up being a philosophy major for most of college. It\ndidn't work out as I'd hoped. 
I didn't learn any magical truths\ncompared to which everything else was mere domain knowledge. But\nI do at least know now why I didn't. Philosophy doesn't really\nhave a subject matter in the way math or history or most other\nuniversity subjects do. There is no core of knowledge one must\nmaster. The closest you come to that is a knowledge of what various\nindividual philosophers have said about different topics over the\nyears. Few were sufficiently correct that people have forgotten\nwho discovered what they discovered.Formal logic has some subject matter. I took several classes in\nlogic. I don't know if I learned anything from them.\n[1]\nIt does seem to me very important to be able to flip ideas around in\none's head: to see when two ideas don't fully cover the space of\npossibilities, or when one idea is the same as another but with a\ncouple things changed. But did studying logic teach me the importance\nof thinking this way, or make me any better at it? I don't know.There are things I know I learned from studying philosophy. The\nmost dramatic I learned immediately, in the first semester of\nfreshman year, in a class taught by Sydney Shoemaker. I learned\nthat I don't exist. I am (and you are) a collection of cells that\nlurches around driven by various forces, and calls itself I. But\nthere's no central, indivisible thing that your identity goes with.\nYou could conceivably lose half your brain and live. Which means\nyour brain could conceivably be split into two halves and each\ntransplanted into different bodies. Imagine waking up after such\nan operation. You have to imagine being two people.The real lesson here is that the concepts we use in everyday life\nare fuzzy, and break down if pushed too hard. Even a concept as\ndear to us as I. It took me a while to grasp this, but when I\ndid it was fairly sudden, like someone in the nineteenth century\ngrasping evolution and realizing the story of creation they'd been\ntold as a child was all wrong. \n[2]\nOutside of math there's a limit\nto how far you can push words; in fact, it would not be a bad\ndefinition of math to call it the study of terms that have precise\nmeanings. Everyday words are inherently imprecise. They work well\nenough in everyday life that you don't notice. Words seem to work,\njust as Newtonian physics seems to. But you can always make them\nbreak if you push them far enough.I would say that this has been, unfortunately for philosophy, the\ncentral fact of philosophy. Most philosophical debates are not\nmerely afflicted by but driven by confusions over words. Do we\nhave free will? Depends what you mean by \"free.\" Do abstract ideas\nexist? Depends what you mean by \"exist.\"Wittgenstein is popularly credited with the idea that most philosophical\ncontroversies are due to confusions over language. I'm not sure\nhow much credit to give him. I suspect a lot of people realized\nthis, but reacted simply by not studying philosophy, rather than\nbecoming philosophy professors.How did things get this way? Can something people have spent\nthousands of years studying really be a waste of time? Those are\ninteresting questions. In fact, some of the most interesting\nquestions you can ask about philosophy. 
The most valuable way to\napproach the current philosophical tradition may be neither to get\nlost in pointless speculations like Berkeley, nor to shut them down\nlike Wittgenstein, but to study it as an example of reason gone\nwrong.HistoryWestern philosophy really begins with Socrates, Plato, and Aristotle.\nWhat we know of their predecessors comes from fragments and references\nin later works; their doctrines could be described as speculative\ncosmology that occasionally strays into analysis. Presumably they\nwere driven by whatever makes people in every other society invent\ncosmologies.\n[3]With Socrates, Plato, and particularly Aristotle, this tradition\nturned a corner. There started to be a lot more analysis. I suspect\nPlato and Aristotle were encouraged in this by progress in math.\nMathematicians had by then shown that you could figure things out\nin a much more conclusive way than by making up fine sounding stories\nabout them. \n[4]People talk so much about abstractions now that we don't realize\nwhat a leap it must have been when they first started to. It was\npresumably many thousands of years between when people first started\ndescribing things as hot or cold and when someone asked \"what is\nheat?\" No doubt it was a very gradual process. We don't know if\nPlato or Aristotle were the first to ask any of the questions they\ndid. But their works are the oldest we have that do this on a large\nscale, and there is a freshness (not to say naivete) about them\nthat suggests some of the questions they asked were new to them,\nat least.Aristotle in particular reminds me of the phenomenon that happens\nwhen people discover something new, and are so excited by it that\nthey race through a huge percentage of the newly discovered territory\nin one lifetime. If so, that's evidence of how new this kind of\nthinking was. \n[5]This is all to explain how Plato and Aristotle can be very impressive\nand yet naive and mistaken. It was impressive even to ask the\nquestions they did. That doesn't mean they always came up with\ngood answers. It's not considered insulting to say that ancient\nGreek mathematicians were naive in some respects, or at least lacked\nsome concepts that would have made their lives easier. So I hope\npeople will not be too offended if I propose that ancient philosophers\nwere similarly naive. In particular, they don't seem to have fully\ngrasped what I earlier called the central fact of philosophy: that\nwords break if you push them too far.\"Much to the surprise of the builders of the first digital computers,\"\nRod Brooks wrote, \"programs written for them usually did not work.\"\n[6]\nSomething similar happened when people first started trying\nto talk about abstractions. Much to their surprise, they didn't\narrive at answers they agreed upon. In fact, they rarely seemed\nto arrive at answers at all.They were in effect arguing about artifacts induced by sampling at\ntoo low a resolution.The proof of how useless some of their answers turned out to be is\nhow little effect they have. No one after reading Aristotle's\nMetaphysics does anything differently as a result.\n[7]Surely I'm not claiming that ideas have to have practical applications\nto be interesting? No, they may not have to. Hardy's boast that\nnumber theory had no use whatsoever wouldn't disqualify it. But\nhe turned out to be mistaken. In fact, it's suspiciously hard to\nfind a field of math that truly has no practical use. 
And Aristotle's\nexplanation of the ultimate goal of philosophy in Book A of the\nMetaphysics implies that philosophy should be useful too.Theoretical KnowledgeAristotle's goal was to find the most general of general principles.\nThe examples he gives are convincing: an ordinary worker builds\nthings a certain way out of habit; a master craftsman can do more\nbecause he grasps the underlying principles. The trend is clear:\nthe more general the knowledge, the more admirable it is. But then\nhe makes a mistake\u2014possibly the most important mistake in the\nhistory of philosophy. He has noticed that theoretical knowledge\nis often acquired for its own sake, out of curiosity, rather than\nfor any practical need. So he proposes there are two kinds of\ntheoretical knowledge: some that's useful in practical matters and\nsome that isn't. Since people interested in the latter are interested\nin it for its own sake, it must be more noble. So he sets as his\ngoal in the Metaphysics the exploration of knowledge that has no\npractical use. Which means no alarms go off when he takes on grand\nbut vaguely understood questions and ends up getting lost in a sea\nof words.His mistake was to confuse motive and result. Certainly, people\nwho want a deep understanding of something are often driven by\ncuriosity rather than any practical need. But that doesn't mean\nwhat they end up learning is useless. It's very valuable in practice\nto have a deep understanding of what you're doing; even if you're\nnever called on to solve advanced problems, you can see shortcuts\nin the solution of simple ones, and your knowledge won't break down\nin edge cases, as it would if you were relying on formulas you\ndidn't understand. Knowledge is power. That's what makes theoretical\nknowledge prestigious. It's also what causes smart people to be\ncurious about certain things and not others; our DNA is not so\ndisinterested as we might think.So while ideas don't have to have immediate practical applications\nto be interesting, the kinds of things we find interesting will\nsurprisingly often turn out to have practical applications.The reason Aristotle didn't get anywhere in the Metaphysics was\npartly that he set off with contradictory aims: to explore the most\nabstract ideas, guided by the assumption that they were useless.\nHe was like an explorer looking for a territory to the north of\nhim, starting with the assumption that it was located to the south.And since his work became the map used by generations of future\nexplorers, he sent them off in the wrong direction as well. \n[8]\nPerhaps worst of all, he protected them from both the criticism of\noutsiders and the promptings of their own inner compass by establishing\nthe principle that the most noble sort of theoretical knowledge had\nto be useless.The Metaphysics is mostly a failed experiment. A few ideas from\nit turned out to be worth keeping; the bulk of it has had no effect\nat all. The Metaphysics is among the least read of all famous\nbooks. It's not hard to understand the way Newton's Principia\nis, but the way a garbled message is.Arguably it's an interesting failed experiment. But unfortunately\nthat was not the conclusion Aristotle's successors derived from\nworks like the Metaphysics. \n[9]\nSoon after, the western world\nfell on intellectual hard times. Instead of version 1s to be\nsuperseded, the works of Plato and Aristotle became revered texts\nto be mastered and discussed. And so things remained for a shockingly\nlong time. 
It was not till around 1600 (in Europe, where the center\nof gravity had shifted by then) that one found people confident\nenough to treat Aristotle's work as a catalog of mistakes. And\neven then they rarely said so outright.If it seems surprising that the gap was so long, consider how little\nprogress there was in math between Hellenistic times and the\nRenaissance.In the intervening years an unfortunate idea took hold: that it\nwas not only acceptable to produce works like the Metaphysics,\nbut that it was a particularly prestigious line of work, done by a\nclass of people called philosophers. No one thought to go back and\ndebug Aristotle's motivating argument. And so instead of correcting\nthe problem Aristotle discovered by falling into it\u2014that you can\neasily get lost if you talk too loosely about very abstract ideas\u2014they \ncontinued to fall into it.The SingularityCuriously, however, the works they produced continued to attract\nnew readers. Traditional philosophy occupies a kind of singularity\nin this respect. If you write in an unclear way about big ideas,\nyou produce something that seems tantalizingly attractive to\ninexperienced but intellectually ambitious students. Till one knows\nbetter, it's hard to distinguish something that's hard to understand\nbecause the writer was unclear in his own mind from something like\na mathematical proof that's hard to understand because the ideas\nit represents are hard to understand. To someone who hasn't learned\nthe difference, traditional philosophy seems extremely attractive:\nas hard (and therefore impressive) as math, yet broader in scope.\nThat was what lured me in as a high school student.This singularity is even more singular in having its own defense\nbuilt in. When things are hard to understand, people who suspect\nthey're nonsense generally keep quiet. There's no way to prove a\ntext is meaningless. The closest you can get is to show that the\nofficial judges of some class of texts can't distinguish them from\nplacebos. \n[10]And so instead of denouncing philosophy, most people who suspected\nit was a waste of time just studied other things. That alone is\nfairly damning evidence, considering philosophy's claims. It's\nsupposed to be about the ultimate truths. Surely all smart people\nwould be interested in it, if it delivered on that promise.Because philosophy's flaws turned away the sort of people who might\nhave corrected them, they tended to be self-perpetuating. Bertrand\nRussell wrote in a letter in 1912:\n\n Hitherto the people attracted to philosophy have been mostly those\n who loved the big generalizations, which were all wrong, so that\n few people with exact minds have taken up the subject.\n[11]\n\nHis response was to launch Wittgenstein at it, with dramatic results.I think Wittgenstein deserves to be famous not for the discovery\nthat most previous philosophy was a waste of time, which judging\nfrom the circumstantial evidence must have been made by every smart\nperson who studied a little philosophy and declined to pursue it\nfurther, but for how he acted in response.\n[12]\nInstead of quietly\nswitching to another field, he made a fuss, from inside. He was\nGorbachev.The field of philosophy is still shaken from the fright Wittgenstein\ngave it. \n[13]\nLater in life he spent a lot of time talking about\nhow words worked. Since that seems to be allowed, that's what a\nlot of philosophers do now. 
Meanwhile, sensing a vacuum in the\nmetaphysical speculation department, the people who used to do\nliterary criticism have been edging Kantward, under new names like\n\"literary theory,\" \"critical theory,\" and when they're feeling\nambitious, plain \"theory.\" The writing is the familiar word salad:\n\n Gender is not like some of the other grammatical modes which\n express precisely a mode of conception without any reality that\n corresponds to the conceptual mode, and consequently do not express\n precisely something in reality by which the intellect could be\n moved to conceive a thing the way it does, even where that motive\n is not something in the thing as such.\n [14]\n\nThe singularity I've described is not going away. There's a market\nfor writing that sounds impressive and can't be disproven. There\nwill always be both supply and demand. So if one group abandons\nthis territory, there will always be others ready to occupy it.A ProposalWe may be able to do better. Here's an intriguing possibility.\nPerhaps we should do what Aristotle meant to do, instead of what\nhe did. The goal he announces in the Metaphysics seems one worth\npursuing: to discover the most general truths. That sounds good.\nBut instead of trying to discover them because they're useless,\nlet's try to discover them because they're useful.I propose we try again, but that we use that heretofore despised\ncriterion, applicability, as a guide to keep us from wandering\noff into a swamp of abstractions. Instead of trying to answer the\nquestion:\n\n What are the most general truths?\n\nlet's try to answer the question\n\n Of all the useful things we can say, which are the most general?\n\nThe test of utility I propose is whether we cause people who read\nwhat we've written to do anything differently afterward. Knowing\nwe have to give definite (if implicit) advice will keep us from\nstraying beyond the resolution of the words we're using.The goal is the same as Aristotle's; we just approach it from a\ndifferent direction.As an example of a useful, general idea, consider that of the\ncontrolled experiment. There's an idea that has turned out to be\nwidely applicable. Some might say it's part of science, but it's\nnot part of any specific science; it's literally meta-physics (in\nour sense of \"meta\"). The idea of evolution is another. It turns\nout to have quite broad applications\u2014for example, in genetic\nalgorithms and even product design. Frankfurt's distinction between\nlying and bullshitting seems a promising recent example.\n[15]These seem to me what philosophy should look like: quite general\nobservations that would cause someone who understood them to do\nsomething differently.Such observations will necessarily be about things that are imprecisely\ndefined. Once you start using words with precise meanings, you're\ndoing math. So starting from utility won't entirely solve the\nproblem I described above\u2014it won't flush out the metaphysical\nsingularity. But it should help. It gives people with good\nintentions a new roadmap into abstraction. And they may thereby\nproduce things that make the writing of the people with bad intentions\nlook bad by comparison.One drawback of this approach is that it won't produce the sort of\nwriting that gets you tenure. And not just because it's not currently\nthe fashion. In order to get tenure in any field you must not\narrive at conclusions that members of tenure committees can disagree\nwith. 
In practice there are two kinds of solutions to this problem.\nIn math and the sciences, you can prove what you're saying, or at\nany rate adjust your conclusions so you're not claiming anything\nfalse (\"6 of 8 subjects had lower blood pressure after the treatment\").\nIn the humanities you can either avoid drawing any definite conclusions\n(e.g. conclude that an issue is a complex one), or draw conclusions\nso narrow that no one cares enough to disagree with you.The kind of philosophy I'm advocating won't be able to take either\nof these routes. At best you'll be able to achieve the essayist's\nstandard of proof, not the mathematician's or the experimentalist's.\nAnd yet you won't be able to meet the usefulness test without\nimplying definite and fairly broadly applicable conclusions. Worse\nstill, the usefulness test will tend to produce results that annoy\npeople: there's no use in telling people things they already believe,\nand people are often upset to be told things they don't.Here's the exciting thing, though. Anyone can do this. Getting\nto general plus useful by starting with useful and cranking up the\ngenerality may be unsuitable for junior professors trying to get\ntenure, but it's better for everyone else, including professors who\nalready have it. This side of the mountain is a nice gradual slope.\nYou can start by writing things that are useful but very specific,\nand then gradually make them more general. Joe's has good burritos.\nWhat makes a good burrito? What makes good food? What makes\nanything good? You can take as long as you want. You don't have\nto get all the way to the top of the mountain. You don't have to\ntell anyone you're doing philosophy.If it seems like a daunting task to do philosophy, here's an\nencouraging thought. The field is a lot younger than it seems.\nThough the first philosophers in the western tradition lived about\n2500 years ago, it would be misleading to say the field is 2500\nyears old, because for most of that time the leading practitioners\nweren't doing much more than writing commentaries on Plato or\nAristotle while watching over their shoulders for the next invading\narmy. In the times when they weren't, philosophy was hopelessly\nintermingled with religion. It didn't shake itself free till a\ncouple hundred years ago, and even then was afflicted by the\nstructural problems I've described above. If I say this, some will\nsay it's a ridiculously overbroad and uncharitable generalization,\nand others will say it's old news, but here goes: judging from their\nworks, most philosophers up to the present have been wasting their\ntime. So in a sense the field is still at the first step. \n[16]That sounds a preposterous claim to make. It won't seem so\npreposterous in 10,000 years. Civilization always seems old, because\nit's always the oldest it's ever been. The only way to say whether\nsomething is really old or not is by looking at structural evidence,\nand structurally philosophy is young; it's still reeling from the\nunexpected breakdown of words.Philosophy is as young now as math was in 1500. There is a lot\nmore to discover.Notes\n[1]\nIn practice formal logic is not much use, because despite\nsome progress in the last 150 years we're still only able to formalize\na small percentage of statements. 
We may never do that much better,\nfor the same reason 1980s-style \"knowledge representation\" could\nnever have worked; many statements may have no representation more\nconcise than a huge, analog brain state.[2]\nIt was harder for Darwin's contemporaries to grasp this than\nwe can easily imagine. The story of creation in the Bible is not\njust a Judeo-Christian concept; it's roughly what everyone must\nhave believed since before people were people. The hard part of\ngrasping evolution was to realize that species weren't, as they\nseem to be, unchanging, but had instead evolved from different,\nsimpler organisms over unimaginably long periods of time.Now we don't have to make that leap. No one in an industrialized\ncountry encounters the idea of evolution for the first time as an\nadult. Everyone's taught about it as a child, either as truth or\nheresy.[3]\nGreek philosophers before Plato wrote in verse. This must\nhave affected what they said. If you try to write about the nature\nof the world in verse, it inevitably turns into incantation. Prose\nlets you be more precise, and more tentative.[4]\nPhilosophy is like math's\nne'er-do-well brother. It was born when Plato and Aristotle looked\nat the works of their predecessors and said in effect \"why can't\nyou be more like your brother?\" Russell was still saying the same\nthing 2300 years later.Math is the precise half of the most abstract ideas, and philosophy\nthe imprecise half. It's probably inevitable that philosophy will\nsuffer by comparison, because there's no lower bound to its precision.\nBad math is merely boring, whereas bad philosophy is nonsense. And\nyet there are some good ideas in the imprecise half.[5]\nAristotle's best work was in logic and zoology, both of which\nhe can be said to have invented. But the most dramatic departure\nfrom his predecessors was a new, much more analytical style of\nthinking. He was arguably the first scientist.[6]\nBrooks, Rodney, Programming in Common Lisp, Wiley, 1985, p.\n94.[7]\nSome would say we depend on Aristotle more than we realize,\nbecause his ideas were one of the ingredients in our common culture.\nCertainly a lot of the words we use have a connection with Aristotle,\nbut it seems a bit much to suggest that we wouldn't have the concept\nof the essence of something or the distinction between matter and\nform if Aristotle hadn't written about them.One way to see how much we really depend on Aristotle would be to\ndiff European culture with Chinese: what ideas did European culture\nhave in 1800 that Chinese culture didn't, in virtue of Aristotle's\ncontribution?[8]\nThe meaning of the word \"philosophy\" has changed over time.\nIn ancient times it covered a broad range of topics, comparable in\nscope to our \"scholarship\" (though without the methodological\nimplications). Even as late as Newton's time it included what we\nnow call \"science.\" But the core of the subject today is still what\nseemed to Aristotle the core: the attempt to discover the most\ngeneral truths.Aristotle didn't call this \"metaphysics.\" That name got assigned\nto it because the books we now call the Metaphysics came after\n(meta = after) the Physics in the standard edition of Aristotle's\nworks compiled by Andronicus of Rhodes three centuries later. 
What\nwe call \"metaphysics\" Aristotle called \"first philosophy.\"[9]\nSome of Aristotle's immediate successors may have realized\nthis, but it's hard to say because most of their works are lost.[10]\nSokal, Alan, \"Transgressing the Boundaries: Toward a Transformative\nHermeneutics of Quantum Gravity,\" Social Text 46/47, pp. 217-252.Abstract-sounding nonsense seems to be most attractive when it's\naligned with some axe the audience already has to grind. If this\nis so we should find it's most popular with groups that are (or\nfeel) weak. The powerful don't need its reassurance.[11]\nLetter to Ottoline Morrell, December 1912. Quoted in:Monk, Ray, Ludwig Wittgenstein: The Duty of Genius, Penguin, 1991,\np. 75.[12]\nA preliminary result, that all metaphysics between Aristotle\nand 1783 had been a waste of time, is due to I. Kant.[13]\nWittgenstein asserted a sort of mastery to which the inhabitants\nof early 20th century Cambridge seem to have been peculiarly\nvulnerable\u2014perhaps partly because so many had been raised religious\nand then stopped believing, so had a vacant space in their heads\nfor someone to tell them what to do (others chose Marx or Cardinal\nNewman), and partly because a quiet, earnest place like Cambridge\nin that era had no natural immunity to messianic figures, just as\nEuropean politics then had no natural immunity to dictators.[14]\nThis is actually from the Ordinatio of Duns Scotus (ca.\n1300), with \"number\" replaced by \"gender.\" Plus ca change.Wolter, Allan (trans), Duns Scotus: Philosophical Writings, Nelson,\n1963, p. 92.[15]\nFrankfurt, Harry, On Bullshit, Princeton University Press,\n2005.[16]\nSome introductions to philosophy now take the line that\nphilosophy is worth studying as a process rather than for any\nparticular truths you'll learn. The philosophers whose works they\ncover would be rolling in their graves at that. They hoped they\nwere doing more than serving as examples of how to argue: they hoped\nthey were getting results. Most were wrong, but it doesn't seem\nan impossible hope.This argument seems to me like someone in 1500 looking at the lack\nof results achieved by alchemy and saying its value was as a process.\nNo, they were going about it wrong. It turns out it is possible\nto transmute lead into gold (though not economically at current\nenergy prices), but the route to that knowledge was to\nbacktrack and try another approach.Thanks to Trevor Blackwell, Paul Buchheit, Jessica Livingston, \nRobert Morris, Mark Nitzberg, and Peter Norvig for reading drafts of this."} {"title": "unions", "text": "May 2007People who worry about the increasing gap between rich and poor\ngenerally look back on the mid twentieth century as a golden age.\nIn those days we had a large number of high-paying union manufacturing\njobs that boosted the median income. I wouldn't quite call the\nhigh-paying union job a myth, but I think people who dwell on it\nare reading too much into it.Oddly enough, it was working with startups that made me realize\nwhere the high-paying union job came from. In a rapidly growing\nmarket, you don't worry too much about efficiency. It's more\nimportant to grow fast. If there's some mundane problem getting\nin your way, and there's a simple solution that's somewhat expensive,\njust take it and get on with more important things. EBay didn't\nwin by paying less for servers than their competitors.Difficult though it may be to imagine now, manufacturing was a\ngrowth industry in the mid twentieth century. 
This was an era when\nsmall firms making everything from cars to candy were getting\nconsolidated into a new kind of corporation with national reach and\nhuge economies of scale. You had to grow fast or die. Workers\nwere for these companies what servers are for an Internet startup.\nA reliable supply was more important than low cost.If you looked in the head of a 1950s auto executive, the attitude\nmust have been: sure, give 'em whatever they ask for, so long as\nthe new model isn't delayed.In other words, those workers were not paid what their work was\nworth. Circumstances being what they were, companies would have\nbeen stupid to insist on paying them so little.If you want a less controversial example of this phenomenon, ask\nanyone who worked as a consultant building web sites during the\nInternet Bubble. In the late nineties you could get paid huge sums\nof money for building the most trivial things. And yet does anyone\nwho was there have any expectation those days will ever return? I\ndoubt it. Surely everyone realizes that was just a temporary\naberration.The era of labor unions seems to have been the same kind of aberration, \njust spread\nover a longer period, and mixed together with a lot of ideology\nthat prevents people from viewing it with as cold an eye as they\nwould something like consulting during the Bubble.Basically, unions were just Razorfish.People who think the labor movement was the creation of heroic union\norganizers have a problem to explain: why are unions shrinking now?\nThe best they can do is fall back on the default explanation of\npeople living in fallen civilizations. Our ancestors were giants.\nThe workers of the early twentieth century must have had a moral\ncourage that's lacking today.In fact there's a simpler explanation. The early twentieth century\nwas just a fast-growing startup overpaying for infrastructure. And\nwe in the present are not a fallen people, who have abandoned\nwhatever mysterious high-minded principles produced the high-paying\nunion job. We simply live in a time when the fast-growing companies\noverspend on different things."} {"title": "apple", "text": "November 2009I don't think Apple realizes how badly the App Store approval process\nis broken. Or rather, I don't think they realize how much it matters\nthat it's broken.The way Apple runs the App Store has harmed their reputation with\nprogrammers more than anything else they've ever done. \nTheir reputation with programmers used to be great.\nIt used to be the most common complaint you heard\nabout Apple was that their fans admired them too uncritically.\nThe App Store has changed that. Now a lot of programmers\nhave started to see Apple as evil.How much of the goodwill Apple once had with programmers have they\nlost over the App Store? A third? Half? And that's just so far.\nThe App Store is an ongoing karma leak.* * *How did Apple get into this mess? Their fundamental problem is\nthat they don't understand software.They treat iPhone apps the way they treat the music they sell through\niTunes. Apple is the channel; they own the user; if you want to\nreach users, you do it on their terms. The record labels agreed,\nreluctantly. But this model doesn't work for software. It doesn't\nwork for an intermediary to own the user. The software business\nlearned that in the early 1980s, when companies like VisiCorp showed\nthat although the words \"software\" and \"publisher\" fit together,\nthe underlying concepts don't. 
Software isn't like music or books.\nIt's too complicated for a third party to act as an intermediary\nbetween developer and user. And yet that's what Apple is trying\nto be with the App Store: a software publisher. And a particularly\noverreaching one at that, with fussy tastes and a rigidly enforced\nhouse style.If software publishing didn't work in 1980, it works even less now\nthat software development has evolved from a small number of big\nreleases to a constant stream of small ones. But Apple doesn't\nunderstand that either. Their model of product development derives\nfrom hardware. They work on something till they think it's finished,\nthen they release it. You have to do that with hardware, but because\nsoftware is so easy to change, its design can benefit from evolution.\nThe standard way to develop applications now is to launch fast and\niterate. Which means it's a disaster to have long, random delays\neach time you release a new version.Apparently Apple's attitude is that developers should be more careful\nwhen they submit a new version to the App Store. They would say\nthat. But powerful as they are, they're not powerful enough to\nturn back the evolution of technology. Programmers don't use\nlaunch-fast-and-iterate out of laziness. They use it because it\nyields the best results. By obstructing that process, Apple is\nmaking them do bad work, and programmers hate that as much as Apple\nwould.How would Apple like it if when they discovered a serious bug in\nOS\u00a0X, instead of releasing a software update immediately, they had\nto submit their code to an intermediary who sat on it for a month\nand then rejected it because it contained an icon they didn't like?By breaking software development, Apple gets the opposite of what\nthey intended: the version of an app currently available in the App\nStore tends to be an old and buggy one. One developer told me:\n\n As a result of their process, the App Store is full of half-baked\n applications. I make a new version almost every day that I release\n to beta users. The version on the App Store feels old and crappy.\n I'm sure that a lot of developers feel this way: One emotion is\n \"I'm not really proud about what's in the App Store\", and it's\n combined with the emotion \"Really, it's Apple's fault.\"\n\nAnother wrote:\n\n I believe that they think their approval process helps users by\n ensuring quality. In reality, bugs like ours get through all the\n time and then it can take 4-8 weeks to get that bug fix approved,\n leaving users to think that iPhone apps sometimes just don't work.\n Worse for Apple, these apps work just fine on other platforms\n that have immediate approval processes.\n\nActually I suppose Apple has a third misconception: that all the\ncomplaints about App Store approvals are not a serious problem.\nThey must hear developers complaining. But partners and suppliers\nare always complaining. It would be a bad sign if they weren't;\nit would mean you were being too easy on them. Meanwhile the iPhone\nis selling better than ever. So why do they need to fix anything?They get away with maltreating developers, in the short term, because\nthey make such great hardware. I just bought a new 27\" iMac a\ncouple days ago. It's fabulous. The screen's too shiny, and the\ndisk is surprisingly loud, but it's so beautiful that you can't\nmake yourself care.So I bought it, but I bought it, for the first time, with misgivings.\nI felt the way I'd feel buying something made in a country with a\nbad human rights record. 
That was new. In the past when I bought\nthings from Apple it was an unalloyed pleasure. Oh boy! They make\nsuch great stuff. This time it felt like a Faustian bargain. They\nmake such great stuff, but they're such assholes. Do I really want\nto support this company?* * *Should Apple care what people like me think? What difference does\nit make if they alienate a small minority of their users?There are a couple reasons they should care. One is that these\nusers are the people they want as employees. If your company seems\nevil, the best programmers won't work for you. That hurt Microsoft\na lot starting in the 90s. Programmers started to feel sheepish\nabout working there. It seemed like selling out. When people from\nMicrosoft were talking to other programmers and they mentioned where\nthey worked, there were a lot of self-deprecating jokes about having\ngone over to the dark side. But the real problem for Microsoft\nwasn't the embarrassment of the people they hired. It was the\npeople they never got. And you know who got them? Google and\nApple. If Microsoft was the Empire, they were the Rebel Alliance.\nAnd it's largely because they got more of the best people that\nGoogle and Apple are doing so much better than Microsoft today.Why are programmers so fussy about their employers' morals? Partly\nbecause they can afford to be. The best programmers can work\nwherever they want. They don't have to work for a company they\nhave qualms about.But the other reason programmers are fussy, I think, is that evil\nbegets stupidity. An organization that wins by exercising power\nstarts to lose the ability to win by doing better work. And it's\nnot fun for a smart person to work in a place where the best ideas\naren't the ones that win. I think the reason Google embraced \"Don't\nbe evil\" so eagerly was not so much to impress the outside world\nas to inoculate themselves against arrogance.\n[1]That has worked for Google so far. They've become more\nbureaucratic, but otherwise they seem to have held true to their\noriginal principles. With Apple that seems less the case. When you\nlook at the famous \n1984 ad \nnow, it's easier to imagine Apple as the\ndictator on the screen than the woman with the hammer.\n[2]\nIn fact, if you read the dictator's speech it sounds uncannily like a\nprophecy of the App Store.\n\n We have triumphed over the unprincipled dissemination of facts.We have created, for the first time in all history, a garden of\n pure ideology, where each worker may bloom secure from the pests\n of contradictory and confusing truths.\n\nThe other reason Apple should care what programmers think of them\nis that when you sell a platform, developers make or break you. If\nanyone should know this, Apple should. VisiCalc made the Apple II.And programmers build applications for the platforms they use. Most\napplications\u2014most startups, probably\u2014grow out of personal projects.\nApple itself did. Apple made microcomputers because that's what\nSteve Wozniak wanted for himself. He couldn't have afforded a\nminicomputer. \n[3]\n Microsoft likewise started out making interpreters\nfor little microcomputers because\nBill Gates and Paul Allen were interested in using them. It's a\nrare startup that doesn't build something the founders use.The main reason there are so many iPhone apps is that so many programmers\nhave iPhones. They may know, because they read it in an article,\nthat Blackberry has such and such market share. But in practice\nit's as if RIM didn't exist. 
If they're going to build something,\nthey want to be able to use it themselves, and that means building\nan iPhone app.So programmers continue to develop iPhone apps, even though Apple\ncontinues to maltreat them. They're like someone stuck in an abusive\nrelationship. They're so attracted to the iPhone that they can't\nleave. But they're looking for a way out. One wrote:\n\n While I did enjoy developing for the iPhone, the control they\n place on the App Store does not give me the drive to develop\n applications as I would like. In fact I don't intend to make any\n more iPhone applications unless absolutely necessary.\n[4]\n\nCan anything break this cycle? No device I've seen so far could.\nPalm and RIM haven't a hope. The only credible contender is Android.\nBut Android is an orphan; Google doesn't really care about it, not\nthe way Apple cares about the iPhone. Apple cares about the iPhone\nthe way Google cares about search.* * *Is the future of handheld devices one locked down by Apple? It's\na worrying prospect. It would be a bummer to have another grim\nmonoculture like we had in the 1990s. In 1995, writing software\nfor end users was effectively identical with writing Windows\napplications. Our horror at that prospect was the single biggest\nthing that drove us to start building web apps.At least we know now what it would take to break Apple's lock.\nYou'd have to get iPhones out of programmers' hands. If programmers\nused some other device for mobile web access, they'd start to develop\napps for that instead.How could you make a device programmers liked better than the iPhone?\nIt's unlikely you could make something better designed. Apple\nleaves no room there. So this alternative device probably couldn't\nwin on general appeal. It would have to win by virtue of some\nappeal it had to programmers specifically.One way to appeal to programmers is with software. If you\ncould think of an application programmers had to have, but that\nwould be impossible in the circumscribed world of the iPhone, \nyou could presumably get them to switch.That would definitely happen if programmers started to use handhelds\nas development machines\u2014if handhelds displaced laptops the\nway laptops displaced desktops. You need more control of a development\nmachine than Apple will let you have over an iPhone.Could anyone make a device that you'd carry around in your pocket\nlike a phone, and yet would also work as a development machine?\nIt's hard to imagine what it would look like. But I've learned\nnever to say never about technology. A phone-sized device that\nwould work as a development machine is no more miraculous by present\nstandards than the iPhone itself would have seemed by the standards\nof 1995.My current development machine is a MacBook Air, which I use with\nan external monitor and keyboard in my office, and by itself when\ntraveling. If there was a version half the size I'd prefer it.\nThat still wouldn't be small enough to carry around everywhere like\na phone, but we're within a factor of 4 or so. Surely that gap is\nbridgeable. In fact, let's make it an\nRFS. Wanted: \nWoman with hammer.Notes[1]\nWhen Google adopted \"Don't be evil,\" they were still so small\nthat no one would have expected them to be, yet.\n[2]\nThe dictator in the 1984 ad isn't Microsoft, incidentally;\nit's IBM. IBM seemed a lot more frightening in those days, but\nthey were friendlier to developers than Apple is now.[3]\nHe couldn't even afford a monitor. 
That's why the Apple\nI used a TV as a monitor.[4]\nSeveral people I talked to mentioned how much they liked the\niPhone SDK. The problem is not Apple's products but their policies.\nFortunately policies are software; Apple can change them instantly\nif they want to. Handy that, isn't it?Thanks to Sam Altman, Trevor Blackwell, Ross Boucher, \nJames Bracy, Gabor Cselle,\nPatrick Collison, Jason Freedman, John Gruber, Joe Hewitt, Jessica Livingston,\nRobert Morris, Teng Siong Ong, Nikhil Pandit, Savraj Singh, and Jared Tame for reading drafts of this."} {"title": "boss", "text": "March 2008, rev. June 2008Technology tends to separate normal from natural. Our bodies\nweren't designed to eat the foods that people in rich countries eat, or\nto get so little exercise. \nThere may be a similar problem with the way we work: \na normal job may be as bad for us intellectually as white flour\nor sugar is for us physically.I began to suspect this after spending several years working \nwith startup founders. I've now worked with over 200 of them, and I've\nnoticed a definite difference between programmers working on their\nown startups and those working for large organizations.\nI wouldn't say founders seem happier, necessarily;\nstarting a startup can be very stressful. Maybe the best way to put\nit is to say that they're happier in the sense that your body is\nhappier during a long run than sitting on a sofa eating\ndoughnuts.Though they're statistically abnormal, startup founders seem to be\nworking in a way that's more natural for humans.I was in Africa last year and saw a lot of animals in the wild that\nI'd only seen in zoos before. It was remarkable how different they\nseemed. Particularly lions. Lions in the wild seem about ten times\nmore alive. They're like different animals. I suspect that working\nfor oneself feels better to humans in much the same way that living\nin the wild must feel better to a wide-ranging predator like a lion.\nLife in a zoo is easier, but it isn't the life they were designed\nfor.\nTreesWhat's so unnatural about working for a big company? The root of\nthe problem is that humans weren't meant to work in such large\ngroups.Another thing you notice when you see animals in the wild is that\neach species thrives in groups of a certain size. A herd of impalas\nmight have 100 adults; baboons maybe 20; lions rarely 10. Humans\nalso seem designed to work in groups, and what I've read about\nhunter-gatherers accords with research on organizations and my own\nexperience to suggest roughly what the ideal size is: groups of 8\nwork well; by 20 they're getting hard to manage; and a group of 50\nis really unwieldy.\n[1]\nWhatever the upper limit is, we are clearly not meant to work in\ngroups of several hundred. And yet\u2014for reasons having more\nto do with technology than human nature\u2014a great many people\nwork for companies with hundreds or thousands of employees.Companies know groups that large wouldn't work, so they divide\nthemselves into units small enough to work together. But to\ncoordinate these they have to introduce something new: bosses.These smaller groups are always arranged in a tree structure. Your\nboss is the point where your group attaches to the tree. But when\nyou use this trick for dividing a large group into smaller ones,\nsomething strange happens that I've never heard anyone mention\nexplicitly. In the group one level up from yours, your boss\nrepresents your entire group. 
A group of 10 managers is not merely\na group of 10 people working together in the usual way. It's really\na group of groups. Which means for a group of 10 managers to work\ntogether as if they were simply a group of 10 individuals, the group\nworking for each manager would have to work as if they were a single\nperson\u2014the workers and manager would each share only one\nperson's worth of freedom between them.In practice a group of people are never able to act as if they were\none person. But in a large organization divided into groups in\nthis way, the pressure is always in that direction. Each group\ntries its best to work as if it were the small group of individuals\nthat humans were designed to work in. That was the point of creating\nit. And when you propagate that constraint, the result is that\neach person gets freedom of action in inverse proportion to the\nsize of the entire tree.\n[2]Anyone who's worked for a large organization has felt this. You\ncan feel the difference between working for a company with 100\nemployees and one with 10,000, even if your group has only 10 people.\nCorn SyrupA group of 10 people within a large organization is a kind of fake\ntribe. The number of people you interact with is about right. But\nsomething is missing: individual initiative. Tribes of hunter-gatherers\nhave much more freedom. The leaders have a little more power than other\nmembers of the tribe, but they don't generally tell them what to\ndo and when the way a boss can.It's not your boss's fault. The real problem is that in the group\nabove you in the hierarchy, your entire group is one virtual person.\nYour boss is just the way that constraint is imparted to you.So working in a group of 10 people within a large organization feels\nboth right and wrong at the same time. On the surface it feels\nlike the kind of group you're meant to work in, but something major\nis missing. A job at a big company is like high fructose corn\nsyrup: it has some of the qualities of things you're meant to like,\nbut is disastrously lacking in others.Indeed, food is an excellent metaphor to explain what's wrong with\nthe usual sort of job.For example, working for a big company is the default thing to do,\nat least for programmers. How bad could it be? Well, food shows\nthat pretty clearly. If you were dropped at a random point in\nAmerica today, nearly all the food around you would be bad for you.\nHumans were not designed to eat white flour, refined sugar, high\nfructose corn syrup, and hydrogenated vegetable oil. And yet if\nyou analyzed the contents of the average grocery store you'd probably\nfind these four ingredients accounted for most of the calories.\n\"Normal\" food is terribly bad for you. The only people who eat\nwhat humans were actually designed to eat are a few Birkenstock-wearing\nweirdos in Berkeley.If \"normal\" food is so bad for us, why is it so common? There are\ntwo main reasons. One is that it has more immediate appeal. You\nmay feel lousy an hour after eating that pizza, but eating the first\ncouple bites feels great. The other is economies of scale.\nProducing junk food scales; producing fresh vegetables doesn't.\nWhich means (a) junk food can be very cheap, and (b) it's worth\nspending a lot to market it.If people have to choose between something that's cheap, heavily\nmarketed, and appealing in the short term, and something that's\nexpensive, obscure, and appealing in the long term, which do you\nthink most will choose?It's the same with work. 
The average MIT graduate wants to work\nat Google or Microsoft, because it's a recognized brand, it's safe,\nand they'll get paid a good salary right away. It's the job\nequivalent of the pizza they had for lunch. The drawbacks will\nonly become apparent later, and then only in a vague sense of\nmalaise.And founders and early employees of startups, meanwhile, are like\nthe Birkenstock-wearing weirdos of Berkeley: though a tiny minority\nof the population, they're the ones living as humans are meant to.\nIn an artificial world, only extremists live naturally.\nProgrammersThe restrictiveness of big company jobs is particularly hard on\nprogrammers, because the essence of programming is to build new\nthings. Sales people make much the same pitches every day; support\npeople answer much the same questions; but once you've written a\npiece of code you don't need to write it again. So a programmer\nworking as programmers are meant to is always making new things.\nAnd when you're part of an organization whose structure gives each\nperson freedom in inverse proportion to the size of the tree, you're\ngoing to face resistance when you do something new.This seems an inevitable consequence of bigness. It's true even\nin the smartest companies. I was talking recently to a founder who\nconsidered starting a startup right out of college, but went to\nwork for Google instead because he thought he'd learn more there.\nHe didn't learn as much as he expected. Programmers learn by doing,\nand most of the things he wanted to do, he couldn't\u2014sometimes\nbecause the company wouldn't let him, but often because the company's\ncode wouldn't let him. Between the drag of legacy code, the overhead\nof doing development in such a large organization, and the restrictions\nimposed by interfaces owned by other groups, he could only try a\nfraction of the things he would have liked to. He said he has\nlearned much more in his own startup, despite the fact that he has\nto do all the company's errands as well as programming, because at\nleast when he's programming he can do whatever he wants.An obstacle downstream propagates upstream. If you're not allowed\nto implement new ideas, you stop having them. And vice versa: when\nyou can do whatever you want, you have more ideas about what to do.\nSo working for yourself makes your brain more powerful in the same\nway a low-restriction exhaust system makes an engine more powerful.Working for yourself doesn't have to mean starting a startup, of\ncourse. But a programmer deciding between a regular job at a big\ncompany and their own startup is probably going to learn more doing\nthe startup.You can adjust the amount of freedom you get by scaling the size\nof company you work for. If you start the company, you'll have the\nmost freedom. If you become one of the first 10 employees you'll\nhave almost as much freedom as the founders. Even a company with\n100 people will feel different from one with 1000.Working for a small company doesn't ensure freedom. The tree\nstructure of large organizations sets an upper bound on freedom,\nnot a lower bound. The head of a small company may still choose\nto be a tyrant. The point is that a large organization is compelled\nby its structure to be one.\nConsequencesThat has real consequences for both organizations and individuals.\nOne is that companies will inevitably slow down as they grow larger,\nno matter how hard they try to keep their startup mojo. 
It's a\nconsequence of the tree structure that every large organization is\nforced to adopt.Or rather, a large organization could only avoid slowing down if\nthey avoided tree structure. And since human nature limits the\nsize of group that can work together, the only way I can imagine\nfor larger groups to avoid tree structure would be to have no\nstructure: to have each group actually be independent, and to work\ntogether the way components of a market economy do.That might be worth exploring. I suspect there are already some\nhighly partitionable businesses that lean this way. But I don't\nknow any technology companies that have done it.There is one thing companies can do short of structuring themselves\nas sponges: they can stay small. If I'm right, then it really\npays to keep a company as small as it can be at every stage.\nParticularly a technology company. Which means it's doubly important\nto hire the best people. Mediocre hires hurt you twice: they get\nless done, but they also make you big, because you need more of\nthem to solve a given problem.For individuals the upshot is the same: aim small. It will always\nsuck to work for large organizations, and the larger the organization,\nthe more it will suck.In an essay I wrote a couple years ago \nI advised graduating seniors\nto work for a couple years for another company before starting their\nown. I'd modify that now. Work for another company if you want\nto, but only for a small one, and if you want to start your own\nstartup, go ahead.The reason I suggested college graduates not start startups immediately\nwas that I felt most would fail. And they will. But ambitious\nprogrammers are better off doing their own thing and failing than\ngoing to work at a big company. Certainly they'll learn more. They\nmight even be better off financially. A lot of people in their\nearly twenties get into debt, because their expenses grow even\nfaster than the salary that seemed so high when they left school.\nAt least if you start a startup and fail your net worth will be\nzero rather than negative. \n[3]We've now funded so many different types of founders that we have\nenough data to see patterns, and there seems to be no benefit from\nworking for a big company. The people who've worked for a few years\ndo seem better than the ones straight out of college, but only\nbecause they're that much older.The people who come to us from big companies often seem kind of\nconservative. It's hard to say how much is because big companies\nmade them that way, and how much is the natural conservatism that\nmade them work for the big companies in the first place. But\ncertainly a large part of it is learned. I know because I've seen\nit burn off.Having seen that happen so many times is one of the things that\nconvinces me that working for oneself, or at least for a small\ngroup, is the natural way for programmers to live. Founders arriving\nat Y Combinator often have the downtrodden air of refugees. Three\nmonths later they're transformed: they have so much more \nconfidence\nthat they seem as if they've grown several inches taller. \n[4]\nStrange as this sounds, they seem both more worried and happier at the same\ntime. Which is exactly how I'd describe the way lions seem in the\nwild.Watching employees get transformed into founders makes it clear\nthat the difference between the two is due mostly to environment\u2014and\nin particular that the environment in big companies is toxic to\nprogrammers. 
In the first couple weeks of working on their own\nstartup they seem to come to life, because finally they're working\nthe way people are meant to.Notes[1]\nWhen I talk about humans being meant or designed to live a\ncertain way, I mean by evolution.[2]\nIt's not only the leaves who suffer. The constraint propagates\nup as well as down. So managers are constrained too; instead of\njust doing things, they have to act through subordinates.[3]\nDo not finance your startup with credit cards. Financing a\nstartup with debt is usually a stupid move, and credit card debt\nstupidest of all. Credit card debt is a bad idea, period. It is\na trap set by evil companies for the desperate and the foolish.[4]\nThe founders we fund used to be younger (initially we encouraged\nundergrads to apply), and the first couple times I saw this I used\nto wonder if they were actually getting physically taller.Thanks to Trevor Blackwell, Ross Boucher, Aaron Iba, Abby\nKirigin, Ivan Kirigin, Jessica Livingston, and Robert Morris for\nreading drafts of this."} {"title": "desres", "text": "January 2003(This article is derived from a keynote talk at the fall 2002 meeting\nof NEPLS.)Visitors to this country are often surprised to find that\nAmericans like to begin a conversation by asking \"what do you do?\"\nI've never liked this question. I've rarely had a\nneat answer to it. But I think I have finally solved the problem.\nNow, when someone asks me what I do, I look them straight\nin the eye and say \"I'm designing a \nnew dialect of Lisp.\" \nI recommend this answer to anyone who doesn't like being asked what\nthey do. The conversation will turn immediately to other topics.I don't consider myself to be doing research on programming languages.\nI'm just designing one, in the same way that someone might design\na building or a chair or a new typeface.\nI'm not trying to discover anything new. I just want\nto make a language that will be good to program in. In some ways,\nthis assumption makes life a lot easier.The difference between design and research seems to be a question\nof new versus good. Design doesn't have to be new, but it has to \nbe good. Research doesn't have to be good, but it has to be new.\nI think these two paths converge at the top: the best design\nsurpasses its predecessors by using new ideas, and the best research\nsolves problems that are not only new, but actually worth solving.\nSo ultimately we're aiming for the same destination, just approaching\nit from different directions.What I'm going to talk about today is what your target looks like\nfrom the back. What do you do differently when you treat\nprogramming languages as a design problem instead of a research topic?The biggest difference is that you focus more on the user.\nDesign begins by asking, who is this\nfor and what do they need from it? A good architect,\nfor example, does not begin by creating a design that he then\nimposes on the users, but by studying the intended users and figuring\nout what they need.Notice I said \"what they need,\" not \"what they want.\" I don't mean\nto give the impression that working as a designer means working as \na sort of short-order cook, making whatever the client tells you\nto. This varies from field to field in the arts, but\nI don't think there is any field in which the best work is done by\nthe people who just make exactly what the customers tell them to.The customer is always right in\nthe sense that the measure of good design is how well it works\nfor the user. 
If you make a novel that bores everyone, or a chair\nthat's horribly uncomfortable to sit in, then you've done a bad\njob, period. It's no defense to say that the novel or the chair \nis designed according to the most advanced theoretical principles.And yet, making what works for the user doesn't mean simply making\nwhat the user tells you to. Users don't know what all the choices\nare, and are often mistaken about what they really want.The answer to the paradox, I think, is that you have to design\nfor the user, but you have to design what the user needs, not simply \nwhat he says he wants.\nIt's much like being a doctor. You can't just treat a patient's\nsymptoms. When a patient tells you his symptoms, you have to figure\nout what's actually wrong with him, and treat that.This focus on the user is a kind of axiom from which most of the\npractice of good design can be derived, and around which most design\nissues center.If good design must do what the user needs, who is the user? When\nI say that design must be for users, I don't mean to imply that good \ndesign aims at some kind of \nlowest common denominator. You can pick any group of users you\nwant. If you're designing a tool, for example, you can design it\nfor anyone from beginners to experts, and what's good design\nfor one group might be bad for another. The point\nis, you have to pick some group of users. I don't think you can\neven talk about good or bad design except with\nreference to some intended user.You're most likely to get good design if the intended users include\nthe designer himself. When you design something\nfor a group that doesn't include you, it tends to be for people\nyou consider to be less sophisticated than you, not more sophisticated.That's a problem, because looking down on the user, however benevolently,\nseems inevitably to corrupt the designer.\nI suspect that very few housing\nprojects in the US were designed by architects who expected to live\nin them. You can see the same thing\nin programming languages. C, Lisp, and Smalltalk were created for\ntheir own designers to use. Cobol, Ada, and Java were created \nfor other people to use.If you think you're designing something for idiots, the odds are\nthat you're not designing something good, even for idiots.\nEven if you're designing something for the most sophisticated\nusers, though, you're still designing for humans. It's different 
in research. In math you\ndon't choose abstractions because they're\neasy for humans to understand; you choose whichever make the\nproof shorter. I think this is true for the sciences generally.\nScientific ideas are not meant to be ergonomic.Over in the arts, things are very different. Design is\nall about people. The human body is a strange\nthing, but when you're designing a chair,\nthat's what you're designing for, and there's no way around it.\nAll the arts have to pander to the interests and limitations\nof humans. In painting, for example, all other things being\nequal a painting with people in it will be more interesting than\none without. It is not merely an accident of history that\nthe great paintings of the Renaissance are all full of people.\nIf they hadn't been, painting as a medium wouldn't have the prestige\nthat it does.Like it or not, programming languages are also for people,\nand I suspect the human brain is just as lumpy and idiosyncratic\nas the human body. Some ideas are easy for people to grasp\nand some aren't. For example, we seem to have a very limited\ncapacity for dealing with detail. 
It's this fact that makes\nprogramming languages a good idea in the first place; if we\ncould handle the detail, we could just program in machine\nlanguage.Remember, too, that languages are not\nprimarily a form for finished programs, but something that\nprograms have to be developed in. Anyone in the arts could\ntell you that you might want different mediums for the\ntwo situations. Marble, for example, is a nice, durable\nmedium for finished ideas, but a hopelessly inflexible one\nfor developing new ideas.A program, like a proof,\nis a pruned version of a tree that in the past has had\nfalse starts branching off all over it. So the test of\na language is not simply how clean the finished program looks\nin it, but how clean the path to the finished program was.\nA design choice that gives you elegant finished programs\nmay not give you an elegant design process. For example, \nI've written a few macro-defining macros full of nested\nbackquotes that look now like little gems, but writing them\ntook hours of the ugliest trial and error, and frankly, I'm still\nnot entirely sure they're correct.We often act as if the test of a language were how good\nfinished programs look in it.\nIt seems so convincing when you see the same program\nwritten in two languages, and one version is much shorter.\nWhen you approach the problem from the direction of the\narts, you're less likely to depend on this sort of\ntest. You don't want to end up with a programming\nlanguage like marble.For example, it is a huge win in developing software to\nhave an interactive toplevel, what in Lisp is called a\nread-eval-print loop. And when you have one this has\nreal effects on the design of the language. It would not\nwork well for a language where you have to declare\nvariables before using them, for example. When you're\njust typing expressions into the toplevel, you want to be \nable to set x to some value and then start doing things\nto x. You don't want to have to declare the type of x\nfirst. You may dispute either of the premises, but if\na language has to have a toplevel to be convenient, and\nmandatory type declarations are incompatible with a\ntoplevel, then no language that makes type declarations \nmandatory could be convenient to program in. (A concrete sketch\nof this toplevel workflow appears a little further on.)In practice, to get good design you have to get close, and stay\nclose, to your users. You have to calibrate your ideas on actual\nusers constantly, especially in the beginning. One of the reasons\nJane Austen's novels are so good is that she read them out loud to\nher family. That's why she never sinks into self-indulgently arty\ndescriptions of landscapes,\nor pretentious philosophizing. (The philosophy's there, but it's\nwoven into the story instead of being pasted onto it like a label.)\nIf you open an average \"literary\" novel and imagine reading it out loud\nto your friends as something you'd written, you'll feel all too\nkeenly what an imposition that kind of thing is upon the reader.In the software world, this idea is known as Worse is Better.\nActually, there are several ideas mixed together in the concept of\nWorse is Better, which is why people are still arguing about\nwhether worse\nis actually better or not. 
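(Here is that sketch of the toplevel workflow, as it might look at a Common Lisp prompt. The variable, the values, and the printed\nresults are invented for illustration; real implementations differ\nin their prompts and output.\n\n > (defparameter x (list 1 2 3))\n X\n > (mapcar #'1+ x)\n (2 3 4)\n > (setf x \"now a string\")\n \"now a string\"\n > (string-upcase x)\n \"NOW A STRING\"\n\nNothing is declared, and x even changes type mid-session; a language\nwith mandatory type declarations would have to reject the third\nexpression.)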
But one of the main ideas in that\nmix is that if you're building something new, you should get a\nprototype in front of users as soon as possible.The alternative approach might be called the Hail Mary strategy.\nInstead of getting a prototype out quickly and gradually refining\nit, you try to create the complete, finished, product in one long\ntouchdown pass. As far as I know, this is a\nrecipe for disaster. Countless startups destroyed themselves this\nway during the Internet bubble. I've never heard of a case\nwhere it worked.What people outside the software world may not realize is that\nWorse is Better is found throughout the arts.\nIn drawing, for example, the idea was discovered during the\nRenaissance. Now almost every drawing teacher will tell you that\nthe right way to get an accurate drawing is not to\nwork your way slowly around the contour of an object, because errors will\naccumulate and you'll find at the end that the lines don't meet.\nInstead you should draw a few quick lines in roughly the right place,\nand then gradually refine this initial sketch.In most fields, prototypes\nhave traditionally been made out of different materials.\nTypefaces to be cut in metal were initially designed \nwith a brush on paper. Statues to be cast in bronze \nwere modelled in wax. Patterns to be embroidered on tapestries\nwere drawn on paper with ink wash. Buildings to be\nconstructed from stone were tested on a smaller scale in wood.What made oil paint so exciting, when it\nfirst became popular in the fifteenth century, was that you\ncould actually make the finished work from the prototype.\nYou could make a preliminary drawing if you wanted to, but you\nweren't held to it; you could work out all the details, and\neven make major changes, as you finished the painting.You can do this in software too. A prototype doesn't have to\nbe just a model; you can refine it into the finished product.\nI think you should always do this when you can. It lets you\ntake advantage of new insights you have along the way. But\nperhaps even more important, it's good for morale.Morale is key in design. I'm surprised people\ndon't talk more about it. One of my first\ndrawing teachers told me: if you're bored when you're\ndrawing something, the drawing will look boring.\nFor example, suppose you have to draw a building, and you\ndecide to draw each brick individually. You can do this\nif you want, but if you get bored halfway through and start\nmaking the bricks mechanically instead of observing each one, \nthe drawing will look worse than if you had merely suggested\nthe bricks.Building something by gradually refining a prototype is good\nfor morale because it keeps you engaged. In software, my \nrule is: always have working code. If you're writing\nsomething that you'll be able to test in an hour, then you\nhave the prospect of an immediate reward to motivate you.\nThe same is true in the arts, and particularly in oil painting.\nMost painters start with a blurry sketch and gradually\nrefine it.\nIf you work this way, then in principle\nyou never have to end the day with something that actually\nlooks unfinished. Indeed, there is even a saying among\npainters: \"A painting is never finished, you just stop\nworking on it.\" This idea will be familiar to anyone who\nhas worked on software.Morale is another reason that it's hard to design something\nfor an unsophisticated user. It's hard to stay interested in\nsomething you don't like yourself. 
To make something \ngood, you have to be thinking, \"wow, this is really great,\"\nnot \"what a piece of shit; those fools will love it.\"Design means making things for humans. But it's not just the\nuser who's human. The designer is human too.Notice all this time I've been talking about \"the designer.\"\nDesign usually has to be under the control of a single person to\nbe any good. And yet it seems to be possible for several people\nto collaborate on a research project. This seems to\nme one of the most interesting differences between research and\ndesign.There have been famous instances of collaboration in the arts,\nbut most of them seem to have been cases of molecular bonding rather\nthan nuclear fusion. In an opera it's common for one person to\nwrite the libretto and another to write the music. And during the Renaissance, \njourneymen from northern\nEurope were often employed to do the landscapes in the\nbackgrounds of Italian paintings. But these aren't true collaborations.\nThey're more like examples of Robert Frost's\n\"good fences make good neighbors.\" You can stick instances\nof good design together, but within each individual project,\none person has to be in control.I'm not saying that good design requires that one person think\nof everything. There's nothing more valuable than the advice\nof someone whose judgement you trust. But after the talking is\ndone, the decision about what to do has to rest with one person.Why is it that research can be done by collaborators and \ndesign can't? This is an interesting question. I don't \nknow the answer. Perhaps,\nif design and research converge, the best research is also\ngood design, and in fact can't be done by collaborators.\nA lot of the most famous scientists seem to have worked alone.\nBut I don't know enough to say whether there\nis a pattern here. It could be simply that many famous scientists\nworked when collaboration was less common.Whatever the story is in the sciences, true collaboration\nseems to be vanishingly rare in the arts. Design by committee is a\nsynonym for bad design. Why is that so? Is there some way to\nbeat this limitation?I'm inclined to think there isn't-- that good design requires\na dictator. One reason is that good design has to \nbe all of a piece. Design is not just for humans, but\nfor individual humans. If a design represents an idea that \nfits in one person's head, then the idea will fit in the user's\nhead too."} {"title": "founders", "text": "October 2010\n\n(I wrote this for Forbes, who asked me to write something\nabout the qualities we look for in founders. In print they had to cut\nthe last item because they didn't have room.)1. DeterminationThis has turned out to be the most important quality in startup\nfounders. We thought when we started Y Combinator that the most\nimportant quality would be intelligence. That's the myth in the\nValley. And certainly you don't want founders to be stupid. But\nas long as you're over a certain threshold of intelligence, what\nmatters most is determination. You're going to hit a lot of\nobstacles. You can't be the sort of person who gets demoralized\neasily.Bill Clerico and Rich Aberman of WePay \nare a good example. They're\ndoing a finance startup, which means endless negotiations with big,\nbureaucratic companies. When you're starting a startup that depends\non deals with big companies to exist, it often feels like they're\ntrying to ignore you out of existence. 
But when Bill Clerico starts\ncalling you, you may as well do what he asks, because he is not\ngoing away.\n2. FlexibilityYou do not however want the sort of determination implied by phrases\nlike \"don't give up on your dreams.\" The world of startups is so\nunpredictable that you need to be able to modify your dreams on the\nfly. The best metaphor I've found for the combination of determination\nand flexibility you need is a running back. \nHe's determined to get\ndownfield, but at any given moment he may need to go sideways or\neven backwards to get there.The current record holder for flexibility may be Daniel Gross of\nGreplin. He applied to YC with \nsome bad ecommerce idea. We told\nhim we'd fund him if he did something else. He thought for a second,\nand said ok. He then went through two more ideas before settling\non Greplin. He'd only been working on it for a couple days when\nhe presented to investors at Demo Day, but he got a lot of interest.\nHe always seems to land on his feet.\n3. ImaginationIntelligence does matter a lot of course. It seems like the type\nthat matters most is imagination. It's not so important to be able\nto solve predefined problems quickly as to be able to come up with\nsurprising new ideas. In the startup world, most good ideas \nseem\nbad initially. If they were obviously good, someone would already\nbe doing them. So you need the kind of intelligence that produces\nideas with just the right level of craziness.Airbnb is that kind of idea. \nIn fact, when we funded Airbnb, we\nthought it was too crazy. We couldn't believe large numbers of\npeople would want to stay in other people's places. We funded them\nbecause we liked the founders so much. As soon as we heard they'd\nbeen supporting themselves by selling Obama and McCain branded\nbreakfast cereal, they were in. And it turned out the idea was on\nthe right side of crazy after all.\n4. NaughtinessThough the most successful founders are usually good people, they\ntend to have a piratical gleam in their eye. They're not Goody\nTwo-Shoes type good. Morally, they care about getting the big\nquestions right, but not about observing proprieties. That's why\nI'd use the word naughty rather than evil. They delight in \nbreaking\nrules, but not rules that matter. This quality may be redundant\nthough; it may be implied by imagination.Sam Altman of Loopt \nis one of the most successful alumni, so we\nasked him what question we could put on the Y Combinator application\nthat would help us discover more people like him. He said to ask\nabout a time when they'd hacked something to their advantage\u2014hacked in the sense of beating the system, not breaking into\ncomputers. It has become one of the questions we pay most attention\nto when judging applications.\n5. FriendshipEmpirically it seems to be hard to start a startup with just \none\nfounder. Most of the big successes have two or three. And the\nrelationship between the founders has to be strong. They must\ngenuinely like one another, and work well together. Startups do\nto the relationship between the founders what a dog does to a sock:\nif it can be pulled apart, it will be.Emmett Shear and Justin Kan of Justin.tv \nare a good example of close\nfriends who work well together. They've known each other since\nsecond grade. They can practically read one another's minds. 
I'm\nsure they argue, like all founders, but I have never once sensed\nany unresolved tension between them.Thanks to Jessica Livingston and Chris Steiner for reading drafts of this."} {"title": "vw", "text": "January 2012A few hours before the Yahoo acquisition was announced in June 1998\nI took a snapshot of Viaweb's\nsite. I thought it might be interesting to look at one day.The first thing one notices is how tiny the pages are. Screens\nwere a lot smaller in 1998. If I remember correctly, our frontpage\nused to just fit in the size window people typically used then.Browsers then (IE 6 was still 3 years in the future) had few fonts\nand they weren't antialiased. If you wanted to make pages that\nlooked good, you had to render display text as images.You may notice a certain similarity between the Viaweb and Y Combinator logos. We did that\nas an inside joke when we started YC. Considering how basic a red\ncircle is, it seemed surprising to me when we started Viaweb how\nfew other companies used one as their logo. A bit later I realized\nwhy.On the Company\npage you'll notice a mysterious individual called John McArtyem.\nRobert Morris (aka Rtm) was so publicity averse after the \nWorm that he\ndidn't want his name on the site. I managed to get him to agree\nto a compromise: we could use his bio but not his name. He has\nsince relaxed a bit\non that point.Trevor graduated at about the same time the acquisition closed, so in the\ncourse of 4 days he went from impecunious grad student to millionaire\nPhD. The culmination of my career as a writer of press releases\nwas one celebrating\nhis graduation, illustrated with a drawing I did of him during\na meeting.(Trevor also appears as Trevino\nBagwell in our directory of web designers merchants could hire\nto build stores for them. We inserted him as a ringer in case some\ncompetitor tried to spam our web designers. We assumed his logo\nwould deter any actual customers, but it did not.)Back in the 90s, to get users you had to get mentioned in magazines\nand newspapers. There were not the same ways to get found online\nthat there are today. So we used to pay a PR\nfirm $16,000 a month to get us mentioned in the press. Fortunately\nreporters liked\nus.In our advice about\ngetting traffic from search engines (I don't think the term SEO\nhad been coined yet), we say there are only 7 that matter: Yahoo,\nAltaVista, Excite, WebCrawler, InfoSeek, Lycos, and HotBot. Notice\nanything missing? Google was incorporated that September.We supported online transactions via a company called \nCybercash,\nsince if we lacked that feature we'd have gotten beaten up in product\ncomparisons. But Cybercash was so bad and most stores' order volumes\nwere so low that it was better if merchants processed orders like phone orders. We had a page in our site trying to talk merchants\nout of doing real time authorizations.The whole site was organized like a funnel, directing people to the\ntest drive.\nIt was a novel thing to be able to try out software online. We put\ncgi-bin in our dynamic urls to fool competitors about how our\nsoftware worked.We had some well\nknown users. Needless to say, Frederick's of Hollywood got the\nmost traffic. 
We charged a flat fee of $300/month for big stores,\nso it was a little alarming to have users who got lots of traffic.\nI once calculated how much Frederick's was costing us in bandwidth,\nand it was about $300/month.Since we hosted all the stores, which together were getting just\nover 10 million page views per month in June 1998, we consumed what\nat the time seemed a lot of bandwidth. We had 2 T1s (3 Mb/sec)\ncoming into our offices. In those days there was no AWS. Even\ncolocating servers seemed too risky, considering how often things\nwent wrong with them. So we had our servers in our offices. Or\nmore precisely, in Trevor's office. In return for the unique\nprivilege of sharing his office with no other humans, he had to\nshare it with 6 shrieking tower servers. His office was nicknamed\nthe Hot Tub on account of the heat they generated. Most days his\nstack of window air conditioners could keep up.For describing pages, we had a template language called RTML, which\nsupposedly stood for something, but which in fact I named after\nRtm. RTML was Common Lisp augmented by some macros and libraries,\nand concealed under a structure editor that made it look like it\nhad syntax.Since we did continuous releases, our software didn't actually have\nversions. But in those days the trade press expected versions, so\nwe made them up. If we wanted to get lots of attention, we made\nthe version number an\ninteger. That \"version 4.0\" icon was generated by our own\nbutton generator, incidentally. The whole Viaweb site was made\nwith our software, even though it wasn't an online store, because\nwe wanted to experience what our users did.At the end of 1997, we released a general purpose shopping search\nengine called Shopfind. It\nwas pretty advanced for the time. It had a programmable crawler\nthat could crawl most of the different stores online and pick out\nthe products."}
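RTML itself is long gone, but it's easy to suggest the general trick of macros that make Lisp templates look like declarations. What follows is made up, not actual RTML; defpage and everything in it is my invention:

    ;; Not real RTML: a toy macro that turns a template into an
    ;; ordinary Lisp function returning the assembled page.
    (defmacro defpage (name title &body body)
      `(defun ,name ()
         (format nil "<html><head><title>~a</title></head><body>~{~a~}</body></html>"
                 ,title (list ,@body))))

    (defpage front-page "Acme Widgets"
      "<h1>Welcome</h1>"
      "<p>The finest widgets on the Web.</p>")

    (front-page)  ; => the whole page as one string

A structure editor would then display a defpage form as nested elements rather than parentheses, which is how a language like this can be made to look like it has syntax.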
{"title": "want", "text": "November 2022Since I was about 9 I've been puzzled by the apparent contradiction\nbetween being made of matter that behaves in a predictable way, and\nthe feeling that I could choose to do whatever I wanted. At the\ntime I had a self-interested motive for exploring the question. At\nthat age (like most succeeding ages) I was always in trouble with\nthe authorities, and it seemed to me that there might possibly be\nsome way to get out of trouble by arguing that I wasn't responsible\nfor my actions. I gradually lost hope of that, but the puzzle\nremained: How do you reconcile being a machine made of matter with\nthe feeling that you're free to choose what you do?\n[1]The best way to explain the answer may be to start with a slightly\nwrong version, and then fix it. The wrong version is: You can do\nwhat you want, but you can't want what you want. Yes, you can control\nwhat you do, but you'll do what you want, and you can't control\nthat.The reason this is mistaken is that people do sometimes change what\nthey want. People who don't want to want something \u2014 drug addicts,\nfor example \u2014 can sometimes make themselves stop wanting it. And\npeople who want to want something \u2014 who want to like classical\nmusic, or broccoli \u2014 sometimes succeed.So we modify our initial statement: You can do what you want, but\nyou can't want to want what you want.That's still not quite true. It's possible to change what you want\nto want. I can imagine someone saying \"I decided to stop wanting\nto like classical music.\" But we're getting closer to the truth.\nIt's rare for people to change what they want to want, and the more\n\"want to\"s we add, the rarer it gets.We can get arbitrarily close to a true statement by adding more \"want\nto\"s in much the same way we can get arbitrarily close to 1 by adding\nmore 9s to a string of 9s following a decimal point. In practice\nthree or four \"want to\"s must surely be enough. It's hard even to\nenvision what it would mean to change what you want to want to want\nto want, let alone actually do it.So one way to express the correct answer is to use a regular\nexpression. You can do what you want, but there's some statement\nof the form \"you can't (want to)* want what you want\" that's true.\nUltimately you get back to a want that you don't control.\n[2]\nNotes[1]\nI didn't know when I was 9 that matter might behave randomly,\nbut I don't think it affects the problem much. Randomness destroys\nthe ghost in the machine as effectively as determinism.[2]\nIf you don't like using an expression, you can make the same\npoint using higher-order desires: There is some n such that you\ndon't control your nth-order desires.\nThanks to Trevor Blackwell,\nJessica Livingston, Robert Morris, and\nMichael Nielsen for reading drafts of this."}
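Note [2]'s higher-order version can be written out in symbols. With D_1 for what you want, D_2 for what you want to want, and so on (my notation, nothing standard), the claim is

    \exists\, n \ge 1 : \text{you don't control } D_n

and the analogy with the 9s is exact in form:

    0.\underbrace{99\ldots9}_{n} \;=\; 1 - 10^{-n}

Three or four 9s already put you within a ten-thousandth of 1, just as three or four "want to"s must surely be enough.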
{"title": "goodtaste", "text": "November 2021(This essay is derived from a talk at the Cambridge Union.)When I was a kid, I'd have said there wasn't. My father told me so.\nSome people like some things, and other people like other things,\nand who's to say who's right?It seemed so obvious that there was no such thing as good taste\nthat it was only through indirect evidence that I realized my father\nwas wrong. And that's what I'm going to give you here: a proof by\nreductio ad absurdum. If we start from the premise that there's no\nsuch thing as good taste, we end up with conclusions that are\nobviously false, and therefore the premise must be wrong.We'd better start by saying what good taste is. There's a narrow\nsense in which it refers to aesthetic judgements and a broader one\nin which it refers to preferences of any kind. The strongest proof\nwould be to show that taste exists in the narrowest sense, so I'm\ngoing to talk about taste in art. 
You have better taste than me if\nthe art you like is better than the art I like.If there's no such thing as good taste, then there's no such thing\nas good art. Because if there is such a\nthing as good art, it's\neasy to tell which of two people has better taste. Show them a lot\nof works by artists they've never seen before and ask them to\nchoose the best, and whoever chooses the better art has better\ntaste.So if you want to discard the concept of good taste, you also have\nto discard the concept of good art. And that means you have to\ndiscard the possibility of people being good at making it. Which\nmeans there's no way for artists to be good at their jobs. And not\njust visual artists, but anyone who is in any sense an artist. You\ncan't have good actors, or novelists, or composers, or dancers\neither. You can have popular novelists, but not good ones.We don't realize how far we'd have to go if we discarded the concept\nof good taste, because we don't even debate the most obvious cases.\nBut it doesn't just mean we can't say which of two famous painters\nis better. It means we can't say that any painter is better than a\nrandomly chosen eight year old.That was how I realized my father was wrong. I started studying\npainting. And it was just like other kinds of work I'd done: you\ncould do it well, or badly, and if you tried hard, you could get\nbetter at it. And it was obvious that Leonardo and Bellini were\nmuch better at it than me. That gap between us was not imaginary.\nThey were so good. And if they could be good, then art could be\ngood, and there was such a thing as good taste after all.Now that I've explained how to show there is such a thing as good\ntaste, I should also explain why people think there isn't. There\nare two reasons. One is that there's always so much disagreement\nabout taste. Most people's response to art is a tangle of unexamined\nimpulses. Is the artist famous? Is the subject attractive? Is this\nthe sort of art they're supposed to like? Is it hanging in a famous\nmuseum, or reproduced in a big, expensive book? In practice most\npeople's response to art is dominated by such extraneous factors.And the people who do claim to have good taste are so often mistaken.\nThe paintings admired by the so-called experts in one generation\nare often so different from those admired a few generations later.\nIt's easy to conclude there's nothing real there at all. It's only\nwhen you isolate this force, for example by trying to paint and\ncomparing your work to Bellini's, that you can see that it does in\nfact exist.The other reason people doubt that art can be good is that there\ndoesn't seem to be any room in the art for this goodness. The\nargument goes like this. Imagine several people looking at a work\nof art and judging how good it is. If being good art really is a\nproperty of objects, it should be in the object somehow. But it\ndoesn't seem to be; it seems to be something happening in the heads\nof each of the observers. And if they disagree, how do you choose\nbetween them?The solution to this puzzle is to realize that the purpose of art\nis to work on its human audience, and humans have a lot in common.\nAnd to the extent the things an object acts upon respond in the\nsame way, that's arguably what it means for the object to have the\ncorresponding property. If everything a particle interacts with\nbehaves as if the particle had a mass of m, then it has a mass of\nm. 
So the distinction between \"objective\" and \"subjective\" is not\nbinary, but a matter of degree, depending on how much the subjects\nhave in common. Particles interacting with one another are at one\npole, but people interacting with art are not all the way at the\nother; their reactions aren't random.Because people's responses to art aren't random, art can be designed\nto operate on people, and be good or bad depending on how effectively\nit does so. Much as a vaccine can be. If someone were talking about\nthe ability of a vaccine to confer immunity, it would seem very\nfrivolous to object that conferring immunity wasn't really a property\nof vaccines, because acquiring immunity is something that happens\nin the immune system of each individual person. Sure, people's\nimmune systems vary, and a vaccine that worked on one might not\nwork on another, but that doesn't make it meaningless to talk about\nthe effectiveness of a vaccine.The situation with art is messier, of course. You can't measure\neffectiveness by simply taking a vote, as you do with vaccines.\nYou have to imagine the responses of subjects with a deep knowledge\nof art, and enough clarity of mind to be able to ignore extraneous\ninfluences like the fame of the artist. And even then you'd still\nsee some disagreement. People do vary, and judging art is hard,\nespecially recent art. There is definitely not a total order either\nof works or of people's ability to judge them. But there is equally\ndefinitely a partial order of both. So while it's not possible to\nhave perfect taste, it is possible to have good taste.\nThanks to the Cambridge Union for inviting me, and to Trevor\nBlackwell, Jessica Livingston, and Robert Morris for reading drafts\nof this.\n"} {"title": "newideas", "text": "May 2021There's one kind of opinion I'd be very afraid to express publicly.\nIf someone I knew to be both a domain expert and a reasonable person\nproposed an idea that sounded preposterous, I'd be very reluctant\nto say \"That will never work.\"Anyone who has studied the history of ideas, and especially the\nhistory of science, knows that's how big things start. Someone\nproposes an idea that sounds crazy, most people dismiss it, then\nit gradually takes over the world.Most implausible-sounding ideas are in fact bad and could be safely\ndismissed. But not when they're proposed by reasonable domain\nexperts. If the person proposing the idea is reasonable, then they\nknow how implausible it sounds. And yet they're proposing it anyway.\nThat suggests they know something you don't. And if they have deep\ndomain expertise, that's probably the source of it.\n[1]Such ideas are not merely unsafe to dismiss, but disproportionately\nlikely to be interesting. When the average person proposes an\nimplausible-sounding idea, its implausibility is evidence of their\nincompetence. But when a reasonable domain expert does it, the\nsituation is reversed. There's something like an efficient market\nhere: on average the ideas that seem craziest will, if correct,\nhave the biggest effect. So if you can eliminate the theory that\nthe person proposing an implausible-sounding idea is incompetent,\nits implausibility switches from evidence that it's boring to\nevidence that it's exciting.\n[2]Such ideas are not guaranteed to work. But they don't have to be.\nThey just have to be sufficiently good bets \u2014 to have sufficiently\nhigh expected value. And I think on average they do. 
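To see the shape of the bet, plug in deliberately made-up numbers: if one such idea in ten succeeds, and a success makes things 100x better, then each try is worth

    E[\text{payoff}] = 0.1 \times 100 = 10

times its cost, even though nine tries in ten come to nothing.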
I think if you\nbet on the entire set of implausible-sounding ideas proposed by\nreasonable domain experts, you'd end up net ahead.The reason is that everyone is too conservative. The word \"paradigm\"\nis overused, but this is a case where it's warranted. Everyone is\ntoo much in the grip of the current paradigm. Even the people who\nhave the new ideas undervalue them initially. Which means that\nbefore they reach the stage of proposing them publicly, they've\nalready subjected them to an excessively strict filter.\n[3]The wise response to such an idea is not to make statements, but\nto ask questions, because there's a real mystery here. Why has this\nsmart and reasonable person proposed an idea that seems so wrong?\nAre they mistaken, or are you? One of you has to be. If you're the\none who's mistaken, that would be good to know, because it means\nthere's a hole in your model of the world. But even if they're\nmistaken, it should be interesting to learn why. A trap that an\nexpert falls into is one you have to worry about too.This all seems pretty obvious. And yet there are clearly a lot of\npeople who don't share my fear of dismissing new ideas. Why do they\ndo it? Why risk looking like a jerk now and a fool later, instead\nof just reserving judgement?One reason they do it is envy. If you propose a radical new idea\nand it succeeds, your reputation (and perhaps also your wealth)\nwill increase proportionally. Some people would be envious if that\nhappened, and this potential envy propagates back into a conviction\nthat you must be wrong.Another reason people dismiss new ideas is that it's an easy way\nto seem sophisticated. When a new idea first emerges, it usually\nseems pretty feeble. It's a mere hatchling. Received wisdom is a\nfull-grown eagle by comparison. So it's easy to launch a devastating\nattack on a new idea, and anyone who does will seem clever to those\nwho don't understand this asymmetry.This phenomenon is exacerbated by the difference between how those\nworking on new ideas and those attacking them are rewarded. The\nrewards for working on new ideas are weighted by the value of the\noutcome. So it's worth working on something that only has a 10%\nchance of succeeding if it would make things more than 10x better.\nWhereas the rewards for attacking new ideas are roughly constant;\nsuch attacks seem roughly equally clever regardless of the target.People will also attack new ideas when they have a vested interest\nin the old ones. It's not surprising, for example, that some of\nDarwin's harshest critics were churchmen. People build whole careers\non some ideas. When someone claims they're false or obsolete, they\nfeel threatened.The lowest form of dismissal is mere factionalism: to automatically\ndismiss any idea associated with the opposing faction. The lowest\nform of all is to dismiss an idea because of who proposed it.But the main thing that leads reasonable people to dismiss new ideas\nis the same thing that holds people back from proposing them: the\nsheer pervasiveness of the current paradigm. It doesn't just affect\nthe way we think; it is the Lego blocks we build thoughts out of.\nPopping out of the current paradigm is something only a few people\ncan do. And even they usually have to suppress their intuitions at\nfirst, like a pilot flying through cloud who has to trust his\ninstruments over his sense of balance.\n[4]Paradigms don't just define our present thinking. 
They also vacuum\nup the trail of crumbs that led to them, making our standards for\nnew ideas impossibly high. The current paradigm seems so perfect\nto us, its offspring, that we imagine it must have been accepted\ncompletely as soon as it was discovered \u2014 that whatever the church thought\nof the heliocentric model, astronomers must have been convinced as\nsoon as Copernicus proposed it. Far from it, in fact. Copernicus\npublished the heliocentric model in 1543, but it wasn't till the\nmid seventeenth century that the balance of scientific opinion\nshifted in its favor.\n[5]Few understand how feeble new ideas look when they first appear.\nSo if you want to have new ideas yourself, one of the most valuable\nthings you can do is to learn what they look like when they're born.\nRead about how new ideas happened, and try to get yourself into the\nheads of people at the time. How did things look to them, when the\nnew idea was only half-finished, and even the person who had it was\nonly half-convinced it was right?But you don't have to stop at history. You can observe big new ideas\nbeing born all around you right now. Just look for a reasonable\ndomain expert proposing something that sounds wrong.If you're nice, as well as wise, you won't merely resist attacking\nsuch people, but encourage them. Having new ideas is a lonely\nbusiness. Only those who've tried it know how lonely. These people\nneed your help. And if you help them, you'll probably learn something\nin the process.Notes[1]\nThis domain expertise could be in another field. Indeed,\nsuch crossovers tend to be particularly promising.[2]\nI'm not claiming this principle extends much beyond math,\nengineering, and the hard sciences. In politics, for example,\ncrazy-sounding ideas generally are as bad as they sound. Though\narguably this is not an exception, because the people who propose\nthem are not in fact domain experts; politicians are domain experts\nin political tactics, like how to get elected and how to get\nlegislation passed, but not in the world that policy acts upon.\nPerhaps no one could be.[3]\nThis sense of \"paradigm\" was defined by Thomas Kuhn in his\nStructure of Scientific Revolutions, but I also recommend his\nCopernican Revolution, where you can see him at work developing the\nidea.[4]\nThis is one reason people with a touch of Asperger's may have\nan advantage in discovering new ideas. They're always flying on\ninstruments.[5]\nHall, Rupert. From Galileo to Newton. Collins, 1963. This\nbook is particularly good at getting into contemporaries' heads.Thanks to Trevor Blackwell, Patrick Collison, Suhail Doshi, Daniel\nGackle, Jessica Livingston, and Robert Morris for reading drafts of this."} {"title": "superangels", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nOctober 2010After barely changing at all for decades, the startup funding\nbusiness is now in what could, at least by comparison, be called\nturmoil. At Y Combinator we've seen dramatic changes in the funding\nenvironment for startups. Fortunately one of them is much higher\nvaluations.The trends we've been seeing are probably not YC-specific. 
I wish\nI could say they were, but the main cause is probably just that we\nsee trends first\u2014partly because the startups we fund are very\nplugged into the Valley and are quick to take advantage of anything\nnew, and partly because we fund so many that we have enough data\npoints to see patterns clearly.What we're seeing now, everyone's probably going to be seeing in\nthe next couple years. So I'm going to explain what we're seeing,\nand what that will mean for you if you try to raise money.Super-AngelsLet me start by describing what the world of startup funding used\nto look like. There used to be two sharply differentiated types\nof investors: angels and venture capitalists. Angels are individual\nrich people who invest small amounts of their own money, while VCs\nare employees of funds that invest large amounts of other people's.For decades there were just those two types of investors, but now\na third type has appeared halfway between them: the so-called\nsuper-angels. \n[1]\n And VCs have been provoked by their arrival\ninto making a lot of angel-style investments themselves. So the\npreviously sharp line between angels and VCs has become hopelessly\nblurred.There used to be a no man's land between angels and VCs. Angels\nwould invest $20k to $50k apiece, and VCs usually a million or more.\nSo an angel round meant a collection of angel investments that\ncombined to maybe $200k, and a VC round meant a series A round in\nwhich a single VC fund (or occasionally two) invested $1-5 million.The no man's land between angels and VCs was a very inconvenient\none for startups, because it coincided with the amount many wanted\nto raise. Most startups coming out of Demo Day wanted to raise\naround $400k. But it was a pain to stitch together that much out\nof angel investments, and most VCs weren't interested in investments\nso small. That's the fundamental reason the super-angels have\nappeared. They're responding to the market.The arrival of a new type of investor is big news for startups,\nbecause there used to be only two and they rarely competed with one\nanother. Super-angels compete with both angels and VCs. That's\ngoing to change the rules about how to raise money. I don't know\nyet what the new rules will be, but it looks like most of the changes\nwill be for the better.A super-angel has some of the qualities of an angel, and some of\nthe qualities of a VC. They're usually individuals, like angels.\nIn fact many of the current super-angels were initially angels of\nthe classic type. But like VCs, they invest other people's money.\nThis allows them to invest larger amounts than angels: a typical\nsuper-angel investment is currently about $100k. They make investment\ndecisions quickly, like angels. And they make a lot more investments\nper partner than VCs\u2014up to 10 times as many.The fact that super-angels invest other people's money makes them\ndoubly alarming to VCs. They don't just compete for startups; they\nalso compete for investors. What super-angels really are is a new\nform of fast-moving, lightweight VC fund. And those of us in the\ntechnology world know what usually happens when something comes\nalong that can be described in terms like that. Usually it's the\nreplacement.Will it be? As of now, few of the startups that take money from\nsuper-angels are ruling out taking VC money. They're just postponing\nit. But that's still a problem for VCs. 
Some of the startups that\npostpone raising VC money may do so well on the angel money they\nraise that they never bother to raise more. And those who do raise\nVC rounds will be able to get higher valuations when they do. If\nthe best startups get 10x higher valuations when they raise series\nA rounds, that would cut VCs' returns from winners at least tenfold.\n[2]So I think VC funds are seriously threatened by the super-angels.\nBut one thing that may save them to some extent is the uneven\ndistribution of startup outcomes: practically all the returns are\nconcentrated in a few big successes. The expected value of a startup\nis the percentage chance it's Google. So to the extent that winning\nis a matter of absolute returns, the super-angels could win practically\nall the battles for individual startups and yet lose the war, if\nthey merely failed to get those few big winners. And there's a\nchance that could happen, because the top VC funds have better\nbrands, and can also do more for their portfolio companies. \n[3]Because super-angels make more investments per partner, they have\nless partner per investment. They can't pay as much attention to\nyou as a VC on your board could. How much is that extra attention\nworth? It will vary enormously from one partner to another. There's\nno consensus yet in the general case. So for now this is something\nstartups are deciding individually.Till now, VCs' claims about how much value they added were sort of\nlike the government's. Maybe they made you feel better, but you\nhad no choice in the matter, if you needed money on the scale only\nVCs could supply. Now that VCs have competitors, that's going to\nput a market price on the help they offer. The interesting thing\nis, no one knows yet what it will be.Do startups that want to get really big need the sort of advice and\nconnections only the top VCs can supply? Or would super-angel money\ndo just as well? The VCs will say you need them, and the super-angels\nwill say you don't. But the truth is, no one knows yet, not even\nthe VCs and super-angels themselves. All the super-angels know\nis that their new model seems promising enough to be worth trying,\nand all the VCs know is that it seems promising enough to worry\nabout.RoundsWhatever the outcome, the conflict between VCs and super-angels is\ngood news for founders. And not just for the obvious reason that\nmore competition for deals means better terms. The whole shape of\ndeals is changing.One of the biggest differences between angels and VCs is the amount\nof your company they want. VCs want a lot. In a series A round\nthey want a third of your company, if they can get it. They don't\ncare much how much they pay for it, but they want a lot because the\nnumber of series A investments they can do is so small. In a\ntraditional series A investment, at least one partner from the VC\nfund takes a seat on your board. \n[4]\n Since board seats last about\n5 years and each partner can't handle more than about 10 at once,\nthat means a VC fund can only do about 2 series A deals per partner\nper year. And that means they need to get as much of the company\nas they can in each one. You'd have to be a very promising startup\nindeed to get a VC to use up one of his 10 board seats for only a\nfew percent of you.Since angels generally don't take board seats, they don't have this\nconstraint. They're happy to buy only a few percent of you. And\nalthough the super-angels are in most respects mini VC funds, they've\nretained this critical property of angels. 
They don't take board\nseats, so they don't need a big percentage of your company.Though that means you'll get correspondingly less attention from\nthem, it's good news in other respects. Founders never really liked\ngiving up as much equity as VCs wanted. It was a lot of the company\nto give up in one shot. Most founders doing series A deals would\nprefer to take half as much money for half as much stock, and then\nsee what valuation they could get for the second half of the stock\nafter using the first half of the money to increase its value. But\nVCs never offered that option.Now startups have another alternative. Now it's easy to raise angel\nrounds about half the size of series A rounds. Many of the startups\nwe fund are taking this route, and I predict that will be true of\nstartups in general.A typical big angel round might be $600k on a convertible note with\na valuation cap of $4 million premoney. Meaning that when the note\nconverts into stock (in a later round, or upon acquisition), the\ninvestors in that round will get .6 / 4.6, or 13% of the company.\nThat's a lot less than the 30 to 40% of the company you usually\ngive up in a series A round if you do it so early. \n[5]But the advantage of these medium-sized rounds is not just that\nthey cause less dilution. You also lose less control. After an\nangel round, the founders almost always still have control of the\ncompany, whereas after a series A round they often don't. The\ntraditional board structure after a series A round is two founders,\ntwo VCs, and a (supposedly) neutral fifth person. Plus series A\nterms usually give the investors a veto over various kinds of\nimportant decisions, including selling the company. Founders usually\nhave a lot of de facto control after a series A, as long as things\nare going well. But that's not the same as just being able to do\nwhat you want, like you could before.A third and quite significant advantage of angel rounds is that\nthey're less stressful to raise. Raising a traditional series A\nround has in the past taken weeks, if not months. When a VC firm\ncan only do 2 deals per partner per year, they're careful about\nwhich they do. To get a traditional series A round you have to go\nthrough a series of meetings, culminating in a full partner meeting\nwhere the firm as a whole says yes or no. That's the really scary\npart for founders: not just that series A rounds take so long, but\nat the end of this long process the VCs might still say no. The\nchance of getting rejected after the full partner meeting averages\nabout 25%. At some firms it's over 50%.Fortunately for founders, VCs have been getting a lot faster.\nNowadays Valley VCs are more likely to take 2 weeks than 2 months.\nBut they're still not as fast as angels and super-angels, the most\ndecisive of whom sometimes decide in hours.Raising an angel round is not only quicker, but you get feedback\nas it progresses. An angel round is not an all or nothing thing\nlike a series A. It's composed of multiple investors with varying\ndegrees of seriousness, ranging from the upstanding ones who commit\nunequivocally to the jerks who give you lines like \"come back to\nme to fill out the round.\" You usually start collecting money from\nthe most committed investors and work your way out toward the\nambivalent ones, whose interest increases as the round fills up.But at each point you know how you're doing. 
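The arithmetic behind that 13% is worth writing out, because it governs every number in a round like this. Here it is as a couple of lines of Lisp, a sketch rather than anyone's actual cap-table software; note-fraction is my name for it, and it assumes the note converts at the cap, as in the example above:

    ;; Fraction of the company a capped convertible note converts
    ;; into, assuming conversion at the cap: amount / (cap + amount).
    (defun note-fraction (amount cap)
      (/ amount (+ cap amount)))

    (note-fraction 0.6 4.0)  ; => 0.13043478, the 13% above

And because an angel round fills up investor by investor, the amount, and with it that fraction, isn't final until the round closes.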
But the advantage of these medium-sized rounds is not just that they cause less dilution. You also lose less control. After an angel round, the founders almost always still have control of the company, whereas after a series A round they often don't. The traditional board structure after a series A round is two founders, two VCs, and a (supposedly) neutral fifth person. Plus series A terms usually give the investors a veto over various kinds of important decisions, including selling the company. Founders usually have a lot of de facto control after a series A, as long as things are going well. But that's not the same as just being able to do what you want, like you could before.

A third and quite significant advantage of angel rounds is that they're less stressful to raise. Raising a traditional series A round has in the past taken weeks, if not months. When a VC firm can only do 2 deals per partner per year, they're careful about which they do. To get a traditional series A round you have to go through a series of meetings, culminating in a full partner meeting where the firm as a whole says yes or no. That's the really scary part for founders: not just that series A rounds take so long, but at the end of this long process the VCs might still say no. The chance of getting rejected after the full partner meeting averages about 25%. At some firms it's over 50%.

Fortunately for founders, VCs have been getting a lot faster. Nowadays Valley VCs are more likely to take 2 weeks than 2 months. But they're still not as fast as angels and super-angels, the most decisive of whom sometimes decide in hours.

Raising an angel round is not only quicker, but you get feedback as it progresses. An angel round is not an all or nothing thing like a series A. It's composed of multiple investors with varying degrees of seriousness, ranging from the upstanding ones who commit unequivocally to the jerks who give you lines like "come back to me to fill out the round." You usually start collecting money from the most committed investors and work your way out toward the ambivalent ones, whose interest increases as the round fills up.

But at each point you know how you're doing. If investors turn cold you may have to raise less, but when investors in an angel round turn cold the process at least degrades gracefully, instead of blowing up in your face and leaving you with nothing, as happens if you get rejected by a VC fund after a full partner meeting. Whereas if investors seem hot, you can not only close the round faster, but now that convertible notes are becoming the norm, actually raise the price to reflect demand.

Valuation

However, the VCs have a weapon they can use against the super-angels, and they have started to use it. VCs have started making angel-sized investments too. The term "angel round" doesn't mean that all the investors in it are angels; it just describes the structure of the round. Increasingly the participants include VCs making investments of a hundred thousand or two. And when VCs invest in angel rounds they can do things that super-angels don't like. VCs are quite valuation-insensitive in angel rounds—partly because they are in general, and partly because they don't care that much about the returns on angel rounds, which they still view mostly as a way to recruit startups for series A rounds later. So VCs who invest in angel rounds can blow up the valuations for angels and super-angels who invest in them. [6]

Some super-angels seem to care about valuations. Several turned down YC-funded startups after Demo Day because their valuations were too high. This was not a problem for the startups; by definition a high valuation means enough investors were willing to accept it. But it was mysterious to me that the super-angels would quibble about valuations. Did they not understand that the big returns come from a few big successes, and that it therefore mattered far more which startups you picked than how much you paid for them?

After thinking about it for a while and observing certain other signs, I have a theory that explains why the super-angels may be smarter than they seem. It would make sense for super-angels to want low valuations if they're hoping to invest in startups that get bought early. If you're hoping to hit the next Google, you shouldn't care if the valuation is 20 million. But if you're looking for companies that are going to get bought for 30 million, you care. If you invest at 20 and the company gets bought for 30, you only get 1.5x. You might as well buy Apple.

So if some of the super-angels were looking for companies that could get acquired quickly, that would explain why they'd care about valuations. But why would they be looking for those? Because depending on the meaning of "quickly," it could actually be very profitable. A company that gets acquired for 30 million is a failure to a VC, but it could be a 10x return for an angel, and moreover, a quick 10x return. Rate of return is what matters in investing—not the multiple you get, but the multiple per year. If a super-angel gets 10x in one year, that's a higher rate of return than a VC could ever hope to get from a company that took 6 years to go public. To get the same rate of return, the VC would have to get a multiple of 10^6—one million x. Even Google didn't come close to that.
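(A quick sketch of that rate-of-return arithmetic, using the figures from the preceding paragraphs:)

```python
# Rate of return is the multiple per year: an overall multiple m
# realized over y years is an annualized multiple of m ** (1 / y).

def annualized(multiple, years):
    """Per-year multiple implied by an overall multiple over `years`."""
    return multiple ** (1 / years)

print(annualized(30 / 20, 1))  # 1.5: invest at 20M, sell at 30M a year later
print(annualized(10, 1))       # 10.0: a quick 10x for an angel
print(annualized(10 ** 6, 6))  # 10.0: what a 6-year hold needs to match it
```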
So I think at least some super-angels are looking for companies that will get bought. That's the only rational explanation for focusing on getting the right valuations, instead of the right companies. And if so they'll be different to deal with than VCs. They'll be tougher on valuations, but more accommodating if you want to sell early.

Prognosis

Who will win, the super-angels or the VCs? I think the answer to that is, some of each. They'll each become more like one another. The super-angels will start to invest larger amounts, and the VCs will gradually figure out ways to make more, smaller investments faster. A decade from now the players will be hard to tell apart, and there will probably be survivors from each group.

What does that mean for founders? One thing it means is that the high valuations startups are presently getting may not last forever. To the extent that valuations are being driven up by price-insensitive VCs, they'll fall again if VCs become more like super-angels and start to become more miserly about valuations. Fortunately if this does happen it will take years.

The short term forecast is more competition between investors, which is good news for you. The super-angels will try to undermine the VCs by acting faster, and the VCs will try to undermine the super-angels by driving up valuations. Which for founders will result in the perfect combination: funding rounds that close fast, with high valuations.

But remember that to get that combination, your startup will have to appeal to both super-angels and VCs. If you don't seem like you have the potential to go public, you won't be able to use VCs to drive up the valuation of an angel round.

There is a danger of having VCs in an angel round: the so-called signalling risk. If VCs are only doing it in the hope of investing more later, what happens if they don't? That's a signal to everyone else that they think you're lame.

How much should you worry about that? The seriousness of signalling risk depends on how far along you are. If by the next time you need to raise money, you have graphs showing rising revenue or traffic month after month, you don't have to worry about any signals your existing investors are sending. Your results will speak for themselves. [7]

Whereas if the next time you need to raise money you won't yet have concrete results, you may need to think more about the message your investors might send if they don't invest more. I'm not sure yet how much you have to worry, because this whole phenomenon of VCs doing angel investments is so new. But my instincts tell me you don't have to worry much. Signalling risk smells like one of those things founders worry about that's not a real problem. As a rule, the only thing that can kill a good startup is the startup itself. Startups hurt themselves way more often than competitors hurt them, for example. I suspect signalling risk is in this category too.

One thing YC-funded startups have been doing to mitigate the risk of taking money from VCs in angel rounds is not to take too much from any one VC. Maybe that will help, if you have the luxury of turning down money.

Fortunately, more and more startups will. After decades of competition that could best be described as intramural, the startup funding business is finally getting some real competition. That should last several years at least, and maybe a lot longer. Unless there's some huge market crash, the next couple years are going to be a good time for startups to raise money. And that's exciting because it means lots more startups will happen.
Notes

[1] I've also heard them called "Mini-VCs" and "Micro-VCs." I don't know which name will stick.

There were a couple predecessors. Ron Conway had angel funds starting in the 1990s, and in some ways First Round Capital is closer to a super-angel than a VC fund.

[2] It wouldn't cut their overall returns tenfold, because investing later would probably (a) cause them to lose less on investments that failed, and (b) not allow them to get as large a percentage of startups as they do now. So it's hard to predict precisely what would happen to their returns.

[3] The brand of an investor derives mostly from the success of their portfolio companies. The top VCs thus have a big brand advantage over the super-angels. They could make it self-perpetuating if they used it to get all the best new startups. But I don't think they'll be able to. To get all the best startups, you have to do more than make them want you. You also have to want them; you have to recognize them when you see them, and that's much harder. Super-angels will snap up stars that VCs miss. And that will cause the brand gap between the top VCs and the super-angels gradually to erode.

[4] Though in a traditional series A round VCs put two partners on your board, there are signs now that VCs may begin to conserve board seats by switching to what used to be considered an angel-round board, consisting of two founders and one VC. Which is also to the founders' advantage if it means they still control the company.

[5] In a series A round, you usually have to give up more than the actual amount of stock the VCs buy, because they insist you dilute yourselves to set aside an "option pool" as well. I predict this practice will gradually disappear though.

[6] The best thing for founders, if they can get it, is a convertible note with no valuation cap at all. In that case the money invested in the angel round just converts into stock at the valuation of the next round, no matter how large. Angels and super-angels tend not to like uncapped notes. They have no idea how much of the company they're buying. If the company does well and the valuation of the next round is high, they may end up with only a sliver of it. So by agreeing to uncapped notes, VCs who don't care about valuations in angel rounds can make offers that super-angels hate to match.

[7] Obviously signalling risk is also not a problem if you'll never need to raise more money. But startups are often mistaken about that.

Thanks to Sam Altman, John Bautista, Patrick Collison, James Lindenbaum, Reid Hoffman, Jessica Livingston and Harj Taggar for reading drafts of this."}

{"title": "useful", "text": "

February 2020

What should an essay be? Many people would say persuasive. That's what a lot of us were taught essays should be. But I think we can aim for something more ambitious: that an essay should be useful.

To start with, that means it should be correct. But it's not enough merely to be correct. It's easy to make a statement correct by making it vague. That's a common flaw in academic writing, for example. If you know nothing at all about an issue, you can't go wrong by saying that the issue is a complex one, that there are many factors to be considered, that it's a mistake to take too simplistic a view of it, and so on.
Though no doubt correct, such statements tell the reader nothing. Useful writing makes claims that are as strong as they can be made without becoming false.

For example, it's more useful to say that Pike's Peak is near the middle of Colorado than merely somewhere in Colorado. But if I say it's in the exact middle of Colorado, I've now gone too far, because it's a bit east of the middle.

Precision and correctness are like opposing forces. It's easy to satisfy one if you ignore the other. The converse of vaporous academic writing is the bold, but false, rhetoric of demagogues. Useful writing is bold, but true.

It's also two other things: it tells people something important, and that at least some of them didn't already know.

Telling people something they didn't know doesn't always mean surprising them. Sometimes it means telling them something they knew unconsciously but had never put into words. In fact those may be the more valuable insights, because they tend to be more fundamental.

Let's put them all together. Useful writing tells people something true and important that they didn't already know, and tells them as unequivocally as possible.

Notice these are all a matter of degree. For example, you can't expect an idea to be novel to everyone. Any insight that you have will probably have already been had by at least one of the world's 7 billion people. But it's sufficient if an idea is novel to a lot of readers.

Ditto for correctness, importance, and strength. In effect the four components are like numbers you can multiply together to get a score for usefulness. Which I realize is almost awkwardly reductive, but nonetheless true.

_____

How can you ensure that the things you say are true and novel and important? Believe it or not, there is a trick for doing this. I learned it from my friend Robert Morris, who has a horror of saying anything dumb. His trick is not to say anything unless he's sure it's worth hearing. This makes it hard to get opinions out of him, but when you do, they're usually right.

Translated into essay writing, what this means is that if you write a bad sentence, you don't publish it. You delete it and try again. Often you abandon whole branches of four or five paragraphs. Sometimes a whole essay.

You can't ensure that every idea you have is good, but you can ensure that every one you publish is, by simply not publishing the ones that aren't.

In the sciences, this is called publication bias, and is considered bad. When some hypothesis you're exploring gets inconclusive results, you're supposed to tell people about that too. But with essay writing, publication bias is the way to go.

My strategy is loose, then tight. I write the first draft of an essay fast, trying out all kinds of ideas. Then I spend days rewriting it very carefully.

I've never tried to count how many times I proofread essays, but I'm sure there are sentences I've read 100 times before publishing them. When I proofread an essay, there are usually passages that stick out in an annoying way, sometimes because they're clumsily written, and sometimes because I'm not sure they're true. The annoyance starts out unconscious, but after the tenth reading or so I'm saying "Ugh, that part" each time I hit it. They become like briars that catch your sleeve as you walk past. Usually I won't publish an essay till they're all gone—till I can read through the whole thing without the feeling of anything catching.
I'll sometimes let through a sentence that seems clumsy, if I can't think of a way to rephrase it, but I will never knowingly let through one that doesn't seem correct. You never have to. If a sentence doesn't seem right, all you have to do is ask why it doesn't, and you've usually got the replacement right there in your head.

This is where essayists have an advantage over journalists. You don't have a deadline. You can work for as long on an essay as you need to get it right. You don't have to publish the essay at all, if you can't get it right. Mistakes seem to lose courage in the face of an enemy with unlimited resources. Or that's what it feels like. What's really going on is that you have different expectations for yourself. You're like a parent saying to a child "we can sit here all night till you eat your vegetables." Except you're the child too.

I'm not saying no mistake gets through. For example, I added condition (c) in "A Way to Detect Bias" after readers pointed out that I'd omitted it. But in practice you can catch nearly all of them.

There's a trick for getting importance too. It's like the trick I suggest to young founders for getting startup ideas: to make something you yourself want. You can use yourself as a proxy for the reader. The reader is not completely unlike you, so if you write about topics that seem important to you, they'll probably seem important to a significant number of readers as well.

Importance has two factors. It's the number of people something matters to, times how much it matters to them. Which means of course that it's not a rectangle, but a sort of ragged comb, like a Riemann sum.

The way to get novelty is to write about topics you've thought about a lot. Then you can use yourself as a proxy for the reader in this department too. Anything you notice that surprises you, who've thought about the topic a lot, will probably also surprise a significant number of readers. And here, as with correctness and importance, you can use the Morris technique to ensure that you will. If you don't learn anything from writing an essay, don't publish it.

You need humility to measure novelty, because acknowledging the novelty of an idea means acknowledging your previous ignorance of it. Confidence and humility are often seen as opposites, but in this case, as in many others, confidence helps you to be humble. If you know you're an expert on some topic, you can freely admit when you learn something you didn't know, because you can be confident that most other people wouldn't know it either.

The fourth component of useful writing, strength, comes from two things: thinking well, and the skillful use of qualification. These two counterbalance each other, like the accelerator and clutch in a car with a manual transmission. As you try to refine the expression of an idea, you adjust the qualification accordingly. Something you're sure of, you can state baldly with no qualification at all, as I did the four components of useful writing. Whereas points that seem dubious have to be held at arm's length with perhapses.

As you refine an idea, you're pushing in the direction of less qualification. But you can rarely get it down to zero. Sometimes you don't even want to, if it's a side point and a fully refined version would be too long.
Some say that qualifications weaken writing. For example, that you should never begin a sentence in an essay with "I think," because if you're saying it, then of course you think it. And it's true that "I think x" is a weaker statement than simply "x." Which is exactly why you need "I think." You need it to express your degree of certainty.

But qualifications are not scalars. They're not just experimental error. There must be 50 things they can express: how broadly something applies, how you know it, how happy you are it's so, even how it could be falsified. I'm not going to try to explore the structure of qualification here. It's probably more complex than the whole topic of writing usefully. Instead I'll just give you a practical tip: Don't underestimate qualification. It's an important skill in its own right, not just a sort of tax you have to pay in order to avoid saying things that are false. So learn and use its full range. It may not be fully half of having good ideas, but it's part of having them.

There's one other quality I aim for in essays: to say things as simply as possible. But I don't think this is a component of usefulness. It's more a matter of consideration for the reader. And it's a practical aid in getting things right; a mistake is more obvious when expressed in simple language. But I'll admit that the main reason I write simply is not for the reader's sake or because it helps get things right, but because it bothers me to use more or fancier words than I need to. It seems inelegant, like a program that's too long.

I realize florid writing works for some people. But unless you're sure you're one of them, the best advice is to write as simply as you can.

_____

I believe the formula I've given you, importance + novelty + correctness + strength, is the recipe for a good essay. But I should warn you that it's also a recipe for making people mad.

The root of the problem is novelty. When you tell people something they didn't know, they don't always thank you for it. Sometimes the reason people don't know something is because they don't want to know it. Usually because it contradicts some cherished belief. And indeed, if you're looking for novel ideas, popular but mistaken beliefs are a good place to find them. Every popular mistaken belief creates a dead zone of ideas around it that are relatively unexplored because they contradict it.

The strength component just makes things worse. If there's anything that annoys people more than having their cherished assumptions contradicted, it's having them flatly contradicted.

Plus if you've used the Morris technique, your writing will seem quite confident. Perhaps offensively confident, to people who disagree with you. The reason you'll seem confident is that you are confident: you've cheated, by only publishing the things you're sure of. It will seem to people who try to disagree with you that you never admit you're wrong. In fact you constantly admit you're wrong. You just do it before publishing instead of after.

And if your writing is as simple as possible, that just makes things worse. Brevity is the diction of command. If you watch someone delivering unwelcome news from a position of inferiority, you'll notice they tend to use lots of words, to soften the blow. Whereas to be short with someone is more or less to be rude to them.
It can sometimes work to deliberately phrase statements more weakly than you mean. To put "perhaps" in front of something you're actually quite sure of. But you'll notice that when writers do this, they usually do it with a wink.

I don't like to do this too much. It's cheesy to adopt an ironic tone for a whole essay. I think we just have to face the fact that elegance and curtness are two names for the same thing.

You might think that if you work sufficiently hard to ensure that an essay is correct, it will be invulnerable to attack. That's sort of true. It will be invulnerable to valid attacks. But in practice that's little consolation.

In fact, the strength component of useful writing will make you particularly vulnerable to misrepresentation. If you've stated an idea as strongly as you could without making it false, all anyone has to do is to exaggerate slightly what you said, and now it is false.

Much of the time they're not even doing it deliberately. One of the most surprising things you'll discover, if you start writing essays, is that people who disagree with you rarely disagree with what you've actually written. Instead they make up something you said and disagree with that.

For what it's worth, the countermove is to ask someone who does this to quote a specific sentence or passage you wrote that they believe is false, and explain why. I say "for what it's worth" because they never do. So although it might seem that this could get a broken discussion back on track, the truth is that it was never on track in the first place.

Should you explicitly forestall likely misinterpretations? Yes, if they're misinterpretations a reasonably smart and well-intentioned person might make. In fact it's sometimes better to say something slightly misleading and then add the correction than to try to get an idea right in one shot. That can be more efficient, and can also model the way such an idea would be discovered.

But I don't think you should explicitly forestall intentional misinterpretations in the body of an essay. An essay is a place to meet honest readers. You don't want to spoil your house by putting bars on the windows to protect against dishonest ones. The place to protect against intentional misinterpretations is in end-notes. But don't think you can predict them all. People are as ingenious at misrepresenting you when you say something they don't want to hear as they are at coming up with rationalizations for things they want to do but know they shouldn't. I suspect it's the same skill.

_____

As with most other things, the way to get better at writing essays is to practice. But how do you start? Now that we've examined the structure of useful writing, we can rephrase that question more precisely. Which constraint do you relax initially? The answer is, the first component of importance: the number of people who care about what you write.

If you narrow the topic sufficiently, you can probably find something you're an expert on. Write about that to start with. If you only have ten readers who care, that's fine. You're helping them, and you're writing. Later you can expand the breadth of topics you write about.

The other constraint you can relax is a little surprising: publication. Writing essays doesn't have to mean publishing them. That may seem strange now that the trend is to publish every random thought, but it worked for me. I wrote what amounted to essays in notebooks for about 15 years. I never published any of them and never expected to. I wrote them as a way of figuring things out. But when the web came along I'd had a lot of practice.
Incidentally, Steve Wozniak did the same thing. In high school he designed computers on paper for fun. He couldn't build them because he couldn't afford the components. But when Intel launched 4K DRAMs in 1975, he was ready.

_____

How many essays are there left to write though? The answer to that question is probably the most exciting thing I've learned about essay writing. Nearly all of them are left to write.

Although the essay is an old form, it hasn't been assiduously cultivated. In the print era, publication was expensive, and there wasn't enough demand for essays to publish that many. You could publish essays if you were already well known for writing something else, like novels. Or you could write book reviews that you took over to express your own ideas. But there was not really a direct path to becoming an essayist. Which meant few essays got written, and those that did tended to be about a narrow range of subjects.

Now, thanks to the internet, there's a path. Anyone can publish essays online. You start in obscurity, perhaps, but at least you can start. You don't need anyone's permission.

It sometimes happens that an area of knowledge sits quietly for years, till some change makes it explode. Cryptography did this to number theory. The internet is doing it to the essay.

The exciting thing is not that there's a lot left to write, but that there's a lot left to discover. There's a certain kind of idea that's best discovered by writing essays. If most essays are still unwritten, most such ideas are still undiscovered.

Notes

[1] Put railings on the balconies, but don't put bars on the windows.

[2] Even now I sometimes write essays that are not meant for publication. I wrote several to figure out what Y Combinator should do, and they were really helpful.

Thanks to Trevor Blackwell, Daniel Gackle, Jessica Livingston, and Robert Morris for reading drafts of this."}

{"title": "aord", "text": "

October 2015

When I talk to a startup that's been operating for more than 8 or 9 months, the first thing I want to know is almost always the same. Assuming their expenses remain constant and their revenue growth is what it has been over the last several months, do they make it to profitability on the money they have left? Or to put it more dramatically, by default do they live or die?

The startling thing is how often the founders themselves don't know. Half the founders I talk to don't know whether they're default alive or default dead.

If you're among that number, Trevor Blackwell has made a handy calculator you can use to find out.
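(The calculator itself isn't reproduced here, but the arithmetic it has to do is simple enough to sketch. This is a minimal, hypothetical version in Python, assuming constant expenses and a steady month-over-month revenue growth rate, as the opening paragraph does; the sample numbers are made up.)

```python
# A minimal, hypothetical default-alive calculator. Assumes expenses
# stay constant and revenue keeps growing at the same monthly rate.

def default_alive(cash, expenses, revenue, growth):
    """True if revenue reaches expenses before the cash runs out.

    cash     -- money in the bank
    expenses -- monthly expenses, assumed constant
    revenue  -- current monthly revenue
    growth   -- monthly revenue growth rate, e.g. 0.10 for 10%
    """
    while cash > 0:
        if revenue >= expenses:
            return True          # profitable before the money ran out
        cash += revenue - expenses
        revenue *= 1 + growth
    return False

# Made-up numbers: $500k in the bank, $50k/month expenses,
# $10k/month revenue growing 10% per month.
print(default_alive(500_000, 50_000, 10_000, 0.10))  # True
```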
The reason I want to know first whether a startup is default alive or default dead is that the rest of the conversation depends on the answer. If the company is default alive, we can talk about ambitious new things they could do. If it's default dead, we probably need to talk about how to save it. We know the current trajectory ends badly. How can they get off that trajectory?

Why do so few founders know whether they're default alive or default dead? Mainly, I think, because they're not used to asking that. It's not a question that makes sense to ask early on, any more than it makes sense to ask a 3 year old how he plans to support himself. But as the company grows older, the question switches from meaningless to critical. That kind of switch often takes people by surprise.

I propose the following solution: instead of starting to ask too late whether you're default alive or default dead, start asking too early. It's hard to say precisely when the question switches polarity. But it's probably not that dangerous to start worrying too early that you're default dead, whereas it's very dangerous to start worrying too late.

The reason is a phenomenon I wrote about earlier: the fatal pinch. The fatal pinch is default dead + slow growth + not enough time to fix it. And the way founders end up in it is by not realizing that's where they're headed.

There is another reason founders don't ask themselves whether they're default alive or default dead: they assume it will be easy to raise more money. But that assumption is often false, and worse still, the more you depend on it, the falser it becomes.

Maybe it will help to separate facts from hopes. Instead of thinking of the future with vague optimism, explicitly separate the components. Say "We're default dead, but we're counting on investors to save us." Maybe as you say that, it will set off the same alarms in your head that it does in mine. And if you set off the alarms sufficiently early, you may be able to avoid the fatal pinch.

It would be safe to be default dead if you could count on investors saving you. As a rule their interest is a function of growth. If you have steep revenue growth, say over 5x a year, you can start to count on investors being interested even if you're not profitable. [1] But investors are so fickle that you can never do more than start to count on them. Sometimes something about your business will spook investors even if your growth is great. So no matter how good your growth is, you can never safely treat fundraising as more than a plan A. You should always have a plan B as well: you should know (as in write down) precisely what you'll need to do to survive if you can't raise more money, and precisely when you'll have to switch to plan B if plan A isn't working.

In any case, growing fast versus operating cheaply is far from the sharp dichotomy many founders assume it to be. In practice there is surprisingly little connection between how much a startup spends and how fast it grows. When a startup grows fast, it's usually because the product hits a nerve, in the sense of hitting some big need straight on. When a startup spends a lot, it's usually because the product is expensive to develop or sell, or simply because they're wasteful.

If you're paying attention, you'll be asking at this point not just how to avoid the fatal pinch, but how to avoid being default dead. That one is easy: don't hire too fast. Hiring too fast is by far the biggest killer of startups that raise money. [2]

Founders tell themselves they need to hire in order to grow. But most err on the side of overestimating this need rather than underestimating it. Why? Partly because there's so much work to do. Naive founders think that if they can just hire enough people, it will all get done. Partly because successful startups have lots of employees, so it seems like that's what one does in order to be successful. In fact the large staffs of successful startups are probably more the effect of growth than the cause. And partly because when founders have slow growth they don't want to face what is usually the real reason: the product is not appealing enough.
Plus founders who've just raised money are often encouraged to overhire by the VCs who funded them. Kill-or-cure strategies are optimal for VCs because they're protected by the portfolio effect. VCs want to blow you up, in one sense of the phrase or the other. But as a founder your incentives are different. You want above all to survive. [3]

Here's a common way startups die. They make something moderately appealing and have decent initial growth. They raise their first round fairly easily, because the founders seem smart and the idea sounds plausible. But because the product is only moderately appealing, growth is ok but not great. The founders convince themselves that hiring a bunch of people is the way to boost growth. Their investors agree. But (because the product is only moderately appealing) the growth never comes. Now they're rapidly running out of runway. They hope further investment will save them. But because they have high expenses and slow growth, they're now unappealing to investors. They're unable to raise more, and the company dies.

What the company should have done is address the fundamental problem: that the product is only moderately appealing. Hiring people is rarely the way to fix that. More often than not it makes it harder. At this early stage, the product needs to evolve more than to be "built out," and that's usually easier with fewer people. [4]

Asking whether you're default alive or default dead may save you from this. Maybe the alarm bells it sets off will counteract the forces that push you to overhire. Instead you'll be compelled to seek growth in other ways. For example, by doing things that don't scale, or by redesigning the product in the way only founders can. And for many if not most startups, these paths to growth will be the ones that actually work.

Airbnb waited 4 months after raising money at the end of Y Combinator before they hired their first employee. In the meantime the founders were terribly overworked. But they were overworked evolving Airbnb into the astonishingly successful organism it is now.

Notes

[1] Steep usage growth will also interest investors. Revenue will ultimately be a constant multiple of usage, so x% usage growth predicts x% revenue growth. But in practice investors discount merely predicted revenue, so if you're measuring usage you need a higher growth rate to impress investors.

[2] Startups that don't raise money are saved from hiring too fast because they can't afford to. But that doesn't mean you should avoid raising money in order to avoid this problem, any more than that total abstinence is the only way to avoid becoming an alcoholic.

[3] I would not be surprised if VCs' tendency to push founders to overhire is not even in their own interest. They don't know how many of the companies that get killed by overspending might have done well if they'd survived. My guess is a significant number.

[4] After reading a draft, Sam Altman wrote:

"I think you should make the hiring point more strongly. I think it's roughly correct to say that YC's most successful companies have never been the fastest to hire, and one of the marks of a great founder is being able to resist this urge."
Paul Buchheit adds:

"A related problem that I see a lot is premature scaling—founders take a small business that isn't really working (bad unit economics, typically) and then scale it up because they want impressive growth numbers. This is similar to over-hiring in that it makes the business much harder to fix once it's big, plus they are bleeding cash really fast."

Thanks to Sam Altman, Paul Buchheit, Joe Gebbia, Jessica Livingston, and Geoff Ralston for reading drafts of this."}

{"title": "before", "text": "

October 2014

(This essay is derived from a guest lecture in Sam Altman's startup class at Stanford. It's intended for college students, but much of it is applicable to potential founders at other ages.)

One of the advantages of having kids is that when you have to give advice, you can ask yourself "what would I tell my own kids?" My kids are little, but I can imagine what I'd tell them about startups if they were in college, and that's what I'm going to tell you.

Startups are very counterintuitive. I'm not sure why. Maybe it's just because knowledge about them hasn't permeated our culture yet. But whatever the reason, starting a startup is a task where you can't always trust your instincts.

It's like skiing in that way. When you first try skiing and you want to slow down, your instinct is to lean back. But if you lean back on skis you fly down the hill out of control. So part of learning to ski is learning to suppress that impulse. Eventually you get new habits, but at first it takes a conscious effort. At first there's a list of things you're trying to remember as you start down the hill.

Startups are as unnatural as skiing, so there's a similar list for startups. Here I'm going to give you the first part of it—the things to remember if you want to prepare yourself to start a startup.

Counterintuitive

The first item on it is the fact I already mentioned: that startups are so weird that if you trust your instincts, you'll make a lot of mistakes. If you know nothing more than this, you may at least pause before making them.

When I was running Y Combinator I used to joke that our function was to tell founders things they would ignore. It's really true. Batch after batch, the YC partners warn founders about mistakes they're about to make, and the founders ignore them, and then come back a year later and say "I wish we'd listened."

Why do the founders ignore the partners' advice? Well, that's the thing about counterintuitive ideas: they contradict your intuitions. They seem wrong. So of course your first impulse is to disregard them. And in fact my joking description is not merely the curse of Y Combinator but part of its raison d'etre. If founders' instincts already gave them the right answers, they wouldn't need us. You only need other people to give you advice that surprises you. That's why there are a lot of ski instructors and not many running instructors. [1]

You can, however, trust your instincts about people. And in fact one of the most common mistakes young founders make is not to do that enough. They get involved with people who seem impressive, but about whom they feel some misgivings personally. Later when things blow up they say "I knew there was something off about him, but I ignored it because he seemed so impressive."
If you're thinking about getting involved with someone—as a cofounder, an employee, an investor, or an acquirer—and you have misgivings about them, trust your gut. If someone seems slippery, or bogus, or a jerk, don't ignore it.

This is one case where it pays to be self-indulgent. Work with people you genuinely like, and you've known long enough to be sure.

Expertise

The second counterintuitive point is that it's not that important to know a lot about startups. The way to succeed in a startup is not to be an expert on startups, but to be an expert on your users and the problem you're solving for them. Mark Zuckerberg didn't succeed because he was an expert on startups. He succeeded despite being a complete noob at startups, because he understood his users really well.

If you don't know anything about, say, how to raise an angel round, don't feel bad on that account. That sort of thing you can learn when you need to, and forget after you've done it.

In fact, I worry it's not merely unnecessary to learn in great detail about the mechanics of startups, but possibly somewhat dangerous. If I met an undergrad who knew all about convertible notes and employee agreements and (God forbid) class FF stock, I wouldn't think "here is someone who is way ahead of their peers." It would set off alarms. Because another of the characteristic mistakes of young founders is to go through the motions of starting a startup. They make up some plausible-sounding idea, raise money at a good valuation, rent a cool office, hire a bunch of people. From the outside that seems like what startups do. But the next step after rent a cool office and hire a bunch of people is: gradually realize how completely fucked they are, because while imitating all the outward forms of a startup they have neglected the one thing that's actually essential: making something people want.

Game

We saw this happen so often that we made up a name for it: playing house. Eventually I realized why it was happening. The reason young founders go through the motions of starting a startup is because that's what they've been trained to do for their whole lives up to that point. Think about what you have to do to get into college, for example. Extracurricular activities, check. Even in college classes most of the work is as artificial as running laps.

I'm not attacking the educational system for being this way. There will always be a certain amount of fakeness in the work you do when you're being taught something, and if you measure their performance it's inevitable that people will exploit the difference to the point where much of what you're measuring is artifacts of the fakeness.

I confess I did it myself in college. I found that in a lot of classes there might only be 20 or 30 ideas that were the right shape to make good exam questions. The way I studied for exams in these classes was not (except incidentally) to master the material taught in the class, but to make a list of potential exam questions and work out the answers in advance. When I walked into the final, the main thing I'd be feeling was curiosity about which of my questions would turn up on the exam. It was like a game.
It's not surprising that after being trained for their whole lives to play such games, young founders' first impulse on starting a startup is to try to figure out the tricks for winning at this new game. Since fundraising appears to be the measure of success for startups (another classic noob mistake), they always want to know what the tricks are for convincing investors. We tell them the best way to convince investors is to make a startup that's actually doing well, meaning growing fast, and then simply tell investors so. Then they want to know what the tricks are for growing fast. And we have to tell them the best way to do that is simply to make something people want.

So many of the conversations YC partners have with young founders begin with the founder asking "How do we..." and the partner replying "Just..."

Why do the founders always make things so complicated? The reason, I realized, is that they're looking for the trick.

So this is the third counterintuitive thing to remember about startups: starting a startup is where gaming the system stops working. Gaming the system may continue to work if you go to work for a big company. Depending on how broken the company is, you can succeed by sucking up to the right people, giving the impression of productivity, and so on. [2] But that doesn't work with startups. There is no boss to trick, only users, and all users care about is whether your product does what they want. Startups are as impersonal as physics. You have to make something people want, and you prosper only to the extent you do.

The dangerous thing is, faking does work to some degree on investors. If you're super good at sounding like you know what you're talking about, you can fool investors for at least one and perhaps even two rounds of funding. But it's not in your interest to. The company is ultimately doomed. All you're doing is wasting your own time riding it down.

So stop looking for the trick. There are tricks in startups, as there are in any domain, but they are an order of magnitude less important than solving the real problem. A founder who knows nothing about fundraising but has made something users love will have an easier time raising money than one who knows every trick in the book but has a flat usage graph. And more importantly, the founder who has made something users love is the one who will go on to succeed after raising the money.

Though in a sense it's bad news in that you're deprived of one of your most powerful weapons, I think it's exciting that gaming the system stops working when you start a startup. It's exciting that there even exist parts of the world where you win by doing good work. Imagine how depressing the world would be if it were all like school and big companies, where you either have to spend a lot of time on bullshit things or lose to people who do. [3] I would have been delighted if I'd realized in college that there were parts of the real world where gaming the system mattered less than others, and a few where it hardly mattered at all. But there are, and this variation is one of the most important things to consider when you're thinking about your future. How do you win in each type of work, and what would you like to win by doing? [4]

All-Consuming

That brings us to our fourth counterintuitive point: startups are all-consuming. If you start a startup, it will take over your life to a degree you cannot imagine. And if your startup succeeds, it will take over your life for a long time: for several years at the very least, maybe for a decade, maybe for the rest of your working life. So there is a real opportunity cost here.
Larry Page may seem to have an enviable life, but there are aspects of it that are unenviable. Basically at 25 he started running as fast as he could and it must seem to him that he hasn't stopped to catch his breath since. Every day new shit happens in the Google empire that only the CEO can deal with, and he, as CEO, has to deal with it. If he goes on vacation for even a week, a whole week's backlog of shit accumulates. And he has to bear this uncomplainingly, partly because as the company's daddy he can never show fear or weakness, and partly because billionaires get less than zero sympathy if they talk about having difficult lives. Which has the strange side effect that the difficulty of being a successful startup founder is concealed from almost everyone except those who've done it.

Y Combinator has now funded several companies that can be called big successes, and in every single case the founders say the same thing. It never gets any easier. The nature of the problems changes. You're worrying about construction delays at your London office instead of the broken air conditioner in your studio apartment. But the total volume of worry never decreases; if anything it increases.

Starting a successful startup is similar to having kids in that it's like a button you push that changes your life irrevocably. And while it's truly wonderful having kids, there are a lot of things that are easier to do before you have them than after. Many of which will make you a better parent when you do have kids. And since you can delay pushing the button for a while, most people in rich countries do.

Yet when it comes to startups, a lot of people seem to think they're supposed to start them while they're still in college. Are you crazy? And what are the universities thinking? They go out of their way to ensure their students are well supplied with contraceptives, and yet they're setting up entrepreneurship programs and startup incubators left and right.

To be fair, the universities have their hand forced here. A lot of incoming students are interested in startups. Universities are, at least de facto, expected to prepare them for their careers. So students who want to start startups hope universities can teach them about startups. And whether universities can do this or not, there's some pressure to claim they can, lest they lose applicants to other universities that do.

Can universities teach students about startups? Yes and no. They can teach students about startups, but as I explained before, this is not what you need to know. What you need to learn about are the needs of your own users, and you can't do that until you actually start the company. [5] So starting a startup is intrinsically something you can only really learn by doing it. And it's impossible to do that in college, for the reason I just explained: startups take over your life. You can't start a startup for real as a student, because if you start a startup for real you're not a student anymore. You may be nominally a student for a bit, but you won't even be that for long. [6]

Given this dichotomy, which of the two paths should you take? Be a real student and not start a startup, or start a real startup and not be a student? I can answer that one for you. Do not start a startup in college. How to start a startup is just a subset of a bigger problem you're trying to solve: how to have a good life. And though starting a startup can be part of a good life for a lot of ambitious people, age 20 is not the optimal time to do it. Starting a startup is like a brutally fast depth-first search. Most people should still be searching breadth-first at 20.
You can do things in your early 20s that you can't do as well before or after, like plunge deeply into projects on a whim and travel super cheaply with no sense of a deadline. For unambitious people, this sort of thing is the dreaded "failure to launch," but for the ambitious ones it can be an incomparably valuable sort of exploration. If you start a startup at 20 and you're sufficiently successful, you'll never get to do it. [7]

Mark Zuckerberg will never get to bum around a foreign country. He can do other things most people can't, like charter jets to fly him to foreign countries. But success has taken a lot of the serendipity out of his life. Facebook is running him as much as he's running Facebook. And while it can be very cool to be in the grip of a project you consider your life's work, there are advantages to serendipity too, especially early in life. Among other things it gives you more options to choose your life's work from.

There's not even a tradeoff here. You're not sacrificing anything if you forgo starting a startup at 20, because you're more likely to succeed if you wait. In the unlikely case that you're 20 and one of your side projects takes off like Facebook did, you'll face a choice of running with it or not, and it may be reasonable to run with it. But the usual way startups take off is for the founders to make them take off, and it's gratuitously stupid to do that at 20.

Try

Should you do it at any age? I realize I've made startups sound pretty hard. If I haven't, let me try again: starting a startup is really hard. What if it's too hard? How can you tell if you're up to this challenge?

The answer is the fifth counterintuitive point: you can't tell. Your life so far may have given you some idea what your prospects might be if you tried to become a mathematician, or a professional football player. But unless you've had a very strange life you haven't done much that was like being a startup founder. Starting a startup will change you a lot. So what you're trying to estimate is not just what you are, but what you could grow into, and who can do that?

For the past 9 years it was my job to predict whether people would have what it took to start successful startups. It was easy to tell how smart they were, and most people reading this will be over that threshold. The hard part was predicting how tough and ambitious they would become. There may be no one who has more experience at trying to predict that, so I can tell you how much an expert can know about it, and the answer is: not much. I learned to keep a completely open mind about which of the startups in each batch would turn out to be the stars.

The founders sometimes think they know. Some arrive feeling sure they will ace Y Combinator just as they've aced every one of the (few, artificial, easy) tests they've faced in life so far. Others arrive wondering how they got in, and hoping YC doesn't discover whatever mistake caused it to accept them. But there is little correlation between founders' initial attitudes and how well their companies do.
I've read that the same is true in the military—that the swaggering recruits are no more likely to turn out to be really tough than the quiet ones. And probably for the same reason: that the tests involved are so different from the ones in their previous lives.

If you're absolutely terrified of starting a startup, you probably shouldn't do it. But if you're merely unsure whether you're up to it, the only way to find out is to try. Just not now.

Ideas

So if you want to start a startup one day, what should you do in college? There are only two things you need initially: an idea and cofounders. And the m.o. for getting both is the same. Which leads to our sixth and last counterintuitive point: that the way to get startup ideas is not to try to think of startup ideas.

I've written a whole essay on this, so I won't repeat it all here. But the short version is that if you make a conscious effort to think of startup ideas, the ideas you come up with will not merely be bad, but bad and plausible-sounding, meaning you'll waste a lot of time on them before realizing they're bad.

The way to come up with good startup ideas is to take a step back. Instead of making a conscious effort to think of startup ideas, turn your mind into the type that startup ideas form in without any conscious effort. In fact, so unconsciously that you don't even realize at first that they're startup ideas.

This is not only possible, it's how Apple, Yahoo, Google, and Facebook all got started. None of these companies were even meant to be companies at first. They were all just side projects. The best startups almost have to start as side projects, because great ideas tend to be such outliers that your conscious mind would reject them as ideas for companies.

Ok, so how do you turn your mind into the type that startup ideas form in unconsciously? (1) Learn a lot about things that matter, then (2) work on problems that interest you (3) with people you like and respect. The third part, incidentally, is how you get cofounders at the same time as the idea.

The first time I wrote that paragraph, instead of "learn a lot about things that matter," I wrote "become good at some technology." But that prescription, though sufficient, is too narrow. What was special about Brian Chesky and Joe Gebbia was not that they were experts in technology. They were good at design, and perhaps even more importantly, they were good at organizing groups and making projects happen. So you don't have to work on technology per se, so long as you work on problems demanding enough to stretch you.

What kind of problems are those? That is very hard to answer in the general case. History is full of examples of young people who were working on important problems that no one else at the time thought were important, and in particular that their parents didn't think were important. On the other hand, history is even fuller of examples of parents who thought their kids were wasting their time and who were right. So how do you know when you're working on real stuff? [8]

I know how I know. Real problems are interesting, and I am self-indulgent in the sense that I always want to work on interesting things, even if no one else cares about them (in fact, especially if no one else cares about them), and find it very hard to make myself work on boring things, even if they're supposed to be important.
My life is full of case after case where I worked on something just because it seemed interesting, and it turned out later to be useful in some worldly way. Y Combinator itself was something I only did because it seemed interesting. So I seem to have some sort of internal compass that helps me out. But I don't know what other people have in their heads. Maybe if I think more about this I can come up with heuristics for recognizing genuinely interesting problems, but for the moment the best I can offer is the hopelessly question-begging advice that if you have a taste for genuinely interesting problems, indulging it energetically is the best way to prepare yourself for a startup. And indeed, probably also the best way to live. [9]

But although I can't explain in the general case what counts as an interesting problem, I can tell you about a large subset of them. If you think of technology as something that's spreading like a sort of fractal stain, every moving point on the edge represents an interesting problem. So one guaranteed way to turn your mind into the type that has good startup ideas is to get yourself to the leading edge of some technology—to cause yourself, as Paul Buchheit put it, to "live in the future." When you reach that point, ideas that will seem to other people uncannily prescient will seem obvious to you. You may not realize they're startup ideas, but you'll know they're something that ought to exist.

For example, back at Harvard in the mid 90s a fellow grad student of my friends Robert and Trevor wrote his own voice over IP software. He didn't mean it to be a startup, and he never tried to turn it into one. He just wanted to talk to his girlfriend in Taiwan without paying for long distance calls, and since he was an expert on networks it seemed obvious to him that the way to do it was turn the sound into packets and ship it over the Internet. He never did any more with his software than talk to his girlfriend, but this is exactly the way the best startups get started.

So strangely enough the optimal thing to do in college if you want to be a successful startup founder is not some sort of new, vocational version of college focused on "entrepreneurship." It's the classic version of college as education for its own sake. If you want to start a startup after college, what you should do in college is learn powerful things. And if you have genuine intellectual curiosity, that's what you'll naturally tend to do if you just follow your own inclinations. [10]

The component of entrepreneurship that really matters is domain expertise. The way to become Larry Page was to become an expert on search. And the way to become an expert on search was to be driven by genuine curiosity, not some ulterior motive.

At its best, starting a startup is merely an ulterior motive for curiosity. And you'll do it best if you introduce the ulterior motive toward the end of the process.

So here is the ultimate advice for young would-be startup founders, boiled down to two words: just learn.

Notes

[1] Some founders listen more than others, and this tends to be a predictor of success. One of the things I remember about the Airbnbs during YC is how intently they listened.
One of the things I remember about the Airbnbs during YC is how intently they listened.

[2] In fact, this is one of the reasons startups are possible. If big companies weren't plagued by internal inefficiencies, they'd be proportionately more effective, leaving less room for startups.

[3] In a startup you have to spend a lot of time on schleps, but this sort of work is merely unglamorous, not bogus.

[4] What should you do if your true calling is gaming the system? Management consulting.

[5] The company may not be incorporated, but if you start to get significant numbers of users, you've started it, whether you realize it yet or not.

[6] It shouldn't be that surprising that colleges can't teach students how to be good startup founders, because they can't teach them how to be good employees either.

The way universities "teach" students how to be employees is to hand off the task to companies via internship programs. But you couldn't do the equivalent thing for startups, because by definition if the students did well they would never come back.

[7] Charles Darwin was 22 when he received an invitation to travel aboard the HMS Beagle as a naturalist. It was only because he was otherwise unoccupied, to a degree that alarmed his family, that he could accept it. And yet if he hadn't we probably would not know his name.

[8] Parents can sometimes be especially conservative in this department. There are some whose definition of important problems includes only those on the critical path to med school.

[9] I did manage to think of a heuristic for detecting whether you have a taste for interesting ideas: whether you find known boring ideas intolerable. Could you endure studying literary theory, or working in middle management at a large company?

[10] In fact, if your goal is to start a startup, you can stick even more closely to the ideal of a liberal education than past generations have. Back when students focused mainly on getting a job after college, they thought at least a little about how the courses they took might look to an employer. And perhaps even worse, they might shy away from taking a difficult class lest they get a low grade, which would harm their all-important GPA. Good news: users don't care what your GPA was. And I've never heard of investors caring either. Y Combinator certainly never asks what classes you took in college or what grades you got in them.

Thanks to Sam Altman, Paul Buchheit, John Collison, Patrick Collison, Jessica Livingston, Robert Morris, Geoff Ralston, and Fred Wilson for reading drafts of this."}

{"title": "bias", "text": "October 2015

This will come as a surprise to a lot of people, but in some cases it's possible to detect bias in a selection process without knowing anything about the applicant pool. Which is exciting because among other things it means third parties can use this technique to detect bias whether those doing the selecting want them to or not.

You can use this technique whenever (a) you have at least a random sample of the applicants that were selected, (b) their subsequent performance is measured, and (c) the groups of applicants you're comparing have roughly equal distribution of ability.

How does it work? Think about what it means to be biased. What it means for a selection process to be biased against applicants of type x is that it's harder for them to make it through. Which means applicants of type x have to be better to get selected than applicants not of type x. [1] Which means applicants of type x who do make it through the selection process will outperform other successful applicants. And if the performance of all the successful applicants is measured, you'll know if they do.

Of course, the test you use to measure performance must be a valid one. And in particular it must not be invalidated by the bias you're trying to measure.

But there are some domains where performance can be measured, and in those detecting bias is straightforward. Want to know if the selection process was biased against some type of applicant? Check whether they outperform the others. This is not just a heuristic for detecting bias. It's what bias means.
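To make the check concrete, here is a minimal sketch in Python. It assumes the conditions (a)-(c) above hold; the data layout and the field names ("founders", "performance") are hypothetical, and a real analysis would also need a significance test, since a small gap could easily be sampling noise.

    # Minimal sketch of the bias check described above. Hypothetical data:
    # one record per *selected* applicant, with a group label and a measure
    # of subsequent performance. Assumes both groups are non-empty.

    def performance_gap(selected, group_key, target):
        """Mean performance of the target group of successful applicants,
        relative to the other successful applicants (0.0 means parity)."""
        target_scores = [r["performance"] for r in selected if r[group_key] == target]
        other_scores = [r["performance"] for r in selected if r[group_key] != target]
        mean = lambda xs: sum(xs) / len(xs)
        return mean(target_scores) / mean(other_scores) - 1.0

    # Toy example: a gap substantially above zero means the selected members
    # of the target group outperform, which is evidence the selection
    # process was biased against them.
    portfolio = [
        {"founders": "female", "performance": 163},
        {"founders": "male", "performance": 100},
        # ... one record per selected startup
    ]
    print(performance_gap(portfolio, "founders", "female"))  # ~0.63

The point of the sketch is just that nothing in it looks at the applicant pool: outperformance among those already selected is the whole signal.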
For example, many suspect that venture capital firms are biased against female founders. This would be easy to detect: among their portfolio companies, do startups with female founders outperform those without? A couple months ago, one VC firm (almost certainly unintentionally) published a study showing bias of this type. First Round Capital found that among its portfolio companies, startups with female founders outperformed those without by 63%. [2]

The reason I began by saying that this technique would come as a surprise to many people is that we so rarely see analyses of this type. I'm sure it will come as a surprise to First Round that they performed one. I doubt anyone there realized that by limiting their sample to their own portfolio, they were producing a study not of startup trends but of their own biases when selecting companies.

I predict we'll see this technique used more in the future. The information needed to conduct such studies is increasingly available. Data about who applies for things is usually closely guarded by the organizations selecting them, but nowadays data about who gets selected is often publicly available to anyone who takes the trouble to aggregate it.

Notes

[1] This technique wouldn't work if the selection process looked for different things from different types of applicants — for example, if an employer hired men based on their ability but women based on their appearance.

[2] As Paul Buchheit points out, First Round excluded their most successful investment, Uber, from the study. And while it makes sense to exclude outliers from some types of studies, studies of returns from startup investing, which is all about hitting outliers, are not one of them.

Thanks to Sam Altman, Jessica Livingston, and Geoff Ralston for reading drafts of this."}

{"title": "copy", "text": "July 2006

When I was in high school I spent a lot of time imitating bad writers. What we studied in English classes was mostly fiction, so I assumed that was the highest form of writing. Mistake number one. The stories that seemed to be most admired were ones in which people suffered in complicated ways. Anything funny or gripping was ipso facto suspect, unless it was old enough to be hard to understand, like Shakespeare or Chaucer. Mistake number two. The ideal medium seemed the short story, which I've since learned had quite a brief life, roughly coincident with the peak of magazine publishing. But since their size made them perfect for use in high school classes, we read a lot of them, which gave us the impression the short story was flourishing.
Mistake number three. And because they were so short, nothing really had to happen; you could just show a randomly truncated slice of life, and that was considered advanced. Mistake number four. The result was that I wrote a lot of stories in which nothing happened except that someone was unhappy in a way that seemed deep.

For most of college I was a philosophy major. I was very impressed by the papers published in philosophy journals. They were so beautifully typeset, and their tone was just captivating — alternately casual and buffer-overflowingly technical. A fellow would be walking along a street and suddenly modality qua modality would spring upon him. I didn't ever quite understand these papers, but I figured I'd get around to that later, when I had time to reread them more closely. In the meantime I tried my best to imitate them. This was, I can now see, a doomed undertaking, because they weren't really saying anything. No philosopher ever refuted another, for example, because no one said anything definite enough to refute. Needless to say, my imitations didn't say anything either.

In grad school I was still wasting time imitating the wrong things. There was then a fashionable type of program called an expert system, at the core of which was something called an inference engine. I looked at what these things did and thought "I could write that in a thousand lines of code." And yet eminent professors were writing books about them, and startups were selling them for a year's salary a copy. What an opportunity, I thought; these impressive things seem easy to me; I must be pretty sharp. Wrong. It was simply a fad. The books the professors wrote about expert systems are now ignored. They were not even on a path to anything interesting. And the customers paying so much for them were largely the same government agencies that paid thousands for screwdrivers and toilet seats.

How do you avoid copying the wrong things? Copy only what you genuinely like. That would have saved me in all three cases. I didn't enjoy the short stories we had to read in English classes; I didn't learn anything from philosophy papers; I didn't use expert systems myself. I believed these things were good because they were admired.

It can be hard to separate the things you like from the things you're impressed with. One trick is to ignore presentation. Whenever I see a painting impressively hung in a museum, I ask myself: how much would I pay for this if I found it at a garage sale, dirty and frameless, and with no idea who painted it? If you walk around a museum trying this experiment, you'll find you get some truly startling results. Don't ignore this data point just because it's an outlier.

Another way to figure out what you like is to look at what you enjoy as guilty pleasures. Many things people like, especially if they're young and ambitious, they like largely for the feeling of virtue in liking them. 99% of people reading Ulysses are thinking "I'm reading Ulysses" as they do it. A guilty pleasure is at least a pure one. What do you read when you don't feel up to being virtuous? What kind of book do you read and feel sad that there's only half of it left, instead of being impressed that you're half way through? That's what you really like.

Even when you find genuinely good things to copy, there's another pitfall to be avoided. Be careful to copy what makes them good, rather than their flaws.
It's easy to be drawn into imitating flaws, because they're easier to see, and of course easier to copy too. For example, most painters in the eighteenth and nineteenth centuries used brownish colors. They were imitating the great painters of the Renaissance, whose paintings by that time were brown with dirt. Those paintings have since been cleaned, revealing brilliant colors; their imitators are of course still brown.

It was painting, incidentally, that cured me of copying the wrong things. Halfway through grad school I decided I wanted to try being a painter, and the art world was so manifestly corrupt that it snapped the leash of credulity. These people made philosophy professors seem as scrupulous as mathematicians. It was so clearly a choice of doing good work xor being an insider that I was forced to see the distinction. It's there to some degree in almost every field, but I had till then managed to avoid facing it.

That was one of the most valuable things I learned from painting: you have to figure out for yourself what's good. You can't trust authorities. They'll lie to you on this one."}

{"title": "ecw", "text": "December 2014

If the world were static, we could have monotonically increasing confidence in our beliefs. The more (and more varied) experience a belief survived, the less likely it would be false. Most people implicitly believe something like this about their opinions. And they're justified in doing so with opinions about things that don't change much, like human nature. But you can't trust your opinions in the same way about things that change, which could include practically everything else.

When experts are wrong, it's often because they're experts on an earlier version of the world.

Is it possible to avoid that? Can you protect yourself against obsolete beliefs? To some extent, yes. I spent almost a decade investing in early stage startups, and curiously enough protecting yourself against obsolete beliefs is exactly what you have to do to succeed as a startup investor. Most really good startup ideas look like bad ideas at first, and many of those look bad specifically because some change in the world just switched them from bad to good. I spent a lot of time learning to recognize such ideas, and the techniques I used may be applicable to ideas in general.

The first step is to have an explicit belief in change. People who fall victim to a monotonically increasing confidence in their opinions are implicitly concluding the world is static. If you consciously remind yourself it isn't, you start to look for change.

Where should one look for it? Beyond the moderately useful generalization that human nature doesn't change much, the unfortunate fact is that change is hard to predict. This is largely a tautology but worth remembering all the same: change that matters usually comes from an unforeseen quarter.

So I don't even try to predict it. When I get asked in interviews to predict the future, I always have to struggle to come up with something plausible-sounding on the fly, like a student who hasn't prepared for an exam. [1] But it's not out of laziness that I haven't prepared. It seems to me that beliefs about the future are so rarely correct that they usually aren't worth the extra rigidity they impose, and that the best strategy is simply to be aggressively open-minded.
Instead of trying to point yourself in the right direction, admit you have no idea what the right direction is, and try instead to be super sensitive to the winds of change.

It's ok to have working hypotheses, even though they may constrain you a bit, because they also motivate you. It's exciting to chase things and exciting to try to guess answers. But you have to be disciplined about not letting your hypotheses harden into anything more. [2]

I believe this passive m.o. works not just for evaluating new ideas but also for having them. The way to come up with new ideas is not to try explicitly to, but to try to solve problems and simply not discount weird hunches you have in the process.

The winds of change originate in the unconscious minds of domain experts. If you're sufficiently expert in a field, any weird idea or apparently irrelevant question that occurs to you is ipso facto worth exploring. [3] Within Y Combinator, when an idea is described as crazy, it's a compliment — in fact, on average probably a higher compliment than when an idea is described as good.

Startup investors have extraordinary incentives for correcting obsolete beliefs. If they can realize before other investors that some apparently unpromising startup isn't, they can make a huge amount of money. But the incentives are more than just financial. Investors' opinions are explicitly tested: startups come to them and they have to say yes or no, and then, fairly quickly, they learn whether they guessed right. The investors who say no to a Google (and there were several) will remember it for the rest of their lives.

Anyone who must in some sense bet on ideas rather than merely commenting on them has similar incentives. Which means anyone who wants such incentives can have them, by turning their comments into bets: if you write about a topic in some fairly durable and public form, you'll find you worry much more about getting things right than most people would in a casual conversation. [4]

Another trick I've found to protect myself against obsolete beliefs is to focus initially on people rather than ideas. Though the nature of future discoveries is hard to predict, I've found I can predict quite well what sort of people will make them. Good new ideas come from earnest, energetic, independent-minded people.

Betting on people over ideas saved me countless times as an investor. We thought Airbnb was a bad idea, for example. But we could tell the founders were earnest, energetic, and independent-minded. (Indeed, almost pathologically so.) So we suspended disbelief and funded them.

This too seems a technique that should be generally applicable. Surround yourself with the sort of people new ideas come from. If you want to notice quickly when your beliefs become obsolete, you can't do better than to be friends with the people whose discoveries will make them so.

It's hard enough already not to become the prisoner of your own expertise, but it will only get harder, because change is accelerating. That's not a recent trend; change has been accelerating since the paleolithic era. Ideas beget ideas. I don't expect that to change. But I could be wrong.

Notes

[1] My usual trick is to talk about aspects of the present that most people haven't noticed yet.

[2] Especially if they become well enough known that people start to identify them with you.
You have to be extra skeptical about things you want to believe, and once a hypothesis starts to be identified with you, it will almost certainly start to be in that category.

[3] In practice "sufficiently expert" doesn't require one to be recognized as an expert — which is a trailing indicator in any case. In many fields a year of focused work plus caring a lot would be enough.

[4] Though they are public and persist indefinitely, comments on e.g. forums and places like Twitter seem empirically to work like casual conversation. The threshold may be whether what you write has a title.

Thanks to Sam Altman, Patrick Collison, and Robert Morris for reading drafts of this."}

{"title": "foundervisa", "text": "April 2009

I usually avoid politics, but since we now seem to have an administration that's open to suggestions, I'm going to risk making one. The single biggest thing the government could do to increase the number of startups in this country is a policy that would cost nothing: establish a new class of visa for startup founders.

The biggest constraint on the number of new startups that get created in the US is not tax policy or employment law or even Sarbanes-Oxley. It's that we won't let the people who want to start them into the country.

Letting just 10,000 startup founders into the country each year could have a visible effect on the economy. If we assume 4 people per startup, which is probably an overestimate, that's 2500 new companies. Each year. They wouldn't all grow as big as Google, but out of 2500 some would come close.

By definition these 10,000 founders wouldn't be taking jobs from Americans: it could be part of the terms of the visa that they couldn't work for existing companies, only new ones they'd founded. In fact they'd cause there to be more jobs for Americans, because the companies they started would hire more employees as they grew.

The tricky part might seem to be how one defined a startup. But that could be solved quite easily: let the market decide. Startup investors work hard to find the best startups. The government could not do better than to piggyback on their expertise, and use investment by recognized startup investors as the test of whether a company was a real startup.

How would the government decide who's a startup investor? The same way they decide what counts as a university for student visas. We'll establish our own accreditation procedure. We know who one another are.

10,000 people is a drop in the bucket by immigration standards, but would represent a huge increase in the pool of startup founders. I think this would have such a visible effect on the economy that it would make the legislator who introduced the bill famous. The only way to know for sure would be to try it, and that would cost practically nothing.

Thanks to Trevor Blackwell, Paul Buchheit, Jeff Clavier, David Hornik, Jessica Livingston, Greg Mcadoo, Aydin Senkut, and Fred Wilson for reading drafts of this."}

{"title": "gap", "text": "May 2004

When people care enough about something to do it well, those who do it best tend to be far better than everyone else. There's a huge gap between Leonardo and second-rate contemporaries like Borgognone. You see the same gap between Raymond Chandler and the average writer of detective novels. A top-ranked professional chess player could play ten thousand games against an ordinary club player without losing once.

Like chess or painting or writing novels, making money is a very specialized skill.
But for some reason we treat this skill differently. No one complains when a few people surpass all the rest at playing chess or writing novels, but when a few people make more money than the rest, we get editorials saying this is wrong.

Why? The pattern of variation seems no different than for any other skill. What causes people to react so strongly when the skill is making money?

I think there are three reasons we treat making money as different: the misleading model of wealth we learn as children; the disreputable way in which, till recently, most fortunes were accumulated; and the worry that great variations in income are somehow bad for society. As far as I can tell, the first is mistaken, the second outdated, and the third empirically false. Could it be that, in a modern democracy, variation in income is actually a sign of health?

The Daddy Model of Wealth

When I was five I thought electricity was created by electric sockets. I didn't realize there were power plants out there generating it. Likewise, it doesn't occur to most kids that wealth is something that has to be generated. It seems to be something that flows from parents.

Because of the circumstances in which they encounter it, children tend to misunderstand wealth. They confuse it with money. They think that there is a fixed amount of it. And they think of it as something that's distributed by authorities (and so should be distributed equally), rather than something that has to be created (and might be created unequally).

In fact, wealth is not money. Money is just a convenient way of trading one form of wealth for another. Wealth is the underlying stuff — the goods and services we buy. When you travel to a rich or poor country, you don't have to look at people's bank accounts to tell which kind you're in. You can see wealth — in buildings and streets, in the clothes and the health of the people.

Where does wealth come from? People make it. This was easier to grasp when most people lived on farms, and made many of the things they wanted with their own hands. Then you could see in the house, the herds, and the granary the wealth that each family created. It was obvious then too that the wealth of the world was not a fixed quantity that had to be shared out, like slices of a pie. If you wanted more wealth, you could make it.

This is just as true today, though few of us create wealth directly for ourselves (except for a few vestigial domestic tasks). Mostly we create wealth for other people in exchange for money, which we then trade for the forms of wealth we want. [1]

Because kids are unable to create wealth, whatever they have has to be given to them. And when wealth is something you're given, then of course it seems that it should be distributed equally. [2] As in most families it is. The kids see to that. "Unfair," they cry, when one sibling gets more than another.

In the real world, you can't keep living off your parents. If you want something, you either have to make it, or do something of equivalent value for someone else, in order to get them to give you enough money to buy it. In the real world, wealth is (except for a few specialists like thieves and speculators) something you have to create, not something that's distributed by Daddy. And since the ability and desire to create it vary from person to person, it's not made equally.

You get paid by doing or making something people want, and those who make more money are often simply better at doing what people want.
Top actors make a lot more money than B-list actors. The B-list actors might be almost as charismatic, but when people go to the theater and look at the list of movies playing, they want that extra oomph that the big stars have.

Doing what people want is not the only way to get money, of course. You could also rob banks, or solicit bribes, or establish a monopoly. Such tricks account for some variation in wealth, and indeed for some of the biggest individual fortunes, but they are not the root cause of variation in income. The root cause of variation in income, as Occam's Razor implies, is the same as the root cause of variation in every other human skill.

In the United States, the CEO of a large public company makes about 100 times as much as the average person. [3] Basketball players make about 128 times as much, and baseball players 72 times as much. Editorials quote this kind of statistic with horror. But I have no trouble imagining that one person could be 100 times as productive as another. In ancient Rome the price of slaves varied by a factor of 50 depending on their skills. [4] And that's without considering motivation, or the extra leverage in productivity that you can get from modern technology.

Editorials about athletes' or CEOs' salaries remind me of early Christian writers, arguing from first principles about whether the Earth was round, when they could just walk outside and check. [5] How much someone's work is worth is not a policy question. It's something the market already determines.

"Are they really worth 100 of us?" editorialists ask. Depends on what you mean by worth. If you mean worth in the sense of what people will pay for their skills, the answer is yes, apparently.

A few CEOs' incomes reflect some kind of wrongdoing. But are there not others whose incomes really do reflect the wealth they generate? Steve Jobs saved a company that was in a terminal decline. And not merely in the way a turnaround specialist does, by cutting costs; he had to decide what Apple's next products should be. Few others could have done it. And regardless of the case with CEOs, it's hard to see how anyone could argue that the salaries of professional basketball players don't reflect supply and demand.

It may seem unlikely in principle that one individual could really generate so much more wealth than another. The key to this mystery is to revisit that question, are they really worth 100 of us? Would a basketball team trade one of their players for 100 random people? What would Apple's next product look like if you replaced Steve Jobs with a committee of 100 random people? [6] These things don't scale linearly. Perhaps the CEO or the professional athlete has only ten times (whatever that means) the skill and determination of an ordinary person. But it makes all the difference that it's concentrated in one individual.

When we say that one kind of work is overpaid and another underpaid, what are we really saying? In a free market, prices are determined by what buyers want. People like baseball more than poetry, so baseball players make more than poets. To say that a certain kind of work is underpaid is thus identical with saying that people want the wrong things.

Well, of course people want the wrong things. It seems odd to be surprised by that. And it seems even odder to say that it's unjust that certain kinds of work are underpaid.
[7] Then you're saying that it's unjust that people want the wrong things. It's lamentable that people prefer reality TV and corndogs to Shakespeare and steamed vegetables, but unjust? That seems like saying that blue is heavy, or that up is circular.

The appearance of the word "unjust" here is the unmistakable spectral signature of the Daddy Model. Why else would this idea occur in this odd context? Whereas if the speaker were still operating on the Daddy Model, and saw wealth as something that flowed from a common source and had to be shared out, rather than something generated by doing what other people wanted, this is exactly what you'd get on noticing that some people made much more than others.

When we talk about "unequal distribution of income," we should also ask, where does that income come from? [8] Who made the wealth it represents? Because to the extent that income varies simply according to how much wealth people create, the distribution may be unequal, but it's hardly unjust.

Stealing It

The second reason we tend to find great disparities of wealth alarming is that for most of human history the usual way to accumulate a fortune was to steal it: in pastoral societies by cattle raiding; in agricultural societies by appropriating others' estates in times of war, and taxing them in times of peace.

In conflicts, those on the winning side would receive the estates confiscated from the losers. In England in the 1060s, when William the Conqueror distributed the estates of the defeated Anglo-Saxon nobles to his followers, the conflict was military. By the 1530s, when Henry VIII distributed the estates of the monasteries to his followers, it was mostly political. [9] But the principle was the same. Indeed, the same principle is at work now in Zimbabwe.

In more organized societies, like China, the ruler and his officials used taxation instead of confiscation. But here too we see the same principle: the way to get rich was not to create wealth, but to serve a ruler powerful enough to appropriate it.

This started to change in Europe with the rise of the middle class. Now we think of the middle class as people who are neither rich nor poor, but originally they were a distinct group. In a feudal society, there are just two classes: a warrior aristocracy, and the serfs who work their estates. The middle class were a new, third group who lived in towns and supported themselves by manufacturing and trade.

Starting in the tenth and eleventh centuries, petty nobles and former serfs banded together in towns that gradually became powerful enough to ignore the local feudal lords. [10] Like serfs, the middle class made a living largely by creating wealth. (In port cities like Genoa and Pisa, they also engaged in piracy.) But unlike serfs they had an incentive to create a lot of it. Any wealth a serf created belonged to his master. There was not much point in making more than you could hide. Whereas the independence of the townsmen allowed them to keep whatever wealth they created.

Once it became possible to get rich by creating wealth, society as a whole started to get richer very rapidly. Nearly everything we have was created by the middle class. Indeed, the other two classes have effectively disappeared in industrial societies, and their names been given to either end of the middle class.
(In the original sense of the word, Bill Gates is middle class.)

But it was not till the Industrial Revolution that wealth creation definitively replaced corruption as the best way to get rich. In England, at least, corruption only became unfashionable (and in fact only started to be called "corruption") when there started to be other, faster ways to get rich.

Seventeenth-century England was much like the third world today, in that government office was a recognized route to wealth. The great fortunes of that time still derived more from what we would now call corruption than from commerce. [11] By the nineteenth century that had changed. There continued to be bribes, as there still are everywhere, but politics had by then been left to men who were driven more by vanity than greed. Technology had made it possible to create wealth faster than you could steal it. The prototypical rich man of the nineteenth century was not a courtier but an industrialist.

With the rise of the middle class, wealth stopped being a zero-sum game. Jobs and Wozniak didn't have to make us poor to make themselves rich. Quite the opposite: they created things that made our lives materially richer. They had to, or we wouldn't have paid for them.

But since for most of the world's history the main route to wealth was to steal it, we tend to be suspicious of rich people. Idealistic undergraduates find their unconsciously preserved child's model of wealth confirmed by eminent writers of the past. It is a case of the mistaken meeting the outdated.

"Behind every great fortune, there is a crime," Balzac wrote. Except he didn't. What he actually said was that a great fortune with no apparent cause was probably due to a crime well enough executed that it had been forgotten. If we were talking about Europe in 1000, or most of the third world today, the standard misquotation would be spot on. But Balzac lived in nineteenth-century France, where the Industrial Revolution was well advanced. He knew you could make a fortune without stealing it. After all, he did himself, as a popular novelist. [12]

Only a few countries (by no coincidence, the richest ones) have reached this stage. In most, corruption still has the upper hand. In most, the fastest way to get wealth is by stealing it. And so when we see increasing differences in income in a rich country, there is a tendency to worry that it's sliding back toward becoming another Venezuela. I think the opposite is happening. I think you're seeing a country a full step ahead of Venezuela.

The Lever of Technology

Will technology increase the gap between rich and poor? It will certainly increase the gap between the productive and the unproductive. That's the whole point of technology. With a tractor an energetic farmer could plow six times as much land in a day as he could with a team of horses. But only if he mastered a new kind of farming.

I've seen the lever of technology grow visibly in my own time. In high school I made money by mowing lawns and scooping ice cream at Baskin-Robbins. This was the only kind of work available at the time. Now high school kids could write software or design web sites. But only some of them will; the rest will still be scooping ice cream.

I remember very vividly when in 1985 improved technology made it possible for me to buy a computer of my own. Within months I was using it to make money as a freelance programmer. A few years before, I couldn't have done this.
A few years before, there was no such thing as a freelance programmer. But Apple created wealth, in the form of powerful, inexpensive computers, and programmers immediately set to work using it to create more.

As this example suggests, the rate at which technology increases our productive capacity is probably exponential, rather than linear. So we should expect to see ever-increasing variation in individual productivity as time goes on. Will that increase the gap between rich and the poor? Depends which gap you mean.

Technology should increase the gap in income, but it seems to decrease other gaps. A hundred years ago, the rich led a different kind of life from ordinary people. They lived in houses full of servants, wore elaborately uncomfortable clothes, and travelled about in carriages drawn by teams of horses which themselves required their own houses and servants. Now, thanks to technology, the rich live more like the average person.

Cars are a good example of why. It's possible to buy expensive, handmade cars that cost hundreds of thousands of dollars. But there is not much point. Companies make more money by building a large number of ordinary cars than a small number of expensive ones. So a company making a mass-produced car can afford to spend a lot more on its design. If you buy a custom-made car, something will always be breaking. The only point of buying one now is to advertise that you can.

Or consider watches. Fifty years ago, by spending a lot of money on a watch you could get better performance. When watches had mechanical movements, expensive watches kept better time. Not any more. Since the invention of the quartz movement, an ordinary Timex is more accurate than a Patek Philippe costing hundreds of thousands of dollars. [13] Indeed, as with expensive cars, if you're determined to spend a lot of money on a watch, you have to put up with some inconvenience to do it: as well as keeping worse time, mechanical watches have to be wound.

The only thing technology can't cheapen is brand. Which is precisely why we hear ever more about it. Brand is the residue left as the substantive differences between rich and poor evaporate. But what label you have on your stuff is a much smaller matter than having it versus not having it. In 1900, if you kept a carriage, no one asked what year or brand it was. If you had one, you were rich. And if you weren't rich, you took the omnibus or walked. Now even the poorest Americans drive cars, and it is only because we're so well trained by advertising that we can even recognize the especially expensive ones. [14]

The same pattern has played out in industry after industry. If there is enough demand for something, technology will make it cheap enough to sell in large volumes, and the mass-produced versions will be, if not better, at least more convenient. [15] And there is nothing the rich like more than convenience. The rich people I know drive the same cars, wear the same clothes, have the same kind of furniture, and eat the same foods as my other friends. Their houses are in different neighborhoods, or if in the same neighborhood are different sizes, but within them life is similar. The houses are made using the same construction techniques and contain much the same objects. It's inconvenient to do something expensive and custom.

The rich spend their time more like everyone else too. Bertie Wooster seems long gone. Now, most people who are rich enough not to work do anyway.
It's not just social pressure that makes them; idleness is lonely and demoralizing.

Nor do we have the social distinctions there were a hundred years ago. The novels and etiquette manuals of that period read now like descriptions of some strange tribal society. "With respect to the continuance of friendships..." hints Mrs. Beeton's Book of Household Management (1880), "it may be found necessary, in some cases, for a mistress to relinquish, on assuming the responsibility of a household, many of those commenced in the earlier part of her life." A woman who married a rich man was expected to drop friends who didn't. You'd seem a barbarian if you behaved that way today. You'd also have a very boring life. People still tend to segregate themselves somewhat, but much more on the basis of education than wealth. [16]

Materially and socially, technology seems to be decreasing the gap between the rich and the poor, not increasing it. If Lenin walked around the offices of a company like Yahoo or Intel or Cisco, he'd think communism had won. Everyone would be wearing the same clothes, have the same kind of office (or rather, cubicle) with the same furnishings, and address one another by their first names instead of by honorifics. Everything would seem exactly as he'd predicted, until he looked at their bank accounts. Oops.

Is it a problem if technology increases that gap? It doesn't seem to be so far. As it increases the gap in income, it seems to decrease most other gaps.

Alternative to an Axiom

One often hears a policy criticized on the grounds that it would increase the income gap between rich and poor. As if it were an axiom that this would be bad. It might be true that increased variation in income would be bad, but I don't see how we can say it's axiomatic.

Indeed, it may even be false, in industrial democracies. In a society of serfs and warlords, certainly, variation in income is a sign of an underlying problem. But serfdom is not the only cause of variation in income. A 747 pilot doesn't make 40 times as much as a checkout clerk because he is a warlord who somehow holds her in thrall. His skills are simply much more valuable.

I'd like to propose an alternative idea: that in a modern society, increasing variation in income is a sign of health. Technology seems to increase the variation in productivity at faster than linear rates. If we don't see corresponding variation in income, there are three possible explanations: (a) that technical innovation has stopped, (b) that the people who would create the most wealth aren't doing it, or (c) that they aren't getting paid for it.

I think we can safely say that (a) and (b) would be bad. If you disagree, try living for a year using only the resources available to the average Frankish nobleman in 800, and report back to us. (I'll be generous and not send you back to the stone age.)

The only option, if you're going to have an increasingly prosperous society without increasing variation in income, seems to be (c), that people will create a lot of wealth without being paid for it. That Jobs and Wozniak, for example, will cheerfully work 20-hour days to produce the Apple computer for a society that allows them, after taxes, to keep just enough of their income to match what they would have made working 9 to 5 at a big company.

Will people create wealth if they can't get paid for it? Only if it's fun. People will write operating systems for free.
But they won't install them, or take support calls, or train customers to use them. And at least 90% of the work that even the highest tech companies do is of this second, unedifying kind.

All the unfun kinds of wealth creation slow dramatically in a society that confiscates private fortunes. We can confirm this empirically. Suppose you hear a strange noise that you think may be due to a nearby fan. You turn the fan off, and the noise stops. You turn the fan back on, and the noise starts again. Off, quiet. On, noise. In the absence of other information, it would seem the noise is caused by the fan.

At various times and places in history, whether you could accumulate a fortune by creating wealth has been turned on and off. Northern Italy in 800, off (warlords would steal it). Northern Italy in 1100, on. Central France in 1100, off (still feudal). England in 1800, on. England in 1974, off (98% tax on investment income). United States in 1974, on. We've even had a twin study: West Germany, on; East Germany, off. In every case, the creation of wealth seems to appear and disappear like the noise of a fan as you switch on and off the prospect of keeping it.

There is some momentum involved. It probably takes at least a generation to turn people into East Germans (luckily for England). But if it were merely a fan we were studying, without all the extra baggage that comes from the controversial topic of wealth, no one would have any doubt that the fan was causing the noise.

If you suppress variations in income, whether by stealing private fortunes, as feudal rulers used to do, or by taxing them away, as some modern governments have done, the result always seems to be the same. Society as a whole ends up poorer.

If I had a choice of living in a society where I was materially much better off than I am now, but was among the poorest, or in one where I was the richest, but much worse off than I am now, I'd take the first option. If I had children, it would arguably be immoral not to. It's absolute poverty you want to avoid, not relative poverty. If, as the evidence so far implies, you have to have one or the other in your society, take relative poverty.

You need rich people in your society not so much because in spending their money they create jobs, but because of what they have to do to get rich. I'm not talking about the trickle-down effect here. I'm not saying that if you let Henry Ford get rich, he'll hire you as a waiter at his next party. I'm saying that he'll make you a tractor to replace your horse.

Notes

[1] Part of the reason this subject is so contentious is that some of those most vocal on the subject of wealth — university students, heirs, professors, politicians, and journalists — have the least experience creating it. (This phenomenon will be familiar to anyone who has overheard conversations about sports in a bar.)

Students are mostly still on the parental dole, and have not stopped to think about where that money comes from. Heirs will be on the parental dole for life. Professors and politicians live within socialist eddies of the economy, at one remove from the creation of wealth, and are paid a flat rate regardless of how hard they work. And journalists as part of their professional code segregate themselves from the revenue-collecting half of the businesses they work for (the ad sales department).
Many of these people never come face to face with the fact that the money they receive represents wealth — wealth that, except in the case of journalists, someone else created earlier. They live in a world in which income is doled out by a central authority according to some abstract notion of fairness (or randomly, in the case of heirs), rather than given by other people in return for something they wanted, so it may seem to them unfair that things don't work the same in the rest of the economy.

(Some professors do create a great deal of wealth for society. But the money they're paid isn't a quid pro quo. It's more in the nature of an investment.)

[2] When one reads about the origins of the Fabian Society, it sounds like something cooked up by the high-minded Edwardian child-heroes of Edith Nesbit's The Wouldbegoods.

[3] According to a study by the Corporate Library, the median total compensation, including salary, bonus, stock grants, and the exercise of stock options, of S&P 500 CEOs in 2002 was $3.65 million. According to Sports Illustrated, the average NBA player's salary during the 2002-03 season was $4.54 million, and the average major league baseball player's salary at the start of the 2003 season was $2.56 million. According to the Bureau of Labor Statistics, the mean annual wage in the US in 2002 was $35,560.

[4] In the early empire the price of an ordinary adult slave seems to have been about 2,000 sestertii (e.g. Horace, Sat. ii.7.43). A servant girl cost 600 (Martial vi.66), while Columella (iii.3.8) says that a skilled vine-dresser was worth 8,000. A doctor, P. Decimus Eros Merula, paid 50,000 sestertii for his freedom (Dessau, Inscriptiones 7812). Seneca (Ep. xxvii.7) reports that one Calvisius Sabinus paid 100,000 sestertii apiece for slaves learned in the Greek classics. Pliny (Hist. Nat. vii.39) says that the highest price paid for a slave up to his time was 700,000 sestertii, for the linguist (and presumably teacher) Daphnis, but that this had since been exceeded by actors buying their own freedom.

Classical Athens saw a similar variation in prices. An ordinary laborer was worth about 125 to 150 drachmae. Xenophon (Mem. ii.5) mentions prices ranging from 50 to 6,000 drachmae (for the manager of a silver mine).

For more on the economics of ancient slavery see:

Jones, A. H. M., "Slavery in the Ancient World," Economic History Review, 2:9 (1956), 185-199, reprinted in Finley, M. I. (ed.), Slavery in Classical Antiquity, Heffer, 1964.

[5] Eratosthenes (276—195 BC) used shadow lengths in different cities to estimate the Earth's circumference. He was off by only about 2%.

[6] No, and Windows, respectively.

[7] One of the biggest divergences between the Daddy Model and reality is the valuation of hard work. In the Daddy Model, hard work is in itself deserving. In reality, wealth is measured by what one delivers, not how much effort it costs. If I paint someone's house, the owner shouldn't pay me extra for doing it with a toothbrush.

It will seem to someone still implicitly operating on the Daddy Model that it is unfair when someone works hard and doesn't get paid much. To help clarify the matter, get rid of everyone else and put our worker on a desert island, hunting and gathering fruit. If he's bad at it he'll work very hard and not end up with much food. Is this unfair?
Who is being unfair to him?

[8] Part of the reason for the tenacity of the Daddy Model may be the dual meaning of "distribution." When economists talk about "distribution of income," they mean statistical distribution. But when you use the phrase frequently, you can't help associating it with the other sense of the word (as in e.g. "distribution of alms"), and thereby subconsciously seeing wealth as something that flows from some central tap. The word "regressive" as applied to tax rates has a similar effect, at least on me; how can anything regressive be good?

[9] "From the beginning of the reign Thomas Lord Roos was an assiduous courtier of the young Henry VIII and was soon to reap the rewards. In 1525 he was made a Knight of the Garter and given the Earldom of Rutland. In the thirties his support of the breach with Rome, his zeal in crushing the Pilgrimage of Grace, and his readiness to vote the death-penalty in the succession of spectacular treason trials that punctuated Henry's erratic matrimonial progress made him an obvious candidate for grants of monastic property."

Stone, Lawrence, Family and Fortune: Studies in Aristocratic Finance in the Sixteenth and Seventeenth Centuries, Oxford University Press, 1973, p. 166.

[10] There is archaeological evidence for large settlements earlier, but it's hard to say what was happening in them.

Hodges, Richard and David Whitehouse, Mohammed, Charlemagne and the Origins of Europe, Cornell University Press, 1983.

[11] William Cecil and his son Robert were each in turn the most powerful minister of the crown, and both used their position to amass fortunes among the largest of their times. Robert in particular took bribery to the point of treason. "As Secretary of State and the leading advisor to King James on foreign policy, [he] was a special recipient of favour, being offered large bribes by the Dutch not to make peace with Spain, and large bribes by Spain to make peace." (Stone, op. cit., p. 17.)

[12] Though Balzac made a lot of money from writing, he was notoriously improvident and was troubled by debts all his life.

[13] A Timex will gain or lose about .5 seconds per day. The most accurate mechanical watch, the Patek Philippe 10 Day Tourbillon, is rated at -1.5 to +2 seconds. Its retail price is about $220,000.

[14] If asked to choose which was more expensive, a well-preserved 1989 Lincoln Town Car ten-passenger limousine ($5,000) or a 2004 Mercedes S600 sedan ($122,000), the average Edwardian might well guess wrong.

[15] To say anything meaningful about income trends, you have to talk about real income, or income as measured in what it can buy. But the usual way of calculating real income ignores much of the growth in wealth over time, because it depends on a consumer price index created by bolting end to end a series of numbers that are only locally accurate, and that don't include the prices of new inventions until they become so common that their prices stabilize.

So while we might think it was very much better to live in a world with antibiotics or air travel or an electric power grid than without, real income statistics calculated in the usual way will prove to us that we are only slightly richer for having these things.

Another approach would be to ask, if you were going back to the year x in a time machine, how much would you have to spend on trade goods to make your fortune?
For example, if you were going back to 1970 it would certainly be less than $500, because the processing power you can get for $500 today would have been worth at least $150 million in 1970. The function goes asymptotic fairly quickly, because for times over a hundred years or so you could get all you needed in present-day trash. In 1800 an empty plastic drink bottle with a screw top would have seemed a miracle of workmanship.

[16] Some will say this amounts to the same thing, because the rich have better opportunities for education. That's a valid point. It is still possible, to a degree, to buy your kids' way into top colleges by sending them to private schools that in effect hack the college admissions process.

According to a 2002 report by the National Center for Education Statistics, about 1.7% of American kids attend private, non-sectarian schools. At Princeton, 36% of the class of 2007 came from such schools. (Interestingly, the number at Harvard is significantly lower, about 28%.) Obviously this is a huge loophole. It does at least seem to be closing, not widening.

Perhaps the designers of admissions processes should take a lesson from the example of computer security, and instead of just assuming that their system can't be hacked, measure the degree to which it is."}

{"title": "gh", "text": "

Want to start a startup? Get funded by Y Combinator.

July 2004

(This essay is derived from a talk at Oscon 2004.)

A few months ago I finished a new book, and in reviews I keep noticing words like "provocative'' and "controversial.'' To say nothing of "idiotic.''

I didn't mean to make the book controversial. I was trying to make it efficient. I didn't want to waste people's time telling them things they already knew. It's more efficient just to give them the diffs. But I suppose that's bound to yield an alarming book.

Edisons

There's no controversy about which idea is most controversial: the suggestion that variation in wealth might not be as big a problem as we think.

I didn't say in the book that variation in wealth was in itself a good thing. I said in some situations it might be a sign of good things. A throbbing headache is not a good thing, but it can be a sign of a good thing-- for example, that you're recovering consciousness after being hit on the head.

Variation in wealth can be a sign of variation in productivity. (In a society of one, they're identical.) And that is almost certainly a good thing: if your society has no variation in productivity, it's probably not because everyone is Thomas Edison. It's probably because you have no Thomas Edisons.

In a low-tech society you don't see much variation in productivity. If you have a tribe of nomads collecting sticks for a fire, how much more productive is the best stick gatherer going to be than the worst? A factor of two? Whereas when you hand people a complex tool like a computer, the variation in what they can do with it is enormous.

That's not a new idea. Fred Brooks wrote about it in 1974, and the study he quoted was published in 1968. But I think he underestimated the variation between programmers. He wrote about productivity in lines of code: the best programmers can solve a given problem in a tenth the time. But what if the problem isn't given? In programming, as in many fields, the hard part isn't solving problems, but deciding what problems to solve.
Imagination is hard to measure, but in practice it dominates the kind of productivity that's measured in lines of code.

Productivity varies in any field, but there are few in which it varies so much. The variation between programmers is so great that it becomes a difference in kind. I don't think this is something intrinsic to programming, though. In every field, technology magnifies differences in productivity. I think what's happening in programming is just that we have a lot of technological leverage. But in every field the lever is getting longer, so the variation we see is something that more and more fields will see as time goes on. And the success of companies, and countries, will depend increasingly on how they deal with it.

If variation in productivity increases with technology, then the contribution of the most productive individuals will not only be disproportionately large, but will actually grow with time. When you reach the point where 90% of a group's output is created by 1% of its members, you lose big if something (whether Viking raids, or central planning) drags their productivity down to the average.

If we want to get the most out of them, we need to understand these especially productive people. What motivates them? What do they need to do their jobs? How do you recognize them? How do you get them to come and work for you? And then of course there's the question, how do you become one?

More than Money

I know a handful of super-hackers, so I sat down and thought about what they have in common. Their defining quality is probably that they really love to program. Ordinary programmers write code to pay the bills. Great hackers think of it as something they do for fun, and which they're delighted to find people will pay them for.

Great programmers are sometimes said to be indifferent to money. This isn't quite true. It is true that all they really care about is doing interesting work. But if you make enough money, you get to work on whatever you want, and for that reason hackers are attracted by the idea of making really large amounts of money. But as long as they still have to show up for work every day, they care more about what they do there than how much they get paid for it.

Economically, this is a fact of the greatest importance, because it means you don't have to pay great hackers anything like what they're worth. A great programmer might be ten or a hundred times as productive as an ordinary one, but he'll consider himself lucky to get paid three times as much. As I'll explain later, this is partly because great hackers don't know how good they are. But it's also because money is not the main thing they want.

What do hackers want? Like all craftsmen, hackers like good tools. In fact, that's an understatement. Good hackers find it unbearable to use bad tools. They'll simply refuse to work on projects with the wrong infrastructure.

At a startup I once worked for, one of the things pinned up on our bulletin board was an ad from IBM. It was a picture of an AS400, and the headline read, I think, "hackers despise it.'' [1]

When you decide what infrastructure to use for a project, you're not just making a technical decision. You're also making a social decision, and this may be the more important of the two. For example, if your company wants to write some software, it might seem a prudent choice to write it in Java. But when you choose a language, you're also choosing a community.
The programmers you'll\nbe able to hire to work on a Java project won't be as\nsmart as the\nones you could get to work on a project written in Python.\nAnd the quality of your hackers probably matters more than the\nlanguage you choose. Though, frankly, the fact that good hackers\nprefer Python to Java should tell you something about the relative\nmerits of those languages.Business types prefer the most popular languages because they view\nlanguages as standards. They don't want to bet the company on\nBetamax. The thing about languages, though, is that they're not\njust standards. If you have to move bits over a network, by all\nmeans use TCP/IP. But a programming language isn't just a format.\nA programming language is a medium of expression.I've read that Java has just overtaken Cobol as the most popular\nlanguage. As a standard, you couldn't wish for more. But as a\nmedium of expression, you could do a lot better. Of all the great\nprogrammers I can think of, I know of only one who would voluntarily\nprogram in Java. And of all the great programmers I can think of\nwho don't work for Sun, on Java, I know of zero.Great hackers also generally insist on using open source software.\nNot just because it's better, but because it gives them more control.\nGood hackers insist on control. This is part of what makes them\ngood hackers: when something's broken, they need to fix it. You\nwant them to feel this way about the software they're writing for\nyou. You shouldn't be surprised when they feel the same way about\nthe operating system.A couple years ago a venture capitalist friend told me about a new\nstartup he was involved with. It sounded promising. But the next\ntime I talked to him, he said they'd decided to build their software\non Windows NT, and had just hired a very experienced NT developer\nto be their chief technical officer. When I heard this, I thought,\nthese guys are doomed. One, the CTO couldn't be a first rate\nhacker, because to become an eminent NT developer he would have\nhad to use NT voluntarily, multiple times, and I couldn't imagine\na great hacker doing that; and two, even if he was good, he'd have\na hard time hiring anyone good to work for him if the project had\nto be built on NT. [2]The Final FrontierAfter software, the most important tool to a hacker is probably\nhis office. Big companies think the function of office space is to express\nrank. But hackers use their offices for more than that: they\nuse their office as a place to think in. And if you're a technology\ncompany, their thoughts are your product. So making hackers work\nin a noisy, distracting environment is like having a paint factory\nwhere the air is full of soot.The cartoon strip Dilbert has a lot to say about cubicles, and with\ngood reason. All the hackers I know despise them. The mere prospect\nof being interrupted is enough to prevent hackers from working on\nhard problems. If you want to get real work done in an office with\ncubicles, you have two options: work at home, or come in early or\nlate or on a weekend, when no one else is there. Don't companies\nrealize this is a sign that something is broken? An office\nenvironment is supposed to be something that helps\nyou work, not something you work despite.Companies like Cisco are proud that everyone there has a cubicle,\neven the CEO. But they're not so advanced as they think; obviously\nthey still view office space as a badge of rank. 
Note too that\nCisco is famous for doing very little product development in house.\nThey get new technology by buying the startups that created it-- where\npresumably the hackers did have somewhere quiet to work.One big company that understands what hackers need is Microsoft.\nI once saw a recruiting ad for Microsoft with a big picture of a\ndoor. Work for us, the premise was, and we'll give you a place to\nwork where you can actually get work done. And you know, Microsoft\nis remarkable among big companies in that they are able to develop\nsoftware in house. Not well, perhaps, but well enough.If companies want hackers to be productive, they should look at\nwhat they do at home. At home, hackers can arrange things themselves\nso they can get the most done. And when they work at home, hackers\ndon't work in noisy, open spaces; they work in rooms with doors. They\nwork in cosy, neighborhoody places with people around and somewhere\nto walk when they need to mull something over, instead of in glass\nboxes set in acres of parking lots. They have a sofa they can take\na nap on when they feel tired, instead of sitting in a coma at\ntheir desk, pretending to work. There's no crew of people with\nvacuum cleaners that roars through every evening during the prime\nhacking hours. There are no meetings or, God forbid, corporate\nretreats or team-building exercises. And when you look at what\nthey're doing on that computer, you'll find it reinforces what I\nsaid earlier about tools. They may have to use Java and Windows\nat work, but at home, where they can choose for themselves, you're\nmore likely to find them using Perl and Linux.Indeed, these statistics about Cobol or Java being the most popular\nlanguage can be misleading. What we ought to look at, if we want\nto know what tools are best, is what hackers choose when they can\nchoose freely-- that is, in projects of their own. When you ask\nthat question, you find that open source operating systems already\nhave a dominant market share, and the number one language is probably\nPerl.InterestingAlong with good tools, hackers want interesting projects. What\nmakes a project interesting? Well, obviously overtly sexy\napplications like stealth planes or special effects software would\nbe interesting to work on. But any application can be interesting\nif it poses novel technical challenges. So it's hard to predict\nwhich problems hackers will like, because some become\ninteresting only when the people working on them discover a new\nkind of solution. Before ITA\n(who wrote the software inside Orbitz),\nthe people working on airline fare searches probably thought it\nwas one of the most boring applications imaginable. But ITA made\nit interesting by \nredefining the problem in a more ambitious way.I think the same thing happened at Google. When Google was founded,\nthe conventional wisdom among the so-called portals was that search\nwas boring and unimportant. But the guys at Google didn't think\nsearch was boring, and that's why they do it so well.This is an area where managers can make a difference. Like a parent\nsaying to a child, I bet you can't clean up your whole room in\nten minutes, a good manager can sometimes redefine a problem as a\nmore interesting one. Steve Jobs seems to be particularly good at\nthis, in part simply by having high standards. There were a lot\nof small, inexpensive computers before the Mac. He redefined the\nproblem as: make one that's beautiful. 
And that probably drove\nthe developers harder than any carrot or stick could.They certainly delivered. When the Mac first appeared, you didn't\neven have to turn it on to know it would be good; you could tell\nfrom the case. A few weeks ago I was walking along the street in\nCambridge, and in someone's trash I saw what appeared to be a Mac\ncarrying case. I looked inside, and there was a Mac SE. I carried\nit home and plugged it in, and it booted. The happy Macintosh\nface, and then the finder. My God, it was so simple. It was just\nlike ... Google.Hackers like to work for people with high standards. But it's not\nenough just to be exacting. You have to insist on the right things.\nWhich usually means that you have to be a hacker yourself. I've\nseen occasional articles about how to manage programmers. Really\nthere should be two articles: one about what to do if\nyou are yourself a programmer, and one about what to do if you're not. And the \nsecond could probably be condensed into two words: give up.The problem is not so much the day to day management. Really good\nhackers are practically self-managing. The problem is, if you're\nnot a hacker, you can't tell who the good hackers are. A similar\nproblem explains why American cars are so ugly. I call it the\ndesign paradox. You might think that you could make your products\nbeautiful just by hiring a great designer to design them. But if\nyou yourself don't have good taste, \nhow are you going to recognize\na good designer? By definition you can't tell from his portfolio.\nAnd you can't go by the awards he's won or the jobs he's had,\nbecause in design, as in most fields, those tend to be driven by\nfashion and schmoozing, with actual ability a distant third.\nThere's no way around it: you can't manage a process intended to\nproduce beautiful things without knowing what beautiful is. American\ncars are ugly because American car companies are run by people with\nbad taste.Many people in this country think of taste as something elusive,\nor even frivolous. It is neither. To drive design, a manager must\nbe the most demanding user of a company's products. And if you\nhave really good taste, you can, as Steve Jobs does, make satisfying\nyou the kind of problem that good people like to work on.Nasty Little ProblemsIt's pretty easy to say what kinds of problems are not interesting:\nthose where instead of solving a few big, clear, problems, you have\nto solve a lot of nasty little ones. One of the worst kinds of\nprojects is writing an interface to a piece of software that's\nfull of bugs. Another is when you have to customize\nsomething for an individual client's complex and ill-defined needs.\nTo hackers these kinds of projects are the death of a thousand\ncuts.The distinguishing feature of nasty little problems is that you\ndon't learn anything from them. Writing a compiler is interesting\nbecause it teaches you what a compiler is. But writing an interface\nto a buggy piece of software doesn't teach you anything, because the\nbugs are random. [3] So it's not just fastidiousness that makes good\nhackers avoid nasty little problems. It's more a question of\nself-preservation. Working on nasty little problems makes you\nstupid. Good hackers avoid it for the same reason models avoid\ncheeseburgers.Of course some problems inherently have this character. And because\nof supply and demand, they pay especially well. So a company that\nfound a way to get great hackers to work on tedious problems would\nbe very successful. 
How would you do it?One place this happens is in startups. At our startup we had \nRobert Morris working as a system administrator. That's like having the\nRolling Stones play at a bar mitzvah. You can't hire that kind of\ntalent. But people will do any amount of drudgery for companies\nof which they're the founders. [4]Bigger companies solve the problem by partitioning the company.\nThey get smart people to work for them by establishing a separate\nR&D department where employees don't have to work directly on\ncustomers' nasty little problems. [5] In this model, the research\ndepartment functions like a mine. They produce new ideas; maybe\nthe rest of the company will be able to use them.You may not have to go to this extreme. \nBottom-up programming\nsuggests another way to partition the company: have the smart people\nwork as toolmakers. If your company makes software to do x, have\none group that builds tools for writing software of that type, and\nanother that uses these tools to write the applications. This way\nyou might be able to get smart people to write 99% of your code,\nbut still keep them almost as insulated from users as they would\nbe in a traditional research department. The toolmakers would have\nusers, but they'd only be the company's own developers. [6]If Microsoft used this approach, their software wouldn't be so full\nof security holes, because the less smart people writing the actual\napplications wouldn't be doing low-level stuff like allocating\nmemory. Instead of writing Word directly in C, they'd be plugging\ntogether big Lego blocks of Word-language. (Duplo, I believe, is\nthe technical term.)ClumpingAlong with interesting problems, what good hackers like is other\ngood hackers. Great hackers tend to clump together-- sometimes\nspectacularly so, as at Xerox Parc. So you won't attract good\nhackers in linear proportion to how good an environment you create\nfor them. The tendency to clump means it's more like the square\nof the environment. So it's winner take all. At any given time,\nthere are only about ten or twenty places where hackers most want to\nwork, and if you aren't one of them, you won't just have fewer\ngreat hackers, you'll have zero.Having great hackers is not, by itself, enough to make a company\nsuccessful. It works well for Google and ITA, which are two of\nthe hot spots right now, but it didn't help Thinking Machines or\nXerox. Sun had a good run for a while, but their business model\nis a down elevator. In that situation, even the best hackers can't\nsave you.I think, though, that all other things being equal, a company that\ncan attract great hackers will have a huge advantage. There are\npeople who would disagree with this. When we were making the rounds\nof venture capital firms in the 1990s, several told us that software\ncompanies didn't win by writing great software, but through brand,\nand dominating channels, and doing the right deals.They really seemed to believe this, and I think I know why. I\nthink what a lot of VCs are looking for, at least unconsciously,\nis the next Microsoft. And of course if Microsoft is your model,\nyou shouldn't be looking for companies that hope to win by writing\ngreat software. But VCs are mistaken to look for the next Microsoft,\nbecause no startup can be the next Microsoft unless some other\ncompany is prepared to bend over at just the right moment and be\nthe next IBM.It's a mistake to use Microsoft as a model, because their whole\nculture derives from that one lucky break. 
Microsoft is a bad data\npoint. If you throw them out, you find that good products do tend\nto win in the market. What VCs should be looking for is the next\nApple, or the next Google.I think Bill Gates knows this. What worries him about Google is\nnot the power of their brand, but the fact that they have\nbetter hackers. [7]\nRecognitionSo who are the great hackers? How do you know when you meet one?\nThat turns out to be very hard. Even hackers can't tell. I'm\npretty sure now that my friend Trevor Blackwell is a great hacker.\nYou may have read on Slashdot how he made his \nown Segway. The\nremarkable thing about this project was that he wrote all the\nsoftware in one day (in Python, incidentally).For Trevor, that's\npar for the course. But when I first met him, I thought he was a\ncomplete idiot. He was standing in Robert Morris's office babbling\nat him about something or other, and I remember standing behind\nhim making frantic gestures at Robert to shoo this nut out of his\noffice so we could go to lunch. Robert says he misjudged Trevor\nat first too. Apparently when Robert first met him, Trevor had\njust begun a new scheme that involved writing down everything about\nevery aspect of his life on a stack of index cards, which he carried\nwith him everywhere. He'd also just arrived from Canada, and had\na strong Canadian accent and a mullet.The problem is compounded by the fact that hackers, despite their\nreputation for social obliviousness, sometimes put a good deal of\neffort into seeming smart. When I was in grad school I used to\nhang around the MIT AI Lab occasionally. It was kind of intimidating\nat first. Everyone there spoke so fast. But after a while I\nlearned the trick of speaking fast. You don't have to think any\nfaster; just use twice as many words to say everything. With this amount of noise in the signal, it's hard to tell good\nhackers when you meet them. I can't tell, even now. You also\ncan't tell from their resumes. It seems like the only way to judge\na hacker is to work with him on something.And this is the reason that high-tech areas \nonly happen around universities. The active ingredient\nhere is not so much the professors as the students. Startups grow up\naround universities because universities bring together promising young\npeople and make them work on the same projects. The\nsmart ones learn who the other smart ones are, and together\nthey cook up new projects of their own.Because you can't tell a great hacker except by working with him,\nhackers themselves can't tell how good they are. This is true to\na degree in most fields. I've found that people who\nare great at something are not so much convinced of their own\ngreatness as mystified at why everyone else seems so incompetent.\nBut it's particularly hard for hackers to know how good they are,\nbecause it's hard to compare their work. This is easier in most\nother fields. In the hundred meters, you know in 10 seconds who's\nfastest. Even in math there seems to be a general consensus about\nwhich problems are hard to solve, and what constitutes a good\nsolution. But hacking is like writing. Who can say which of two\nnovels is better? Certainly not the authors.With hackers, at least, other hackers can tell. That's because,\nunlike novelists, hackers collaborate on projects. When you get\nto hit a few difficult problems over the net at someone, you learn\npretty quickly how hard they hit them back. But hackers can't\nwatch themselves at work. 
So if you ask a great hacker how good\nhe is, he's almost certain to reply, I don't know. He's not just\nbeing modest. He really doesn't know.And none of us know, except about people we've actually worked\nwith. Which puts us in a weird situation: we don't know who our\nheroes should be. The hackers who become famous tend to become\nfamous by random accidents of PR. Occasionally I need to give an\nexample of a great hacker, and I never know who to use. The first\nnames that come to mind always tend to be people I know personally,\nbut it seems lame to use them. So, I think, maybe I should say\nRichard Stallman, or Linus Torvalds, or Alan Kay, or someone famous\nlike that. But I have no idea if these guys are great hackers.\nI've never worked with them on anything.If there is a Michael Jordan of hacking, no one knows, including\nhim.CultivationFinally, the question the hackers have all been wondering about:\nhow do you become a great hacker? I don't know if it's possible\nto make yourself into one. But it's certainly possible to do things\nthat make you stupid, and if you can make yourself stupid, you\ncan probably make yourself smart too.The key to being a good hacker may be to work on what you like.\nWhen I think about the great hackers I know, one thing they have\nin common is the extreme \ndifficulty of making them work \non anything they\ndon't want to. I don't know if this is cause or effect; it may be\nboth.To do something well you have to love it. \nSo to the extent you\ncan preserve hacking as something you love, you're likely to do it\nwell. Try to keep the sense of wonder you had about programming at\nage 14. If you're worried that your current job is rotting your\nbrain, it probably is.The best hackers tend to be smart, of course, but that's true in\na lot of fields. Is there some quality that's unique to hackers?\nI asked some friends, and the number one thing they mentioned was\ncuriosity. \nI'd always supposed that all smart people were curious--\nthat curiosity was simply the first derivative of knowledge. But\napparently hackers are particularly curious, especially about how\nthings work. That makes sense, because programs are in effect\ngiant descriptions of how things work.Several friends mentioned hackers' ability to concentrate-- their\nability, as one put it, to \"tune out everything outside their own\nheads.'' I've certainly noticed this. And I've heard several \nhackers say that after drinking even half a beer they can't program at\nall. So maybe hacking does require some special ability to focus.\nPerhaps great hackers can load a large amount of context into their\nhead, so that when they look at a line of code, they see not just\nthat line but the whole program around it. John McPhee\nwrote that Bill Bradley's success as a basketball player was due\npartly to his extraordinary peripheral vision. \"Perfect'' eyesight\nmeans about 47 degrees of vertical peripheral vision. Bill Bradley\nhad 70; he could see the basket when he was looking at the floor.\nMaybe great hackers have some similar inborn ability. (I cheat by\nusing a very dense language, \nwhich shrinks the court.)This could explain the disconnect over cubicles. Maybe the people\nin charge of facilities, not having any concentration to shatter,\nhave no idea that working in a cubicle feels to a hacker like having\none's brain in a blender. 
(Whereas Bill, if the rumors of autism\nare true, knows all too well.)One difference I've noticed between great hackers and smart people\nin general is that hackers are more \npolitically incorrect. To the\nextent there is a secret handshake among good hackers, it's when they\nknow one another well enough to express opinions that would get\nthem stoned to death by the general public. And I can see why\npolitical incorrectness would be a useful quality in programming.\nPrograms are very complex and, at least in the hands of good\nprogrammers, very fluid. In such situations it's helpful to have\na habit of questioning assumptions.Can you cultivate these qualities? I don't know. But you can at\nleast not repress them. So here is my best shot at a recipe. If\nit is possible to make yourself into a great hacker, the way to do\nit may be to make the following deal with yourself: you never have\nto work on boring projects (unless your family will starve otherwise),\nand in return, you'll never allow yourself to do a half-assed job.\nAll the great hackers I know seem to have made that deal, though\nperhaps none of them had any choice in the matter.Notes\n[1] In fairness, I have to say that IBM makes decent hardware. I\nwrote this on an IBM laptop.[2] They did turn out to be doomed. They shut down a few months\nlater.[3] I think this is what people mean when they talk\nabout the \"meaning of life.\" On the face of it, this seems an \nodd idea. Life isn't an expression; how could it have meaning?\nBut it can have a quality that feels a lot like meaning. In a project\nlike a compiler, you have to solve a lot of problems, but the problems\nall fall into a pattern, as in a signal. Whereas when the problems\nyou have to solve are random, they seem like noise.\n[4] Einstein at one point worked designing refrigerators. (He had equity.)[5] It's hard to say exactly what constitutes research in the\ncomputer world, but as a first approximation, it's software that\ndoesn't have users.I don't think it's publication that makes the best hackers want to work\nin research departments. I think it's mainly not having to have a\nthree hour meeting with a product manager about problems integrating\nthe Korean version of Word 13.27 with the talking paperclip.[6] Something similar has been happening for a long time in the\nconstruction industry. When you had a house built a couple hundred\nyears ago, the local builders built everything in it. But increasingly\nwhat builders do is assemble components designed and manufactured\nby someone else. This has, like the arrival of desktop publishing,\ngiven people the freedom to experiment in disastrous ways, but it\nis certainly more efficient.[7] Google is much more dangerous to Microsoft than Netscape was.\nProbably more dangerous than any other company has ever been. Not\nleast because they're determined to fight. On their job listing\npage, they say that one of their \"core values'' is \"Don't be evil.''\nFrom a company selling soybean oil or mining equipment, such a\nstatement would merely be eccentric. But I think all of us in the\ncomputer world recognize who that is a declaration of war on.Thanks to Jessica Livingston, Robert Morris, and Sarah Harlin\nfor reading earlier versions of this talk."} {"title": "mod", "text": "December 2019There are two distinct ways to be politically moderate: on purpose\nand by accident. 
Intentional moderates are trimmers, deliberately\nchoosing a position mid-way between the extremes of right and left.\nAccidental moderates end up in the middle, on average, because they\nmake up their own minds about each question, and the far right and\nfar left are roughly equally wrong.You can distinguish intentional from accidental moderates by the\ndistribution of their opinions. If the far left opinion on some\nmatter is 0 and the far right opinion 100, an intentional moderate's\nopinion on every question will be near 50. Whereas an accidental\nmoderate's opinions will be scattered over a broad range, but will,\nlike those of the intentional moderate, average to about 50.Intentional moderates are similar to those on the far left and the\nfar right in that their opinions are, in a sense, not their own.\nThe defining quality of an ideologue, whether on the left or the\nright, is to acquire one's opinions in bulk. You don't get to pick\nand choose. Your opinions about taxation can be predicted from your\nopinions about sex. And although intentional moderates\nmight seem to be the opposite of ideologues, their beliefs (though\nin their case the word "positions" might be more accurate) are also\nacquired in bulk. If the median opinion shifts to the right or left,\nthe intentional moderate must shift with it. Otherwise they stop\nbeing moderate.Accidental moderates, on the other hand, not only choose their own\nanswers, but choose their own questions. They may not care at all\nabout questions that the left and right both think are terribly\nimportant. So you can only even measure the politics of an accidental\nmoderate from the intersection of the questions they care about and\nthose the left and right care about, and this can\nsometimes be vanishingly small.It is not merely a manipulative rhetorical trick to say "if you're\nnot with us, you're against us," but often simply false.Moderates are sometimes derided as cowards, particularly by \nthe extreme left. But while it may be accurate to call intentional\nmoderates cowards, openly being an accidental moderate requires the\nmost courage of all, because you get attacked from both right and\nleft, and you don't have the comfort of being an orthodox member\nof a large group to sustain you.Nearly all the most impressive people I know are accidental moderates.\nIf I knew a lot of professional athletes, or people in the entertainment\nbusiness, that might be different. Being on the far left or far\nright doesn't affect how fast you run or how well you sing. But\nsomeone who works with ideas has to be independent-minded to do it\nwell.Or more precisely, you have to be independent-minded about the ideas\nyou work with. You could be mindlessly doctrinaire in your politics\nand still be a good mathematician. In the 20th century, a lot of\nvery smart people were Marxists -- just no one who was smart about\nthe subjects Marxism involves. But if the ideas you use in your\nwork intersect with the politics of your time, you have two choices:\nbe an accidental moderate, or be mediocre.Notes[1] It's possible in theory for one side to be entirely right and\nthe other to be entirely wrong. Indeed, ideologues must always\nbelieve this is the case. But historically it rarely has been.[2] For some reason the far right tend to ignore moderates rather\nthan despise them as backsliders. I'm not sure why. Perhaps it\nmeans that the far right is less ideological than the far left. 
Or\nperhaps that they are more confident, or more resigned, or simply\nmore disorganized. I just don't know.[3] Having heretical opinions doesn't mean you have to express\nthem openly. It may be\neasier to have them if you don't.\nThanks to Austen Allred, Trevor Blackwell, Patrick Collison, Jessica Livingston,\nAmjad Masad, Ryan Petersen, and Harj Taggar for reading drafts of this."} {"title": "rootsoflisp", "text": "May 2001\n\n(I wrote this article to help myself understand exactly\nwhat McCarthy discovered. You don't need to know this stuff\nto program in Lisp, but it should be helpful to \nanyone who wants to\nunderstand the essence of Lisp -- both in the sense of its\norigins and its semantic core. The fact that it has such a core\nis one of Lisp's distinguishing features, and the reason why,\nunlike other languages, Lisp has dialects.)In 1960, John \nMcCarthy published a remarkable paper in\nwhich he did for programming something like what Euclid did for\ngeometry. He showed how, given a handful of simple\noperators and a notation for functions, you can\nbuild a whole programming language.\nHe called this language Lisp, for "List Processing,"\nbecause one of his key ideas was to use a simple\ndata structure called a list for both\ncode and data.It's worth understanding what McCarthy discovered, not\njust as a landmark in the history of computers, but as\na model for what programming is tending to become in\nour own time. It seems to me that there have been\ntwo really clean, consistent models of programming so\nfar: the C model and the Lisp model.\nThese two seem points of high ground, with swampy lowlands\nbetween them. As computers have grown more powerful,\nthe new languages being developed have been moving\nsteadily toward the Lisp model. A popular recipe\nfor new programming languages in the past 20 years \nhas been to take the C model of computing and add to\nit, piecemeal, parts taken from the Lisp model,\nlike runtime typing and garbage collection.In this article I'm going to try to explain in the\nsimplest possible terms what McCarthy discovered.\nThe point is not just to learn about an interesting\ntheoretical result someone figured out forty years ago,\nbut to show where languages are heading.\nThe unusual thing about Lisp -- in fact, the defining\nquality of Lisp -- is that it can be written in\nitself. To understand what McCarthy meant by this,\nwe're going to retrace his steps, with his mathematical\nnotation translated into running Common Lisp code."} {"title": "siliconvalley", "text": "May 2006(This essay is derived from a keynote at Xtech.)Could you reproduce Silicon Valley elsewhere, or is there something\nunique about it?It wouldn't be surprising if it were hard to reproduce in other\ncountries, because you couldn't reproduce it in most of the US\neither. What does it take to make a silicon valley even here?What it takes is the right people. If you could get the right ten\nthousand people to move from Silicon Valley to Buffalo, Buffalo\nwould become Silicon Valley. \n[1]That's a striking departure from the past. Up till a couple decades\nago, geography was destiny for cities. All great cities were located\non waterways, because cities made money by trade, and water was the\nonly economical way to ship.Now you could make a great city anywhere, if you could get the right\npeople to move there. 
So the question of how to make a silicon\nvalley becomes: who are the right people, and how do you get them\nto move?Two TypesI think you only need two kinds of people to create a technology\nhub: rich people and nerds. They're the limiting reagents in the\nreaction that produces startups, because they're the only ones\npresent when startups get started. Everyone else will move.Observation bears this out: within the US, towns have become startup\nhubs if and only if they have both rich people and nerds. Few\nstartups happen in Miami, for example, because although it's full\nof rich people, it has few nerds. It's not the kind of place nerds\nlike.Whereas Pittsburgh has the opposite problem: plenty of nerds, but\nno rich people. The top US Computer Science departments are said\nto be MIT, Stanford, Berkeley, and Carnegie-Mellon. MIT yielded\nRoute 128. Stanford and Berkeley yielded Silicon Valley. But\nCarnegie-Mellon? The record skips at that point. Lower down the\nlist, the University of Washington yielded a high-tech community\nin Seattle, and the University of Texas at Austin yielded one in\nAustin. But what happened in Pittsburgh? And in Ithaca, home of\nCornell, which is also high on the list?I grew up in Pittsburgh and went to college at Cornell, so I can\nanswer for both. The weather is terrible, particularly in winter,\nand there's no interesting old city to make up for it, as there is\nin Boston. Rich people don't want to live in Pittsburgh or Ithaca.\nSo while there are plenty of hackers who could start startups,\nthere's no one to invest in them.Not BureaucratsDo you really need the rich people? Wouldn't it work to have the\ngovernment invest in the nerds? No, it would not. Startup investors\nare a distinct type of rich people. They tend to have a lot of\nexperience themselves in the technology business. This (a) helps\nthem pick the right startups, and (b) means they can supply advice\nand connections as well as money. And the fact that they have a\npersonal stake in the outcome makes them really pay attention.Bureaucrats by their nature are the exact opposite sort of people\nfrom startup investors. The idea of them making startup investments\nis comic. It would be like mathematicians running Vogue-- or\nperhaps more accurately, Vogue editors running a math journal.\n[2]Though indeed, most things bureaucrats do, they do badly. We just\ndon't notice usually, because they only have to compete against\nother bureaucrats. But as startup investors they'd have to compete\nagainst pros with a great deal more experience and motivation.Even corporations that have in-house VC groups generally forbid\nthem to make their own investment decisions. Most are only allowed\nto invest in deals where some reputable private VC firm is willing\nto act as lead investor.Not BuildingsIf you go to see Silicon Valley, what you'll see are buildings.\nBut it's the people that make it Silicon Valley, not the buildings.\nI read occasionally about attempts to set up \"technology\nparks\" in other places, as if the active ingredient of Silicon\nValley were the office space. An article about Sophia Antipolis\nbragged that companies there included Cisco, Compaq, IBM, NCR, and\nNortel. Don't the French realize these aren't startups?Building office buildings for technology companies won't get you a\nsilicon valley, because the key stage in the life of a startup\nhappens before they want that kind of space. The key stage is when\nthey're three guys operating out of an apartment. 
Wherever the\nstartup is when it gets funded, it will stay. The defining quality\nof Silicon Valley is not that Intel or Apple or Google have offices\nthere, but that they were started there.So if you want to reproduce Silicon Valley, what you need to reproduce\nis those two or three founders sitting around a kitchen table\ndeciding to start a company. And to reproduce that you need those\npeople.UniversitiesThe exciting thing is, all you need are the people. If you could\nattract a critical mass of nerds and investors to live somewhere,\nyou could reproduce Silicon Valley. And both groups are highly\nmobile. They'll go where life is good. So what makes a place good\nto them?What nerds like is other nerds. Smart people will go wherever other\nsmart people are. And in particular, to great universities. In\ntheory there could be other ways to attract them, but so far\nuniversities seem to be indispensable. Within the US, there are\nno technology hubs without first-rate universities-- or at least,\nfirst-rate computer science departments.So if you want to make a silicon valley, you not only need a\nuniversity, but one of the top handful in the world. It has to be\ngood enough to act as a magnet, drawing the best people from thousands\nof miles away. And that means it has to stand up to existing magnets\nlike MIT and Stanford.This sounds hard. Actually it might be easy. My professor friends,\nwhen they're deciding where they'd like to work, consider one thing\nabove all: the quality of the other faculty. What attracts professors\nis good colleagues. So if you managed to recruit, en masse, a\nsignificant number of the best young researchers, you could create\na first-rate university from nothing overnight. And you could do\nthat for surprisingly little. If you paid 200 people hiring bonuses\nof $3 million apiece, you could put together a faculty that would\nbear comparison with any in the world. And from that point the\nchain reaction would be self-sustaining. So whatever it costs to\nestablish a mediocre university, for an additional half billion or\nso you could have a great one. \n[3]PersonalityHowever, merely creating a new university would not be enough to\nstart a silicon valley. The university is just the seed. It has\nto be planted in the right soil, or it won't germinate. Plant it\nin the wrong place, and you just create Carnegie-Mellon.To spawn startups, your university has to be in a town that has\nattractions other than the university. It has to be a place where\ninvestors want to live, and students want to stay after they graduate.The two like much the same things, because most startup investors\nare nerds themselves. So what do nerds look for in a town? Their\ntastes aren't completely different from other people's, because a\nlot of the towns they like most in the US are also big tourist\ndestinations: San Francisco, Boston, Seattle. But their tastes\ncan't be quite mainstream either, because they dislike other big\ntourist destinations, like New York, Los Angeles, and Las Vegas.There has been a lot written lately about the \"creative class.\" The\nthesis seems to be that as wealth derives increasingly from ideas,\ncities will prosper only if they attract those who have them. 
That\nis certainly true; in fact it was the basis of Amsterdam's prosperity\n400 years ago.A lot of nerd tastes they share with the creative class in general.\nFor example, they like well-preserved old neighborhoods instead of\ncookie-cutter suburbs, and locally-owned shops and restaurants\ninstead of national chains. Like the rest of the creative class,\nthey want to live somewhere with personality.What exactly is personality? I think it's the feeling that each\nbuilding is the work of a distinct group of people. A town with\npersonality is one that doesn't feel mass-produced. So if you want\nto make a startup hub-- or any town to attract the \"creative class\"--\nyou probably have to ban large development projects.\nWhen a large tract has been developed by a single organization, you\ncan always tell. \n[4]Most towns with personality are old, but they don't have to be.\nOld towns have two advantages: they're denser, because they were\nlaid out before cars, and they're more varied, because they were\nbuilt one building at a time. You could have both now. Just have\nbuilding codes that ensure density, and ban large scale developments.A corollary is that you have to keep out the biggest developer of\nall: the government. A government that asks \"How can we build a\nsilicon valley?\" has probably ensured failure by the way they framed\nthe question. You don't build a silicon valley; you let one grow.NerdsIf you want to attract nerds, you need more than a town with\npersonality. You need a town with the right personality. Nerds\nare a distinct subset of the creative class, with different tastes\nfrom the rest. You can see this most clearly in New York, which\nattracts a lot of creative people, but few nerds. \n[5]What nerds like is the kind of town where people walk around smiling.\nThis excludes LA, where no one walks at all, and also New York,\nwhere people walk, but not smiling. When I was in grad school in\nBoston, a friend came to visit from New York. On the subway back\nfrom the airport she asked \"Why is everyone smiling?\" I looked and\nthey weren't smiling. They just looked like they were compared to\nthe facial expressions she was used to.If you've lived in New York, you know where these facial expressions\ncome from. It's the kind of place where your mind may be excited,\nbut your body knows it's having a bad time. People don't so much\nenjoy living there as endure it for the sake of the excitement.\nAnd if you like certain kinds of excitement, New York is incomparable.\nIt's a hub of glamour, a magnet for all the shorter half-life\nisotopes of style and fame.Nerds don't care about glamour, so to them the appeal of New York\nis a mystery. People who like New York will pay a fortune for a\nsmall, dark, noisy apartment in order to live in a town where the\ncool people are really cool. A nerd looks at that deal and sees\nonly: pay a fortune for a small, dark, noisy apartment.Nerds will pay a premium to live in a town where the smart people\nare really smart, but you don't have to pay as much for that. It's\nsupply and demand: glamour is popular, so you have to pay a lot for\nit.Most nerds like quieter pleasures. They like cafes instead of\nclubs; used bookshops instead of fashionable clothing shops; hiking\ninstead of dancing; sunlight instead of tall buildings. A nerd's\nidea of paradise is Berkeley or Boulder.YouthIt's the young nerds who start startups, so it's those specifically\nthe city has to appeal to. The startup hubs in the US are all\nyoung-feeling towns. 
This doesn't mean they have to be new.\nCambridge has the oldest town plan in America, but it feels young\nbecause it's full of students.What you can't have, if you want to create a silicon valley, is a\nlarge, existing population of stodgy people. It would be a waste\nof time to try to reverse the fortunes of a declining industrial town\nlike Detroit or Philadelphia by trying to encourage startups. Those\nplaces have too much momentum in the wrong direction. You're better\noff starting with a blank slate in the form of a small town. Or\nbetter still, if there's a town young people already flock to, that\none.The Bay Area was a magnet for the young and optimistic for decades\nbefore it was associated with technology. It was a place people\nwent in search of something new. And so it became synonymous with\nCalifornia nuttiness. There's still a lot of that there. If you\nwanted to start a new fad-- a new way to focus one's \"energy,\" for\nexample, or a new category of things not to eat-- the Bay Area would\nbe the place to do it. But a place that tolerates oddness in the\nsearch for the new is exactly what you want in a startup hub, because\neconomically that's what startups are. Most good startup ideas\nseem a little crazy; if they were obviously good ideas, someone\nwould have done them already.(How many people are going to want computers in their houses?\nWhat, another search engine?)That's the connection between technology and liberalism. Without\nexception the high-tech cities in the US are also the most liberal.\nBut it's not because liberals are smarter that this is so. It's\nbecause liberal cities tolerate odd ideas, and smart people by\ndefinition have odd ideas.Conversely, a town that gets praised for being \"solid\" or representing\n\"traditional values\" may be a fine place to live, but it's never\ngoing to succeed as a startup hub. The 2004 presidential election,\nthough a disaster in other respects, conveniently supplied us with\na county-by-county \nmap of such places. \n[6]To attract the young, a town must have an intact center. In most\nAmerican cities the center has been abandoned, and the growth, if\nany, is in the suburbs. Most American cities have been turned\ninside out. But none of the startup hubs has: not San Francisco,\nor Boston, or Seattle. They all have intact centers.\n[7]\nMy guess is that no city with a dead center could be turned into a\nstartup hub. Young people don't want to live in the suburbs.Within the US, the two cities I think could most easily be turned\ninto new silicon valleys are Boulder and Portland. Both have the\nkind of effervescent feel that attracts the young. They're each\nonly a great university short of becoming a silicon valley, if they\nwanted to.TimeA great university near an attractive town. Is that all it takes?\nThat was all it took to make the original Silicon Valley. Silicon\nValley traces its origins to William Shockley, one of the inventors\nof the transistor. He did the research that won him the Nobel Prize\nat Bell Labs, but when he started his own company in 1956 he moved\nto Palo Alto to do it. At the time that was an odd thing to do.\nWhy did he? Because he had grown up there and remembered how nice\nit was. Now Palo Alto is suburbia, but then it was a charming\ncollege town-- a charming college town with perfect weather and San\nFrancisco only an hour away.The companies that rule Silicon Valley now are all descended in\nvarious ways from Shockley Semiconductor. 
Shockley was a difficult\nman, and in 1957 his top people-- "the traitorous eight"-- left to\nstart a new company, Fairchild Semiconductor. Among them were\nGordon Moore and Robert Noyce, who went on to found Intel, and\nEugene Kleiner, who founded the VC firm Kleiner Perkins. Forty-two\nyears later, Kleiner Perkins funded Google, and the partner responsible\nfor the deal was John Doerr, who came to Silicon Valley in 1974 to\nwork for Intel.So although a lot of the newest companies in Silicon Valley don't\nmake anything out of silicon, there always seem to be multiple links\nback to Shockley. There's a lesson here: startups beget startups.\nPeople who work for startups start their own. People who get rich\nfrom startups fund new ones. I suspect this kind of organic growth\nis the only way to produce a startup hub, because it's the only way\nto grow the expertise you need.That has two important implications. The first is that you need\ntime to grow a silicon valley. The university you could create in\na couple years, but the startup community around it has to grow\norganically. The cycle time is limited by the time it takes a\ncompany to succeed, which probably averages about five years.The other implication of the organic growth hypothesis is that you\ncan't be somewhat of a startup hub. You either have a self-sustaining\nchain reaction, or not. Observation confirms this too: cities\neither have a startup scene, or they don't. There is no middle\nground. Chicago has the third largest metropolitan area in America.\nAs a source of startups it's negligible compared to Seattle, number 15.The good news is that the initial seed can be quite small. Shockley\nSemiconductor, though itself not very successful, was big enough.\nIt brought a critical mass of experts in an important new technology\ntogether in a place they liked enough to stay.CompetingOf course, a would-be silicon valley faces an obstacle the original\none didn't: it has to compete with Silicon Valley. Can that be\ndone? Probably.One of Silicon Valley's biggest advantages is its venture capital\nfirms. This was not a factor in Shockley's day, because VC funds\ndidn't exist. In fact, Shockley Semiconductor and Fairchild\nSemiconductor were not startups at all in our sense. They were\nsubsidiaries-- of Beckman Instruments and Fairchild Camera and\nInstrument respectively. Those companies were apparently willing\nto establish subsidiaries wherever the experts wanted to live.Venture investors, however, prefer to fund startups within an hour's\ndrive. For one, they're more likely to notice startups nearby.\nBut when they do notice startups in other towns they prefer them\nto move. They don't want to have to travel to attend board meetings,\nand in any case the odds of succeeding are higher in a startup hub.The centralizing effect of venture firms is a double one: they cause\nstartups to form around them, and those draw in more startups through\nacquisitions. And although the first may be weakening because it's\nnow so cheap to start some startups, the second seems as strong as ever.\nThree of the most admired\n"Web 2.0" companies were started outside the usual startup hubs,\nbut two of them have already been reeled in through acquisitions.Such centralizing forces make it harder for new silicon valleys to\nget started. But by no means impossible. Ultimately power rests\nwith the founders. A startup with the best people will beat one\nwith funding from famous VCs, and a startup that was sufficiently\nsuccessful would never have to move. 
So a town that\ncould exert enough pull over the right people could resist and\nperhaps even surpass Silicon Valley.For all its power, Silicon Valley has a great weakness: the paradise\nShockley found in 1956 is now one giant parking lot. San Francisco\nand Berkeley are great, but they're forty miles away. Silicon\nValley proper is soul-crushing suburban sprawl. It\nhas fabulous weather, which makes it significantly better than the\nsoul-crushing sprawl of most other American cities. But a competitor\nthat managed to avoid sprawl would have real leverage. All a city\nneeds is to be the kind of place the next traitorous eight look at\nand say "I want to stay here," and that would be enough to get the\nchain reaction started.Notes[1]\nIt's interesting to consider how low this number could be\nmade. I suspect five hundred would be enough, even if they could\nbring no assets with them. Probably just thirty, if I could pick them, \nwould be enough to turn Buffalo into a significant startup hub.[2]\nBureaucrats manage to allocate research funding moderately\nwell, but only because (like an in-house VC fund) they outsource\nmost of the work of selection. A professor at a famous university\nwho is highly regarded by his peers will get funding, pretty much\nregardless of the proposal. That wouldn't work for startups, whose\nfounders aren't sponsored by organizations, and are often unknowns.[3]\nYou'd have to do it all at once, or at least a whole department\nat a time, because people would be more likely to come if they\nknew their friends were. And you should probably start from scratch,\nrather than trying to upgrade an existing university, or much energy\nwould be lost in friction.[4]\nHypothesis: Any plan in which multiple independent buildings\nare gutted or demolished to be "redeveloped" as a single project\nis a net loss of personality for the city, with the exception of\nthe conversion of buildings not previously public, like warehouses.[5]\nA few startups get started in New York, but less\nthan a tenth as many per capita as in Boston, and mostly\nin less nerdy fields like finance and media.[6]\nSome blue counties are false positives (reflecting the\nremaining power of Democratic party machines), but there are no\nfalse negatives. You can safely write off all the red counties.[7]\nSome "urban renewal" experts took a shot at destroying Boston's\nin the 1960s, leaving the area around city hall a bleak wasteland,\nbut most neighborhoods successfully resisted them.Thanks to Chris Anderson, Trevor Blackwell, Marc Hedlund,\nJessica Livingston, Robert Morris, Greg Mcadoo, Fred Wilson,\nand Stephen Wolfram for\nreading drafts of this, and to Ed Dumbill for inviting me to speak.(The second part of this talk became Why Startups\nCondense in America.)"} {"title": "corpdev", "text": "January 2015Corporate Development, aka corp dev, is the group within companies\nthat buys other companies. If you're talking to someone from corp\ndev, that's why, whether you realize it yet or not.It's usually a mistake to talk to corp dev unless (a) you want to\nsell your company right now and (b) you're sufficiently likely to\nget an offer at an acceptable price. In practice that means startups\nshould only talk to corp dev when they're either doing really well\nor really badly. If you're doing really badly, meaning the company\nis about to die, you may as well talk to them, because you have\nnothing to lose. 
And if you're doing really well, you can safely\ntalk to them, because you both know the price will have to be high,\nand if they show the slightest sign of wasting your time, you'll\nbe confident enough to tell them to get lost.The danger is to companies in the middle. Particularly to young\ncompanies that are growing fast, but haven't been doing it for long\nenough to have grown big yet. It's usually a mistake for a promising\ncompany less than a year old even to talk to corp dev.But it's a mistake founders constantly make. When someone from\ncorp dev wants to meet, the founders tell themselves they should\nat least find out what they want. Besides, they don't want to\noffend Big Company by refusing to meet.Well, I'll tell you what they want. They want to talk about buying\nyou. That's what the title \"corp dev\" means. So before agreeing\nto meet with someone from corp dev, ask yourselves, \"Do we want to\nsell the company right now?\" And if the answer is no, tell them\n\"Sorry, but we're focusing on growing the company.\" They won't be\noffended. And certainly the founders of Big Company won't be\noffended. If anything they'll think more highly of you. You'll\nremind them of themselves. They didn't sell either; that's why\nthey're in a position now to buy other companies.\n[1]Most founders who get contacted by corp dev already know what it\nmeans. And yet even when they know what corp dev does and know\nthey don't want to sell, they take the meeting. Why do they do it?\nThe same mix of denial and wishful thinking that underlies most\nmistakes founders make. It's flattering to talk to someone who wants\nto buy you. And who knows, maybe their offer will be surprisingly\nhigh. You should at least see what it is, right?No. If they were going to send you an offer immediately by email,\nsure, you might as well open it. But that is not how conversations\nwith corp dev work. If you get an offer at all, it will be at the\nend of a long and unbelievably distracting process. And if the\noffer is surprising, it will be surprisingly low.Distractions are the thing you can least afford in a startup. And\nconversations with corp dev are the worst sort of distraction,\nbecause as well as consuming your attention they undermine your\nmorale. One of the tricks to surviving a grueling process is not\nto stop and think how tired you are. Instead you get into a sort\nof flow. \n[2]\nImagine what it would do to you if at mile 20 of a\nmarathon, someone ran up beside you and said \"You must feel really\ntired. Would you like to stop and take a rest?\" Conversations\nwith corp dev are like that but worse, because the suggestion of\nstopping gets combined in your mind with the imaginary high price\nyou think they'll offer.And then you're really in trouble. If they can, corp dev people\nlike to turn the tables on you. They like to get you to the point\nwhere you're trying to convince them to buy instead of them trying\nto convince you to sell. And surprisingly often they succeed.This is a very slippery slope, greased with some of the most powerful\nforces that can work on founders' minds, and attended by an experienced\nprofessional whose full time job is to push you down it.Their tactics in pushing you down that slope are usually fairly\nbrutal. Corp dev people's whole job is to buy companies, and they\ndon't even get to choose which. The only way their performance is\nmeasured is by how cheaply they can buy you, and the more ambitious\nones will stop at nothing to achieve that. 
For example, they'll\nalmost always start with a lowball offer, just to see if you'll\ntake it. Even if you don't, a low initial offer will demoralize you\nand make you easier to manipulate.And that is the most innocent of their tactics. Just wait till\nyou've agreed on a price and think you have a done deal, and then\nthey come back and say their boss has vetoed the deal and won't do\nit for more than half the agreed upon price. Happens all the time.\nIf you think investors can behave badly, it's nothing compared to\nwhat corp dev people can do. Even corp dev people at companies\nthat are otherwise benevolent.I remember once complaining to a\nfriend at Google about some nasty trick their corp dev people had\npulled on a YC startup.\"What happened to Don't be Evil?\" I asked.\"I don't think corp dev got the memo,\" he replied.The tactics you encounter in M&A conversations can be like nothing\nyou've experienced in the otherwise comparatively \nupstanding world\nof Silicon Valley. It's as if a chunk of genetic material from the\nold-fashioned robber baron business world got incorporated into the\nstartup world.\n[3]The simplest way to protect yourself is to use the trick that John\nD. Rockefeller, whose grandfather was an alcoholic, used to protect\nhimself from becoming one. He once told a Sunday school class\n\n Boys, do you know why I never became a drunkard? Because I never\n took the first drink.\n\nDo you want to sell your company right now? Not eventually, right\nnow. If not, just don't take the first meeting. They won't be\noffended. And you in turn will be guaranteed to be spared one of\nthe worst experiences that can happen to a startup.If you do want to sell, there's another set of \ntechniques\n for doing\nthat. But the biggest mistake founders make in dealing with corp\ndev is not doing a bad job of talking to them when they're ready\nto, but talking to them before they are. So if you remember only\nthe title of this essay, you already know most of what you need to\nknow about M&A in the first year.Notes[1]\nI'm not saying you should never sell. I'm saying you should\nbe clear in your own mind about whether you want to sell or not,\nand not be led by manipulation or wishful thinking into trying to\nsell earlier than you otherwise would have.[2]\nIn a startup, as in most competitive sports, the task at hand\nalmost does this for you; you're too busy to feel tired. But when\nyou lose that protection, e.g. at the final whistle, the fatigue\nhits you like a wave. To talk to corp dev is to let yourself feel\nit mid-game.[3]\nTo be fair, the apparent misdeeds of corp dev people are magnified\nby the fact that they function as the face of a large organization\nthat often doesn't know its own mind. Acquirers can be surprisingly\nindecisive about acquisitions, and their flakiness is indistinguishable\nfrom dishonesty by the time it filters down to you.Thanks to Marc Andreessen, Jessica Livingston, Geoff\nRalston, and Qasar Younis for reading drafts of this."} {"title": "langdes", "text": "May 2001\n\n(These are some notes I made\nfor a panel discussion on programming language design\nat MIT on May 10, 2001.)1. Programming Languages Are for People.Programming languages\nare how people talk to computers. The computer would be just as\nhappy speaking any language that was unambiguous. The reason we\nhave high level languages is because people can't deal with\nmachine language. 
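(An aside of my own, not part of the original notes: the gap between those two levels is easy to see in code. Here is the same trivial computation, summing a list, written twice in Common Lisp. The first version works the way machine language makes you think, all explicit jumps and mutation; the second leaves that detail to the language. Both function names are invented for the example.)\n\n    ;; Machine-style: explicit state and explicit jumps, roughly the\n    ;; level of detail machine language demands for even a trivial task.\n    (defun sum-machine-style (xs)\n      (let ((total 0))\n        (tagbody\n         loop\n           (when (null xs) (go done))\n           (setf total (+ total (car xs)))\n           (setf xs (cdr xs))\n           (go loop)\n         done)\n        total))\n\n    ;; High-level: the detail disappears into the language.\n    (defun sum-high-level (xs)\n      (reduce #'+ xs :initial-value 0))\n\n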
The point of programming languages is to prevent our poor frail human brains from being overwhelmed by a mass of detail.\n\nArchitects know that some kinds of design problems are more personal than others. One of the cleanest, most abstract design problems is designing bridges. There your job is largely a matter of spanning a given distance with the least material. The other end of the spectrum is designing chairs. Chair designers have to spend their time thinking about human butts.\n\nSoftware varies in the same way. Designing algorithms for routing data through a network is a nice, abstract problem, like designing bridges. Whereas designing programming languages is like designing chairs: it's all about dealing with human weaknesses.\n\nMost of us hate to acknowledge this. Designing systems of great mathematical elegance sounds a lot more appealing to most of us than pandering to human weaknesses. And there is a role for mathematical elegance: some kinds of elegance make programs easier to understand. But elegance is not an end in itself.\n\nAnd when I say languages have to be designed to suit human weaknesses, I don't mean that languages have to be designed for bad programmers. In fact I think you ought to design for the best programmers, but even the best programmers have limitations. I don't think anyone would like programming in a language where all the variables were the letter x with integer subscripts.\n\n2. Design for Yourself and Your Friends.\n\nIf you look at the history of programming languages, a lot of the best ones were languages designed for their own authors to use, and a lot of the worst ones were designed for other people to use.\n\nWhen languages are designed for other people, it's always a specific group of other people: people not as smart as the language designer. So you get a language that talks down to you. Cobol is the most extreme case, but a lot of languages are pervaded by this spirit.\n\nIt has nothing to do with how abstract the language is. C is pretty low-level, but it was designed for its authors to use, and that's why hackers like it.\n\nThe argument for designing languages for bad programmers is that there are more bad programmers than good programmers. That may be so. But those few good programmers write a disproportionately large percentage of the software.\n\nI'm interested in the question, how do you design a language that the very best hackers will like? I happen to think this is identical to the question, how do you design a good programming language?, but even if it isn't, it is at least an interesting question.\n\n3. Give the Programmer as Much Control as Possible.\n\nMany languages (especially the ones designed for other people) have the attitude of a governess: they try to prevent you from doing things that they think aren't good for you. I like the opposite approach: give the programmer as much control as you can.\n\nWhen I first learned Lisp, what I liked most about it was that it considered me an equal partner. In the other languages I had learned up till then, there was the language and there was my program, written in the language, and the two were very separate. But in Lisp the functions and macros I wrote were just like those that made up the language itself. I could rewrite the language if I wanted. It had the same appeal as open-source software.\n\n4. Aim for Brevity.\n\nBrevity is underestimated and even scorned. But if you look into the hearts of hackers, you'll see that they really love it.
How many times have you heard hackers speak fondly of how in, say, APL, they could do amazing things with just a couple lines of code? I think anything that really smart people really love is worth paying attention to.\n\nI think almost anything you can do to make programs shorter is good. There should be lots of library functions; anything that can be implicit should be; the syntax should be terse to a fault; even the names of things should be short.\n\nAnd it's not only programs that should be short. The manual should be thin as well. A good part of manuals is taken up with clarifications and reservations and warnings and special cases. If you force yourself to shorten the manual, in the best case you do it by fixing the things in the language that required so much explanation.\n\n5. Admit What Hacking Is.\n\nA lot of people wish that hacking was mathematics, or at least something like a natural science. I think hacking is more like architecture. Architecture is related to physics, in the sense that architects have to design buildings that don't fall down, but the actual goal of architects is to make great buildings, not to make discoveries about statics.\n\nWhat hackers like to do is make great programs. And I think, at least in our own minds, we have to remember that it's an admirable thing to write great programs, even when this work doesn't translate easily into the conventional intellectual currency of research papers. Intellectually, it is just as worthwhile to design a language programmers will love as it is to design a horrible one that embodies some idea you can publish a paper about.\n\n1. How to Organize Big Libraries?\n\nLibraries are becoming an increasingly important component of programming languages. They're also getting bigger, and this can be dangerous. If it takes longer to find the library function that will do what you want than it would take to write it yourself, then all that code is doing nothing but make your manual thick. (The Symbolics manuals were a case in point.) So I think we will have to work on ways to organize libraries. The ideal would be to design them so that the programmer could guess what library call would do the right thing.\n\n2. Are People Really Scared of Prefix Syntax?\n\nThis is an open problem in the sense that I have wondered about it for years and still don't know the answer. Prefix syntax seems perfectly natural to me, except possibly for math. But it could be that a lot of Lisp's unpopularity is simply due to having an unfamiliar syntax. Whether to do anything about it, if it is true, is another question.\n\n3. What Do You Need for Server-Based Software?\n\nI think a lot of the most exciting new applications that get written in the next twenty years will be Web-based applications, meaning programs that sit on the server and talk to you through a Web browser. And to write these kinds of programs we may need some new things.\n\nOne thing we'll need is support for the new way that server-based apps get released. Instead of having one or two big releases a year, like desktop software, server-based apps get released as a series of small changes. You may have as many as five or ten releases a day. And as a rule everyone will always use the latest version.\n\nYou know how you can design programs to be debuggable? Well, server-based software likewise has to be designed to be changeable.
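What might \"designed to be changeable\" look like? Here is a minimal sketch of my own, in Scheme; the dispatch-table design and every name in it are assumptions for illustration, not anything from the talk.\n\n  ;; The server consults a mutable dispatch table on every request,\n  ;; so a \"release\" can be as small as evaluating one new definition\n  ;; in the running image.\n  (define handlers '())                ; list of (path . procedure)\n\n  (define (set-handler! path proc)\n    (set! handlers (cons (cons path proc) handlers)))\n\n  (define (not-found req) \"404\")\n\n  (define (dispatch path req)\n    (let ((h (assoc path handlers)))\n      (if h ((cdr h) req) (not-found req))))\n\n  ;; Releasing one of those five or ten daily changes:\n  (set-handler! \"/checkout\" (lambda (req) \"new checkout page\"))\n\nBecause set-handler! adds to the front of the list and assoc returns the first match, redefining a handler takes effect on the very next request; that is one concrete sense in which a program can be designed for change.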
You have to be able to change it easily, or at least to know what is a small change and what is a momentous one.\n\nAnother thing that might turn out to be useful for server-based software, surprisingly, is continuations. In Web-based software you can use something like continuation-passing style to get the effect of subroutines in the inherently stateless world of a Web session. Maybe it would be worthwhile having actual continuations, if it was not too expensive. (A rough sketch of the continuation-passing trick appears below.)\n\n4. What New Abstractions Are Left to Discover?\n\nI'm not sure how reasonable a hope this is, but one thing I would really love to do, personally, is discover a new abstraction-- something that would make as much of a difference as having first class functions or recursion or even keyword parameters. This may be an impossible dream. These things don't get discovered that often. But I am always looking.\n\n1. You Can Use Whatever Language You Want.\n\nWriting application programs used to mean writing desktop software. And in desktop software there is a big bias toward writing the application in the same language as the operating system. And so ten years ago, writing software pretty much meant writing software in C. Eventually a tradition evolved: application programs must not be written in unusual languages. And this tradition had so long to develop that nontechnical people like managers and venture capitalists also learned it.\n\nServer-based software blows away this whole model. With server-based software you can use any language you want. Almost nobody understands this yet (especially not managers and venture capitalists). A few hackers understand it, and that's why we even hear about new, indy languages like Perl and Python. We're not hearing about Perl and Python because people are using them to write Windows apps.\n\nWhat this means for us, as people interested in designing programming languages, is that there is now potentially an actual audience for our work.\n\n2. Speed Comes from Profilers.\n\nLanguage designers, or at least language implementors, like to write compilers that generate fast code. But I don't think this is what makes languages fast for users. Knuth pointed out long ago that speed only matters in a few critical bottlenecks. And anyone who's tried it knows that you can't guess where these bottlenecks are. Profilers are the answer.\n\nLanguage designers are solving the wrong problem. Users don't need benchmarks to run fast. What they need is a language that can show them what parts of their own programs need to be rewritten. That's where speed comes from in practice. So maybe it would be a net win if language implementors took half the time they would have spent doing compiler optimizations and spent it writing a good profiler instead.\n\n3. You Need an Application to Drive the Design of a Language.\n\nThis may not be an absolute rule, but it seems like the best languages all evolved together with some application they were being used to write. C was written by people who needed it for systems programming. Lisp was developed partly to do symbolic differentiation, and McCarthy was so eager to get started that he was writing differentiation programs even in the first paper on Lisp, in 1960.\n\nIt's especially good if your application solves some new problem. That will tend to drive your language to have new features that programmers need.
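Here is the rough sketch of the continuation-passing trick promised above. It is mine, not from the talk; page, suspend, and resume are invented names, and a real version would render HTML and route HTTP requests.\n\n  ;; Each step of the \"subroutine\" is a procedure waiting for the\n  ;; user's next input. Between requests we park that procedure\n  ;; under a fresh key; the key goes into the next form's URL, and\n  ;; the request that comes back uses it to resume the session.\n  (define pending '())                 ; list of (key . step)\n  (define next-key 0)\n\n  (define (suspend step)\n    (set! next-key (+ next-key 1))\n    (set! pending (cons (cons next-key step) pending))\n    next-key)\n\n  (define (resume key input)\n    ((cdr (assoc key pending)) input))\n\n  (define (page text key)              ; stub: would render a form\n    (list text key))\n\n  ;; Three HTTP round trips that read like one straight-line routine:\n  (define (ask-name req)\n    (page \"Your name?\"\n          (suspend (lambda (name)\n            (page \"Your email?\"\n                  (suspend (lambda (email)\n                    (page (string-append \"Thanks, \" name) #f))))))))\n\nEach suspend does by hand what a language with real continuations could do with something like call/cc: capture the rest of the computation so a later request can restart it.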
I personally am interested in writing a language that will be good for writing server-based applications.\n\n[During the panel, Guy Steele also made this point, with the additional suggestion that the application should not consist of writing the compiler for your language, unless your language happens to be intended for writing compilers.]\n\n4. A Language Has to Be Good for Writing Throwaway Programs.\n\nYou know what a throwaway program is: something you write quickly for some limited task. I think if you looked around you'd find that a lot of big, serious programs started as throwaway programs. I would not be surprised if most programs started as throwaway programs. And so if you want to make a language that's good for writing software in general, it has to be good for writing throwaway programs, because that is the larval stage of most software.\n\n5. Syntax Is Connected to Semantics.\n\nIt's traditional to think of syntax and semantics as being completely separate. This will sound shocking, but it may be that they aren't. I think that what you want in your language may be related to how you express it.\n\nI was talking recently to Robert Morris, and he pointed out that operator overloading is a bigger win in languages with infix syntax. In a language with prefix syntax, any function you define is effectively an operator. If you want to define a plus for a new type of number you've made up, you can just define a new function to add them. If you do that in a language with infix syntax, there's a big difference in appearance between the use of an overloaded operator and a function call. (A small example of this appears below.)\n\n1. New Programming Languages.\n\nBack in the 1970s it was fashionable to design new programming languages. Recently it hasn't been. But I think server-based software will make new languages fashionable again. With server-based software, you can use any language you want, so if someone does design a language that actually seems better than others that are available, there will be people who take a risk and use it.\n\n2. Time-Sharing.\n\nRichard Kelsey gave this as an idea whose time has come again in the last panel, and I completely agree with him. My guess (and Microsoft's guess, it seems) is that much computing will move from the desktop onto remote servers. In other words, time-sharing is back. And I think there will need to be support for it at the language level. For example, I know that Richard and Jonathan Rees have done a lot of work implementing process scheduling within Scheme 48.\n\n3. Efficiency.\n\nRecently it was starting to seem that computers were finally fast enough. More and more we were starting to hear about byte code, which implies to me at least that we feel we have cycles to spare. But I don't think we will, with server-based software. Someone is going to have to pay for the servers that the software runs on, and the number of users they can support per machine will be the divisor of their capital cost.\n\nSo I think efficiency will matter, at least in computational bottlenecks. It will be especially important to do i/o fast, because server-based applications do a lot of i/o.\n\nIt may turn out that byte code is not a win, in the end. Sun and Microsoft seem to be facing off in a kind of a battle of the byte codes at the moment. But they're doing it because byte code is a convenient place to insert themselves into the process, not because byte code is in itself a good idea. It may turn out that this whole battleground gets bypassed. That would be kind of amusing.
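And here is the small example of the prefix-syntax point promised above; the interval type and all its names are my inventions, purely for illustration.\n\n  ;; A \"new type of number you've made up\": intervals. Giving it a\n  ;; plus is just defining a function, and in prefix syntax the call\n  ;; looks exactly like built-in addition.\n  (define (make-interval lo hi) (cons lo hi))\n\n  (define (interval+ a b)\n    (make-interval (+ (car a) (car b))\n                   (+ (cdr a) (cdr b))))\n\n  ;; Built-in:      (+ 1 2)\n  ;; User-defined:  (interval+ (make-interval 1 2) (make-interval 3 4))\n\nIn an infix language the two would look like different kinds of things, 1 + 2 versus something like interval_add(a, b), and overloading exists to paper over that difference. In prefix syntax there is no difference to paper over.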
1. Clients.\n\nThis is just a guess, but my guess is that the winning model for most applications will be purely server-based. Designing software that works on the assumption that everyone will have your client is like designing a society on the assumption that everyone will just be honest. It would certainly be convenient, but you have to assume it will never happen.\n\nI think there will be a proliferation of devices that have some kind of Web access, and all you'll be able to assume about them is that they can support simple html and forms. Will you have a browser on your cell phone? Will there be a phone in your palm pilot? Will your blackberry get a bigger screen? Will you be able to browse the Web on your gameboy? Your watch? I don't know. And I don't have to know if I bet on everything just being on the server. It's just so much more robust to have all the brains on the server.\n\n2. Object-Oriented Programming.\n\nI realize this is a controversial one, but I don't think object-oriented programming is such a big deal. I think it is a fine model for certain kinds of applications that need that specific kind of data structure, like window systems, simulations, and cad programs. But I don't see why it ought to be the model for all programming.\n\nI think part of the reason people in big companies like object-oriented programming is because it yields a lot of what looks like work. Something that might naturally be represented as, say, a list of integers, can now be represented as a class with all kinds of scaffolding and hustle and bustle.\n\nAnother attraction of object-oriented programming is that methods give you some of the effect of first class functions. But this is old news to Lisp programmers. When you have actual first class functions, you can just use them in whatever way is appropriate to the task at hand, instead of forcing everything into a mold of classes and methods.\n\nWhat this means for language design, I think, is that you shouldn't build object-oriented programming in too deeply. Maybe the answer is to offer more general, underlying stuff, and let people design whatever object systems they want as libraries.\n\n3. Design by Committee.\n\nHaving your language designed by a committee is a big pitfall, and not just for the reasons everyone knows about. Everyone knows that committees tend to yield lumpy, inconsistent designs. But I think a greater danger is that they won't take risks. When one person is in charge he can take risks that a committee would never agree on.\n\nIs it necessary to take risks to design a good language though? Many people might suspect that language design is something where you should stick fairly close to the conventional wisdom. I bet this isn't true. In everything else people do, reward is proportionate to risk. Why should language design be any different?"} {"title": "laundry", "text": "October 2004\nAs E. B. White said, \"good writing is rewriting.\" I didn't realize this when I was in school. In writing, as in math and science, they only show you the finished product. You don't see all the false starts. This gives students a misleading view of how things get made.\n\nPart of the reason it happens is that writers don't want people to see their mistakes.
But I'm willing to let people see an early draft if it will show how much you have to rewrite to beat an essay into shape.\n\nBelow is the oldest version I can find of The Age of the Essay (probably the second or third day), with text that ultimately survived in red and text that later got deleted in gray. There seem to be several categories of cuts: things I got wrong, things that seem like bragging, flames, digressions, stretches of awkward prose, and unnecessary words.\n\nI discarded more from the beginning. That's not surprising; it takes a while to hit your stride. There are more digressions at the start, because I'm not sure where I'm heading.\n\nThe amount of cutting is about average. I probably write three to four words for every one that appears in the final version of an essay.\n\n(Before anyone gets mad at me for opinions expressed here, remember that anything you see here that's not in the final version is obviously something I chose not to publish, often because I disagree with it.)\n\nRecently a friend said that what he liked about my essays was that they weren't written the way we'd been taught to write essays in school. You remember: topic sentence, introductory paragraph, supporting paragraphs, conclusion. It hadn't occurred to me till then that those horrible things we had to write in school were even connected to what I was doing now. But sure enough, I thought, they did call them \"essays,\" didn't they?\n\nWell, they're not. Those things you have to write in school are not only not essays, they're one of the most pointless of all the pointless hoops you have to jump through in school. And I worry that they not only teach students the wrong things about writing, but put them off writing entirely.\n\nSo I'm going to give the other side of the story: what an essay really is, and how you write one. Or at least, how I write one. Students be forewarned: if you actually write the kind of essay I describe, you'll probably get bad grades. But knowing how it's really done should at least help you to understand the feeling of futility you have when you're writing the things they tell you to.\n\nThe most obvious difference between real essays and the things one has to write in school is that real essays are not exclusively about English literature. It's a fine thing for schools to teach students how to write. But for some bizarre reason (actually, a very specific bizarre reason that I'll explain in a moment), the teaching of writing has gotten mixed together with the study of literature. And so all over the country, students are writing not about how a baseball team with a small budget might compete with the Yankees, or the role of color in fashion, or what constitutes a good dessert, but about symbolism in Dickens.\n\nWith obvious results. Only a few people really care about symbolism in Dickens. The teacher doesn't. The students don't. Most of the people who've had to write PhD dissertations about Dickens don't. And certainly Dickens himself would be more interested in an essay about color or baseball.\n\nHow did things get this way? To answer that we have to go back almost a thousand years. Between about 500 and 1000, life was not very good in Europe. The term \"dark ages\" is presently out of fashion as too judgemental (the period wasn't dark; it was just different), but if this label didn't already exist, it would seem an inspired metaphor.
What little original thought there was took place in lulls between constant wars and had something of the character of the thoughts of parents with a new baby. The most amusing thing written during this period, Liudprand of Cremona's Embassy to Constantinople, is, I suspect, mostly inadvertently so.\n\nAround 1000 Europe began to catch its breath. And once they had the luxury of curiosity, one of the first things they discovered was what we call \"the classics.\" Imagine if we were visited by aliens. If they could even get here they'd presumably know a few things we don't. Immediately Alien Studies would become the most dynamic field of scholarship: instead of painstakingly discovering things for ourselves, we could simply suck up everything they'd discovered. So it was in Europe in 1200. When classical texts began to circulate in Europe, they contained not just new answers, but new questions. (If anyone proved a theorem in christian Europe before 1200, for example, there is no record of it.)\n\nFor a couple centuries, some of the most important work being done was intellectual archaeology. Those were also the centuries during which schools were first established. And since reading ancient texts was the essence of what scholars did then, it became the basis of the curriculum.\n\nBy 1700, someone who wanted to learn about physics didn't need to start by mastering Greek in order to read Aristotle. But schools change slower than scholarship: the study of ancient texts had such prestige that it remained the backbone of education until the late 19th century. By then it was merely a tradition. It did serve some purposes: reading a foreign language was difficult, and thus taught discipline, or at least, kept students busy; it introduced students to cultures quite different from their own; and its very uselessness made it function (like white gloves) as a social bulwark. But it certainly wasn't true, and hadn't been true for centuries, that students were serving apprenticeships in the hottest area of scholarship.\n\nClassical scholarship had also changed. In the early era, philology actually mattered. The texts that filtered into Europe were all corrupted to some degree by the errors of translators and copyists. Scholars had to figure out what Aristotle said before they could figure out what he meant. But by the modern era such questions were answered as well as they were ever going to be. And so the study of ancient texts became less about ancientness and more about texts.\n\nThe time was then ripe for the question: if the study of ancient texts is a valid field for scholarship, why not modern texts? The answer, of course, is that the raison d'etre of classical scholarship was a kind of intellectual archaeology that does not need to be done in the case of contemporary authors. But for obvious reasons no one wanted to give that answer. The archaeological work being mostly done, it implied that the people studying the classics were, if not wasting their time, at least working on problems of minor importance.\n\nAnd so began the study of modern literature. There was some initial resistance, but it didn't last long. The limiting reagent in the growth of university departments is what parents will let undergraduates study. If parents will let their children major in x, the rest follows straightforwardly. There will be jobs teaching x, and professors to fill them. The professors will establish scholarly journals and publish one another's papers.
Universities with x departments will subscribe to the journals. Graduate students who want jobs as professors of x will write dissertations about it. It may take a good long while for the more prestigious universities to cave in and establish departments in cheesier xes, but at the other end of the scale there are so many universities competing to attract students that the mere establishment of a discipline requires little more than the desire to do it.\n\nHigh schools imitate universities. And so once university English departments were established in the late nineteenth century, the 'riting component of the 3 Rs was morphed into English. With the bizarre consequence that high school students now had to write about English literature-- to write, without even realizing it, imitations of whatever English professors had been publishing in their journals a few decades before. It's no wonder if this seems to the student a pointless exercise, because we're now three steps removed from real work: the students are imitating English professors, who are imitating classical scholars, who are merely the inheritors of a tradition growing out of what was, 700 years ago, fascinating and urgently needed work.\n\nPerhaps high schools should drop English and just teach writing. The valuable part of English classes is learning to write, and that could be taught better by itself. Students learn better when they're interested in what they're doing, and it's hard to imagine a topic less interesting than symbolism in Dickens. Most of the people who write about that sort of thing professionally are not really interested in it. (Though indeed, it's been a while since they were writing about symbolism; now they're writing about gender.)\n\nI have no illusions about how eagerly this suggestion will be adopted. Public schools probably couldn't stop teaching English even if they wanted to; they're probably required to by law. But here's a related suggestion that goes with the grain instead of against it: that universities establish a writing major. Many of the students who now major in English would major in writing if they could, and most would be better off.\n\nIt will be argued that it is a good thing for students to be exposed to their literary heritage. Certainly. But is that more important than that they learn to write well? And are English classes even the place to do it? After all, the average public high school student gets zero exposure to his artistic heritage. No disaster results. The people who are interested in art learn about it for themselves, and those who aren't don't. I find that American adults are no better or worse informed about literature than art, despite the fact that they spent years studying literature in high school and no time at all studying art. Which presumably means that what they're taught in school is rounding error compared to what they pick up on their own.\n\nIndeed, English classes may even be harmful. In my case they were effectively aversion therapy. Want to make someone dislike a book? Force him to read it and write an essay about it. And make the topic so intellectually bogus that you could not, if asked, explain why one ought to write about it.\n\nI love to read more than anything, but by the end of high school I never read the books we were assigned.
I was so disgusted with what we were doing that it became a point of honor with me to write nonsense at least as good as the other students' without having more than glanced over the book to learn the names of the characters and a few random events in it.\n\nI hoped this might be fixed in college, but I found the same problem there. It was not the teachers. It was English. We were supposed to read novels and write essays about them. About what, and why? That no one seemed to be able to explain. Eventually by trial and error I found that what the teacher wanted us to do was pretend that the story had really taken place, and to analyze based on what the characters said and did (the subtler clues, the better) what their motives must have been. One got extra credit for motives having to do with class, as I suspect one must now for those involving gender and sexuality. I learned how to churn out such stuff well enough to get an A, but I never took another English class.\n\nAnd the books we did these disgusting things to, like those we mishandled in high school, I find still have black marks against them in my mind. The one saving grace was that English courses tend to favor pompous, dull writers like Henry James, who deserve black marks against their names anyway. One of the principles the IRS uses in deciding whether to allow deductions is that, if something is fun, it isn't work. Fields that are intellectually unsure of themselves rely on a similar principle. Reading P.G. Wodehouse or Evelyn Waugh or Raymond Chandler is too obviously pleasing to seem like serious work, as reading Shakespeare would have been before English evolved enough to make it an effort to understand him. [sh] And so good writers (just you wait and see who's still in print in 300 years) are less likely to have readers turned against them by clumsy, self-appointed tour guides.\n\nThe other big difference between a real essay and the things they make you write in school is that a real essay doesn't take a position and then defend it. That principle, like the idea that we ought to be writing about literature, turns out to be another intellectual hangover of long forgotten origins. It's often mistakenly believed that medieval universities were mostly seminaries. In fact they were more law schools. And at least in our tradition lawyers are advocates: they are trained to be able to take either side of an argument and make as good a case for it as they can. Whether or not this is a good idea (in the case of prosecutors, it probably isn't), it tended to pervade the atmosphere of early universities. After the lecture the most common form of discussion was the disputation. This idea is at least nominally preserved in our present-day thesis defense-- indeed, in the very word thesis. Most people treat the words thesis and dissertation as interchangeable, but originally, at least, a thesis was a position one took and the dissertation was the argument by which one defended it.\n\nI'm not complaining that we blur these two words together. As far as I'm concerned, the sooner we lose the original sense of the word thesis, the better. For many, perhaps most, graduate students, it is stuffing a square peg into a round hole to try to recast one's work as a single thesis.
And as for the disputation, that seems clearly a net lose. Arguing two sides of a case may be a necessary evil in a legal dispute, but it's not the best way to get at the truth, as I think lawyers would be the first to admit.\n\nAnd yet this principle is built into the very structure of the essays they teach you to write in high school. The topic sentence is your thesis, chosen in advance, the supporting paragraphs the blows you strike in the conflict, and the conclusion--- uh, what is the conclusion? I was never sure about that in high school. If your thesis was well expressed, what need was there to restate it? In theory it seemed that the conclusion of a really good essay ought not to need to say any more than QED. But when you understand the origins of this sort of \"essay\", you can see where the conclusion comes from. It's the concluding remarks to the jury.\n\nWhat other alternative is there? To answer that we have to reach back into history again, though this time not so far. To Michel de Montaigne, inventor of the essay. He was doing something quite different from what a lawyer does, and the difference is embodied in the name. Essayer is the French verb meaning \"to try\" (the cousin of our word assay), and an \"essai\" is an effort. An essay is something you write in order to figure something out.\n\nFigure out what? You don't know yet. And so you can't begin with a thesis, because you don't have one, and may never have one. An essay doesn't begin with a statement, but with a question. In a real essay, you don't take a position and defend it. You see a door that's ajar, and you open it and walk in to see what's inside.\n\nIf all you want to do is figure things out, why do you need to write anything, though? Why not just sit and think? Well, there precisely is Montaigne's great discovery. Expressing ideas helps to form them. Indeed, helps is far too weak a word. 90% of what ends up in my essays was stuff I only thought of when I sat down to write them. That's why I write them.\n\nSo there's another difference between essays and the things you have to write in school. In school you are, in theory, explaining yourself to someone else. In the best case---if you're really organized---you're just writing it down. In a real essay you're writing for yourself. You're thinking out loud.\n\nBut not quite. Just as inviting people over forces you to clean up your apartment, writing something that you know other people will read forces you to think well. So it does matter to have an audience. The things I've written just for myself are no good. Indeed, they're bad in a particular way: they tend to peter out. When I run into difficulties, I notice that I tend to conclude with a few vague questions and then drift off to get a cup of tea.\n\nThis seems a common problem. It's practically the standard ending in blog entries--- with the addition of a \"heh\" or an emoticon, prompted by the all too accurate sense that something is missing.\n\nAnd indeed, a lot of published essays peter out in this same way. Particularly the sort written by the staff writers of newsmagazines. Outside writers tend to supply editorials of the defend-a-position variety, which make a beeline toward a rousing (and foreordained) conclusion.
But the staff writers feel obliged to write something more balanced, which in practice ends up meaning blurry. Since they're writing for a popular magazine, they start with the most radioactively controversial questions, from which (because they're writing for a popular magazine) they then proceed to recoil in terror. Gay marriage, for or against? This group says one thing. That group says another. One thing is certain: the question is a complex one. (But don't get mad at us. We didn't draw any conclusions.)\n\nQuestions aren't enough. An essay has to come up with answers. They don't always, of course. Sometimes you start with a promising question and get nowhere. But those you don't publish. Those are like experiments that get inconclusive results. Something you publish ought to tell the reader something he didn't already know.\n\nBut what you tell him doesn't matter, so long as it's interesting. I'm sometimes accused of meandering. In defend-a-position writing that would be a flaw. There you're not concerned with truth. You already know where you're going, and you want to go straight there, blustering through obstacles, and hand-waving your way across swampy ground. But that's not what you're trying to do in an essay. An essay is supposed to be a search for truth. It would be suspicious if it didn't meander.\n\nThe Meander is a river in Asia Minor (aka Turkey). As you might expect, it winds all over the place. But does it do this out of frivolity? Quite the opposite. Like all rivers, it's rigorously following the laws of physics. The path it has discovered, winding as it is, represents the most economical route to the sea.\n\nThe river's algorithm is simple. At each step, flow down. For the essayist this translates to: flow interesting. Of all the places to go next, choose whichever seems most interesting.\n\nI'm pushing this metaphor a bit. An essayist can't have quite as little foresight as a river. In fact what you do (or what I do) is somewhere between a river and a roman road-builder. I have a general idea of the direction I want to go in, and I choose the next topic with that in mind. This essay is about writing, so I do occasionally yank it back in that direction, but it is not at all the sort of essay I thought I was going to write about writing.\n\nNote too that hill-climbing (which is what this algorithm is called) can get you in trouble. Sometimes, just like a river, you run up against a blank wall. What I do then is just what the river does: backtrack. At one point in this essay I found that after following a certain thread I ran out of ideas. I had to go back n paragraphs and start over in another direction. For illustrative purposes I've left the abandoned branch as a footnote.\n\nErr on the side of the river. An essay is not a reference work. It's not something you read looking for a specific answer, and feel cheated if you don't find it. I'd much rather read an essay that went off in an unexpected but interesting direction than one that plodded dutifully along a prescribed course.\n\nSo what's interesting? For me, interesting means surprise. Design, as Matz has said, should follow the principle of least surprise. A button that looks like it will make a machine stop should make it stop, not speed up. Essays should do the opposite. Essays should aim for maximum surprise.\n\nI was afraid of flying for a long time and could only travel vicariously.
When friends came back from faraway places, it wasn't just out of politeness that I asked them about their trip. I really wanted to know. And I found that the best way to get information out of them was to ask what surprised them. How was the place different from what they expected? This is an extremely useful question. You can ask it of even the most unobservant people, and it will extract information they didn't even know they were recording. Indeed, you can ask it in real time. Now when I go somewhere new, I make a note of what surprises me about it. Sometimes I even make a conscious effort to visualize the place beforehand, so I'll have a detailed image to diff with reality.\n\nSurprises are facts you didn't already know. But they're more than that. They're facts that contradict things you thought you knew. And so they're the most valuable sort of fact you can get. They're like a food that's not merely healthy, but counteracts the unhealthy effects of things you've already eaten.\n\nHow do you find surprises? Well, therein lies half the work of essay writing. (The other half is expressing yourself well.) You can at least use yourself as a proxy for the reader. You should only write about things you've thought about a lot. And anything you come across that surprises you, who've thought about the topic a lot, will probably surprise most readers.\n\nFor example, in a recent essay I pointed out that because you can only judge computer programmers by working with them, no one knows in programming who the heroes should be. I certainly didn't realize this when I started writing the essay, and even now I find it kind of weird. That's what you're looking for.\n\nSo if you want to write essays, you need two ingredients: you need a few topics that you think about a lot, and you need some ability to ferret out the unexpected.\n\nWhat should you think about? My guess is that it doesn't matter. Almost everything is interesting if you get deeply enough into it. The one possible exception is things like working in fast food, which have deliberately had all the variation sucked out of them. In retrospect, was there anything interesting about working in Baskin-Robbins? Well, it was interesting to notice how important color was to the customers. Kids a certain age would point into the case and say that they wanted yellow. Did they want French Vanilla or Lemon? They would just look at you blankly. They wanted yellow. And then there was the mystery of why the perennial favorite Pralines n' Cream was so appealing. I'm inclined now to think it was the salt. And the mystery of why Passion Fruit tasted so disgusting. People would order it because of the name, and were always disappointed. It should have been called In-sink-erator Fruit. And there was the difference in the way fathers and mothers bought ice cream for their kids. Fathers tended to adopt the attitude of benevolent kings bestowing largesse, and mothers that of harried bureaucrats, giving in to pressure against their better judgement. So, yes, there does seem to be material, even in fast food.\n\nWhat about the other half, ferreting out the unexpected? That may require some natural ability. I've noticed for a long time that I'm pathologically observant.
....[That was as far as I'd gotten at the time.]\n\nNotes\n\n[sh] In Shakespeare's own time, serious writing meant theological discourses, not the bawdy plays acted over on the other side of the river among the bear gardens and whorehouses.\n\nThe other extreme, the work that seems formidable from the moment it's created (indeed, is deliberately intended to be) is represented by Milton. Like the Aeneid, Paradise Lost is a rock imitating a butterfly that happened to get fossilized. Even Samuel Johnson seems to have balked at this, on the one hand paying Milton the compliment of an extensive biography, and on the other writing of Paradise Lost that \"none who read it ever wished it longer.\""} {"title": "love", "text": "January 2006\n\nTo do something well you have to like it. That idea is not exactly novel. We've got it down to four words: \"Do what you love.\" But it's not enough just to tell people that. Doing what you love is complicated.\n\nThe very idea is foreign to what most of us learn as kids. When I was a kid, it seemed as if work and fun were opposites by definition. Life had two states: some of the time adults were making you do things, and that was called work; the rest of the time you could do what you wanted, and that was called playing. Occasionally the things adults made you do were fun, just as, occasionally, playing wasn't\u2014for example, if you fell and hurt yourself. But except for these few anomalous cases, work was pretty much defined as not-fun.\n\nAnd it did not seem to be an accident. School, it was implied, was tedious because it was preparation for grownup work.\n\nThe world then was divided into two groups, grownups and kids. Grownups, like some kind of cursed race, had to work. Kids didn't, but they did have to go to school, which was a dilute version of work meant to prepare us for the real thing. Much as we disliked school, the grownups all agreed that grownup work was worse, and that we had it easy.\n\nTeachers in particular all seemed to believe implicitly that work was not fun. Which is not surprising: work wasn't fun for most of them. Why did we have to memorize state capitals instead of playing dodgeball? For the same reason they had to watch over a bunch of kids instead of lying on a beach. You couldn't just do what you wanted.\n\nI'm not saying we should let little kids do whatever they want. They may have to be made to work on certain things. But if we make kids work on dull stuff, it might be wise to tell them that tediousness is not the defining quality of work, and indeed that the reason they have to work on dull stuff now is so they can work on more interesting stuff later. [1]\n\nOnce, when I was about 9 or 10, my father told me I could be whatever I wanted when I grew up, so long as I enjoyed it. I remember that precisely because it seemed so anomalous. It was like being told to use dry water. Whatever I thought he meant, I didn't think he meant work could literally be fun\u2014fun like playing. It took me years to grasp that.\n\nJobs\n\nBy high school, the prospect of an actual job was on the horizon. Adults would sometimes come to speak to us about their work, or we would go to see them at work. It was always understood that they enjoyed what they did. In retrospect I think one may have: the private jet pilot.
But I don't think the bank manager really did.\n\nThe main reason they all acted as if they enjoyed their work was presumably the upper-middle class convention that you're supposed to. It would not merely be bad for your career to say that you despised your job, but a social faux-pas.\n\nWhy is it conventional to pretend to like what you do? The first sentence of this essay explains that. If you have to like something to do it well, then the most successful people will all like what they do. That's where the upper-middle class tradition comes from. Just as houses all over America are full of chairs that are, without the owners even knowing it, nth-degree imitations of chairs designed 250 years ago for French kings, conventional attitudes about work are, without the owners even knowing it, nth-degree imitations of the attitudes of people who've done great things.\n\nWhat a recipe for alienation. By the time they reach an age to think about what they'd like to do, most kids have been thoroughly misled about the idea of loving one's work. School has trained them to regard work as an unpleasant duty. Having a job is said to be even more onerous than schoolwork. And yet all the adults claim to like what they do. You can't blame kids for thinking \"I am not like these people; I am not suited to this world.\"\n\nActually they've been told three lies: the stuff they've been taught to regard as work in school is not real work; grownup work is not (necessarily) worse than schoolwork; and many of the adults around them are lying when they say they like what they do.\n\nThe most dangerous liars can be the kids' own parents. If you take a boring job to give your family a high standard of living, as so many people do, you risk infecting your kids with the idea that work is boring. [2] Maybe it would be better for kids in this one case if parents were not so unselfish. A parent who set an example of loving their work might help their kids more than an expensive house. [3]\n\nIt was not till I was in college that the idea of work finally broke free from the idea of making a living. Then the important question became not how to make money, but what to work on. Ideally these coincided, but some spectacular boundary cases (like Einstein in the patent office) proved they weren't identical.\n\nThe definition of work was now to make some original contribution to the world, and in the process not to starve. But after the habit of so many years my idea of work still included a large component of pain. Work still seemed to require discipline, because only hard problems yielded grand results, and hard problems couldn't literally be fun. Surely one had to force oneself to work on them.\n\nIf you think something's supposed to hurt, you're less likely to notice if you're doing it wrong. That about sums up my experience of graduate school.\n\nBounds\n\nHow much are you supposed to like what you do? Unless you know that, you don't know when to stop searching. And if, like most people, you underestimate it, you'll tend to stop searching too early. You'll end up doing something chosen for you by your parents, or the desire to make money, or prestige\u2014or sheer inertia.\n\nHere's an upper bound: Do what you love doesn't mean, do what you would like to do most this second.
Even Einstein probably had moments when he wanted to have a cup of coffee, but told himself he ought to finish what he was working on first.\n\nIt used to perplex me when I read about people who liked what they did so much that there was nothing they'd rather do. There didn't seem to be any sort of work I liked that much. If I had a choice of (a) spending the next hour working on something or (b) being teleported to Rome and spending the next hour wandering about, was there any sort of work I'd prefer? Honestly, no.\n\nBut the fact is, almost anyone would rather, at any given moment, float about in the Caribbean, or have sex, or eat some delicious food, than work on hard problems. The rule about doing what you love assumes a certain length of time. It doesn't mean, do what will make you happiest this second, but what will make you happiest over some longer period, like a week or a month.\n\nUnproductive pleasures pall eventually. After a while you get tired of lying on the beach. If you want to stay happy, you have to do something.\n\nAs a lower bound, you have to like your work more than any unproductive pleasure. You have to like what you do enough that the concept of \"spare time\" seems mistaken. Which is not to say you have to spend all your time working. You can only work so much before you get tired and start to screw up. Then you want to do something else\u2014even something mindless. But you don't regard this time as the prize and the time you spend working as the pain you endure to earn it.\n\nI put the lower bound there for practical reasons. If your work is not your favorite thing to do, you'll have terrible problems with procrastination. You'll have to force yourself to work, and when you resort to that the results are distinctly inferior.\n\nTo be happy I think you have to be doing something you not only enjoy, but admire. You have to be able to say, at the end, wow, that's pretty cool. This doesn't mean you have to make something. If you learn how to hang glide, or to speak a foreign language fluently, that will be enough to make you say, for a while at least, wow, that's pretty cool. What there has to be is a test.\n\nSo one thing that falls just short of the standard, I think, is reading books. Except for some books in math and the hard sciences, there's no test of how well you've read a book, and that's why merely reading books doesn't quite feel like work. You have to do something with what you've read to feel productive.\n\nI think the best test is one Gino Lee taught me: to try to do things that would make your friends say wow. But it probably wouldn't start to work properly till about age 22, because most people haven't had a big enough sample to pick friends from before then.\n\nSirens\n\nWhat you should not do, I think, is worry about the opinion of anyone beyond your friends. You shouldn't worry about prestige. Prestige is the opinion of the rest of the world. When you can ask the opinions of people whose judgement you respect, what does it add to consider the opinions of people you don't even know? [4]\n\nThis is easy advice to give. It's hard to follow, especially when you're young. [5] Prestige is like a powerful magnet that warps even your beliefs about what you enjoy. It causes you to work not on what you like, but what you'd like to like.\n\nThat's what leads people to try to write novels, for example. They like reading novels. They notice that people who write them win Nobel prizes. What could be more wonderful, they think, than to be a novelist?
But liking the idea of being a novelist is not enough; you have to like the actual work of novel-writing if you're going to be good at it; you have to like making up elaborate lies.\n\nPrestige is just fossilized inspiration. If you do anything well enough, you'll make it prestigious. Plenty of things we now consider prestigious were anything but at first. Jazz comes to mind\u2014though almost any established art form would do. So just do what you like, and let prestige take care of itself.\n\nPrestige is especially dangerous to the ambitious. If you want to make ambitious people waste their time on errands, the way to do it is to bait the hook with prestige. That's the recipe for getting people to give talks, write forewords, serve on committees, be department heads, and so on. It might be a good rule simply to avoid any prestigious task. If it didn't suck, they wouldn't have had to make it prestigious.\n\nSimilarly, if you admire two kinds of work equally, but one is more prestigious, you should probably choose the other. Your opinions about what's admirable are always going to be slightly influenced by prestige, so if the two seem equal to you, you probably have more genuine admiration for the less prestigious one.\n\nThe other big force leading people astray is money. Money by itself is not that dangerous. When something pays well but is regarded with contempt, like telemarketing, or prostitution, or personal injury litigation, ambitious people aren't tempted by it. That kind of work ends up being done by people who are \"just trying to make a living.\" (Tip: avoid any field whose practitioners say this.) The danger is when money is combined with prestige, as in, say, corporate law, or medicine. A comparatively safe and prosperous career with some automatic baseline prestige is dangerously tempting to someone young, who hasn't thought much about what they really like.\n\nThe test of whether people love what they do is whether they'd do it even if they weren't paid for it\u2014even if they had to work at another job to make a living. How many corporate lawyers would do their current work if they had to do it for free, in their spare time, and take day jobs as waiters to support themselves?\n\nThis test is especially helpful in deciding between different kinds of academic work, because fields vary greatly in this respect. Most good mathematicians would work on math even if there were no jobs as math professors, whereas in the departments at the other end of the spectrum, the availability of teaching jobs is the driver: people would rather be English professors than work in ad agencies, and publishing papers is the way you compete for such jobs. Math would happen without math departments, but it is the existence of English majors, and therefore jobs teaching them, that calls into being all those thousands of dreary papers about gender and identity in the novels of Conrad. No one does that kind of thing for fun.\n\nThe advice of parents will tend to err on the side of money. It seems safe to say there are more undergrads who want to be novelists and whose parents want them to be doctors than who want to be doctors and whose parents want them to be novelists. The kids think their parents are \"materialistic.\" Not necessarily. All parents tend to be more conservative for their kids than they would for themselves, simply because, as parents, they share risks more than rewards.
If your eight year old son decides to climb a tall tree, or your teenage daughter decides to date the local bad boy, you won't get a share in the excitement, but if your son falls, or your daughter gets pregnant, you'll have to deal with the consequences.\n\nDiscipline\n\nWith such powerful forces leading us astray, it's not surprising we find it so hard to discover what we like to work on. Most people are doomed in childhood by accepting the axiom that work = pain. Those who escape this are nearly all lured onto the rocks by prestige or money. How many even discover something they love to work on? A few hundred thousand, perhaps, out of billions.\n\nIt's hard to find work you love; it must be, if so few do. So don't underestimate this task. And don't feel bad if you haven't succeeded yet. In fact, if you admit to yourself that you're discontented, you're a step ahead of most people, who are still in denial. If you're surrounded by colleagues who claim to enjoy work that you find contemptible, odds are they're lying to themselves. Not necessarily, but probably.\n\nAlthough doing great work takes less discipline than people think\u2014because the way to do great work is to find something you like so much that you don't have to force yourself to do it\u2014finding work you love does usually require discipline. Some people are lucky enough to know what they want to do when they're 12, and just glide along as if they were on railroad tracks. But this seems the exception. More often people who do great things have careers with the trajectory of a ping-pong ball. They go to school to study A, drop out and get a job doing B, and then become famous for C after taking it up on the side.\n\nSometimes jumping from one sort of work to another is a sign of energy, and sometimes it's a sign of laziness. Are you dropping out, or boldly carving a new path? You often can't tell yourself. Plenty of people who will later do great things seem to be disappointments early on, when they're trying to find their niche.\n\nIs there some test you can use to keep yourself honest? One is to try to do a good job at whatever you're doing, even if you don't like it. Then at least you'll know you're not using dissatisfaction as an excuse for being lazy. Perhaps more importantly, you'll get into the habit of doing things well.\n\nAnother test you can use is: always produce. For example, if you have a day job you don't take seriously because you plan to be a novelist, are you producing? Are you writing pages of fiction, however bad? As long as you're producing, you'll know you're not merely using the hazy vision of the grand novel you plan to write one day as an opiate. The view of it will be obstructed by the all too palpably flawed one you're actually writing.\n\n\"Always produce\" is also a heuristic for finding the work you love. If you subject yourself to that constraint, it will automatically push you away from things you think you're supposed to work on, toward things you actually like. \"Always produce\" will discover your life's work the way water, with the aid of gravity, finds the hole in your roof.\n\nOf course, figuring out what you like to work on doesn't mean you get to work on it. That's a separate question. And if you're ambitious you have to keep them separate: you have to make a conscious effort to keep your ideas about what you want from being contaminated by what seems possible. [6]\n\nIt's painful to keep them apart, because it's painful to observe the gap between them.
So most people pre-emptively lower their\nexpectations. For example, if you asked random people on the street\nif they'd like to be able to draw like Leonardo, you'd find most\nwould say something like \"Oh, I can't draw.\" This is more a statement\nof intention than fact; it means, I'm not going to try. Because\nthe fact is, if you took a random person off the street and somehow\ngot them to work as hard as they possibly could at drawing for the\nnext twenty years, they'd get surprisingly far. But it would require\na great moral effort; it would mean staring failure in the eye every\nday for years. And so to protect themselves people say \"I can't.\"Another related line you often hear is that not everyone can do\nwork they love\u2014that someone has to do the unpleasant jobs. Really?\nHow do you make them? In the US the only mechanism for forcing\npeople to do unpleasant jobs is the draft, and that hasn't been\ninvoked for over 30 years. All we can do is encourage people to\ndo unpleasant work, with money and prestige.If there's something people still won't do, it seems as if society\njust has to make do without. That's what happened with domestic\nservants. For millennia that was the canonical example of a job\n\"someone had to do.\" And yet in the mid twentieth century servants\npractically disappeared in rich countries, and the rich have just\nhad to do without.So while there may be some things someone has to do, there's a good\nchance anyone saying that about any particular job is mistaken.\nMost unpleasant jobs would either get automated or go undone if no\none were willing to do them.Two RoutesThere's another sense of \"not everyone can do work they love\"\nthat's all too true, however. One has to make a living, and it's\nhard to get paid for doing work you love. There are two routes to\nthat destination:\n\n The organic route: as you become more eminent, gradually to\n increase the parts of your job that you like at the expense of\n those you don't.The two-job route: to work at things you don't like to get money\n to work on things you do.\n\nThe organic route is more common. It happens naturally to anyone\nwho does good work. A young architect has to take whatever work\nhe can get, but if he does well he'll gradually be in a position\nto pick and choose among projects. The disadvantage of this route\nis that it's slow and uncertain. Even tenure is not real freedom.The two-job route has several variants depending on how long you\nwork for money at a time. At one extreme is the \"day job,\" where\nyou work regular hours at one job to make money, and work on what\nyou love in your spare time. At the other extreme you work at\nsomething till you make enough not to \nhave to work for money again.The two-job route is less common than the organic route, because\nit requires a deliberate choice. It's also more dangerous. Life\ntends to get more expensive as you get older, so it's easy to get\nsucked into working longer than you expected at the money job.\nWorse still, anything you work on changes you. If you work too\nlong on tedious stuff, it will rot your brain. And the best paying\njobs are most dangerous, because they require your full attention.The advantage of the two-job route is that it lets you jump over\nobstacles. The landscape of possible jobs isn't flat; there are\nwalls of varying heights between different kinds of work. 
\n[7]\nThe trick of maximizing the parts of your job that you like can get you\nfrom architecture to product design, but not, probably, to music.\nIf you make money doing one thing and then work on another, you\nhave more freedom of choice.Which route should you take? That depends on how sure you are of\nwhat you want to do, how good you are at taking orders, how much\nrisk you can stand, and the odds that anyone will pay (in your\nlifetime) for what you want to do. If you're sure of the general\narea you want to work in and it's something people are likely to\npay you for, then you should probably take the organic route. But\nif you don't know what you want to work on, or don't like to take\norders, you may want to take the two-job route, if you can stand\nthe risk.Don't decide too soon. Kids who know early what they want to do\nseem impressive, as if they got the answer to some math question\nbefore the other kids. They have an answer, certainly, but odds\nare it's wrong.A friend of mine who is a quite successful doctor complains constantly\nabout her job. When people applying to medical school ask her for\nadvice, she wants to shake them and yell \"Don't do it!\" (But she\nnever does.) How did she get into this fix? In high school she\nalready wanted to be a doctor. And she is so ambitious and determined\nthat she overcame every obstacle along the way\u2014including,\nunfortunately, not liking it.Now she has a life chosen for her by a high-school kid.When you're young, you're given the impression that you'll get\nenough information to make each choice before you need to make it.\nBut this is certainly not so with work. When you're deciding what\nto do, you have to operate on ridiculously incomplete information.\nEven in college you get little idea what various types of work are\nlike. At best you may have a couple internships, but not all jobs\noffer internships, and those that do don't teach you much more about\nthe work than being a batboy teaches you about playing baseball.In the design of lives, as in the design of most other things, you\nget better results if you use flexible media. So unless you're\nfairly sure what you want to do, your best bet may be to choose a\ntype of work that could turn into either an organic or two-job\ncareer. That was probably part of the reason I chose computers.\nYou can be a professor, or make a lot of money, or morph it into\nany number of other kinds of work.It's also wise, early on, to seek jobs that let you do many different\nthings, so you can learn faster what various kinds of work are like.\nConversely, the extreme version of the two-job route is dangerous\nbecause it teaches you so little about what you like. If you work\nhard at being a bond trader for ten years, thinking that you'll\nquit and write novels when you have enough money, what happens when\nyou quit and then discover that you don't actually like writing\nnovels?Most people would say, I'd take that problem. Give me a million\ndollars and I'll figure out what to do. But it's harder than it\nlooks. Constraints give your life shape. Remove them and most\npeople have no idea what to do: look at what happens to those who\nwin lotteries or inherit money. Much as everyone thinks they want\nfinancial security, the happiest people are not those who have it,\nbut those who like what they do. So a plan that promises freedom\nat the expense of knowing what to do with it may not be as good as\nit seems.Whichever route you take, expect a struggle. Finding work you love\nis very difficult. 
Most people fail. Even if you succeed, it's\nrare to be free to work on what you want till your thirties or\nforties. But if you have the destination in sight you'll be more\nlikely to arrive at it. If you know you can love work, you're in\nthe home stretch, and if you know what work you love, you're\npractically there.Notes[1]\nCurrently we do the opposite: when we make kids do boring work,\nlike arithmetic drills, instead of admitting frankly that it's\nboring, we try to disguise it with superficial decorations.[2]\nOne father told me about a related phenomenon: he found himself\nconcealing from his family how much he liked his work. When he\nwanted to go to work on a Saturday, he found it easier to say that\nit was because he \"had to\" for some reason, rather than admitting\nhe preferred to work than stay home with them.[3]\nSomething similar happens with suburbs. Parents move to suburbs\nto raise their kids in a safe environment, but suburbs are so dull\nand artificial that by the time they're fifteen the kids are convinced\nthe whole world is boring.[4]\nI'm not saying friends should be the only audience for your\nwork. The more people you can help, the better. But friends should\nbe your compass.[5]\nDonald Hall said young would-be poets were mistaken to be so\nobsessed with being published. But you can imagine what it would\ndo for a 24 year old to get a poem published in The New Yorker.\nNow to people he meets at parties he's a real poet. Actually he's\nno better or worse than he was before, but to a clueless audience\nlike that, the approval of an official authority makes all the\ndifference. So it's a harder problem than Hall realizes. The\nreason the young care so much about prestige is that the people\nthey want to impress are not very discerning.[6]\nThis is isomorphic to the principle that you should prevent\nyour beliefs about how things are from being contaminated by how\nyou wish they were. Most people let them mix pretty promiscuously.\nThe continuing popularity of religion is the most visible index of\nthat.[7]\nA more accurate metaphor would be to say that the graph of jobs\nis not very well connected.Thanks to Trevor Blackwell, Dan Friedman, Sarah Harlin,\nJessica Livingston, Jackie McDonough, Robert Morris, Peter Norvig, \nDavid Sloo, and Aaron Swartz\nfor reading drafts of this."} {"title": "nft", "text": "May 2021Noora Health, a nonprofit I've \nsupported for years, just launched\na new NFT. It has a dramatic name, Save Thousands of Lives,\nbecause that's what the proceeds will do.Noora has been saving lives for 7 years. They run programs in\nhospitals in South Asia to teach new mothers how to take care of\ntheir babies once they get home. They're in 165 hospitals now. And\nbecause they know the numbers before and after they start at a new\nhospital, they can measure the impact they have. It is massive.\nFor every 1000 live births, they save 9 babies.This number comes from a study\nof 133,733 families at 28 different\nhospitals that Noora conducted in collaboration with the Better\nBirth team at Ariadne Labs, a joint center for health systems\ninnovation at Brigham and Women's Hospital and Harvard T.H.
Chan\nSchool of Public Health.Noora is so effective that even if you measure their costs in the\nmost conservative way, by dividing their entire budget by the number\nof lives saved, the cost of saving a life is the lowest I've seen.\n$1,235.For this NFT, they're going to issue a public report tracking how\nthis specific tranche of money is spent, and estimating the number\nof lives saved as a result.NFTs are a new territory, and this way of using them is especially\nnew, but I'm excited about its potential. And I'm excited to see\nwhat happens with this particular auction, because unlike an NFT\nrepresenting something that has already happened,\nthis NFT gets better as the price gets higher.The reserve price was about $2.5 million, because that's what it\ntakes for the name to be accurate: that's what it costs to save\n2000 lives. But the higher the price of this NFT goes, the more\nlives will be saved. What a sentence to be able to write."} {"title": "startuplessons", "text": "April 2006(This essay is derived from a talk at the 2006 \nStartup School.)The startups we've funded so far are pretty quick, but they seem\nquicker to learn some lessons than others. I think it's because\nsome things about startups are kind of counterintuitive.We've now \ninvested \nin enough companies that I've learned a trick\nfor determining which points are the counterintuitive ones:\nthey're the ones I have to keep repeating.So I'm going to number these points, and maybe with future startups\nI'll be able to pull off a form of Huffman coding. I'll make them\nall read this, and then instead of nagging them in detail, I'll\njust be able to say: number four!\n1. Release Early.The thing I probably repeat most is this recipe for a startup: get\na version 1 out fast, then improve it based on users' reactions.By \"release early\" I don't mean you should release something full\nof bugs, but that you should release something minimal. Users hate\nbugs, but they don't seem to mind a minimal version 1, if there's\nmore coming soon.There are several reasons it pays to get version 1 done fast. One\nis that this is simply the right way to write software, whether for\na startup or not. I've been repeating that since 1993, and I haven't seen much since to\ncontradict it. I've seen a lot of startups die because they were\ntoo slow to release stuff, and none because they were too quick.\n[1]One of the things that will surprise you if you build something\npopular is that you won't know your users. Reddit now has almost half a million\nunique visitors a month. Who are all those people? They have no\nidea. No web startup does. And since you don't know your users,\nit's dangerous to guess what they'll like. Better to release\nsomething and let them tell you.Wufoo took this to heart and released\ntheir form-builder before the underlying database. You can't even\ndrive the thing yet, but 83,000 people came to sit in the driver's\nseat and hold the steering wheel. And Wufoo got valuable feedback\nfrom it: Linux users complained they used too much Flash, so they\nrewrote their software not to. If they'd waited to release everything\nat once, they wouldn't have discovered this problem till it was\nmore deeply wired in.Even if you had no users, it would still be important to release\nquickly, because for a startup the initial release acts as a shakedown\ncruise. If anything major is broken-- if the idea's no good,\nfor example, or the founders hate one another-- the stress of getting\nthat first version out will expose it. 
And if you have such problems\nyou want to find them early.Perhaps the most important reason to release early, though, is that\nit makes you work harder. When you're working on something that\nisn't released, problems are intriguing. In something that's out\nthere, problems are alarming. There is a lot more urgency once you\nrelease. And I think that's precisely why people put it off. They\nknow they'll have to work a lot harder once they do. \n[2]\n2. Keep Pumping Out Features.Of course, \"release early\" has a second component, without which\nit would be bad advice. If you're going to start with something\nthat doesn't do much, you better improve it fast.What I find myself repeating is \"pump out features.\" And this rule\nisn't just for the initial stages. This is something all startups\nshould do for as long as they want to be considered startups.I don't mean, of course, that you should make your application ever\nmore complex. By \"feature\" I mean one unit of hacking-- one quantum\nof making users' lives better.As with exercise, improvements beget improvements. If you run every\nday, you'll probably feel like running tomorrow. But if you skip\nrunning for a couple weeks, it will be an effort to drag yourself\nout. So it is with hacking: the more ideas you implement, the more\nideas you'll have. You should make your system better at least in\nsome small way every day or two.This is not just a good way to get development done; it is also a\nform of marketing. Users love a site that's constantly improving.\nIn fact, users expect a site to improve. Imagine if you visited a\nsite that seemed very good, and then returned two months later and\nnot one thing had changed. Wouldn't it start to seem lame? \n[3]They'll like you even better when you improve in response to their\ncomments, because customers are used to companies ignoring them.\nIf you're the rare exception-- a company that actually listens--\nyou'll generate fanatical loyalty. You won't need to advertise,\nbecause your users will do it for you.This seems obvious too, so why do I have to keep repeating it? I\nthink the problem here is that people get used to how things are.\nOnce a product gets past the stage where it has glaring flaws, you\nstart to get used to it, and gradually whatever features it happens\nto have become its identity. For example, I doubt many people at\nYahoo (or Google for that matter) realized how much better web mail\ncould be till Paul Buchheit showed them.I think the solution is to assume that anything you've made is far\nshort of what it could be. Force yourself, as a sort of intellectual\nexercise, to keep thinking of improvements. Ok, sure, what you\nhave is perfect. But if you had to change something, what would\nit be?If your product seems finished, there are two possible explanations:\n(a) it is finished, or (b) you lack imagination. Experience suggests\n(b) is a thousand times more likely.\n3. Make Users Happy.Improving constantly is an instance of a more general rule: make\nusers happy. One thing all startups have in common is that they\ncan't force anyone to do anything. They can't force anyone to use\ntheir software, and they can't force anyone to do deals with them.\nA startup has to sing for its supper. That's why the successful\nones make great things. They have to, or die.When you're running a startup you feel like a little bit of debris\nblown about by powerful winds. 
The most powerful wind is users.\nThey can either catch you and loft you up into the sky, as they did\nwith Google, or leave you flat on the pavement, as they do with\nmost startups. Users are a fickle wind, but more powerful than any\nother. If they take you up, no competitor can keep you down.As a little piece of debris, the rational thing for you to do is\nnot to lie flat, but to curl yourself into a shape the wind will\ncatch.I like the wind metaphor because it reminds you how impersonal the\nstream of traffic is. The vast majority of people who visit your\nsite will be casual visitors. It's them you have to design your\nsite for. The people who really care will find what they want by\nthemselves.The median visitor will arrive with their finger poised on the Back\nbutton. Think about your own experience: most links you\nfollow lead to something lame. Anyone who has used the web for\nmore than a couple weeks has been trained to click on Back after\nfollowing a link. So your site has to say \"Wait! Don't click on\nBack. This site isn't lame. Look at this, for example.\"There are two things you have to do to make people pause. The most\nimportant is to explain, as concisely as possible, what the hell\nyour site is about. How often have you visited a site that seemed\nto assume you already knew what they did? For example, the corporate\nsite that says the\ncompany makes\n\n enterprise content management solutions for business that enable\n organizations to unify people, content and processes to minimize\n business risk, accelerate time-to-value and sustain lower total\n cost of ownership.\n\nAn established company may get away with such an opaque description,\nbut no startup can. A startup\nshould be able to explain in one or two sentences exactly what it\ndoes. \n[4]\nAnd not just to users. You need this for everyone:\ninvestors, acquirers, partners, reporters, potential employees, and\neven current employees. You probably shouldn't even start a company\nto do something that can't be described compellingly in one or two\nsentences.The other thing I repeat is to give people everything you've got,\nright away. If you have something impressive, try to put it on the\nfront page, because that's the only one most visitors will see.\nThough indeed there's a paradox here: the more you push the good\nstuff toward the front, the more likely visitors are to explore\nfurther. \n[5]In the best case these two suggestions get combined: you tell\nvisitors what your site is about by showing them. One of the\nstandard pieces of advice in fiction writing is \"show, don't tell.\"\nDon't say that a character's angry; have him grind his teeth, or\nbreak his pencil in half. Nothing will explain what your site does\nso well as using it.The industry term here is \"conversion.\" The job of your site is\nto convert casual visitors into users-- whatever your definition\nof a user is. You can measure this in your growth rate. Either\nyour site is catching on, or it isn't, and you must know which. If\nyou have decent growth, you'll win in the end, no matter how obscure\nyou are now. And if you don't, you need to fix something.\n4. Fear the Right Things.Another thing I find myself saying a lot is \"don't worry.\" Actually,\nit's more often \"don't worry about this; worry about that instead.\"\nStartups are right to be paranoid, but they sometimes fear the wrong\nthings.Most visible disasters are not so alarming as they seem. 
Disasters\nare normal in a startup: a founder quits, you discover a patent\nthat covers what you're doing, your servers keep crashing, you run\ninto an insoluble technical problem, you have to change your name,\na deal falls through-- these are all par for the course. They won't\nkill you unless you let them.Nor will most competitors. A lot of startups worry \"what if Google\nbuilds something like us?\" Actually big companies are not the ones\nyou have to worry about-- not even Google. The people at Google\nare smart, but no smarter than you; they're not as motivated, because\nGoogle is not going to go out of business if this one product fails;\nand even at Google they have a lot of bureaucracy to slow them down.What you should fear, as a startup, is not the established players,\nbut other startups you don't know exist yet. They're way more\ndangerous than Google because, like you, they're cornered animals.Looking just at existing competitors can give you a false sense of\nsecurity. You should compete against what someone else could be\ndoing, not just what you can see people doing. A corollary is that\nyou shouldn't relax just because you have no visible competitors\nyet. No matter what your idea, there's someone else out there\nworking on the same thing.That's the downside of it being easier to start a startup: more people\nare doing it. But I disagree with Caterina Fake when she says that\nmakes this a bad time to start a startup. More people are starting\nstartups, but not as many more as could. Most college graduates\nstill think they have to get a job. The average person can't ignore\nsomething that's been beaten into their head since they were three\njust because serving web pages recently got a lot cheaper.And in any case, competitors are not the biggest threat. Way more\nstartups hose themselves than get crushed by competitors. There\nare a lot of ways to do it, but the three main ones are internal\ndisputes, inertia, and ignoring users. Each is, by itself, enough\nto kill you. But if I had to pick the worst, it would be ignoring\nusers. If you want a recipe for a startup that's going to die,\nhere it is: a couple of founders who have some great idea they know\neveryone is going to love, and that's what they're going to build,\nno matter what.Almost everyone's initial plan is broken. If companies stuck to\ntheir initial plans, Microsoft would be selling programming languages,\nand Apple would be selling printed circuit boards. In both cases\ntheir customers told them what their business should be-- and they\nwere smart enough to listen.As Richard Feynman said, the imagination of nature is greater than\nthe imagination of man. You'll find more interesting things by\nlooking at the world than you could ever produce just by thinking.\nThis principle is very powerful. It's why the best abstract painting\nstill falls short of Leonardo, for example. And it applies to\nstartups too. No idea for a product could ever be so clever as the\nones you can discover by smashing a beam of prototypes into a beam\nof users.\n5. Commitment Is a Self-Fulfilling Prophecy.I now have enough experience with startups to be able to say what\nthe most important quality is in a startup founder, and it's not\nwhat you might think. The most important quality in a startup\nfounder is determination. Not intelligence-- determination.This is a little depressing. I'd like to believe Viaweb succeeded\nbecause we were smart, not merely determined. A lot of people in\nthe startup world want to believe that. 
Not just founders, but\ninvestors too. They like the idea of inhabiting a world ruled by\nintelligence. And you can tell they really believe this, because\nit affects their investment decisions.Time after time VCs invest in startups founded by eminent professors.\nThis may work in biotech, where a lot of startups simply commercialize\nexisting research, but in software you want to invest in students,\nnot professors. Microsoft, Yahoo, and Google were all founded by\npeople who dropped out of school to do it. What students lack in\nexperience they more than make up in dedication.Of course, if you want to get rich, it's not enough merely to be\ndetermined. You have to be smart too, right? I'd like to think\nso, but I've had an experience that convinced me otherwise: I spent\nseveral years living in New York.You can lose quite a lot in the brains department and it won't kill\nyou. But lose even a little bit in the commitment department, and\nthat will kill you very rapidly.Running a startup is like walking on your hands: it's possible, but\nit requires extraordinary effort. If an ordinary employee were\nasked to do the things a startup founder has to, he'd be very\nindignant. Imagine if you were hired at some big company, and in\naddition to writing software ten times faster than you'd ever had\nto before, they expected you to answer support calls, administer\nthe servers, design the web site, cold-call customers, find the\ncompany office space, and go out and get everyone lunch.And to do all this not in the calm, womb-like atmosphere of a big\ncompany, but against a backdrop of constant disasters. That's the\npart that really demands determination. In a startup, there's\nalways some disaster happening. So if you're the least bit inclined\nto find an excuse to quit, there's always one right there.But if you lack commitment, chances are it will have been hurting\nyou long before you actually quit. Everyone who deals with startups\nknows how important commitment is, so if they sense you're ambivalent,\nthey won't give you much attention. If you lack commitment, you'll\njust find that for some mysterious reason good things happen to\nyour competitors but not to you. If you lack commitment, it will\nseem to you that you're unlucky.Whereas if you're determined to stick around, people will pay\nattention to you, because odds are they'll have to deal with you\nlater. You're a local, not just a tourist, so everyone has to come\nto terms with you.At Y Combinator we sometimes mistakenly fund teams who have the\nattitude that they're going to give this startup thing a shot for\nthree months, and if something great happens, they'll stick with\nit-- \"something great\" meaning either that someone wants to buy\nthem or invest millions of dollars in them. But if this is your\nattitude, \"something great\" is very unlikely to happen to you,\nbecause both acquirers and investors judge you by your level of\ncommitment.If an acquirer thinks you're going to stick around no matter what,\nthey'll be more likely to buy you, because if they don't and you\nstick around, you'll probably grow, your price will go up, and\nthey'll be left wishing they'd bought you earlier. Ditto for\ninvestors. What really motivates investors, even big VCs, is not\nthe hope of good returns, but the fear of missing out. \n[6]\nSo if\nyou make it clear you're going to succeed no matter what, and the only\nreason you need them is to make it happen a little faster, you're\nmuch more likely to get money.You can't fake this. 
The only way to convince everyone that you're\nready to fight to the death is actually to be ready to.You have to be the right kind of determined, though. I carefully\nchose the word determined rather than stubborn, because stubbornness\nis a disastrous quality in a startup. You have to be determined,\nbut flexible, like a running back. A successful running back doesn't\njust put his head down and try to run through people. He improvises:\nif someone appears in front of him, he runs around them; if someone\ntries to grab him, he spins out of their grip; he'll even run in\nthe wrong direction briefly if that will help. The one thing he'll\nnever do is stand still. \n[7]\n6. There Is Always Room.I was talking recently to a startup founder about whether it might\nbe good to add a social component to their software. He said he\ndidn't think so, because the whole social thing was tapped out.\nReally? So in a hundred years the only social networking sites\nwill be the Facebook, MySpace, Flickr, and Del.icio.us? Not likely.There is always room for new stuff. At every point in history,\neven the darkest bits of the dark ages, people were discovering\nthings that made everyone say \"why didn't anyone think of that\nbefore?\" We know this continued to be true up till 2004, when the\nFacebook was founded-- though strictly speaking someone else did\nthink of that.The reason we don't see the opportunities all around us is that we\nadjust to however things are, and assume that's how things have to\nbe. For example, it would seem crazy to most people to try to make\na better search engine than Google. Surely that field, at least,\nis tapped out. Really? In a hundred years-- or even twenty-- are\npeople still going to search for information using something like\nthe current Google? Even Google probably doesn't think that.In particular, I don't think there's any limit to the number of\nstartups. Sometimes you hear people saying \"All these guys starting\nstartups now are going to be disappointed. How many little startups\nare Google and Yahoo going to buy, after all?\" That sounds cleverly\nskeptical, but I can prove it's mistaken. No one proposes that\nthere's some limit to the number of people who can be employed in\nan economy consisting of big, slow-moving companies with a couple\nthousand people each. Why should there be any limit to the number\nwho could be employed by small, fast-moving companies with ten each?\nIt seems to me the only limit would be the number of people who\nwant to work that hard.The limit on the number of startups is not the number that can get\nacquired by Google and Yahoo-- though it seems even that should\nbe unlimited, if the startups were actually worth buying-- but the\namount of wealth that can be created. And I don't think there's\nany limit on that, except cosmological ones.So for all practical purposes, there is no limit to the number of\nstartups. Startups make wealth, which means they make things people\nwant, and if there's a limit on the number of things people want,\nwe are nowhere near it. I still don't even have a flying car.\n7. Don't Get Your Hopes Up.This is another one I've been repeating since long before Y Combinator.\nIt was practically the corporate motto at Viaweb.Startup founders are naturally optimistic. They wouldn't do it\notherwise. But you should treat your optimism the way you'd treat\nthe core of a nuclear reactor: as a source of power that's also\nvery dangerous. 
You have to build a shield around it, or it will\nfry you.The shielding of a reactor is not uniform; the reactor would be\nuseless if it were. It's pierced in a few places to let pipes in.\nAn optimism shield has to be pierced too. I think the place to\ndraw the line is between what you expect of yourself, and what you\nexpect of other people. It's ok to be optimistic about what you\ncan do, but assume the worst about machines and other people.This is particularly necessary in a startup, because you tend to\nbe pushing the limits of whatever you're doing. So things don't\nhappen in the smooth, predictable way they do in the rest of the\nworld. Things change suddenly, and usually for the worse.Shielding your optimism is nowhere more important than with deals.\nIf your startup is doing a deal, just assume it's not going to\nhappen. The VCs who say they're going to invest in you aren't.\nThe company that says they're going to buy you isn't. The big\ncustomer who wants to use your system in their whole company won't.\nThen if things work out you can be pleasantly surprised.The reason I warn startups not to get their hopes up is not to save\nthem from being disappointed when things fall through. It's\nfor a more practical reason: to prevent them from leaning their\ncompany against something that's going to fall over, taking them\nwith it.For example, if someone says they want to invest in you, there's a\nnatural tendency to stop looking for other investors. That's why\npeople proposing deals seem so positive: they want you to\nstop looking. And you want to stop too, because doing deals is a\npain. Raising money, in particular, is a huge time sink. So you\nhave to consciously force yourself to keep looking.Even if you ultimately do the first deal, it will be to your advantage\nto have kept looking, because you'll get better terms. Deals are\ndynamic; unless you're negotiating with someone unusually honest,\nthere's not a single point where you shake hands and the deal's\ndone. There are usually a lot of subsidiary questions to be cleared\nup after the handshake, and if the other side senses weakness-- if\nthey sense you need this deal-- they will be very tempted to screw\nyou in the details.VCs and corp dev guys are professional negotiators. They're trained\nto take advantage of weakness. \n[8]\nSo while they're often nice\nguys, they just can't help it. And as pros they do this more than\nyou. So don't even try to bluff them. The only way a startup can\nhave any leverage in a deal is genuinely not to need it. And if\nyou don't believe in a deal, you'll be less likely to depend on it.So I want to plant a hypnotic suggestion in your heads: when you\nhear someone say the words \"we want to invest in you\" or \"we want\nto acquire you,\" I want the following phrase to appear automatically\nin your head: don't get your hopes up. Just continue running\nyour company as if this deal didn't exist. Nothing is more likely\nto make it close.The way to succeed in a startup is to focus on the goal of getting\nlots of users, and keep walking swiftly toward it while investors\nand acquirers scurry alongside trying to wave money in your face.\nSpeed, not MoneyThe way I've described it, starting a startup sounds pretty stressful.\nIt is. When I talk to the founders of the companies we've funded,\nthey all say the same thing: I knew it would be hard, but I didn't\nrealize it would be this hard.So why do it? 
It would be worth enduring a lot of pain and stress\nto do something grand or heroic, but just to make money? Is making\nmoney really that important?No, not really. It seems ridiculous to me when people take business\ntoo seriously. I regard making money as a boring errand to be got\nout of the way as soon as possible. There is nothing grand or\nheroic about starting a startup per se.So why do I spend so much time thinking about startups? I'll tell\nyou why. Economically, a startup is best seen not as a way to get\nrich, but as a way to work faster. You have to make a living, and\na startup is a way to get that done quickly, instead of letting it\ndrag on through your whole life.\n[9]We take it for granted most of the time, but human life is fairly\nmiraculous. It is also palpably short. You're given this marvellous\nthing, and then poof, it's taken away. You can see why people\ninvent gods to explain it. But even to people who don't believe\nin gods, life commands respect. There are times in most of our\nlives when the days go by in a blur, and almost everyone has a\nsense, when this happens, of wasting something precious. As Ben\nFranklin said, if you love life, don't waste time, because time is\nwhat life is made of.So no, there's nothing particularly grand about making money. That's\nnot what makes startups worth the trouble. What's important about\nstartups is the speed. By compressing the dull but necessary task\nof making a living into the smallest possible time, you show respect\nfor life, and there is something grand about that.Notes[1]\nStartups can die from releasing something full of bugs, and not\nfixing them fast enough, but I don't know of any that died from\nreleasing something stable but minimal very early, then promptly\nimproving it.[2]\nI know this is why I haven't released Arc. The moment I do,\nI'll have people nagging me for features.[3]\nA web site is different from a book or movie or desktop application\nin this respect. Users judge a site not as a single snapshot, but\nas an animation with multiple frames. Of the two, I'd say the rate of\nimprovement is more important to users than where you currently\nare.[4]\nIt should not always tell this to users, however. For example,\nMySpace is basically a replacement mall for mallrats. But it was\nwiser for them, initially, to pretend that the site was about bands.[5]\nSimilarly, don't make users register to try your site. Maybe\nwhat you have is so valuable that visitors should gladly register\nto get at it. But they've been trained to expect the opposite.\nMost of the things they've tried on the web have sucked-- and\nprobably especially those that made them register.[6]\nVCs have rational reasons for behaving this way. They don't\nmake their money (if they make money) off their median investments.\nIn a typical fund, half the companies fail, most of the rest generate\nmediocre returns, and one or two \"make the fund\" by succeeding\nspectacularly. 
So if they miss just a few of the most promising\nopportunities, it could hose the whole fund.[7]\nThe attitude of a running back doesn't translate to soccer.\nThough it looks great when a forward dribbles past multiple defenders,\na player who persists in trying such things will do worse in the\nlong term than one who passes.[8]\nThe reason Y Combinator never negotiates valuations\nis that we're not professional negotiators, and don't want to turn\ninto them.[9]\nThere are two ways to do \nwork you love: (a) to make money, then work\non what you love, or (b) to get a job where you get paid to work on\nstuff you love. In practice the first phases of both\nconsist mostly of unedifying schleps, and in (b) the second phase is less\nsecure.Thanks to Sam Altman, Trevor Blackwell, Beau Hartshorne, Jessica \nLivingston, and Robert Morris for reading drafts of this."} {"title": "gba", "text": "April 2004To the popular press, \"hacker\" means someone who breaks\ninto computers. Among programmers it means a good programmer.\nBut the two meanings are connected. To programmers,\n\"hacker\" connotes mastery in the most literal sense: someone\nwho can make a computer do what he wants\u2014whether the computer\nwants to or not.To add to the confusion, the noun \"hack\" also has two senses. It can\nbe either a compliment or an insult. It's called a hack when\nyou do something in an ugly way. But when you do something\nso clever that you somehow beat the system, that's also\ncalled a hack. The word is used more often in the former than\nthe latter sense, probably because ugly solutions are more\ncommon than brilliant ones.Believe it or not, the two senses of \"hack\" are also\nconnected. Ugly and imaginative solutions have something in\ncommon: they both break the rules. And there is a gradual\ncontinuum between rule breaking that's merely ugly (using\nduct tape to attach something to your bike) and rule breaking\nthat is brilliantly imaginative (discarding Euclidean space).Hacking predates computers. When he\nwas working on the Manhattan Project, Richard Feynman used to\namuse himself by breaking into safes containing secret documents.\nThis tradition continues today.\nWhen we were in grad school, a hacker friend of mine who spent too much\ntime around MIT had\nhis own lock picking kit.\n(He now runs a hedge fund, a not unrelated enterprise.)It is sometimes hard to explain to authorities why one would\nwant to do such things.\nAnother friend of mine once got in trouble with the government for\nbreaking into computers. This had only recently been declared\na crime, and the FBI found that their usual investigative\ntechnique didn't work. Police investigation apparently begins with\na motive. The usual motives are few: drugs, money, sex,\nrevenge. Intellectual curiosity was not one of the motives on\nthe FBI's list. Indeed, the whole concept seemed foreign to\nthem.Those in authority tend to be annoyed by hackers'\ngeneral attitude of disobedience. But that disobedience is\na byproduct of the qualities that make them good programmers.\nThey may laugh at the CEO when he talks in generic corporate\nnewspeech, but they also laugh at someone who tells them\na certain problem can't be solved.\nSuppress one, and you suppress the other.This attitude is sometimes affected. 
Sometimes young programmers\nnotice the eccentricities of eminent hackers and decide to\nadopt some of their own in order to seem smarter.\nThe fake version is not merely\nannoying; the prickly attitude of these posers\ncan actually slow the process of innovation.But even factoring in their annoying eccentricities,\nthe disobedient attitude of hackers is a net win. I wish its\nadvantages were better understood.For example, I suspect people in Hollywood are\nsimply mystified by\nhackers' attitudes toward copyrights. They are a perennial\ntopic of heated discussion on Slashdot.\nBut why should people who program computers\nbe so concerned about copyrights, of all things?Partly because some companies use mechanisms to prevent\ncopying. Show any hacker a lock and his first thought is\nhow to pick it. But there is a deeper reason that\nhackers are alarmed by measures like copyrights and patents.\nThey see increasingly aggressive measures to protect\n\"intellectual property\"\nas a threat to the intellectual\nfreedom they need to do their job.\nAnd they are right.It is by poking about inside current technology that\nhackers get ideas for the next generation. No thanks,\nintellectual homeowners may say, we don't need any\noutside help. But they're wrong.\nThe next generation of computer technology has\noften\u2014perhaps more often than not\u2014been developed by outsiders.In 1977 there was no doubt some group within IBM developing\nwhat they expected to be\nthe next generation of business computer. They were mistaken.\nThe next generation of business computer was\nbeing developed on entirely different lines by two long-haired\nguys called Steve in a garage in Los Altos. At about the\nsame time, the powers that be\nwere cooperating to develop the\nofficial next generation operating system, Multics.\nBut two guys who thought Multics excessively complex went off\nand wrote their own. They gave it a name that\nwas a joking reference to Multics: Unix.The latest intellectual property laws impose\nunprecedented restrictions on the sort of poking around that\nleads to new ideas. In the past, a competitor might use patents\nto prevent you from selling a copy of something they\nmade, but they couldn't prevent you from\ntaking one apart to see how it worked. The latest\nlaws make this a crime. How are we\nto develop new technology if we can't study current\ntechnology to figure out how to improve it?Ironically, hackers have brought this on themselves.\nComputers are responsible for the problem. The control systems\ninside machines used to be physical: gears and levers and cams.\nIncreasingly, the brains (and thus the value) of products is\nin software. And by this I mean software in the general sense:\ni.e. data. A song on an LP is physically stamped into the\nplastic. A song on an iPod's disk is merely stored on it.Data is by definition easy to copy. And the Internet\nmakes copies easy to distribute. So it is no wonder\ncompanies are afraid. But, as so often happens, fear has\nclouded their judgement. The government has responded\nwith draconian laws to protect intellectual property.\nThey probably mean well. But\nthey may not realize that such laws will do more harm\nthan good.Why are programmers so violently opposed to these laws?\nIf I were a legislator, I'd be interested in this\nmystery\u2014for the same reason that, if I were a farmer and suddenly\nheard a lot of squawking coming from my hen house one night,\nI'd want to go out and investigate. 
Hackers are not stupid,\nand unanimity is very rare in this world.\nSo if they're all squawking, \nperhaps there is something amiss.Could it be that such laws, though intended to protect America,\nwill actually harm it? Think about it. There is something\nvery American about Feynman breaking into safes during\nthe Manhattan Project. It's hard to imagine the authorities\nhaving a sense of humor about such things over\nin Germany at that time. Maybe it's not a coincidence.Hackers are unruly. That is the essence of hacking. And it\nis also the essence of Americanness. It is no accident\nthat Silicon Valley\nis in America, and not France, or Germany,\nor England, or Japan. In those countries, people color inside\nthe lines.I lived for a while in Florence. But after I'd been there\na few months I realized that what I'd been unconsciously hoping\nto find there was back in the place I'd just left.\nThe reason Florence is famous is that in 1450, it was New York.\nIn 1450 it was filled with the kind of turbulent and ambitious\npeople you find now in America. (So I went back to America.)It is greatly to America's advantage that it is\na congenial atmosphere for the right sort of unruliness\u2014that\nit is a home not just for the smart, but for smart-alecks.\nAnd hackers are invariably smart-alecks. If we had a national\nholiday, it would be April 1st. It says a great deal about\nour work that we use the same word for a brilliant or a\nhorribly cheesy solution. When we cook one up we're not\nalways 100% sure which kind it is. But as long as it has\nthe right sort of wrongness, that's a promising sign.\nIt's odd that people\nthink of programming as precise and methodical. Computers\nare precise and methodical. Hacking is something you do\nwith a gleeful laugh.In our world some of the most characteristic solutions\nare not far removed from practical\njokes. IBM was no doubt rather surprised by the consequences\nof the licensing deal for DOS, just as the hypothetical\n\"adversary\" must be when Michael Rabin solves a problem by\nredefining it as one that's easier to solve.Smart-alecks have to develop a keen sense of how much they\ncan get away with. And lately hackers \nhave sensed a change\nin the atmosphere.\nLately hackerliness seems rather frowned upon.To hackers the recent contraction in civil liberties seems\nespecially ominous. That must also mystify outsiders. \nWhy should we care especially about civil\nliberties? Why programmers, more than\ndentists or salesmen or landscapers?Let me put the case in terms a government official would appreciate.\nCivil liberties are not just an ornament, or a quaint\nAmerican tradition. Civil liberties make countries rich.\nIf you made a graph of\nGNP per capita vs. civil liberties, you'd notice a definite\ntrend. Could civil liberties really be a cause, rather\nthan just an effect? I think so. I think a society in which\npeople can do and say what they want will also tend to\nbe one in which the most efficient solutions win, rather than\nthose sponsored by the most influential people.\nAuthoritarian countries become corrupt;\ncorrupt countries become poor; and poor countries are weak. \nIt seems to me there is\na Laffer curve for government power, just as for\ntax revenues. At least, it seems likely enough that it\nwould be stupid to try the experiment and find out. Unlike\nhigh tax rates, you can't repeal totalitarianism if it\nturns out to be a mistake.This is why hackers worry. 
The government spying on people doesn't\nliterally make programmers write worse code. It just leads\neventually to a world in which bad ideas win. And because\nthis is so important to hackers, they're especially sensitive\nto it. They can sense totalitarianism approaching from a\ndistance, as animals can sense an approaching \nthunderstorm.It would be ironic if, as hackers fear, recent measures\nintended to protect national security and intellectual property\nturned out to be a missile aimed right at what makes \nAmerica successful. But it would not be the first time that\nmeasures taken in an atmosphere of panic had\nthe opposite of the intended effect.There is such a thing as Americanness.\nThere's nothing like living abroad to teach you that. \nAnd if you want to know whether something will nurture or squash\nthis quality, it would be hard to find a better focus\ngroup than hackers, because they come closest of any group\nI know to embodying it. Closer, probably, than\nthe men running our government,\nwho for all their talk of patriotism\nremind me more of Richelieu or Mazarin\nthan Thomas Jefferson or George Washington.When you read what the founding fathers had to say for\nthemselves, they sound more like hackers.\n\"The spirit of resistance to government,\"\nJefferson wrote, \"is so valuable on certain occasions, that I wish\nit always to be kept alive.\"Imagine an American president saying that today.\nLike the remarks of an outspoken old grandmother, the sayings of\nthe founding fathers have embarrassed generations of\ntheir less confident successors. They remind us where we come from.\nThey remind us that it is the people who break rules that are\nthe source of America's wealth and power.Those in a position to impose rules naturally want them to be\nobeyed. But be careful what you ask for. You might get it.Thanks to Ken Anderson, Trevor Blackwell, Daniel Giffin, \nSarah Harlin, Shiro Kawai, Jessica Livingston, Matz, \nJackie McDonough, Robert Morris, Eric Raymond, Guido van Rossum,\nDavid Weinberger, and\nSteven Wolfram for reading drafts of this essay.\n(The image shows Steves Jobs and Wozniak \nwith a \"blue box.\"\nPhoto by Margret Wozniak. Reproduced by permission of Steve\nWozniak.)"} {"title": "island", "text": "July 2006I've discovered a handy test for figuring out what you're addicted\nto. Imagine you were going to spend the weekend at a friend's house\non a little island off the coast of Maine. There are no shops on\nthe island and you won't be able to leave while you're there. Also,\nyou've never been to this house before, so you can't assume it will\nhave more than any house might.What, besides clothes and toiletries, do you make a point of packing?\nThat's what you're addicted to. For example, if you find yourself\npacking a bottle of vodka (just in case), you may want to stop and\nthink about that.For me the list is four things: books, earplugs, a notebook, and a\npen.There are other things I might bring if I thought of it, like music,\nor tea, but I can live without them. I'm not so addicted to caffeine\nthat I wouldn't risk the house not having any tea, just for a\nweekend.Quiet is another matter. I realize it seems a bit eccentric to\ntake earplugs on a trip to an island off the coast of Maine. If\nanywhere should be quiet, that should. But what if the person in\nthe next room snored? What if there was a kid playing basketball?\n(Thump, thump, thump... thump.) Why risk it? Earplugs are small.Sometimes I can think with noise. 
If I already have momentum on\nsome project, I can work in noisy places. I can edit an essay or\ndebug code in an airport. But airports are not so bad: most of the\nnoise is whitish. I couldn't work with the sound of a sitcom coming\nthrough the wall, or a car in the street playing thump-thump music.And of course there's another kind of thinking, when you're starting\nsomething new, that requires complete quiet. You never\nknow when this will strike. It's just as well to carry plugs.The notebook and pen are professional equipment, as it were. Though\nactually there is something druglike about them, in the sense that\ntheir main purpose is to make me feel better. I hardly ever go\nback and read stuff I write down in notebooks. It's just that if\nI can't write things down, worrying about remembering one idea gets\nin the way of having the next. Pen and paper wick ideas.The best notebooks I've found are made by a company called Miquelrius.\nI use their smallest size, which is about 2.5 x 4 in.\nThe secret to writing on such\nnarrow pages is to break words only when you run out of space, like\na Latin inscription. I use the cheapest plastic Bic ballpoints,\npartly because their gluey ink doesn't seep through pages, and\npartly so I don't worry about losing them.I only started carrying a notebook about three years ago. Before\nthat I used whatever scraps of paper I could find. But the problem\nwith scraps of paper is that they're not ordered. In a notebook\nyou can guess what a scribble means by looking at the pages\naround it. In the scrap era I was constantly finding notes I'd\nwritten years before that might say something I needed to remember,\nif I could only figure out what.As for books, I know the house would probably have something to\nread. On the average trip I bring four books and only read one of\nthem, because I find new books to read en route. Really bringing\nbooks is insurance.I realize this dependence on books is not entirely good\u2014that what\nI need them for is distraction. The books I bring on trips are\noften quite virtuous, the sort of stuff that might be assigned\nreading in a college class. But I know my motives aren't virtuous.\nI bring books because if the world gets boring I need to be able\nto slip into another distilled by some writer. It's like eating\njam when you know you should be eating fruit.There is a point where I'll do without books. I was walking in\nsome steep mountains once, and decided I'd rather just think, if I\nwas bored, rather than carry a single unnecessary ounce. It wasn't\nso bad. I found I could entertain myself by having ideas instead\nof reading other people's. If you stop eating jam, fruit starts\nto taste better.So maybe I'll try not bringing books on some future trip. They're\ngoing to have to pry the plugs out of my cold, dead ears, however."} {"title": "vcsqueeze", "text": "November 2005In the next few years, venture capital funds will find themselves\nsqueezed from four directions. They're already stuck with a seller's\nmarket, because of the huge amounts they raised at the end of the\nBubble and still haven't invested. This by itself is not the end\nof the world. In fact, it's just a more extreme version of the\nnorm\nin the VC business: too much money chasing too few deals.Unfortunately, those few deals now want less and less money, because\nit's getting so cheap to start a startup. 
The four causes: open\nsource, which makes software free; Moore's law, which makes hardware\ngeometrically closer to free; the Web, which makes promotion free\nif you're good; and better languages, which make development a lot\ncheaper.When we started our startup in 1995, the first three were our biggest\nexpenses. We had to pay $5000 for the Netscape Commerce Server,\nthe only software that then supported secure http connections. We\npaid $3000 for a server with a 90 MHz processor and 32 meg of\nmemory. And we paid a PR firm about $30,000 to promote our launch.Now you could get all three for nothing. You can get the software\nfor free; people throw away computers more powerful than our first\nserver; and if you make something good you can generate ten times\nas much traffic by word of mouth online than our first PR firm got\nthrough the print media.And of course another big change for the average startup is that\nprogramming languages have improved-- or rather, the median language has. At most startups ten years\nago, software development meant ten programmers writing code in\nC++. Now the same work might be done by one or two using Python\nor Ruby.During the Bubble, a lot of people predicted that startups would\noutsource their development to India. I think a better model for\nthe future is David Heinemeier Hansson, who outsourced his development\nto a more powerful language instead. A lot of well-known applications\nare now, like BaseCamp, written by just one programmer. And one\nguy is more than 10x cheaper than ten, because (a) he won't waste\nany time in meetings, and (b) since he's probably a founder, he can\npay himself nothing.Because starting a startup is so cheap, venture capitalists now\noften want to give startups more money than the startups want to\ntake. VCs like to invest several million at a time. But as one\nVC told me after a startup he funded would only take about half a\nmillion, \"I don't know what we're going to do. Maybe we'll just\nhave to give some of it back.\" Meaning give some of the fund back\nto the institutional investors who supplied it, because it wasn't\ngoing to be possible to invest it all.Into this already bad situation comes the third problem: Sarbanes-Oxley.\nSarbanes-Oxley is a law, passed after the Bubble, that drastically\nincreases the regulatory burden on public companies. And in addition\nto the cost of compliance, which is at least two million dollars a\nyear, the law introduces frightening legal exposure for corporate\nofficers. An experienced CFO I know said flatly: \"I would not\nwant to be CFO of a public company now.\"You might think that responsible corporate governance is an area\nwhere you can't go too far. But you can go too far in any law, and\nthis remark convinced me that Sarbanes-Oxley must have. This CFO\nis both the smartest and the most upstanding money guy I know. If\nSarbanes-Oxley deters people like him from being CFOs of public \ncompanies, that's proof enough that it's broken.Largely because of Sarbanes-Oxley, few startups go public now. For\nall practical purposes, succeeding now equals getting bought. Which\nmeans VCs are now in the business of finding promising little 2-3\nman startups and pumping them up into companies that cost $100\nmillion to acquire. They didn't mean to be in this business; it's\njust what their business has evolved into.Hence the fourth problem: the acquirers have begun to realize they\ncan buy wholesale. Why should they wait for VCs to make the startups\nthey want more expensive? 
Most of what the VCs add, acquirers don't\nwant anyway. The acquirers already have brand recognition and HR\ndepartments. What they really want is the software and the developers,\nand that's what the startup is in the early phase: concentrated\nsoftware and developers.Google, typically, seems to have been the first to figure this out.\n\"Bring us your startups early,\" said Google's speaker at the Startup School. They're quite\nexplicit about it: they like to acquire startups at just the point\nwhere they would do a Series A round. (The Series A round is the\nfirst round of real VC funding; it usually happens in the first\nyear.) It is a brilliant strategy, and one that other big technology\ncompanies will no doubt try to duplicate. Unless they want to have \nstill more of their lunch eaten by Google.Of course, Google has an advantage in buying startups: a lot of the\npeople there are rich, or expect to be when their options vest.\nOrdinary employees find it very hard to recommend an acquisition;\nit's just too annoying to see a bunch of twenty year olds get rich\nwhen you're still working for salary. Even if it's the right thing \nfor your company to do.The Solution(s)Bad as things look now, there is a way for VCs to save themselves.\nThey need to do two things, one of which won't surprise them, and \nanother that will seem anathema.Let's start with the obvious one: lobby to get Sarbanes-Oxley \nloosened. This law was created to prevent future Enrons, not to\ndestroy the IPO market. Since the IPO market was practically dead\nwhen it passed, few saw what bad effects it would have. But now \nthat technology has recovered from the last bust, we can see clearly\nwhat a bottleneck Sarbanes-Oxley has become.Startups are fragile plants\u2014seedlings, in fact. These seedlings\nare worth protecting, because they grow into the trees of the\neconomy. Much of the economy's growth is their growth. I think\nmost politicians realize that. But they don't realize just how \nfragile startups are, and how easily they can become collateral\ndamage of laws meant to fix some other problem.Still more dangerously, when you destroy startups, they make very\nlittle noise. If you step on the toes of the coal industry, you'll\nhear about it. But if you inadvertently squash the startup industry,\nall that happens is that the founders of the next Google stay in \ngrad school instead of starting a company.My second suggestion will seem shocking to VCs: let founders cash \nout partially in the Series A round. At the moment, when VCs invest\nin a startup, all the stock they get is newly issued and all the \nmoney goes to the company. They could buy some stock directly from\nthe founders as well.Most VCs have an almost religious rule against doing this. They\ndon't want founders to get a penny till the company is sold or goes\npublic. VCs are obsessed with control, and they worry that they'll\nhave less leverage over the founders if the founders have any money.This is a dumb plan. In fact, letting the founders sell a little stock\nearly would generally be better for the company, because it would\ncause the founders' attitudes toward risk to be aligned with the\nVCs'.
As things currently work, their attitudes toward risk tend\nto be diametrically opposed: the founders, who have nothing, would\nprefer a 100% chance of $1 million to a 20% chance of $10 million,\nwhile the VCs can afford to be \"rational\" and prefer the latter.Whatever they say, the reason founders are selling their companies\nearly instead of doing Series A rounds is that they get paid up\nfront. That first million is just worth so much more than the\nsubsequent ones. If founders could sell a little stock early,\nthey'd be happy to take VC money and bet the rest on a bigger\noutcome.So why not let the founders have that first million, or at least\nhalf a million? The VCs would get the same number of shares for the \nmoney. So what if some of the money would go to the \nfounders instead of the company?Some VCs will say this is\nunthinkable\u2014that they want all their money to be put to work\ngrowing the company. But the fact is, the huge size of current VC\ninvestments is dictated by the structure\nof VC funds, not the needs of startups. Often as not these large \ninvestments go to work destroying the company rather than growing\nit.The angel investors who funded our startup let the founders sell\nsome stock directly to them, and it was a good deal for everyone. \nThe angels made a huge return on that investment, so they're happy.\nAnd for us founders it blunted the terrifying all-or-nothingness\nof a startup, which in its raw form is more a distraction than a\nmotivator.If VCs are frightened at the idea of letting founders partially\ncash out, let me tell them something still more frightening: you\nare now competing directly with Google.\nThanks to Trevor Blackwell, Sarah Harlin, Jessica\nLivingston, and Robert Morris for reading drafts of this.
If we look at how people use the words \"wise\" and\n\"smart,\" what they seem to mean is different shapes of performance.Curve\"Wise\" and \"smart\" are both ways of saying someone knows what to\ndo. The difference is that \"wise\" means one has a high average\noutcome across all situations, and \"smart\" means one does spectacularly\nwell in a few. That is, if you had a graph in which the x axis\nrepresented situations and the y axis the outcome, the graph of the\nwise person would be high overall, and the graph of the smart person\nwould have high peaks.The distinction is similar to the rule that one should judge talent\nat its best and character at its worst. Except you judge intelligence\nat its best, and wisdom by its average. That's how the two are\nrelated: they're the two different senses in which the same curve\ncan be high.So a wise person knows what to do in most situations, while a smart\nperson knows what to do in situations where few others could. We\nneed to add one more qualification: we should ignore cases where\nsomeone knows what to do because they have inside information. \n[3]\nBut aside from that, I don't think we can get much more specific\nwithout starting to be mistaken.Nor do we need to. Simple as it is, this explanation predicts, or\nat least accords with, both of the conventional stories about the\ndistinction between wisdom and intelligence. Human problems are\nthe most common type, so being good at solving those is key in\nachieving a high average outcome. And it seems natural that a\nhigh average outcome depends mostly on experience, but that dramatic\npeaks can only be achieved by people with certain rare, innate\nqualities; nearly anyone can learn to be a good swimmer, but to be\nan Olympic swimmer you need a certain body type.This explanation also suggests why wisdom is such an elusive concept:\nthere's no such thing. \"Wise\" means something\u2014that one is\non average good at making the right choice. But giving the name\n\"wisdom\" to the supposed quality that enables one to do that doesn't\nmean such a thing exists. To the extent \"wisdom\" means anything,\nit refers to a grab-bag of qualities as various as self-discipline,\nexperience, and empathy. \n[4]Likewise, though \"intelligent\" means something, we're asking for\ntrouble if we insist on looking for a single thing called \"intelligence.\"\nAnd whatever its components, they're not all innate. We use the\nword \"intelligent\" as an indication of ability: a smart person can\ngrasp things few others could. It does seem likely there's some\ninborn predisposition to intelligence (and wisdom too), but this\npredisposition is not itself intelligence.One reason we tend to think of intelligence as inborn is that people\ntrying to measure it have concentrated on the aspects of it that\nare most measurable. A quality that's inborn will obviously be\nmore convenient to work with than one that's influenced by experience,\nand thus might vary in the course of a study. The problem comes\nwhen we drag the word \"intelligence\" over onto what they're measuring.\nIf they're measuring something inborn, they can't be measuring\nintelligence. Three year olds aren't smart. When we describe one\nas smart, it's shorthand for \"smarter than other three year olds.\"SplitPerhaps it's a technicality to point out that a predisposition to\nintelligence is not the same as intelligence. 
But it's an important\ntechnicality, because it reminds us that we can become smarter,\njust as we can become wiser.The alarming thing is that we may have to choose between the two.If wisdom and intelligence are the average and peaks of the same\ncurve, then they converge as the number of points on the curve\ndecreases. If there's just one point, they're identical: the average\nand maximum are the same. But as the number of points increases,\nwisdom and intelligence diverge. And historically the number of\npoints on the curve seems to have been increasing: our ability is\ntested in an ever wider range of situations.In the time of Confucius and Socrates, people seem to have regarded\nwisdom, learning, and intelligence as more closely related than we\ndo. Distinguishing between \"wise\" and \"smart\" is a modern habit.\n[5]\nAnd the reason we do is that they've been diverging. As knowledge\ngets more specialized, there are more points on the curve, and the\ndistinction between the spikes and the average becomes sharper,\nlike a digital image rendered with more pixels.One consequence is that some old recipes may have become obsolete.\nAt the very least we have to go back and figure out if they were\nreally recipes for wisdom or intelligence. But the really striking\nchange, as intelligence and wisdom drift apart, is that we may have\nto decide which we prefer. We may not be able to optimize for both\nsimultaneously.Society seems to have voted for intelligence. We no longer admire\nthe sage\u2014not the way people did two thousand years ago. Now\nwe admire the genius. Because in fact the distinction we began\nwith has a rather brutal converse: just as you can be smart without\nbeing very wise, you can be wise without being very smart. That\ndoesn't sound especially admirable. That gets you James Bond, who\nknows what to do in a lot of situations, but has to rely on Q for\nthe ones involving math.Intelligence and wisdom are obviously not mutually exclusive. In\nfact, a high average may help support high peaks. But there are\nreasons to believe that at some point you have to choose between\nthem. One is the example of very smart people, who are so often\nunwise that in popular culture this now seems to be regarded as the\nrule rather than the exception. Perhaps the absent-minded professor\nis wise in his way, or wiser than he seems, but he's not wise in\nthe way Confucius or Socrates wanted people to be. \n[6]NewFor both Confucius and Socrates, wisdom, virtue, and happiness were\nnecessarily related. The wise man was someone who knew what the\nright choice was and always made it; to be the right choice, it had\nto be morally right; he was therefore always happy, knowing he'd\ndone the best he could. I can't think of many ancient philosophers\nwho would have disagreed with that, so far as it goes.\"The superior man is always happy; the small man sad,\" said Confucius.\n[7]Whereas a few years ago I read an interview with a mathematician\nwho said that most nights he went to bed discontented, feeling he\nhadn't made enough progress. \n[8]\nThe Chinese and Greek words we\ntranslate as \"happy\" didn't mean exactly what we do by it, but\nthere's enough overlap that this remark contradicts them.Is the mathematician a small man because he's discontented? No;\nhe's just doing a kind of work that wasn't very common in Confucius's\nday.Human knowledge seems to grow fractally. 
Time after time, something\nthat seemed a small and uninteresting area\u2014experimental error,\neven\u2014turns out, when examined up close, to have as much in\nit as all knowledge up to that point. Several of the fractal buds\nthat have exploded since ancient times involve inventing and\ndiscovering new things. Math, for example, used to be something a\nhandful of people did part-time. Now it's the career of thousands.\nAnd in work that involves making new things, some old rules don't\napply.Recently I've spent some time advising people, and there I find the\nancient rule still works: try to understand the situation as well\nas you can, give the best advice you can based on your experience,\nand then don't worry about it, knowing you did all you could. But\nI don't have anything like this serenity when I'm writing an essay.\nThen I'm worried. What if I run out of ideas? And when I'm writing,\nfour nights out of five I go to bed discontented, feeling I didn't\nget enough done.Advising people and writing are fundamentally different types of\nwork. When people come to you with a problem and you have to figure\nout the right thing to do, you don't (usually) have to invent\nanything. You just weigh the alternatives and try to judge which\nis the prudent choice. But prudence can't tell me what sentence\nto write next. The search space is too big.Someone like a judge or a military officer can in much of his work\nbe guided by duty, but duty is no guide in making things. Makers\ndepend on something more precarious: inspiration. And like most\npeople who lead a precarious existence, they tend to be worried,\nnot contented. In that respect they're more like the small man of\nConfucius's day, always one bad harvest (or ruler) away from\nstarvation. Except instead of being at the mercy of weather and\nofficials, they're at the mercy of their own imagination.LimitsTo me it was a relief just to realize it might be ok to be discontented.\nThe idea that a successful person should be happy has thousands of\nyears of momentum behind it. If I was any good, why didn't I have\nthe easy confidence winners are supposed to have? But that, I now\nbelieve, is like a runner asking \"If I'm such a good athlete, why\ndo I feel so tired?\" Good runners still get tired; they just get\ntired at higher speeds.People whose work is to invent or discover things are in the same\nposition as the runner. There's no way for them to do the best\nthey can, because there's no limit to what they could do. The\nclosest you can come is to compare yourself to other people. But\nthe better you do, the less this matters. An undergrad who gets\nsomething published feels like a star. But for someone at the top\nof the field, what's the test of doing well? Runners can at least\ncompare themselves to others doing exactly the same thing; if you\nwin an Olympic gold medal, you can be fairly content, even if you\nthink you could have run a bit faster. But what is a novelist to\ndo?Whereas if you're doing the kind of work in which problems are\npresented to you and you have to choose between several alternatives,\nthere's an upper bound on your performance: choosing the best every\ntime. In ancient societies, nearly all work seems to have been of\nthis type. The peasant had to decide whether a garment was worth\nmending, and the king whether or not to invade his neighbor, but\nneither was expected to invent anything. In principle they could\nhave; the king could have invented firearms, then invaded his\nneighbor. 
But in practice innovations were so rare that they weren't\nexpected of you, any more than goalkeepers are expected to score\ngoals. \n[9]\nIn practice, it seemed as if there was a correct decision\nin every situation, and if you made it you'd done your job perfectly,\njust as a goalkeeper who prevents the other team from scoring is\nconsidered to have played a perfect game.In this world, wisdom seemed paramount. \n[10]\nEven now, most people\ndo work in which problems are put before them and they have to\nchoose the best alternative. But as knowledge has grown more\nspecialized, there are more and more types of work in which people\nhave to make up new things, and in which performance is therefore\nunbounded. Intelligence has become increasingly important relative\nto wisdom because there is more room for spikes.RecipesAnother sign we may have to choose between intelligence and wisdom\nis how different their recipes are. Wisdom seems to come largely\nfrom curing childish qualities, and intelligence largely from\ncultivating them.Recipes for wisdom, particularly ancient ones, tend to have a\nremedial character. To achieve wisdom one must cut away all the\ndebris that fills one's head on emergence from childhood, leaving\nonly the important stuff. Both self-control and experience have\nthis effect: to eliminate the random biases that come from your own\nnature and from the circumstances of your upbringing respectively.\nThat's not all wisdom is, but it's a large part of it. Much of\nwhat's in the sage's head is also in the head of every twelve year\nold. The difference is that in the head of the twelve year old\nit's mixed together with a lot of random junk.The path to intelligence seems to be through working on hard problems.\nYou develop intelligence as you might develop muscles, through\nexercise. But there can't be too much compulsion here. No amount\nof discipline can replace genuine curiosity. So cultivating\nintelligence seems to be a matter of identifying some bias in one's\ncharacter\u2014some tendency to be interested in certain types of\nthings\u2014and nurturing it. Instead of obliterating your\nidiosyncrasies in an effort to make yourself a neutral vessel for\nthe truth, you select one and try to grow it from a seedling into\na tree.The wise are all much alike in their wisdom, but very smart people\ntend to be smart in distinctive ways.Most of our educational traditions aim at wisdom. So perhaps one\nreason schools work badly is that they're trying to make intelligence\nusing recipes for wisdom. Most recipes for wisdom have an element\nof subjection. At the very least, you're supposed to do what the\nteacher says. The more extreme recipes aim to break down your\nindividuality the way basic training does. But that's not the route\nto intelligence. Whereas wisdom comes through humility, it may\nactually help, in cultivating intelligence, to have a mistakenly\nhigh opinion of your abilities, because that encourages you to keep\nworking. Ideally till you realize how mistaken you were.(The reason it's hard to learn new skills late in life is not just\nthat one's brain is less malleable. Another probably even worse\nobstacle is that one has higher standards.)I realize we're on dangerous ground here. I'm not proposing the\nprimary goal of education should be to increase students' \"self-esteem.\"\nThat just breeds laziness. And in any case, it doesn't really fool\nthe kids, not the smart ones. 
They can tell at a young age that a\ncontest where everyone wins is a fraud.A teacher has to walk a narrow path: you want to encourage kids to\ncome up with things on their own, but you can't simply applaud\neverything they produce. You have to be a good audience: appreciative,\nbut not too easily impressed. And that's a lot of work. You have\nto have a good enough grasp of kids' capacities at different ages\nto know when to be surprised.That's the opposite of traditional recipes for education. Traditionally\nthe student is the audience, not the teacher; the student's job is\nnot to invent, but to absorb some prescribed body of material. (The\nuse of the term \"recitation\" for sections in some colleges is a\nfossil of this.) The problem with these old traditions is that\nthey're too much influenced by recipes for wisdom.DifferentI deliberately gave this essay a provocative title; of course it's\nworth being wise. But I think it's important to understand the\nrelationship between intelligence and wisdom, and particularly what\nseems to be the growing gap between them. That way we can avoid\napplying rules and standards to intelligence that are really meant\nfor wisdom. These two senses of \"knowing what to do\" are more\ndifferent than most people realize. The path to wisdom is through\ndiscipline, and the path to intelligence through carefully selected\nself-indulgence. Wisdom is universal, and intelligence idiosyncratic.\nAnd while wisdom yields calmness, intelligence much of the time\nleads to discontentment.That's particularly worth remembering. A physicist friend recently\ntold me half his department was on Prozac. Perhaps if we acknowledge\nthat some amount of frustration is inevitable in certain kinds\nof work, we can mitigate its effects. Perhaps we can box it up and\nput it away some of the time, instead of letting it flow together\nwith everyday sadness to produce what seems an alarmingly large\npool. At the very least, we can avoid being discontented about\nbeing discontented.If you feel exhausted, it's not necessarily because there's something\nwrong with you. Maybe you're just running fast.Notes[1]\nGauss was supposedly asked this when he was 10. Instead of\nlaboriously adding together the numbers like the other students,\nhe saw that they consisted of 50 pairs that each summed to 101 (100\n+ 1, 99 + 2, etc), and that he could just multiply 101 by 50 to get\nthe answer, 5050.[2]\nA variant is that intelligence is the ability to solve problems,\nand wisdom the judgement to know how to use those solutions. But\nwhile this is certainly an important relationship between wisdom\nand intelligence, it's not the distinction between them. Wisdom\nis useful in solving problems too, and intelligence can help in\ndeciding what to do with the solutions.[3]\nIn judging both intelligence and wisdom we have to factor out\nsome knowledge. People who know the combination of a safe will be\nbetter at opening it than people who don't, but no one would say\nthat was a test of intelligence or wisdom.But knowledge overlaps with wisdom and probably also intelligence.\nA knowledge of human nature is certainly part of wisdom. So where\ndo we draw the line?Perhaps the solution is to discount knowledge that at some point\nhas a sharp drop in utility. For example, understanding French\nwill help you in a large number of situations, but its value drops\nsharply as soon as no one else involved knows French. 
Whereas the\nvalue of understanding vanity would decline more gradually.The knowledge whose utility drops sharply is the kind that has\nlittle relation to other knowledge. This includes mere conventions,\nlike languages and safe combinations, and also what we'd call\n\"random\" facts, like movie stars' birthdays, or how to distinguish\n1956 from 1957 Studebakers.[4]\nPeople seeking some single thing called \"wisdom\" have been\nfooled by grammar. Wisdom is just knowing the right thing to do,\nand there are a hundred and one different qualities that help in\nthat. Some, like selflessness, might come from meditating in an\nempty room, and others, like a knowledge of human nature, might\ncome from going to drunken parties.Perhaps realizing this will help dispel the cloud of semi-sacred\nmystery that surrounds wisdom in so many people's eyes. The mystery\ncomes mostly from looking for something that doesn't exist. And\nthe reason there have historically been so many different schools\nof thought about how to achieve wisdom is that they've focused on\ndifferent components of it.When I use the word \"wisdom\" in this essay, I mean no more than\nwhatever collection of qualities helps people make the right choice\nin a wide variety of situations.[5]\nEven in English, our sense of the word \"intelligence\" is\nsurprisingly recent. Predecessors like \"understanding\" seem to\nhave had a broader meaning.[6]\nThere is of course some uncertainty about how closely the remarks\nattributed to Confucius and Socrates resemble their actual opinions.\nI'm using these names as we use the name \"Homer,\" to mean the\nhypothetical people who said the things attributed to them.[7]\nAnalects VII:36, Fung trans.Some translators use \"calm\" instead of \"happy.\" One source of\ndifficulty here is that present-day English speakers have a different\nidea of happiness from many older societies. Every language probably\nhas a word meaning \"how one feels when things are going well,\" but\ndifferent cultures react differently when things go well. We react\nlike children, with smiles and laughter. But in a more reserved\nsociety, or in one where life was tougher, the reaction might be a\nquiet contentment.[8]\nIt may have been Andrew Wiles, but I'm not sure. If anyone\nremembers such an interview, I'd appreciate hearing from you.[9]\nConfucius claimed proudly that he had never invented\nanything\u2014that he had simply passed on an accurate account of\nancient traditions. [Analects VII:1] It's hard for us now to\nappreciate how important a duty it must have been in preliterate\nsocieties to remember and pass on the group's accumulated knowledge.\nEven in Confucius's time it still seems to have been the first duty\nof the scholar.[10]\nThe bias toward wisdom in ancient philosophy may be exaggerated\nby the fact that, in both Greece and China, many of the first\nphilosophers (including Confucius and Plato) saw themselves as\nteachers of administrators, and so thought disproportionately about\nsuch matters. The few people who did invent things, like storytellers,\nmust have seemed an outlying data point that could be ignored.Thanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston,\nand Robert Morris for reading drafts of this."} {"title": "worked", "text": "February 2021Before college the two main things I worked on, outside of school,\nwere writing and programming. I didn't write essays. I wrote what\nbeginning writers were supposed to write then, and probably still\nare: short stories. My stories were awful. 
They had hardly any plot,\njust characters with strong feelings, which I imagined made them\ndeep.The first programs I tried writing were on the IBM 1401 that our\nschool district used for what was then called \"data processing.\"\nThis was in 9th grade, so I was 13 or 14. The school district's\n1401 happened to be in the basement of our junior high school, and\nmy friend Rich Draves and I got permission to use it. It was like\na mini Bond villain's lair down there, with all these alien-looking\nmachines \u2014 CPU, disk drives, printer, card reader \u2014 sitting up\non a raised floor under bright fluorescent lights.The language we used was an early version of Fortran. You had to\ntype programs on punch cards, then stack them in the card reader\nand press a button to load the program into memory and run it. The\nresult would ordinarily be to print something on the spectacularly\nloud printer.I was puzzled by the 1401. I couldn't figure out what to do with\nit. And in retrospect there's not much I could have done with it.\nThe only form of input to programs was data stored on punched cards,\nand I didn't have any data stored on punched cards. The only other\noption was to do things that didn't rely on any input, like calculate\napproximations of pi, but I didn't know enough math to do anything\ninteresting of that type. So I'm not surprised I can't remember any\nprograms I wrote, because they can't have done much. My clearest\nmemory is of the moment I learned it was possible for programs not\nto terminate, when one of mine didn't. On a machine without\ntime-sharing, this was a social as well as a technical error, as\nthe data center manager's expression made clear.With microcomputers, everything changed. Now you could have a\ncomputer sitting right in front of you, on a desk, that could respond\nto your keystrokes as it was running instead of just churning through\na stack of punch cards and then stopping. \n[1]The first of my friends to get a microcomputer built it himself.\nIt was sold as a kit by Heathkit. I remember vividly how impressed\nand envious I felt watching him sitting in front of it, typing\nprograms right into the computer.Computers were expensive in those days and it took me years of\nnagging before I convinced my father to buy one, a TRS-80, in about\n1980. The gold standard then was the Apple II, but a TRS-80 was\ngood enough. This was when I really started programming. I wrote\nsimple games, a program to predict how high my model rockets would\nfly, and a word processor that my father used to write at least one\nbook. There was only room in memory for about 2 pages of text, so\nhe'd write 2 pages at a time and then print them out, but it was a\nlot better than a typewriter.Though I liked programming, I didn't plan to study it in college.\nIn college I was going to study philosophy, which sounded much more\npowerful. It seemed, to my naive high school self, to be the study\nof the ultimate truths, compared to which the things studied in\nother fields would be mere domain knowledge. What I discovered when\nI got to college was that the other fields took up so much of the\nspace of ideas that there wasn't much left for these supposed\nultimate truths. All that seemed left for philosophy were edge cases\nthat people in other fields felt could safely be ignored.I couldn't have put this into words when I was 18. All I knew at\nthe time was that I kept taking philosophy courses and they kept\nbeing boring.
So I decided to switch to AI.AI was in the air in the mid 1980s, but there were two things\nespecially that made me want to work on it: a novel by Heinlein\ncalled The Moon is a Harsh Mistress, which featured an intelligent\ncomputer called Mike, and a PBS documentary that showed Terry\nWinograd using SHRDLU. I haven't tried rereading The Moon is a Harsh\nMistress, so I don't know how well it has aged, but when I read it\nI was drawn entirely into its world. It seemed only a matter of\ntime before we'd have Mike, and when I saw Winograd using SHRDLU,\nit seemed like that time would be a few years at most. All you had\nto do was teach SHRDLU more words.There weren't any classes in AI at Cornell then, not even graduate\nclasses, so I started trying to teach myself. Which meant learning\nLisp, since in those days Lisp was regarded as the language of AI.\nThe commonly used programming languages then were pretty primitive,\nand programmers' ideas correspondingly so. The default language at\nCornell was a Pascal-like language called PL/I, and the situation\nwas similar elsewhere. Learning Lisp expanded my concept of a program\nso fast that it was years before I started to have a sense of where\nthe new limits were. This was more like it; this was what I had\nexpected college to do. It wasn't happening in a class, like it was\nsupposed to, but that was ok. For the next couple years I was on a\nroll. I knew what I was going to do.For my undergraduate thesis, I reverse-engineered SHRDLU. My God\ndid I love working on that program. It was a pleasing bit of code,\nbut what made it even more exciting was my belief \u2014 hard to imagine\nnow, but not unique in 1985 \u2014 that it was already climbing the\nlower slopes of intelligence.I had gotten into a program at Cornell that didn't make you choose\na major. You could take whatever classes you liked, and choose\nwhatever you liked to put on your degree. I of course chose \"Artificial\nIntelligence.\" When I got the actual physical diploma, I was dismayed\nto find that the quotes had been included, which made them read as\nscare-quotes. At the time this bothered me, but now it seems amusingly\naccurate, for reasons I was about to discover.I applied to 3 grad schools: MIT and Yale, which were renowned for\nAI at the time, and Harvard, which I'd visited because Rich Draves\nwent there, and was also home to Bill Woods, who'd invented the\ntype of parser I used in my SHRDLU clone. Only Harvard accepted me,\nso that was where I went.I don't remember the moment it happened, or if there even was a\nspecific moment, but during the first year of grad school I realized\nthat AI, as practiced at the time, was a hoax. By which I mean the\nsort of AI in which a program that's told \"the dog is sitting on\nthe chair\" translates this into some formal representation and adds\nit to the list of things it knows.What these programs really showed was that there's a subset of\nnatural language that's a formal language. But a very proper subset.\nIt was clear that there was an unbridgeable gap between what they\ncould do and actually understanding natural language. It was not,\nin fact, simply a matter of teaching SHRDLU more words. That whole\nway of doing AI, with explicit data structures representing concepts,\nwas not going to work.
Its brokenness did, as so often happens,\ngenerate a lot of opportunities to write papers about various\nband-aids that could be applied to it, but it was never going to\nget us Mike.So I looked around to see what I could salvage from the wreckage\nof my plans, and there was Lisp. I knew from experience that Lisp\nwas interesting for its own sake and not just for its association\nwith AI, even though that was the main reason people cared about\nit at the time. So I decided to focus on Lisp. In fact, I decided\nto write a book about Lisp hacking. It's scary to think how little\nI knew about Lisp hacking when I started writing that book. But\nthere's nothing like writing a book about something to help you\nlearn it. The book, On Lisp, wasn't published till 1993, but I wrote\nmuch of it in grad school.Computer Science is an uneasy alliance between two halves, theory\nand systems. The theory people prove things, and the systems people\nbuild things. I wanted to build things. I had plenty of respect for\ntheory \u2014 indeed, a sneaking suspicion that it was the more admirable\nof the two halves \u2014 but building things seemed so much more exciting.The problem with systems work, though, was that it didn't last.\nAny program you wrote today, no matter how good, would be obsolete\nin a couple decades at best. People might mention your software in\nfootnotes, but no one would actually use it. And indeed, it would\nseem very feeble work. Only people with a sense of the history of\nthe field would even realize that, in its time, it had been good.There were some surplus Xerox Dandelions floating around the computer\nlab at one point. Anyone who wanted one to play around with could\nhave one. I was briefly tempted, but they were so slow by present\nstandards; what was the point? No one else wanted one either, so\noff they went. That was what happened to systems work.I wanted not just to build things, but to build things that would\nlast.In this dissatisfied state I went in 1988 to visit Rich Draves at\nCMU, where he was in grad school. One day I went to visit the\nCarnegie Institute, where I'd spent a lot of time as a kid. While\nlooking at a painting there I realized something that might seem\nobvious, but was a big surprise to me. There, right on the wall,\nwas something you could make that would last. Paintings didn't\nbecome obsolete. Some of the best ones were hundreds of years old.And moreover this was something you could make a living doing. Not\nas easily as you could by writing software, of course, but I thought\nif you were really industrious and lived really cheaply, it had to\nbe possible to make enough to survive. And as an artist you could\nbe truly independent. You wouldn't have a boss, or even need to get\nresearch funding.I had always liked looking at paintings. Could I make them? I had\nno idea. I'd never imagined it was even possible. I knew intellectually\nthat people made art \u2014 that it didn't just appear spontaneously\n\u2014 but it was as if the people who made it were a different species.\nThey either lived long ago or were mysterious geniuses doing strange\nthings in profiles in Life magazine. The idea of actually being\nable to make art, to put that verb before that noun, seemed almost\nmiraculous.That fall I started taking art classes at Harvard. Grad students\ncould take classes in any department, and my advisor, Tom Cheatham,\nwas very easy going.
If he even knew about the strange classes I\nwas taking, he never said anything.So now I was in a PhD program in computer science, yet planning to\nbe an artist, yet also genuinely in love with Lisp hacking and\nworking away at On Lisp. In other words, like many a grad student,\nI was working energetically on multiple projects that were not my\nthesis.I didn't see a way out of this situation. I didn't want to drop out\nof grad school, but how else was I going to get out? I remember\nwhen my friend Robert Morris got kicked out of Cornell for writing\nthe internet worm of 1988, I was envious that he'd found such a\nspectacular way to get out of grad school.Then one day in April 1990 a crack appeared in the wall. I ran into\nprofessor Cheatham and he asked if I was far enough along to graduate\nthat June. I didn't have a word of my dissertation written, but in\nwhat must have been the quickest bit of thinking in my life, I\ndecided to take a shot at writing one in the 5 weeks or so that\nremained before the deadline, reusing parts of On Lisp where I\ncould, and I was able to respond, with no perceptible delay, \"Yes,\nI think so. I'll give you something to read in a few days.\"I picked applications of continuations as the topic. In retrospect\nI should have written about macros and embedded languages. There's\na whole world there that's barely been explored. But all I wanted\nwas to get out of grad school, and my rapidly written dissertation\nsufficed, just barely.Meanwhile I was applying to art schools. I applied to two: RISD in\nthe US, and the Accademia di Belle Arti in Florence, which, because\nit was the oldest art school, I imagined would be good. RISD accepted\nme, and I never heard back from the Accademia, so off to Providence\nI went.I'd applied for the BFA program at RISD, which meant in effect that\nI had to go to college again. This was not as strange as it sounds,\nbecause I was only 25, and art schools are full of people of different\nages. RISD counted me as a transfer sophomore and said I had to do\nthe foundation that summer. The foundation means the classes that\neveryone has to take in fundamental subjects like drawing, color,\nand design.Toward the end of the summer I got a big surprise: a letter from\nthe Accademia, which had been delayed because they'd sent it to\nCambridge, England instead of Cambridge, Massachusetts, inviting me\nto take the entrance exam in Florence that fall. This was now only\nweeks away. My nice landlady let me leave my stuff in her attic. I\nhad some money saved from consulting work I'd done in grad school;\nthere was probably enough to last a year if I lived cheaply. Now\nall I had to do was learn Italian.Only stranieri (foreigners) had to take this entrance exam. In\nretrospect it may well have been a way of excluding them, because\nthere were so many stranieri attracted by the idea of studying\nart in Florence that the Italian students would otherwise have been\noutnumbered. I was in decent shape at painting and drawing from the\nRISD foundation that summer, but I still don't know how I managed\nto pass the written exam. I remember that I answered the essay\nquestion by writing about Cezanne, and that I cranked up the\nintellectual level as high as I could to make the most of my limited\nvocabulary. \n[2]I'm only up to age 25 and already there are such conspicuous patterns.\nHere I was, yet again about to attend some august institution in\nthe hopes of learning about some prestigious subject, and yet again\nabout to be disappointed.
The students and faculty in the painting\ndepartment at the Accademia were the nicest people you could imagine,\nbut they had long since arrived at an arrangement whereby the\nstudents wouldn't require the faculty to teach anything, and in\nreturn the faculty wouldn't require the students to learn anything.\nAnd at the same time all involved would adhere outwardly to the\nconventions of a 19th century atelier. We actually had one of those\nlittle stoves, fed with kindling, that you see in 19th century\nstudio paintings, and a nude model sitting as close to it as possible\nwithout getting burned. Except hardly anyone else painted her besides\nme. The rest of the students spent their time chatting or occasionally\ntrying to imitate things they'd seen in American art magazines.Our model turned out to live just down the street from me. She made\na living from a combination of modelling and making fakes for a\nlocal antique dealer. She'd copy an obscure old painting out of a\nbook, and then he'd take the copy and maltreat it to make it look\nold. \n[3]While I was a student at the Accademia I started painting still\nlifes in my bedroom at night. These paintings were tiny, because\nthe room was, and because I painted them on leftover scraps of\ncanvas, which was all I could afford at the time. Painting still\nlifes is different from painting people, because the subject, as\nits name suggests, can't move. People can't sit for more than about\n15 minutes at a time, and when they do they don't sit very still.\nSo the traditional m.o. for painting people is to know how to paint\na generic person, which you then modify to match the specific person\nyou're painting. Whereas a still life you can, if you want, copy\npixel by pixel from what you're seeing. You don't want to stop\nthere, of course, or you get merely photographic accuracy, and what\nmakes a still life interesting is that it's been through a head.\nYou want to emphasize the visual cues that tell you, for example,\nthat the reason the color changes suddenly at a certain point is\nthat it's the edge of an object. By subtly emphasizing such things\nyou can make paintings that are more realistic than photographs not\njust in some metaphorical sense, but in the strict information-theoretic\nsense. \n[4]I liked painting still lifes because I was curious about what I was\nseeing. In everyday life, we aren't consciously aware of much we're\nseeing. Most visual perception is handled by low-level processes\nthat merely tell your brain \"that's a water droplet\" without telling\nyou details like where the lightest and darkest points are, or\n\"that's a bush\" without telling you the shape and position of every\nleaf. This is a feature of brains, not a bug. In everyday life it\nwould be distracting to notice every leaf on every bush. But when\nyou have to paint something, you have to look more closely, and\nwhen you do there's a lot to see. You can still be noticing new\nthings after days of trying to paint something people usually take\nfor granted, just as you can after\ndays of trying to write an essay about something people usually\ntake for granted.This is not the only way to paint. I'm not 100% sure it's even a\ngood way to paint. But it seemed a good enough bet to be worth\ntrying.Our teacher, professor Ulivi, was a nice guy. He could see I worked\nhard, and gave me a good grade, which he wrote down in a sort of\npassport each student had.
But the Accademia wasn't teaching me\nanything except Italian, and my money was running out, so at the\nend of the first year I went back to the US.I wanted to go back to RISD, but I was now broke and RISD was very\nexpensive, so I decided to get a job for a year and then return to\nRISD the next fall. I got one at a company called Interleaf, which\nmade software for creating documents. You mean like Microsoft Word?\nExactly. That was how I learned that low end software tends to eat\nhigh end software. But Interleaf still had a few years to live.\n[5]Interleaf had done something pretty bold. Inspired by Emacs, they'd\nadded a scripting language, and even made the scripting language a\ndialect of Lisp. Now they wanted a Lisp hacker to write things in\nit. This was the closest thing I've had to a normal job, and I\nhereby apologize to my boss and coworkers, because I was a bad\nemployee. Their Lisp was the thinnest icing on a giant C cake, and\nsince I didn't know C and didn't want to learn it, I never understood\nmost of the software. Plus I was terribly irresponsible. This was\nback when a programming job meant showing up every day during certain\nworking hours. That seemed unnatural to me, and on this point the\nrest of the world is coming around to my way of thinking, but at\nthe time it caused a lot of friction. Toward the end of the year I\nspent much of my time surreptitiously working on On Lisp, which I\nhad by this time gotten a contract to publish.The good part was that I got paid huge amounts of money, especially\nby art student standards. In Florence, after paying my part of the\nrent, my budget for everything else had been $7 a day. Now I was\ngetting paid more than 4 times that every hour, even when I was\njust sitting in a meeting. By living cheaply I not only managed to\nsave enough to go back to RISD, but also paid off my college loans.I learned some useful things at Interleaf, though they were mostly\nabout what not to do. I learned that it's better for technology\ncompanies to be run by product people than sales people (though\nsales is a real skill and people who are good at it are really good\nat it), that it leads to bugs when code is edited by too many people,\nthat cheap office space is no bargain if it's depressing, that\nplanned meetings are inferior to corridor conversations, that big,\nbureaucratic customers are a dangerous source of money, and that\nthere's not much overlap between conventional office hours and the\noptimal time for hacking, or conventional offices and the optimal\nplace for it.But the most important thing I learned, and which I used in both\nViaweb and Y Combinator, is that the low end eats the high end:\nthat it's good to be the \"entry level\" option, even though that\nwill be less prestigious, because if you're not, someone else will\nbe, and will squash you against the ceiling. Which in turn means\nthat prestige is a danger sign.When I left to go back to RISD the next fall, I arranged to do\nfreelance work for the group that did projects for customers, and\nthis was how I survived for the next several years. When I came\nback to visit for a project later on, someone told me about a new\nthing called HTML, which was, as he described it, a derivative of\nSGML.
Markup language enthusiasts were an occupational hazard at\nInterleaf and I ignored him, but this HTML thing later became a big\npart of my life.In the fall of 1992 I moved back to Providence to continue at RISD.\nThe foundation had merely been intro stuff, and the Accademia had\nbeen a (very civilized) joke. Now I was going to see what real art\nschool was like. But alas it was more like the Accademia than not.\nBetter organized, certainly, and a lot more expensive, but it was\nnow becoming clear that art school did not bear the same relationship\nto art that medical school bore to medicine. At least not the\npainting department. The textile department, which my next door\nneighbor belonged to, seemed to be pretty rigorous. No doubt\nillustration and architecture were too. But painting was post-rigorous.\nPainting students were supposed to express themselves, which to the\nmore worldly ones meant to try to cook up some sort of distinctive\nsignature style.A signature style is the visual equivalent of what in show business\nis known as a \"schtick\": something that immediately identifies the\nwork as yours and no one else's. For example, when you see a painting\nthat looks like a certain kind of cartoon, you know it's by Roy\nLichtenstein. So if you see a big painting of this type hanging in\nthe apartment of a hedge fund manager, you know he paid millions\nof dollars for it. That's not always why artists have a signature\nstyle, but it's usually why buyers pay a lot for such work.\n[6]There were plenty of earnest students too: kids who \"could draw\"\nin high school, and now had come to what was supposed to be the\nbest art school in the country, to learn to draw even better. They\ntended to be confused and demoralized by what they found at RISD,\nbut they kept going, because painting was what they did. I was not\none of the kids who could draw in high school, but at RISD I was\ndefinitely closer to their tribe than the tribe of signature style\nseekers.I learned a lot in the color class I took at RISD, but otherwise I\nwas basically teaching myself to paint, and I could do that for\nfree. So in 1993 I dropped out. I hung around Providence for a bit,\nand then my college friend Nancy Parmet did me a big favor. A\nrent-controlled apartment in a building her mother owned in New\nYork was becoming vacant. Did I want it? It wasn't much more than\nmy current place, and New York was supposed to be where the artists\nwere. So yes, I wanted it!\n[7]Asterix comics begin by zooming in on a tiny corner of Roman Gaul\nthat turns out not to be controlled by the Romans. You can do\nsomething similar on a map of New York City: if you zoom in on the\nUpper East Side, there's a tiny corner that's not rich, or at least\nwasn't in 1993. It's called Yorkville, and that was my new home.\nNow I was a New York artist \u2014 in the strictly technical sense of\nmaking paintings and living in New York.I was nervous about money, because I could sense that Interleaf was\non the way down. Freelance Lisp hacking work was very rare, and I\ndidn't want to have to program in another language, which in those\ndays would have meant C++ if I was lucky. So with my unerring nose\nfor financial opportunity, I decided to write another book on Lisp.\nThis would be a popular book, the sort of book that could be used\nas a textbook. I imagined myself living frugally off the royalties\nand spending all my time painting.
(The painting on the cover of\nthis book, ANSI Common Lisp, is one that I painted around this\ntime.)The best thing about New York for me was the presence of Idelle and\nJulian Weber. Idelle Weber was a painter, one of the early\nphotorealists, and I'd taken her painting class at Harvard. I've\nnever known a teacher more beloved by her students. Large numbers\nof former students kept in touch with her, including me. After I\nmoved to New York I became her de facto studio assistant.She liked to paint on big, square canvases, 4 to 5 feet on a side.\nOne day in late 1994 as I was stretching one of these monsters there\nwas something on the radio about a famous fund manager. He wasn't\nthat much older than me, and was super rich. The thought suddenly\noccurred to me: why don't I become rich? Then I'll be able to work\non whatever I want.Meanwhile I'd been hearing more and more about this new thing called\nthe World Wide Web. Robert Morris showed it to me when I visited\nhim in Cambridge, where he was now in grad school at Harvard. It\nseemed to me that the web would be a big deal. I'd seen what graphical\nuser interfaces had done for the popularity of microcomputers. It\nseemed like the web would do the same for the internet.If I wanted to get rich, here was the next train leaving the station.\nI was right about that part. What I got wrong was the idea. I decided\nwe should start a company to put art galleries online. I can't\nhonestly say, after reading so many Y Combinator applications, that\nthis was the worst startup idea ever, but it was up there. Art\ngalleries didn't want to be online, and still don't, not the fancy\nones. That's not how they sell. I wrote some software to generate\nweb sites for galleries, and Robert wrote some to resize images and\nset up an http server to serve the pages. Then we tried to sign up\ngalleries. To call this a difficult sale would be an understatement.\nIt was difficult to give away. A few galleries let us make sites\nfor them for free, but none paid us.Then some online stores started to appear, and I realized that\nexcept for the order buttons they were identical to the sites we'd\nbeen generating for galleries. This impressive-sounding thing called\nan \"internet storefront\" was something we already knew how to build.So in the summer of 1995, after I submitted the camera-ready copy\nof ANSI Common Lisp to the publishers, we started trying to write\nsoftware to build online stores. At first this was going to be\nnormal desktop software, which in those days meant Windows software.\nThat was an alarming prospect, because neither of us knew how to\nwrite Windows software or wanted to learn. We lived in the Unix\nworld. But we decided we'd at least try writing a prototype store\nbuilder on Unix. Robert wrote a shopping cart, and I wrote a new\nsite generator for stores \u2014 in Lisp, of course.We were working out of Robert's apartment in Cambridge. His roommate\nwas away for big chunks of time, during which I got to sleep in his\nroom. For some reason there was no bed frame or sheets, just a\nmattress on the floor. One morning as I was lying on this mattress\nI had an idea that made me sit up like a capital L. What if we ran\nthe software on the server, and let users control it by clicking\non links? Then we'd never have to write anything to run on users'\ncomputers. We could generate the sites on the same server we'd serve\nthem from.
Users wouldn't need anything more than a browser.This kind of software, known as a web app, is common now, but at\nthe time it wasn't clear that it was even possible. To find out,\nwe decided to try making a version of our store builder that you\ncould control through the browser. A couple days later, on August\n12, we had one that worked. The UI was horrible, but it proved you\ncould build a whole store through the browser, without any client\nsoftware or typing anything into the command line on the server.Now we felt like we were really onto something. I had visions of a\nwhole new generation of software working this way. You wouldn't\nneed versions, or ports, or any of that crap. At Interleaf there\nhad been a whole group called Release Engineering that seemed to\nbe at least as big as the group that actually wrote the software.\nNow you could just update the software right on the server.We started a new company we called Viaweb, after the fact that our\nsoftware worked via the web, and we got $10,000 in seed funding\nfrom Idelle's husband Julian. In return for that and doing the\ninitial legal work and giving us business advice, we gave him 10%\nof the company. Ten years later this deal became the model for Y\nCombinator's. We knew founders needed something like this, because\nwe'd needed it ourselves.At this stage I had a negative net worth, because the thousand\ndollars or so I had in the bank was more than counterbalanced by\nwhat I owed the government in taxes. (Had I diligently set aside\nthe proper proportion of the money I'd made consulting for Interleaf?\nNo, I had not.) So although Robert had his graduate student stipend,\nI needed that seed funding to live on.We originally hoped to launch in September, but we got more ambitious\nabout the software as we worked on it. Eventually we managed to\nbuild a WYSIWYG site builder, in the sense that as you were creating\npages, they looked exactly like the static ones that would be\ngenerated later, except that instead of leading to static pages,\nthe links all referred to closures stored in a hash table on the\nserver.It helped to have studied art, because the main goal of an online\nstore builder is to make users look legit, and the key to looking\nlegit is high production values. If you get page layouts and fonts\nand colors right, you can make a guy running a store out of his\nbedroom look more legit than a big company.(If you're curious why my site looks so old-fashioned, it's because\nit's still made with this software. It may look clunky today, but\nin 1996 it was the last word in slick.)In September, Robert rebelled. \"We've been working on this for a\nmonth,\" he said, \"and it's still not done.\" This is funny in\nretrospect, because he would still be working on it almost 3 years\nlater. But I decided it might be prudent to recruit more programmers,\nand I asked Robert who else in grad school with him was really good.\nHe recommended Trevor Blackwell, which surprised me at first, because\nat that point I knew Trevor mainly for his plan to reduce everything\nin his life to a stack of notecards, which he carried around with\nhim. But Rtm was right, as usual. Trevor turned out to be a\nfrighteningly effective hacker.It was a lot of fun working with Robert and Trevor. They're the two\nmost independent-minded people \nI know, and in completely different\nways. 
If you could see inside Rtm's brain it would look like a colonial New England church, and if you could see inside Trevor's it would look like the worst excesses of Austrian Rococo.

We opened for business, with 6 stores, in January 1996. It was just as well we waited a few months, because although we worried we were late, we were actually almost fatally early. There was a lot of talk in the press then about ecommerce, but not many people actually wanted online stores. [8]

There were three main parts to the software: the editor, which people used to build sites and which I wrote, the shopping cart, which Robert wrote, and the manager, which kept track of orders and statistics, and which Trevor wrote. In its time, the editor was one of the best general-purpose site builders. I kept the code tight and didn't have to integrate with any other software except Robert's and Trevor's, so it was quite fun to work on. If all I'd had to do was work on this software, the next 3 years would have been the easiest of my life. Unfortunately I had to do a lot more, all of it stuff I was worse at than programming, and the next 3 years were instead the most stressful.

There were a lot of startups making ecommerce software in the second half of the 90s. We were determined to be the Microsoft Word, not the Interleaf. Which meant being easy to use and inexpensive. It was lucky for us that we were poor, because that caused us to make Viaweb even more inexpensive than we realized. We charged $100 a month for a small store and $300 a month for a big one. This low price was a big attraction, and a constant thorn in the sides of competitors, but it wasn't because of some clever insight that we set the price low. We had no idea what businesses paid for things. $300 a month seemed like a lot of money to us.

We did a lot of things right by accident like that. For example, we did what's now called "doing things that don't scale," although at the time we would have described it as "being so lame that we're driven to the most desperate measures to get users." The most common of which was building stores for them. This seemed particularly humiliating, since the whole raison d'etre of our software was that people could use it to make their own stores. But anything to get users.

We learned a lot more about retail than we wanted to know. For example, that if you could only have a small image of a man's shirt (and all images were small then by present standards), it was better to have a closeup of the collar than a picture of the whole shirt. The reason I remember learning this was that it meant I had to rescan about 30 images of men's shirts. My first set of scans were so beautiful too.

Though this felt wrong, it was exactly the right thing to be doing. Building stores for users taught us about retail, and about how it felt to use our software. I was initially both mystified and repelled by "business" and thought we needed a "business person" to be in charge of it, but once we started to get users, I was converted, in much the same way I was converted to fatherhood once I had kids. Whatever users wanted, I was all theirs. Maybe one day we'd have so many users that I couldn't scan their images for them, but in the meantime there was nothing more important to do.

Another thing I didn't get at the time is that growth rate is the ultimate test of a startup. Our growth rate was fine. We had about 70 stores at the end of 1996 and about 500 at the end of 1997.
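
A quick back-of-the-envelope check of what that growth rate means:

    500 / 70 = 7.1x in a year
    7.1 ^ (1/52) = 1.038, i.e. about 3.8% growth per week
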
I mistakenly thought the thing that mattered was the absolute number of users. And that is the thing that matters in the sense that that's how much money you're making, and if you're not making enough, you might go out of business. But in the long term the growth rate takes care of the absolute number. If we'd been a startup I was advising at Y Combinator, I would have said: Stop being so stressed out, because you're doing fine. You're growing 7x a year. Just don't hire too many more people and you'll soon be profitable, and then you'll control your own destiny.

Alas I hired lots more people, partly because our investors wanted me to, and partly because that's what startups did during the Internet Bubble. A company with just a handful of employees would have seemed amateurish. So we didn't reach breakeven until about when Yahoo bought us in the summer of 1998. Which in turn meant we were at the mercy of investors for the entire life of the company. And since both we and our investors were noobs at startups, the result was a mess even by startup standards.

It was a huge relief when Yahoo bought us. In principle our Viaweb stock was valuable. It was a share in a business that was profitable and growing rapidly. But it didn't feel very valuable to me; I had no idea how to value a business, but I was all too keenly aware of the near-death experiences we seemed to have every few months. Nor had I changed my grad student lifestyle significantly since we started. So when Yahoo bought us it felt like going from rags to riches. Since we were going to California, I bought a car, a yellow 1998 VW GTI. I remember thinking that its leather seats alone were by far the most luxurious thing I owned.

The next year, from the summer of 1998 to the summer of 1999, must have been the least productive of my life. I didn't realize it at the time, but I was worn out from the effort and stress of running Viaweb. For a while after I got to California I tried to continue my usual m.o. of programming till 3 in the morning, but fatigue combined with Yahoo's prematurely aged culture and grim cube farm in Santa Clara gradually dragged me down. After a few months it felt disconcertingly like working at Interleaf.

Yahoo had given us a lot of options when they bought us. At the time I thought Yahoo was so overvalued that they'd never be worth anything, but to my astonishment the stock went up 5x in the next year. I hung on till the first chunk of options vested, then in the summer of 1999 I left. It had been so long since I'd painted anything that I'd half forgotten why I was doing this. My brain had been entirely full of software and men's shirts for 4 years. But I had done this to get rich so I could paint, I reminded myself, and now I was rich, so I should go paint.

When I said I was leaving, my boss at Yahoo had a long conversation with me about my plans. I told him all about the kinds of pictures I wanted to paint. At the time I was touched that he took such an interest in me. Now I realize it was because he thought I was lying. My options at that point were worth about $2 million a month. If I was leaving that kind of money on the table, it could only be to go and start some new startup, and if I did, I might take people with me. This was the height of the Internet Bubble, and Yahoo was ground zero of it. My boss was at that moment a billionaire.
Leaving then to start a new startup must have seemed to him an insanely, and yet also plausibly, ambitious plan.

But I really was quitting to paint, and I started immediately. There was no time to lose. I'd already burned 4 years getting rich. Now when I talk to founders who are leaving after selling their companies, my advice is always the same: take a vacation. That's what I should have done, just gone off somewhere and done nothing for a month or two, but the idea never occurred to me.

So I tried to paint, but I just didn't seem to have any energy or ambition. Part of the problem was that I didn't know many people in California. I'd compounded this problem by buying a house up in the Santa Cruz Mountains, with a beautiful view but miles from anywhere. I stuck it out for a few more months, then in desperation I went back to New York, where unless you understand about rent control you'll be surprised to hear I still had my apartment, sealed up like a tomb of my old life. Idelle was in New York at least, and there were other people trying to paint there, even though I didn't know any of them.

When I got back to New York I resumed my old life, except now I was rich. It was as weird as it sounds. I resumed all my old patterns, except now there were doors where there hadn't been. Now when I was tired of walking, all I had to do was raise my hand, and (unless it was raining) a taxi would stop to pick me up. Now when I walked past charming little restaurants I could go in and order lunch. It was exciting for a while. Painting started to go better. I experimented with a new kind of still life where I'd paint one painting in the old way, then photograph it and print it, blown up, on canvas, and then use that as the underpainting for a second still life, painted from the same objects (which hopefully hadn't rotted yet).

Meanwhile I looked for an apartment to buy. Now I could actually choose what neighborhood to live in. Where, I asked myself and various real estate agents, is the Cambridge of New York? Aided by occasional visits to actual Cambridge, I gradually realized there wasn't one. Huh.

Around this time, in the spring of 2000, I had an idea. It was clear from our experience with Viaweb that web apps were the future. Why not build a web app for making web apps? Why not let people edit code on our server through the browser, and then host the resulting applications for them? [9] You could run all sorts of services on the servers that these applications could use just by making an API call: making and receiving phone calls, manipulating images, taking credit card payments, etc.

I got so excited about this idea that I couldn't think about anything else. It seemed obvious that this was the future. I didn't particularly want to start another company, but it was clear that this idea would have to be embodied as one, so I decided to move to Cambridge and start it. I hoped to lure Robert into working on it with me, but there I ran into a hitch. Robert was now a postdoc at MIT, and though he'd made a lot of money the last time I'd lured him into working on one of my schemes, it had also been a huge time sink. So while he agreed that it sounded like a plausible idea, he firmly refused to work on it.

Hmph. Well, I'd do it myself then. I recruited Dan Giffin, who had worked for Viaweb, and two undergrads who wanted summer jobs, and we got to work trying to build what it's now clear is about twenty companies and several open source projects worth of software.
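
To give the flavor of it, here is a purely hypothetical sketch in Common Lisp of applications calling server-side services. All the names are invented; nothing here is Aspra's actual design.

    ;; Purely hypothetical sketch, with invented names.
    (defun call-service (service op &rest args)
      "Stub standing in for a server-side service dispatcher."
      (format t "~&calling ~a service: ~a ~s~%" service op args))

    ;; An application hosted on the server would use services like this:
    (call-service :images   :resize "photo.png" 200 200)
    (call-service :phone    :dial "+1 617 555 0100")
    (call-service :payments :charge "card-token" 29.95)
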
The language for defining applications would of course be a dialect of Lisp. But I wasn't so naive as to assume I could spring an overt Lisp on a general audience; we'd hide the parentheses, like Dylan did.

By then there was a name for the kind of company Viaweb was, an "application service provider," or ASP. This name didn't last long before it was replaced by "software as a service," but it was current for long enough that I named this new company after it: it was going to be called Aspra.

I started working on the application builder, Dan worked on network infrastructure, and the two undergrads worked on the first two services (images and phone calls). But about halfway through the summer I realized I really didn't want to run a company — especially not a big one, which it was looking like this would have to be. I'd only started Viaweb because I needed the money. Now that I didn't need money anymore, why was I doing this? If this vision had to be realized as a company, then screw the vision. I'd build a subset that could be done as an open source project.

Much to my surprise, the time I spent working on this stuff was not wasted after all. After we started Y Combinator, I would often encounter startups working on parts of this new architecture, and it was very useful to have spent so much time thinking about it and even trying to write some of it.

The subset I would build as an open source project was the new Lisp, whose parentheses I now wouldn't even have to hide. A lot of Lisp hackers dream of building a new Lisp, partly because one of the distinctive features of the language is that it has dialects, and partly, I think, because we have in our minds a Platonic form of Lisp that all existing dialects fall short of. I certainly did. So at the end of the summer Dan and I switched to working on this new dialect of Lisp, which I called Arc, in a house I bought in Cambridge.

The following spring, lightning struck. I was invited to give a talk at a Lisp conference, so I gave one about how we'd used Lisp at Viaweb. Afterward I put a postscript file of this talk online, on paulgraham.com, which I'd created years before using Viaweb but had never used for anything. In one day it got 30,000 page views. What on earth had happened? The referring urls showed that someone had posted it on Slashdot. [10]

Wow, I thought, there's an audience. If I write something and put it on the web, anyone can read it. That may seem obvious now, but it was surprising then. In the print era there was a narrow channel to readers, guarded by fierce monsters known as editors. The only way to get an audience for anything you wrote was to get it published as a book, or in a newspaper or magazine. Now anyone could publish anything.

This had been possible in principle since 1993, but not many people had realized it yet. I had been intimately involved with building the infrastructure of the web for most of that time, and a writer as well, and it had taken me 8 years to realize it. Even then it took me several years to understand the implications. It meant there would be a whole new generation of essays. [11]

In the print era, the channel for publishing essays had been vanishingly small. Except for a few officially anointed thinkers who went to the right parties in New York, the only people allowed to publish essays were specialists writing about their specialties. There were so many essays that had never been written, because there had been no way to publish them.
Now they could be, and I was going to write them. [12]

I've worked on several different things, but to the extent there was a turning point where I figured out what to work on, it was when I started publishing essays online. From then on I knew that whatever else I did, I'd always write essays too.

I knew that online essays would be a marginal medium at first. Socially they'd seem more like rants posted by nutjobs on their GeoCities sites than the genteel and beautifully typeset compositions published in The New Yorker. But by this point I knew enough to find that encouraging instead of discouraging.

One of the most conspicuous patterns I've noticed in my life is how well it has worked, for me at least, to work on things that weren't prestigious. Still life has always been the least prestigious form of painting. Viaweb and Y Combinator both seemed lame when we started them. I still get the glassy eye from strangers when they ask what I'm writing, and I explain that it's an essay I'm going to publish on my web site. Even Lisp, though prestigious intellectually in something like the way Latin is, also seems about as hip.

It's not that unprestigious types of work are good per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it's a sign both that there's something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren't prestigious doesn't guarantee you're on the right track, it at least guarantees you're not on the most common type of wrong one.

Over the next several years I wrote lots of essays about all kinds of different topics. O'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.

One night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn't know but would probably like. One of the guests was someone I didn't know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out.

Jessica was in charge of marketing at a Boston investment bank. This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And how colorful their stories were. So she decided to compile a book of interviews with startup founders.

When the bank had financial problems and she had to fire half her staff, she started looking for a new job. In early 2005 she interviewed for a marketing job at a Boston VC firm. It took them weeks to make up their minds, and during this time I started telling her about all the things that needed to be fixed about venture capital.
They should make a larger number of smaller investments instead of a handful of giant ones, they should be funding younger, more technical founders instead of MBAs, they should let the founders remain as CEO, and so on.

One of my tricks for writing essays had always been to give talks. The prospect of having to stand up in front of a group of people and tell them something that won't waste their time is a great spur to the imagination. When the Harvard Computer Society, the undergrad computer club, asked me to give a talk, I decided I would tell them how to start a startup. Maybe they'd be able to avoid the worst of the mistakes we'd made.

So I gave this talk, in the course of which I told them that the best sources of seed funding were successful startup founders, because then they'd be sources of advice too. Whereupon it seemed they were all looking expectantly at me. Horrified at the prospect of having my inbox flooded by business plans (if I'd only known), I blurted out "But not me!" and went on with the talk. But afterward it occurred to me that I should really stop procrastinating about angel investing. I'd been meaning to since Yahoo bought us, and now it was 7 years later and I still hadn't done one angel investment.

Meanwhile I had been scheming with Robert and Trevor about projects we could work on together. I missed working with them, and it seemed like there had to be something we could collaborate on.

As Jessica and I were walking home from dinner on March 11, at the corner of Garden and Walker streets, these three threads converged. Screw the VCs who were taking so long to make up their minds. We'd start our own investment firm and actually implement the ideas we'd been talking about. I'd fund it, and Jessica could quit her job and work for it, and we'd get Robert and Trevor as partners too. [13]

Once again, ignorance worked in our favor. We had no idea how to be angel investors, and in Boston in 2005 there were no Ron Conways to learn from. So we just made what seemed like the obvious choices, and some of the things we did turned out to be novel.

There are multiple components to Y Combinator, and we didn't figure them all out at once. The part we got first was to be an angel firm. In those days, those two words didn't go together. There were VC firms, which were organized companies with people whose job it was to make investments, but they only did big, million dollar investments. And there were angels, who did smaller investments, but these were individuals who were usually focused on other things and made investments on the side. And neither of them helped founders enough in the beginning. We knew how helpless founders were in some respects, because we remembered how helpless we'd been. For example, one thing Julian had done for us that seemed to us like magic was to get us set up as a company. We were fine writing fairly difficult software, but actually getting incorporated, with bylaws and stock and all that stuff, how on earth did you do that? Our plan was not only to make seed investments, but to do for startups everything Julian had done for us.

YC was not organized as a fund. It was cheap enough to run that we funded it with our own money. That went right by 99% of readers, but professional investors are thinking "Wow, that means they got all the returns." But once again, this was not due to any particular insight on our part.
We didn't know how VC firms were organized. It never occurred to us to try to raise a fund, and if it had, we wouldn't have known where to start. [14]

The most distinctive thing about YC is the batch model: to fund a bunch of startups all at once, twice a year, and then to spend three months focusing intensively on trying to help them. That part we discovered by accident, not merely implicitly but explicitly due to our ignorance about investing. We needed to get experience as investors. What better way, we thought, than to fund a whole bunch of startups at once? We knew undergrads got temporary jobs at tech companies during the summer. Why not organize a summer program where they'd start startups instead? We wouldn't feel guilty for being in a sense fake investors, because they would in a similar sense be fake founders. So while we probably wouldn't make much money out of it, we'd at least get to practice being investors on them, and they for their part would probably have a more interesting summer than they would working at Microsoft.

We'd use the building I owned in Cambridge as our headquarters. We'd all have dinner there once a week — on tuesdays, since I was already cooking for the thursday diners on thursdays — and after dinner we'd bring in experts on startups to give talks.

We knew undergrads were deciding then about summer jobs, so in a matter of days we cooked up something we called the Summer Founders Program, and I posted an announcement on my site, inviting undergrads to apply. I had never imagined that writing essays would be a way to get "deal flow," as investors call it, but it turned out to be the perfect source. [15] We got 225 applications for the Summer Founders Program, and we were surprised to find that a lot of them were from people who'd already graduated, or were about to that spring. Already this SFP thing was starting to feel more serious than we'd intended.

We invited about 20 of the 225 groups to interview in person, and from those we picked 8 to fund. They were an impressive group. That first batch included reddit, Justin Kan and Emmett Shear, who went on to found Twitch, Aaron Swartz, who had already helped write the RSS spec and would a few years later become a martyr for open access, and Sam Altman, who would later become the second president of YC. I don't think it was entirely luck that the first batch was so good. You had to be pretty bold to sign up for a weird thing like the Summer Founders Program instead of a summer job at a legit place like Microsoft or Goldman Sachs.

The deal for startups was based on a combination of the deal we did with Julian ($10k for 10%) and what Robert said MIT grad students got for the summer ($6k). We invested $6k per founder, which in the typical two-founder case was $12k, in return for 6%. That had to be fair, because it was twice as good as the deal we ourselves had taken. Plus that first summer, which was really hot, Jessica brought the founders free air conditioners. [16]

Fairly quickly I realized that we had stumbled upon the way to scale startup funding. Funding startups in batches was more convenient for us, because it meant we could do things for a lot of startups at once, but being part of a batch was better for the startups too. It solved one of the biggest problems faced by founders: the isolation.
Now you not only had colleagues, but colleagues who understood the problems you were facing and could tell you how they were solving them.

As YC grew, we started to notice other advantages of scale. The alumni became a tight community, dedicated to helping one another, and especially the current batch, whose shoes they remembered being in. We also noticed that the startups were becoming one another's customers. We used to refer jokingly to the "YC GDP," but as YC grows this becomes less and less of a joke. Now lots of startups get their initial set of customers almost entirely from among their batchmates.

I had not originally intended YC to be a full-time job. I was going to do three things: hack, write essays, and work on YC. As YC grew, and I grew more excited about it, it started to take up a lot more than a third of my attention. But for the first few years I was still able to work on other things.

In the summer of 2006, Robert and I started working on a new version of Arc. This one was reasonably fast, because it was compiled into Scheme. To test this new Arc, I wrote Hacker News in it. It was originally meant to be a news aggregator for startup founders and was called Startup News, but after a few months I got tired of reading about nothing but startups. Plus it wasn't startup founders we wanted to reach. It was future startup founders. So I changed the name to Hacker News and the topic to whatever engaged one's intellectual curiosity.

HN was no doubt good for YC, but it was also by far the biggest source of stress for me. If all I'd had to do was select and help founders, life would have been so easy. And that implies that HN was a mistake. Surely the biggest source of stress in one's work should at least be something close to the core of the work. Whereas I was like someone who was in pain while running a marathon not from the exertion of running, but because I had a blister from an ill-fitting shoe. When I was dealing with some urgent problem during YC, there was about a 60% chance it had to do with HN, and a 40% chance it had to do with everything else combined. [17]

As well as HN, I wrote all of YC's internal software in Arc. But while I continued to work a good deal in Arc, I gradually stopped working on Arc, partly because I didn't have time to, and partly because it was a lot less attractive to mess around with the language now that we had all this infrastructure depending on it. So now my three projects were reduced to two: writing essays and working on YC.

YC was different from other kinds of work I've done. Instead of deciding for myself what to work on, the problems came to me. Every 6 months there was a new batch of startups, and their problems, whatever they were, became our problems. It was very engaging work, because their problems were quite varied, and the good founders were very effective. If you were trying to learn the most you could about startups in the shortest possible time, you couldn't have picked a better way to do it.

There were parts of the job I didn't like. Disputes between cofounders, figuring out when people were lying to us, fighting with people who maltreated the startups, and so on. But I worked hard even at the parts I didn't like. I was haunted by something Kevin Hale once said about companies: "No one works harder than the boss." He meant it both descriptively and prescriptively, and it was the second part that scared me.
I wanted YC to be good, so if how hard I worked set the upper bound on how hard everyone else worked, I'd better work very hard.

One day in 2010, when he was visiting California for interviews, Robert Morris did something astonishing: he offered me unsolicited advice. I can only remember him doing that once before. One day at Viaweb, when I was bent over double from a kidney stone, he suggested that it would be a good idea for him to take me to the hospital. That was what it took for Rtm to offer unsolicited advice. So I remember his exact words very clearly. "You know," he said, "you should make sure Y Combinator isn't the last cool thing you do."

At the time I didn't understand what he meant, but gradually it dawned on me that he was saying I should quit. This seemed strange advice, because YC was doing great. But if there was one thing rarer than Rtm offering advice, it was Rtm being wrong. So this set me thinking. It was true that on my current trajectory, YC would be the last thing I did, because it was only taking up more of my attention. It had already eaten Arc, and was in the process of eating essays too. Either YC was my life's work or I'd have to leave eventually. And it wasn't, so I would.

In the summer of 2012 my mother had a stroke, and the cause turned out to be a blood clot caused by colon cancer. The stroke destroyed her balance, and she was put in a nursing home, but she really wanted to get out of it and back to her house, and my sister and I were determined to help her do it. I used to fly up to Oregon to visit her regularly, and I had a lot of time to think on those flights. On one of them I realized I was ready to hand YC over to someone else.

I asked Jessica if she wanted to be president, but she didn't, so we decided we'd try to recruit Sam Altman. We talked to Robert and Trevor and we agreed to make it a complete changing of the guard. Up till that point YC had been controlled by the original LLC we four had started. But we wanted YC to last for a long time, and to do that it couldn't be controlled by the founders. So if Sam said yes, we'd let him reorganize YC. Robert and I would retire, and Jessica and Trevor would become ordinary partners.

When we asked Sam if he wanted to be president of YC, initially he said no. He wanted to start a startup to make nuclear reactors. But I kept at it, and in October 2013 he finally agreed. We decided he'd take over starting with the winter 2014 batch. For the rest of 2013 I left running YC more and more to Sam, partly so he could learn the job, and partly because I was focused on my mother, whose cancer had returned.

She died on January 15, 2014. We knew this was coming, but it was still hard when it did.

I kept working on YC till March, to help get that batch of startups through Demo Day, then I checked out pretty completely. (I still talk to alumni and to new startups working on things I'm interested in, but that only takes a few hours a week.)

What should I do next? Rtm's advice hadn't included anything about that. I wanted to do something completely different, so I decided I'd paint. I wanted to see how good I could get if I really focused on it. So the day after I stopped working on YC, I started painting. I was rusty and it took a while to get back into shape, but it was at least completely engaging. [18]

I spent most of the rest of 2014 painting. I'd never been able to work so uninterruptedly before, and I got to be better than I had been. Not good enough, but better.
Then in November, right in the middle of a painting, I ran out of steam. Up till that point I'd always been curious to see how the painting I was working on would turn out, but suddenly finishing this one seemed like a chore. So I stopped working on it and cleaned my brushes and haven't painted since. So far anyway.

I realize that sounds rather wimpy. But attention is a zero sum game. If you can choose what to work on, and you choose a project that's not the best one (or at least a good one) for you, then it's getting in the way of another project that is. And at 50 there was some opportunity cost to screwing around.

I started writing essays again, and wrote a bunch of new ones over the next few months. I even wrote a couple that weren't about startups. Then in March 2015 I started working on Lisp again.

The distinctive thing about Lisp is that its core is a language defined by writing an interpreter in itself. It wasn't originally intended as a programming language in the ordinary sense. It was meant to be a formal model of computation, an alternative to the Turing machine. If you want to write an interpreter for a language in itself, what's the minimum set of predefined operators you need? The Lisp that John McCarthy invented, or more accurately discovered, is an answer to that question. [19]

McCarthy didn't realize this Lisp could even be used to program computers till his grad student Steve Russell suggested it. Russell translated McCarthy's interpreter into IBM 704 machine language, and from that point Lisp started also to be a programming language in the ordinary sense. But its origins as a model of computation gave it a power and elegance that other languages couldn't match. It was this that attracted me in college, though I didn't understand why at the time.

McCarthy's 1960 Lisp did nothing more than interpret Lisp expressions. It was missing a lot of things you'd want in a programming language. So these had to be added, and when they were, they weren't defined using McCarthy's original axiomatic approach. That wouldn't have been feasible at the time. McCarthy tested his interpreter by hand-simulating the execution of programs. But it was already getting close to the limit of interpreters you could test that way — indeed, there was a bug in it that McCarthy had overlooked. To test a more complicated interpreter, you'd have had to run it, and computers then weren't powerful enough.

Now they are, though. Now you could continue using McCarthy's axiomatic approach till you'd defined a complete programming language. And as long as every change you made to McCarthy's Lisp was a discoveredness-preserving transformation, you could, in principle, end up with a complete language that had this quality. Harder to do than to talk about, of course, but if it was possible in principle, why not try? So I decided to take a shot at it. It took 4 years, from March 26, 2015 to October 12, 2019. It was fortunate that I had a precisely defined goal, or it would have been hard to keep at it for so long.

I wrote this new Lisp, called Bel, in itself in Arc. That may sound like a contradiction, but it's an indication of the sort of trickery I had to engage in to make this work. By means of an egregious collection of hacks I managed to make something close enough to an interpreter written in itself that could actually run. Not fast, but fast enough to test.

I had to ban myself from writing essays during most of this time, or I'd never have finished.
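
It's worth seeing what was being juggled here. A minimal interpreter of the kind McCarthy described, written as a toy sketch in Common Lisp (an illustration only, not code from Bel), looks roughly like this:

    ;; A toy Lisp interpreter written in Lisp, in the spirit of
    ;; McCarthy's 1960 eval. Illustration only.
    (defun eval. (e env)
      (cond ((symbolp e) (cdr (assoc e env)))        ; variable lookup
            ((atom e) e)                             ; self-evaluating
            ((eq (car e) 'quote) (cadr e))
            ((eq (car e) 'if) (if (eval. (cadr e) env)
                                  (eval. (caddr e) env)
                                  (eval. (cadddr e) env)))
            ((eq (car e) 'lambda) (list 'closure e env))
            (t (apply. (eval. (car e) env)
                       (mapcar (lambda (a) (eval. a env)) (cdr e))))))

    (defun apply. (f args)
      (destructuring-bind (tag (kw params body) env) f
        (declare (ignore tag kw))
        (eval. body (append (mapcar #'cons params args) env))))

    ;; (eval. '((lambda (x) (if x 'yes 'no)) 'true) '()) => YES

Even at this scale it is easy to see how the levels start to blur once the interpreter grows.
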
In late 2015 I spent 3 months writing essays, and when I went back to working on Bel I could barely understand the code. Not so much because it was badly written as because the problem is so convoluted. When you're working on an interpreter written in itself, it's hard to keep track of what's happening at what level, and errors can be practically encrypted by the time you get them.

So I said no more essays till Bel was done. But I told few people about Bel while I was working on it. So for years it must have seemed that I was doing nothing, when in fact I was working harder than I'd ever worked on anything. Occasionally after wrestling for hours with some gruesome bug I'd check Twitter or HN and see someone asking "Does Paul Graham still code?"

Working on Bel was hard but satisfying. I worked on it so intensively that at any given time I had a decent chunk of the code in my head and could write more there. I remember taking the boys to the coast on a sunny day in 2015 and figuring out how to deal with some problem involving continuations while I watched them play in the tide pools. It felt like I was doing life right. I remember that because I was slightly dismayed at how novel it felt. The good news is that I had more moments like this over the next few years.

In the summer of 2016 we moved to England. We wanted our kids to see what it was like living in another country, and since I was a British citizen by birth, that seemed the obvious choice. We only meant to stay for a year, but we liked it so much that we still live there. So most of Bel was written in England.

In the fall of 2019, Bel was finally finished. Like McCarthy's original Lisp, it's a spec rather than an implementation, although like McCarthy's Lisp it's a spec expressed as code.

Now that I could write essays again, I wrote a bunch about topics I'd had stacked up. I kept writing essays through 2020, but I also started to think about other things I could work on. How should I choose what to do? Well, how had I chosen what to work on in the past? I wrote an essay for myself to answer that question, and I was surprised how long and messy the answer turned out to be. If this surprised me, who'd lived it, then I thought perhaps it would be interesting to other people, and encouraging to those with similarly messy lives. So I wrote a more detailed version for others to read, and this is the last sentence of it.

Notes

[1] My experience skipped a step in the evolution of computers: time-sharing machines with interactive OSes. I went straight from batch processing to microcomputers, which made microcomputers seem all the more exciting.

[2] Italian words for abstract concepts can nearly always be predicted from their English cognates (except for occasional traps like polluzione). It's the everyday words that differ. So if you string together a lot of abstract concepts with a few simple verbs, you can make a little Italian go a long way.

[3] I lived at Piazza San Felice 4, so my walk to the Accademia went straight down the spine of old Florence: past the Pitti, across the bridge, past Orsanmichele, between the Duomo and the Baptistery, and then up Via Ricasoli to Piazza San Marco. I saw Florence at street level in every possible condition, from empty dark winter evenings to sweltering summer days when the streets were packed with tourists.

[4] You can of course paint people like still lives if you want to, and they're willing.
That sort of portrait is arguably the apex of still life painting, though the long sitting does tend to produce pained expressions in the sitters.

[5] Interleaf was one of many companies that had smart people and built impressive technology, and yet got crushed by Moore's Law. In the 1990s the exponential growth in the power of commodity (i.e. Intel) processors rolled up high-end, special-purpose hardware and software companies like a bulldozer.

[6] The signature style seekers at RISD weren't specifically mercenary. In the art world, money and coolness are tightly coupled. Anything expensive comes to be seen as cool, and anything seen as cool will soon become equally expensive.

[7] Technically the apartment wasn't rent-controlled but rent-stabilized, but this is a refinement only New Yorkers would know or care about. The point is that it was really cheap, less than half market price.

[8] Most software you can launch as soon as it's done. But when the software is an online store builder and you're hosting the stores, if you don't have any users yet, that fact will be painfully obvious. So before we could launch publicly we had to launch privately, in the sense of recruiting an initial set of users and making sure they had decent-looking stores.

[9] We'd had a code editor in Viaweb for users to define their own page styles. They didn't know it, but they were editing Lisp expressions underneath. But this wasn't an app editor, because the code ran when the merchants' sites were generated, not when shoppers visited them.

[10] This was the first instance of what is now a familiar experience, and so was what happened next, when I read the comments and found they were full of angry people. How could I claim that Lisp was better than other languages? Weren't they all Turing complete? People who see the responses to essays I write sometimes tell me how sorry they feel for me, but I'm not exaggerating when I reply that it has always been like this, since the very beginning. It comes with the territory. An essay must tell readers things they don't already know, and some people dislike being told such things.

[11] People put plenty of stuff on the internet in the 90s of course, but putting something online is not the same as publishing it online. Publishing online means you treat the online version as the (or at least a) primary version.

[12] There is a general lesson here that our experience with Y Combinator also teaches: Customs continue to constrain you long after the restrictions that caused them have disappeared. Customary VC practice had once, like the customs about publishing essays, been based on real constraints. Startups had once been much more expensive to start, and proportionally rare. Now they could be cheap and common, but the VCs' customs still reflected the old world, just as customs about writing essays still reflected the constraints of the print era.

Which in turn implies that people who are independent-minded (i.e. less influenced by custom) will have an advantage in fields affected by rapid change (where customs are more likely to be obsolete).

Here's an interesting point, though: you can't always predict which fields will be affected by rapid change. Obviously software and venture capital will be, but who would have predicted that essay writing would be?

[13] Y Combinator was not the original name. At first we were called Cambridge Seed.
But we didn't want a regional name, in case someone copied us in Silicon Valley, so we renamed ourselves after one of the coolest tricks in the lambda calculus, the Y combinator.

I picked orange as our color partly because it's the warmest, and partly because no VC used it. In 2005 all the VCs used staid colors like maroon, navy blue, and forest green, because they were trying to appeal to LPs, not founders. The YC logo itself is an inside joke: the Viaweb logo had been a white V on a red circle, so I made the YC logo a white Y on an orange square.

[14] YC did become a fund for a couple years starting in 2009, because it was getting so big I could no longer afford to fund it personally. But after Heroku got bought we had enough money to go back to being self-funded.

[15] I've never liked the term "deal flow," because it implies that the number of new startups at any given time is fixed. This is not only false, but it's the purpose of YC to falsify it, by causing startups to be founded that would not otherwise have existed.

[16] She reports that they were all different shapes and sizes, because there was a run on air conditioners and she had to get whatever she could, but that they were all heavier than she could carry now.

[17] Another problem with HN was a bizarre edge case that occurs when you both write essays and run a forum. When you run a forum, you're assumed to see if not every conversation, at least every conversation involving you. And when you write essays, people post highly imaginative misinterpretations of them on forums. Individually these two phenomena are tedious but bearable, but the combination is disastrous. You actually have to respond to the misinterpretations, because the assumption that you're present in the conversation means that not responding to any sufficiently upvoted misinterpretation reads as a tacit admission that it's correct. But that in turn encourages more; anyone who wants to pick a fight with you senses that now is their chance.

[18] The worst thing about leaving YC was not working with Jessica anymore. We'd been working on YC almost the whole time we'd known each other, and we'd neither tried nor wanted to separate it from our personal lives, so leaving was like pulling up a deeply rooted tree.

[19] One way to get more precise about the concept of invented vs discovered is to talk about space aliens. Any sufficiently advanced alien civilization would certainly know about the Pythagorean theorem, for example. I believe, though with less certainty, that they would also know about the Lisp in McCarthy's 1960 paper.

But if so there's no reason to suppose that this is the limit of the language that might be known to them. Presumably aliens need numbers and errors and I/O too. So it seems likely there exists at least one path out of McCarthy's Lisp along which discoveredness is preserved.

Thanks to Trevor Blackwell, John Collison, Patrick Collison, Daniel Gackle, Ralph Hazell, Jessica Livingston, Robert Morris, and Harj Taggar for reading drafts of this.

popular

May 2001

(This article was written as a kind of business plan for a new language. So it is missing (because it takes for granted) the most important feature of a good programming language: very powerful abstractions.)

A friend of mine once told an eminent operating systems expert that he wanted to design a really good programming language.
The expert told him that it would be a waste of time, that programming languages don't become popular or unpopular based on their merits, and so no matter how good his language was, no one would use it. At least, that was what had happened to the language he had designed.

What does make a language popular? Do popular languages deserve their popularity? Is it worth trying to define a good programming language? How would you do it?

I think the answers to these questions can be found by looking at hackers, and learning what they want. Programming languages are for hackers, and a programming language is good as a programming language (rather than, say, an exercise in denotational semantics or compiler design) if and only if hackers like it.

1 The Mechanics of Popularity

It's true, certainly, that most people don't choose programming languages simply based on their merits. Most programmers are told what language to use by someone else. And yet I think the effect of such external factors on the popularity of programming languages is not as great as it's sometimes thought to be. I think a bigger problem is that a hacker's idea of a good programming language is not the same as most language designers'.

Between the two, the hacker's opinion is the one that matters. Programming languages are not theorems. They're tools, designed for people, and they have to be designed to suit human strengths and weaknesses as much as shoes have to be designed for human feet. If a shoe pinches when you put it on, it's a bad shoe, however elegant it may be as a piece of sculpture.

It may be that the majority of programmers can't tell a good language from a bad one. But that's no different with any other tool. It doesn't mean that it's a waste of time to try designing a good language. Expert hackers can tell a good language when they see one, and they'll use it. Expert hackers are a tiny minority, admittedly, but that tiny minority write all the good software, and their influence is such that the rest of the programmers will tend to use whatever language they use. Often, indeed, it is not merely influence but command: often the expert hackers are the very people who, as their bosses or faculty advisors, tell the other programmers what language to use.

The opinion of expert hackers is not the only force that determines the relative popularity of programming languages — legacy software (Cobol) and hype (Ada, Java) also play a role — but I think it is the most powerful force over the long term. Given an initial critical mass and enough time, a programming language probably becomes about as popular as it deserves to be. And popularity further separates good languages from bad ones, because feedback from real live users always leads to improvements. Look at how much any popular language has changed during its life. Perl and Fortran are extreme cases, but even Lisp has changed a lot. Lisp 1.5 didn't have macros, for example; these evolved later, after hackers at MIT had spent a couple years using Lisp to write real programs. [1]

So whether or not a language has to be good to be popular, I think a language has to be popular to be good. And it has to stay popular to stay good. The state of the art in programming languages doesn't stand still.
And yet the Lisps we have today are still pretty much what they had at MIT in the mid-1980s, because that's the last time Lisp had a sufficiently large and demanding user base.

Of course, hackers have to know about a language before they can use it. How are they to hear? From other hackers. But there has to be some initial group of hackers using the language for others even to hear about it. I wonder how large this group has to be; how many users make a critical mass? Off the top of my head, I'd say twenty. If a language had twenty separate users, meaning twenty users who decided on their own to use it, I'd consider it to be real.

Getting there can't be easy. I would not be surprised if it is harder to get from zero to twenty than from twenty to a thousand. The best way to get those initial twenty users is probably to use a trojan horse: to give people an application they want, which happens to be written in the new language.

2 External Factors

Let's start by acknowledging one external factor that does affect the popularity of a programming language. To become popular, a programming language has to be the scripting language of a popular system. Fortran and Cobol were the scripting languages of early IBM mainframes. C was the scripting language of Unix, and so, later, was Perl. Tcl is the scripting language of Tk. Java and Javascript are intended to be the scripting languages of web browsers.

Lisp is not a massively popular language because it is not the scripting language of a massively popular system. What popularity it retains dates back to the 1960s and 1970s, when it was the scripting language of MIT. A lot of the great programmers of the day were associated with MIT at some point. And in the early 1970s, before C, MIT's dialect of Lisp, called MacLisp, was one of the only programming languages a serious hacker would want to use.

Today Lisp is the scripting language of two moderately popular systems, Emacs and Autocad, and for that reason I suspect that most of the Lisp programming done today is done in Emacs Lisp or AutoLisp.

Programming languages don't exist in isolation. To hack is a transitive verb — hackers are usually hacking something — and in practice languages are judged relative to whatever they're used to hack. So if you want to design a popular language, you either have to supply more than a language, or you have to design your language to replace the scripting language of some existing system.

Common Lisp is unpopular partly because it's an orphan. It did originally come with a system to hack: the Lisp Machine. But Lisp Machines (along with parallel computers) were steamrollered by the increasing power of general purpose processors in the 1980s. Common Lisp might have remained popular if it had been a good scripting language for Unix. It is, alas, an atrociously bad one.

One way to describe this situation is to say that a language isn't judged on its own merits. Another view is that a programming language really isn't a programming language unless it's also the scripting language of something. This only seems unfair if it comes as a surprise. I think it's no more unfair than expecting a programming language to have, say, an implementation. It's just part of what a programming language is.

A programming language does need a good implementation, of course, and this must be free. Companies will pay for software, but individual hackers won't, and it's the hackers you need to attract.

A language also needs to have a book about it.
The book should be thin, well-written, and full of good examples. K&R is the ideal here. At the moment I'd almost say that a language has to have a book published by O'Reilly. That's becoming the test of mattering to hackers.

There should be online documentation as well. In fact, the book can start as online documentation. But I don't think that physical books are outmoded yet. Their format is convenient, and the de facto censorship imposed by publishers is a useful if imperfect filter. Bookstores are one of the most important places for learning about new languages.

3 Brevity

Given that you can supply the three things any language needs — a free implementation, a book, and something to hack — how do you make a language that hackers will like?

One thing hackers like is brevity. Hackers are lazy, in the same way that mathematicians and modernist architects are lazy: they hate anything extraneous. It would not be far from the truth to say that a hacker about to write a program decides what language to use, at least subconsciously, based on the total number of characters he'll have to type. If this isn't precisely how hackers think, a language designer would do well to act as if it were.

It is a mistake to try to baby the user with long-winded expressions that are meant to resemble English. Cobol is notorious for this flaw. A hacker would consider being asked to write

    add x to y giving z

instead of

    z = x+y

as something between an insult to his intelligence and a sin against God.

It has sometimes been said that Lisp should use first and rest instead of car and cdr, because it would make programs easier to read. Maybe for the first couple hours. But a hacker can learn quickly enough that car means the first element of a list and cdr means the rest. Using first and rest means 50% more typing. And they are also different lengths, meaning that the arguments won't line up when they're called, as car and cdr often are, in successive lines. I've found that it matters a lot how code lines up on the page. I can barely read Lisp code when it is set in a variable-width font, and friends say this is true for other languages too.

Brevity is one place where strongly typed languages lose. All other things being equal, no one wants to begin a program with a bunch of declarations. Anything that can be implicit, should be.

The individual tokens should be short as well. Perl and Common Lisp occupy opposite poles on this question. Perl programs can be almost cryptically dense, while the names of built-in Common Lisp operators are comically long. The designers of Common Lisp probably expected users to have text editors that would type these long names for them. But the cost of a long name is not just the cost of typing it. There is also the cost of reading it, and the cost of the space it takes up on your screen.

4 Hackability

There is one thing more important than brevity to a hacker: being able to do what you want. In the history of programming languages a surprising amount of effort has gone into preventing programmers from doing things considered to be improper. This is a dangerously presumptuous plan. How can the language designer know what the programmer is going to need to do? I think language designers would do better to consider their target user to be a genius who will need to do things they never anticipated, rather than a bumbler who needs to be protected from himself. The bumbler will shoot himself in the foot anyway.
You may save him from referring to variables in another package, but you can't save him from writing a badly designed program to solve the wrong problem, and taking forever to do it.

Good programmers often want to do dangerous and unsavory things. By unsavory I mean things that go behind whatever semantic facade the language is trying to present: getting hold of the internal representation of some high-level abstraction, for example. Hackers like to hack, and hacking means getting inside things and second guessing the original designer.

Let yourself be second guessed. When you make any tool, people use it in ways you didn't intend, and this is especially true of a highly articulated tool like a programming language. Many a hacker will want to tweak your semantic model in a way that you never imagined. I say, let them; give the programmer access to as much internal stuff as you can without endangering runtime systems like the garbage collector.

In Common Lisp I have often wanted to iterate through the fields of a struct — to comb out references to a deleted object, for example, or find fields that are uninitialized. I know the structs are just vectors underneath. And yet I can't write a general purpose function that I can call on any struct. I can only access the fields by name, because that's what a struct is supposed to mean.

A hacker may only want to subvert the intended model of things once or twice in a big program. But what a difference it makes to be able to. And it may be more than a question of just solving a problem. There is a kind of pleasure here too. Hackers share the surgeon's secret pleasure in poking about in gross innards, the teenager's secret pleasure in popping zits. [2] For boys, at least, certain kinds of horrors are fascinating. Maxim magazine publishes an annual volume of photographs, containing a mix of pin-ups and grisly accidents. They know their audience.

Historically, Lisp has been good at letting hackers have their way. The political correctness of Common Lisp is an aberration. Early Lisps let you get your hands on everything. A good deal of that spirit is, fortunately, preserved in macros. What a wonderful thing, to be able to make arbitrary transformations on the source code.

Classic macros are a real hacker's tool — simple, powerful, and dangerous. It's so easy to understand what they do: you call a function on the macro's arguments, and whatever it returns gets inserted in place of the macro call. Hygienic macros embody the opposite principle. They try to protect you from understanding what they're doing. I have never heard hygienic macros explained in one sentence. And they are a classic example of the dangers of deciding what programmers are allowed to want. Hygienic macros are intended to protect me from variable capture, among other things, but variable capture is exactly what I want in some macros.

A really good language should be both clean and dirty: cleanly designed, with a small core of well understood and highly orthogonal operators, but dirty in the sense that it lets hackers have their way with it. C is like this. So were the early Lisps. A real hacker's language will always have a slightly raffish character.

A good programming language should have features that make the kind of people who use the phrase "software engineering" shake their heads disapprovingly.
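
Variable capture is a good example of such a feature. The classic use of deliberate capture is an anaphoric if, which binds the result of the test to a variable the caller never declared. A short Common Lisp sketch:

    ;; Deliberate variable capture: IT is bound behind the caller's back.
    (defmacro aif (test then &optional else)
      `(let ((it ,test))
         (if it ,then ,else)))

    ;; The caller refers to IT without ever binding it, e.g.
    ;; (aif (gethash key table)
    ;;      (process it)
    ;;      (error "not found"))

A hygienic macro system exists precisely to prevent this; a classic macro system lets you do it on purpose.
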
At the other end of the continuum are languages like Ada and Pascal, models of propriety that are good for teaching and not much else.

5 Throwaway Programs

To be attractive to hackers, a language must be good for writing the kinds of programs they want to write. And that means, perhaps surprisingly, that it has to be good for writing throwaway programs.

A throwaway program is a program you write quickly for some limited task: a program to automate some system administration task, or generate test data for a simulation, or convert data from one format to another. The surprising thing about throwaway programs is that, like the "temporary" buildings built at so many American universities during World War II, they often don't get thrown away. Many evolve into real programs, with real features and real users.

I have a hunch that the best big programs begin life this way, rather than being designed big from the start, like the Hoover Dam. It's terrifying to build something big from scratch. When people take on a project that's too big, they become overwhelmed. The project either gets bogged down, or the result is sterile and wooden: a shopping mall rather than a real downtown, Brasilia rather than Rome, Ada rather than C.

Another way to get a big program is to start with a throwaway program and keep improving it. This approach is less daunting, and the design of the program benefits from evolution. I think, if one looked, that this would turn out to be the way most big programs were developed. And those that did evolve this way are probably still written in whatever language they were first written in, because it's rare for a program to be ported, except for political reasons. And so, paradoxically, if you want to make a language that is used for big systems, you have to make it good for writing throwaway programs, because that's where big systems come from.

Perl is a striking example of this idea. It was not only designed for writing throwaway programs, but was pretty much a throwaway program itself. Perl began life as a collection of utilities for generating reports, and only evolved into a programming language as the throwaway programs people wrote in it grew larger. It was not until Perl 5 (if then) that the language was suitable for writing serious programs, and yet it was already massively popular.

What makes a language good for throwaway programs? To start with, it must be readily available. A throwaway program is something that you expect to write in an hour. So the language probably must already be installed on the computer you're using. It can't be something you have to install before you use it. It has to be there. C was there because it came with the operating system. Perl was there because it was originally a tool for system administrators, and yours had already installed it.

Being available means more than being installed, though. An interactive language, with a command-line interface, is more available than one that you have to compile and run separately. A popular programming language should be interactive, and start up fast.

Another thing you want in a throwaway program is brevity. Brevity is always attractive to hackers, and never more so than in a program they expect to turn out in an hour.

6 Libraries

Of course the ultimate in brevity is to have the program already written for you, and merely to call it. And this brings us to what I think will be an increasingly important feature of programming languages: library functions.
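As a rough illustration of that ideal, a whole throwaway converter can be little more than a couple of library calls stuck together. A hedged sketch in Common Lisp, assuming a split-string utility that the standard language does not in fact supply (which is exactly the complaint coming up):

    ;; Hypothetical throwaway program: print the first column of a CSV.
    ;; SPLIT-STRING is an assumed library function, not standard
    ;; Common Lisp; everything else here is standard.
    (with-open-file (in "users.csv")
      (loop for line = (read-line in nil)
            while line
            do (format t "~a~%" (first (split-string line #\,)))))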
Perl wins because it has large libraries for manipulating strings. This class of library functions are especially important for throwaway programs, which are often originally written for converting or extracting data. Many Perl programs probably begin as just a couple library calls stuck together.

I think a lot of the advances that happen in programming languages in the next fifty years will have to do with library functions. I think future programming languages will have libraries that are as carefully designed as the core language. Programming language design will not be about whether to make your language strongly or weakly typed, or object oriented, or functional, or whatever, but about how to design great libraries. The kind of language designers who like to think about how to design type systems may shudder at this. It's almost like writing applications! Too bad. Languages are for programmers, and libraries are what programmers need.

It's hard to design good libraries. It's not simply a matter of writing a lot of code. Once the libraries get too big, it can sometimes take longer to find the function you need than to write the code yourself. Libraries need to be designed using a small set of orthogonal operators, just like the core language. It ought to be possible for the programmer to guess what library call will do what he needs.

Libraries are one place Common Lisp falls short. There are only rudimentary libraries for manipulating strings, and almost none for talking to the operating system. For historical reasons, Common Lisp tries to pretend that the OS doesn't exist. And because you can't talk to the OS, you're unlikely to be able to write a serious program using only the built-in operators in Common Lisp. You have to use some implementation-specific hacks as well, and in practice these tend not to give you everything you want. Hackers would think a lot more highly of Lisp if Common Lisp had powerful string libraries and good OS support.

7 Syntax

Could a language with Lisp's syntax, or more precisely, lack of syntax, ever become popular? I don't know the answer to this question. I do think that syntax is not the main reason Lisp isn't currently popular. Common Lisp has worse problems than unfamiliar syntax. I know several programmers who are comfortable with prefix syntax and yet use Perl by default, because it has powerful string libraries and can talk to the OS.

There are two possible problems with prefix notation: that it is unfamiliar to programmers, and that it is not dense enough. The conventional wisdom in the Lisp world is that the first problem is the real one. I'm not so sure. Yes, prefix notation makes ordinary programmers panic. But I don't think ordinary programmers' opinions matter. Languages become popular or unpopular based on what expert hackers think of them, and I think expert hackers might be able to deal with prefix notation. Perl syntax can be pretty incomprehensible, but that has not stood in the way of Perl's popularity. If anything it may have helped foster a Perl cult.

A more serious problem is the diffuseness of prefix notation. For expert hackers, that really is a problem. No one wants to write (aref a x y) when they could write a[x,y].

In this particular case there is a way to finesse our way out of the problem. If we treat data structures as if they were functions on indexes, we could write (a x y) instead, which is even shorter than the Perl form.
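Here is a minimal sketch of that trick in today's Common Lisp. Because Common Lisp keeps functions in a separate namespace, you still pay for a funcall; in a Lisp-1 dialect the call would collapse to just (a x y). The function functionalize is invented for illustration:

    ;; Wrap an array in a closure so it can be applied to its indexes.
    (defun functionalize (a)
      (lambda (&rest indexes) (apply #'aref a indexes)))

    (let ((a (functionalize (make-array '(2 3) :initial-element 7))))
      (funcall a 0 1))    ; in a Lisp-1, simply (a 0 1) => 7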
Similar tricks may shorten other types of expressions.

We can get rid of (or make optional) a lot of parentheses by making indentation significant. That's how programmers read code anyway: when indentation says one thing and delimiters say another, we go by the indentation. Treating indentation as significant would eliminate this common source of bugs as well as making programs shorter.

Sometimes infix syntax is easier to read. This is especially true for math expressions. I've used Lisp my whole programming life and I still don't find prefix math expressions natural. And yet it is convenient, especially when you're generating code, to have operators that take any number of arguments. So if we do have infix syntax, it should probably be implemented as some kind of read-macro.

I don't think we should be religiously opposed to introducing syntax into Lisp, as long as it translates in a well-understood way into underlying s-expressions. There is already a good deal of syntax in Lisp. It's not necessarily bad to introduce more, as long as no one is forced to use it. In Common Lisp, some delimiters are reserved for the language, suggesting that at least some of the designers intended to have more syntax in the future.

One of the most egregiously unlispy pieces of syntax in Common Lisp occurs in format strings; format is a language in its own right, and that language is not Lisp. If there were a plan for introducing more syntax into Lisp, format specifiers might be able to be included in it. It would be a good thing if macros could generate format specifiers the way they generate any other kind of code.

An eminent Lisp hacker told me that his copy of CLTL falls open to the section on format. Mine too. This probably indicates room for improvement. It may also mean that programs do a lot of I/O.

8 Efficiency

A good language, as everyone knows, should generate fast code. But in practice I don't think fast code comes primarily from things you do in the design of the language. As Knuth pointed out long ago, speed only matters in certain critical bottlenecks. And as many programmers have observed since, one is very often mistaken about where these bottlenecks are.

So, in practice, the way to get fast code is to have a very good profiler, rather than by, say, making the language strongly typed. You don't need to know the type of every argument in every call in the program. You do need to be able to declare the types of arguments in the bottlenecks. And even more, you need to be able to find out where the bottlenecks are.

One complaint people have had with Lisp is that it's hard to tell what's expensive. This might be true. It might also be inevitable, if you want to have a very abstract language. And in any case I think good profiling would go a long way toward fixing the problem: you'd soon learn what was expensive.

Part of the problem here is social. Language designers like to write fast compilers. That's how they measure their skill. They think of the profiler as an add-on, at best. But in practice a good profiler may do more to improve the speed of actual programs written in the language than a compiler that generates fast code. Here, again, language designers are somewhat out of touch with their users. They do a really good job of solving slightly the wrong problem.

It might be a good idea to have an active profiler — to push performance data to the programmer instead of waiting for him to come asking for it.
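A crude sketch of the push model, to fix the idea: nothing like a real profiler, just a hand-rolled wrapper that volunteers timing data whenever a call runs over a budget the caller picks. Everything here is standard Common Lisp except the example function named in the usage comment, which is hypothetical:

    ;; Wrap FN so that any call slower than BUDGET-MS milliseconds
    ;; complains on its own, without being asked.
    (defun nagging (fn budget-ms name)
      (lambda (&rest args)
        (let* ((start (get-internal-real-time))
               (result (apply fn args))
               (ms (/ (* 1000 (- (get-internal-real-time) start))
                      internal-time-units-per-second)))
          (when (> ms budget-ms)
            (warn "~a took ~,1f ms" name ms))
          result)))

    ;; Hypothetical usage, wrapping some function PARSE:
    ;; (setf (symbol-function 'parse) (nagging #'parse 5 'parse))

A real active profiler could be far pushier.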
For example, the editor could display bottlenecks in red when the programmer edits the source code. Another approach would be to somehow represent what's happening in running programs. This would be an especially big win in server-based applications, where you have lots of running programs to look at. An active profiler could show graphically what's happening in memory as a program's running, or even make sounds that tell what's happening.

Sound is a good cue to problems. In one place I worked, we had a big board of dials showing what was happening to our web servers. The hands were moved by little servomotors that made a slight noise when they turned. I couldn't see the board from my desk, but I found that I could tell immediately, by the sound, when there was a problem with a server.

It might even be possible to write a profiler that would automatically detect inefficient algorithms. I would not be surprised if certain patterns of memory access turned out to be sure signs of bad algorithms. If there were a little guy running around inside the computer executing our programs, he would probably have as long and plaintive a tale to tell about his job as a federal government employee. I often have a feeling that I'm sending the processor on a lot of wild goose chases, but I've never had a good way to look at what it's doing.

A number of Lisps now compile into byte code, which is then executed by an interpreter. This is usually done to make the implementation easier to port, but it could be a useful language feature. It might be a good idea to make the byte code an official part of the language, and to allow programmers to use inline byte code in bottlenecks. Then such optimizations would be portable too.

The nature of speed, as perceived by the end-user, may be changing. With the rise of server-based applications, more and more programs may turn out to be I/O-bound. It will be worth making I/O fast. The language can help with straightforward measures like simple, fast, formatted output functions, and also with deep structural changes like caching and persistent objects.

Users are interested in response time. But another kind of efficiency will be increasingly important: the number of simultaneous users you can support per processor. Many of the interesting applications written in the near future will be server-based, and the number of users per server is the critical question for anyone hosting such applications. In the capital cost of a business offering a server-based application, this is the divisor.

For years, efficiency hasn't mattered much in most end-user applications. Developers have been able to assume that each user would have an increasingly powerful processor sitting on their desk. And by Parkinson's Law, software has expanded to use the resources available. That will change with server-based applications. In that world, the hardware and software will be supplied together. For companies that offer server-based applications, it will make a very big difference to the bottom line how many users they can support per server.

In some applications, the processor will be the limiting factor, and execution speed will be the most important thing to optimize. But often memory will be the limit; the number of simultaneous users will be determined by the amount of memory you need for each user's data. The language can help here too. Good support for threads will enable all the users to share a single heap.
It may also help to have persistent objects and/or language level support for lazy loading.

9 Time

The last ingredient a popular language needs is time. No one wants to write programs in a language that might go away, as so many programming languages do. So most hackers will tend to wait until a language has been around for a couple years before even considering using it.

Inventors of wonderful new things are often surprised to discover this, but you need time to get any message through to people. A friend of mine rarely does anything the first time someone asks him. He knows that people sometimes ask for things that they turn out not to want. To avoid wasting his time, he waits till the third or fourth time he's asked to do something; by then, whoever's asking him may be fairly annoyed, but at least they probably really do want whatever they're asking for.

Most people have learned to do a similar sort of filtering on new things they hear about. They don't even start paying attention until they've heard about something ten times. They're perfectly justified: the majority of hot new whatevers do turn out to be a waste of time, and eventually go away. By delaying learning VRML, I avoided having to learn it at all.

So anyone who invents something new has to expect to keep repeating their message for years before people will start to get it. We wrote what was, as far as I know, the first web-server based application, and it took us years to get it through to people that it didn't have to be downloaded. It wasn't that they were stupid. They just had us tuned out.

The good news is, simple repetition solves the problem. All you have to do is keep telling your story, and eventually people will start to hear. It's not when people notice you're there that they pay attention; it's when they notice you're still there.

It's just as well that it usually takes a while to gain momentum. Most technologies evolve a good deal even after they're first launched — programming languages especially. Nothing could be better, for a new technology, than a few years of being used only by a small number of early adopters. Early adopters are sophisticated and demanding, and quickly flush out whatever flaws remain in your technology. When you only have a few users you can be in close contact with all of them. And early adopters are forgiving when you improve your system, even if this causes some breakage.

There are two ways new technology gets introduced: the organic growth method, and the big bang method. The organic growth method is exemplified by the classic seat-of-the-pants underfunded garage startup. A couple guys, working in obscurity, develop some new technology. They launch it with no marketing and initially have only a few (fanatically devoted) users. They continue to improve the technology, and meanwhile their user base grows by word of mouth. Before they know it, they're big.

The other approach, the big bang method, is exemplified by the VC-backed, heavily marketed startup. They rush to develop a product, launch it with great publicity, and immediately (they hope) have a large user base.

Generally, the garage guys envy the big bang guys. The big bang guys are smooth and confident and respected by the VCs. They can afford the best of everything, and the PR campaign surrounding the launch has the side effect of making them celebrities. The organic growth guys, sitting in their garage, feel poor and unloved.
And yet I think they are often mistaken to feel sorry for themselves. Organic growth seems to yield better technology and richer founders than the big bang method. If you look at the dominant technologies today, you'll find that most of them grew organically.

This pattern doesn't only apply to companies. You see it in sponsored research too. Multics and Common Lisp were big-bang projects, and Unix and MacLisp were organic growth projects.

10 Redesign

"The best writing is rewriting," wrote E. B. White. Every good writer knows this, and it's true for software too. The most important part of design is redesign. Programming languages, especially, don't get redesigned enough.

To write good software you must simultaneously keep two opposing ideas in your head. You need the young hacker's naive faith in his abilities, and at the same time the veteran's skepticism. You have to be able to think how hard can it be? with one half of your brain while thinking it will never work with the other.

The trick is to realize that there's no real contradiction here. You want to be optimistic and skeptical about two different things. You have to be optimistic about the possibility of solving the problem, but skeptical about the value of whatever solution you've got so far.

People who do good work often think that whatever they're working on is no good. Others see what they've done and are full of wonder, but the creator is full of worry. This pattern is no coincidence: it is the worry that made the work good.

If you can keep hope and worry balanced, they will drive a project forward the same way your two legs drive a bicycle forward. In the first phase of the two-cycle innovation engine, you work furiously on some problem, inspired by your confidence that you'll be able to solve it. In the second phase, you look at what you've done in the cold light of morning, and see all its flaws very clearly. But as long as your critical spirit doesn't outweigh your hope, you'll be able to look at your admittedly incomplete system, and think, how hard can it be to get the rest of the way?, thereby continuing the cycle.

It's tricky to keep the two forces balanced. In young hackers, optimism predominates. They produce something, are convinced it's great, and never improve it. In old hackers, skepticism predominates, and they won't even dare to take on ambitious projects.

Anything you can do to keep the redesign cycle going is good. Prose can be rewritten over and over until you're happy with it. But software, as a rule, doesn't get redesigned enough. Prose has readers, but software has users. If a writer rewrites an essay, people who read the old version are unlikely to complain that their thoughts have been broken by some newly introduced incompatibility.

Users are a double-edged sword. They can help you improve your language, but they can also deter you from improving it. So choose your users carefully, and be slow to grow their number. Having users is like optimization: the wise course is to delay it. Also, as a general rule, you can at any given time get away with changing more than you think. Introducing change is like pulling off a bandage: the pain is a memory almost as soon as you feel it.

Everyone knows that it's not a good idea to have a language designed by a committee. Committees yield bad design. But I think the worst danger of committees is that they interfere with redesign.
It is so much work to introduce changes that no one wants to bother. Whatever a committee decides tends to stay that way, even if most of the members don't like it.

Even a committee of two gets in the way of redesign. This happens particularly in the interfaces between pieces of software written by two different people. To change the interface both have to agree to change it at once. And so interfaces tend not to change at all, which is a problem because they tend to be one of the most ad hoc parts of any system.

One solution here might be to design systems so that interfaces are horizontal instead of vertical — so that modules are always vertically stacked strata of abstraction. Then the interface will tend to be owned by one of them. The lower of two levels will either be a language in which the upper is written, in which case the lower level will own the interface, or it will be a slave, in which case the interface can be dictated by the upper level.

11 Lisp

What all this implies is that there is hope for a new Lisp. There is hope for any language that gives hackers what they want, including Lisp. I think we may have made a mistake in thinking that hackers are turned off by Lisp's strangeness. This comforting illusion may have prevented us from seeing the real problem with Lisp, or at least Common Lisp, which is that it sucks for doing what hackers want to do. A hacker's language needs powerful libraries and something to hack. Common Lisp has neither. A hacker's language is terse and hackable. Common Lisp is not.

The good news is, it's not Lisp that sucks, but Common Lisp. If we can develop a new Lisp that is a real hacker's language, I think hackers will use it. They will use whatever language does the job. All we have to do is make sure this new Lisp does some important job better than other languages.

History offers some encouragement. Over time, successive new programming languages have taken more and more features from Lisp. There is no longer much left to copy before the language you've made is Lisp. The latest hot language, Python, is a watered-down Lisp with infix syntax and no macros. A new Lisp would be a natural step in this progression.

I sometimes think that it would be a good marketing trick to call it an improved version of Python. That sounds hipper than Lisp. To many people, Lisp is a slow AI language with a lot of parentheses. Fritz Kunze's official biography carefully avoids mentioning the L-word. But my guess is that we shouldn't be afraid to call the new Lisp Lisp. Lisp still has a lot of latent respect among the very best hackers — the ones who took 6.001 and understood it, for example.
And those are the users you need to win.

In "How to Become a Hacker," Eric Raymond describes Lisp as something like Latin or Greek — a language you should learn as an intellectual exercise, even though you won't actually use it:

 Lisp is worth learning for the profound enlightenment experience
 you will have when you finally get it; that experience will make
 you a better programmer for the rest of your days, even if you
 never actually use Lisp itself a lot.

If I didn't know Lisp, reading this would set me asking questions. A language that would make me a better programmer, if it means anything at all, means a language that would be better for programming. And that is in fact the implication of what Eric is saying.

As long as that idea is still floating around, I think hackers will be receptive enough to a new Lisp, even if it is called Lisp. But this Lisp must be a hacker's language, like the classic Lisps of the 1970s. It must be terse, simple, and hackable. And it must have powerful libraries for doing what hackers want to do now.

In the matter of libraries I think there is room to beat languages like Perl and Python at their own game. A lot of the new applications that will need to be written in the coming years will be server-based applications. There's no reason a new Lisp shouldn't have string libraries as good as Perl, and if this new Lisp also had powerful libraries for server-based applications, it could be very popular. Real hackers won't turn up their noses at a new tool that will let them solve hard problems with a few library calls. Remember, hackers are lazy.

It could be an even bigger win to have core language support for server-based applications. For example, explicit support for programs with multiple users, or data ownership at the level of type tags.

Server-based applications also give us the answer to the question of what this new Lisp will be used to hack. It would not hurt to make Lisp better as a scripting language for Unix. (It would be hard to make it worse.) But I think there are areas where existing languages would be easier to beat. I think it might be better to follow the model of Tcl, and supply the Lisp together with a complete system for supporting server-based applications. Lisp is a natural fit for server-based applications. Lexical closures provide a way to get the effect of subroutines when the ui is just a series of web pages. S-expressions map nicely onto html, and macros are good at generating it. There need to be better tools for writing server-based applications, and there needs to be a new Lisp, and the two would work very well together.

12 The Dream Language

By way of summary, let's try describing the hacker's dream language. The dream language is beautiful, clean, and terse. It has an interactive toplevel that starts up fast. You can write programs to solve common problems with very little code. Nearly all the code in any program you write is code that's specific to your application. Everything else has been done for you.

The syntax of the language is brief to a fault. You never have to type an unnecessary character, or even to use the shift key much.

Using big abstractions you can write the first version of a program very quickly. Later, when you want to optimize, there's a really good profiler that tells you where to focus your attention.
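Today's Common Lisp already gives a rough picture of what that focusing looks like: a minimal sketch, with the function and its types invented for illustration.

    ;; A hypothetical bottleneck found by the profiler.  Only this
    ;; function gets type declarations; the rest of the program
    ;; stays dynamic.
    (defun dot (xs ys)
      (declare (optimize (speed 3) (safety 0))
               (type (simple-array double-float (*)) xs ys))
      (let ((sum 0d0))
        (declare (type double-float sum))
        (dotimes (i (length xs) sum)
          (incf sum (* (aref xs i) (aref ys i))))))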
You can make inner loops blindingly fast, even writing inline byte code if you need to.

There are lots of good examples to learn from, and the language is intuitive enough that you can learn how to use it from examples in a couple minutes. You don't need to look in the manual much. The manual is thin, and has few warnings and qualifications.

The language has a small core, and powerful, highly orthogonal libraries that are as carefully designed as the core language. The libraries all work well together; everything in the language fits together like the parts in a fine camera. Nothing is deprecated, or retained for compatibility. The source code of all the libraries is readily available. It's easy to talk to the operating system and to applications written in other languages.

The language is built in layers. The higher-level abstractions are built in a very transparent way out of lower-level abstractions, which you can get hold of if you want.

Nothing is hidden from you that doesn't absolutely have to be. The language offers abstractions only as a way of saving you work, rather than as a way of telling you what to do. In fact, the language encourages you to be an equal participant in its design. You can change everything about it, including even its syntax, and anything you write has, as much as possible, the same status as what comes predefined.

Notes

[1] Macros very close to the modern idea were proposed by Timothy Hart in 1964, two years after Lisp 1.5 was released. What was missing, initially, were ways to avoid variable capture and multiple evaluation; Hart's examples are subject to both.

[2] In When the Air Hits Your Brain, neurosurgeon Frank Vertosick recounts a conversation in which his chief resident, Gary, talks about the difference between surgeons and internists ("fleas"):

 Gary and I ordered a large pizza and found an open booth. The
 chief lit a cigarette. "Look at those goddamn fleas, jabbering
 about some disease they'll see once in their lifetimes. That's
 the trouble with fleas, they only like the bizarre stuff. They
 hate their bread and butter cases. That's the difference between
 us and the fucking fleas. See, we love big juicy lumbar disc
 herniations, but they hate hypertension...."

It's hard to think of a lumbar disc herniation as juicy (except literally). And yet I think I know what they mean. I've often had a juicy bug to track down. Someone who's not a programmer would find it hard to imagine that there could be pleasure in a bug. Surely it's better if everything just works. In one way, it is. And yet there is undeniably a grim satisfaction in hunting down certain sorts of bugs."} {"title": "pow", "text": "

January 2017

People who are powerful but uncharismatic will tend to be disliked. Their power makes them a target for criticism that they don't have the charisma to disarm. That was Hillary Clinton's problem. It also tends to be a problem for any CEO who is more of a builder than a schmoozer. And yet the builder-type CEO is (like Hillary) probably the best person for the job.

I don't think there is any solution to this problem. It's human nature. The best we can do is to recognize that it's happening, and to understand that being a magnet for criticism is sometimes a sign not that someone is the wrong person for a job, but that they're the right one."} {"title": "submarine", "text": "

April 2005

"Suits make a corporate comeback," says the New York Times. Why does this sound familiar?
Maybe because\nthe suit was also back in February,\n\nSeptember\n2004, June\n2004, March\n2004, September\n2003, \n\nNovember\n2002, \nApril 2002,\nand February\n2002.\n\nWhy do the media keep running stories saying suits are back? Because\nPR firms tell \nthem to. One of the most surprising things I discovered\nduring my brief business career was the existence of the PR industry,\nlurking like a huge, quiet submarine beneath the news. Of the\nstories you read in traditional media that aren't about politics,\ncrimes, or disasters, more than half probably come from PR firms.I know because I spent years hunting such \"press hits.\" Our startup spent\nits entire marketing budget on PR: at a time when we were assembling\nour own computers to save money, we were paying a PR firm $16,000\na month. And they were worth it. PR is the news equivalent of\nsearch engine optimization; instead of buying ads, which readers\nignore, you get yourself inserted directly into the stories. [1]Our PR firm\nwas one of the best in the business. In 18 months, they got press\nhits in over 60 different publications. \nAnd we weren't the only ones they did great things for. \nIn 1997 I got a call from another\nstartup founder considering hiring them to promote his company. I\ntold him they were PR gods, worth every penny of their outrageous \nfees. But I remember thinking his company's name was odd.\nWhy call an auction site \"eBay\"?\nSymbiosisPR is not dishonest. Not quite. In fact, the reason the best PR\nfirms are so effective is precisely that they aren't dishonest.\nThey give reporters genuinely valuable information. A good PR firm\nwon't bug reporters just because the client tells them to; they've\nworked hard to build their credibility with reporters, and they\ndon't want to destroy it by feeding them mere propaganda.If anyone is dishonest, it's the reporters. The main reason PR \nfirms exist is that reporters are lazy. Or, to put it more nicely,\noverworked. Really they ought to be out there digging up stories\nfor themselves. But it's so tempting to sit in their offices and\nlet PR firms bring the stories to them. After all, they know good\nPR firms won't lie to them.A good flatterer doesn't lie, but tells his victim selective truths\n(what a nice color your eyes are). Good PR firms use the same\nstrategy: they give reporters stories that are true, but whose truth\nfavors their clients.For example, our PR firm often pitched stories about how the Web \nlet small merchants compete with big ones. This was perfectly true.\nBut the reason reporters ended up writing stories about this\nparticular truth, rather than some other one, was that small merchants\nwere our target market, and we were paying the piper.Different publications vary greatly in their reliance on PR firms.\nAt the bottom of the heap are the trade press, who make most of\ntheir money from advertising and would give the magazines away for\nfree if advertisers would let them. [2] The average\ntrade publication is a bunch of ads, glued together by just enough\narticles to make it look like a magazine. They're so desperate for\n\"content\" that some will print your press releases almost verbatim,\nif you take the trouble to write them to read like articles.At the other extreme are publications like the New York Times\nand the Wall Street Journal. Their reporters do go out and\nfind their own stories, at least some of the time. They'll listen \nto PR firms, but briefly and skeptically. 
We managed to get press \nhits in almost every publication we wanted, but we never managed \nto crack the print edition of the Times. [3]The weak point of the top reporters is not laziness, but vanity.\nYou don't pitch stories to them. You have to approach them as if\nyou were a specimen under their all-seeing microscope, and make it\nseem as if the story you want them to run is something they thought \nof themselves.Our greatest PR coup was a two-part one. We estimated, based on\nsome fairly informal math, that there were about 5000 stores on the\nWeb. We got one paper to print this number, which seemed neutral \nenough. But once this \"fact\" was out there in print, we could quote\nit to other publications, and claim that with 1000 users we had 20%\nof the online store market.This was roughly true. We really did have the biggest share of the\nonline store market, and 5000 was our best guess at its size. But\nthe way the story appeared in the press sounded a lot more definite.Reporters like definitive statements. For example, many of the\nstories about Jeremy Jaynes's conviction say that he was one of the\n10 worst spammers. This \"fact\" originated in Spamhaus's ROKSO list,\nwhich I think even Spamhaus would admit is a rough guess at the top\nspammers. The first stories about Jaynes cited this source, but\nnow it's simply repeated as if it were part of the indictment. \n[4]All you can say with certainty about Jaynes is that he was a fairly\nbig spammer. But reporters don't want to print vague stuff like\n\"fairly big.\" They want statements with punch, like \"top ten.\" And\nPR firms give them what they want.\nWearing suits, we're told, will make us \n3.6\npercent more productive.BuzzWhere the work of PR firms really does get deliberately misleading is in\nthe generation of \"buzz.\" They usually feed the same story to \nseveral different publications at once. And when readers see similar\nstories in multiple places, they think there is some important trend\nafoot. Which is exactly what they're supposed to think.When Windows 95 was launched, people waited outside stores\nat midnight to buy the first copies. None of them would have been\nthere without PR firms, who generated such a buzz in\nthe news media that it became self-reinforcing, like a nuclear chain\nreaction.I doubt PR firms realize it yet, but the Web makes it possible to \ntrack them at work. If you search for the obvious phrases, you\nturn up several efforts over the years to place stories about the \nreturn of the suit. For example, the Reuters article \n\nthat got picked up by USA\nToday in September 2004. \"The suit is back,\" it begins.Trend articles like this are almost always the work of\nPR firms. Once you know how to read them, it's straightforward to\nfigure out who the client is. With trend stories, PR firms usually\nline up one or more \"experts\" to talk about the industry generally. \nIn this case we get three: the NPD Group, the creative director of\nGQ, and a research director at Smith Barney. [5] When\nyou get to the end of the experts, look for the client. And bingo, \nthere it is: The Men's Wearhouse.Not surprising, considering The Men's Wearhouse was at that moment \nrunning ads saying \"The Suit is Back.\" Talk about a successful\npress hit-- a wire service article whose first sentence is your own\nad copy.The secret to finding other press hits from a given pitch\nis to realize that they all started from the same document back at\nthe PR firm. 
Search for a few key phrases and the names of the\nclients and the experts, and you'll turn up other variants of this \nstory.Casual\nfridays are out and dress codes are in writes Diane E. Lewis\nin The Boston Globe. In a remarkable coincidence, Ms. Lewis's\nindustry contacts also include the creative director of GQ.Ripped jeans and T-shirts are out, writes Mary Kathleen Flynn in\nUS News & World Report. And she too knows the \ncreative director of GQ.Men's suits\nare back writes Nicole Ford in Sexbuzz.Com (\"the ultimate men's\nentertainment magazine\").Dressing\ndown loses appeal as men suit up at the office writes Tenisha\nMercer of The Detroit News.\nNow that so many news articles are online, I suspect you could find\na similar pattern for most trend stories placed by PR firms. I\npropose we call this new sport \"PR diving,\" and I'm sure there are\nfar more striking examples out there than this clump of five stories.OnlineAfter spending years chasing them, it's now second nature\nto me to recognize press hits for what they are. But before we\nhired a PR firm I had no idea where articles in the mainstream media\ncame from. I could tell a lot of them were crap, but I didn't\nrealize why.Remember the exercises in critical reading you did in school, where\nyou had to look at a piece of writing and step back and ask whether\nthe author was telling the whole truth? If you really want to be\na critical reader, it turns out you have to step back one step\nfurther, and ask not just whether the author is telling the truth,\nbut why he's writing about this subject at all.Online, the answer tends to be a lot simpler. Most people who\npublish online write what they write for the simple reason that\nthey want to. You\ncan't see the fingerprints of PR firms all over the articles, as\nyou can in so many print publications-- which is one of the reasons,\nthough they may not consciously realize it, that readers trust\nbloggers more than Business Week.I was talking recently to a friend who works for a\nbig newspaper. He thought the print media were in serious trouble,\nand that they were still mostly in denial about it. \"They think\nthe decline is cyclic,\" he said. \"Actually it's structural.\"In other words, the readers are leaving, and they're not coming\nback.\nWhy? I think the main reason is that the writing online is more honest.\nImagine how incongruous the New York Times article about\nsuits would sound if you read it in a blog:\n The urge to look corporate-- sleek, commanding,\n prudent, yet with just a touch of hubris on your well-cut sleeve--\n is an unexpected development in a time of business disgrace.\n \nThe problem\nwith this article is not just that it originated in a PR firm.\nThe whole tone is bogus. This is the tone of someone writing down\nto their audience.Whatever its flaws, the writing you find online\nis authentic. It's not mystery meat cooked up\nout of scraps of pitch letters and press releases, and pressed into \nmolds of zippy\njournalese. It's people writing what they think.I didn't realize, till there was an alternative, just how artificial\nmost of the writing in the mainstream media was. I'm not saying\nI used to believe what I read in Time and Newsweek. Since high\nschool, at least, I've thought of magazines like that more as\nguides to what ordinary people were being\ntold to think than as \nsources of information. But I didn't realize till the last \nfew years that writing for publication didn't have to mean writing\nthat way. 
I didn't realize you could write as candidly and informally as you would if you were writing to a friend.

Readers aren't the only ones who've noticed the change. The PR industry has too. A hilarious article on the site of the PR Society of America gets to the heart of the matter:

 Bloggers are sensitive about becoming mouthpieces
 for other organizations and companies, which is the reason they
 began blogging in the first place.

PR people fear bloggers for the same reason readers like them. And that means there may be a struggle ahead. As this new kind of writing draws readers away from traditional media, we should be prepared for whatever PR mutates into to compensate. When I think how hard PR firms work to score press hits in the traditional media, I can't imagine they'll work any less hard to feed stories to bloggers, if they can figure out how.

Notes

[1] PR has at least one beneficial feature: it favors small companies. If PR didn't work, the only alternative would be to advertise, and only big companies can afford that.

[2] Advertisers pay less for ads in free publications, because they assume readers ignore something they get for free. This is why so many trade publications nominally have a cover price and yet give away free subscriptions with such abandon.

[3] Different sections of the Times vary so much in their standards that they're practically different papers. Whoever fed the style section reporter this story about suits coming back would have been sent packing by the regular news reporters.

[4] The most striking example I know of this type is the "fact" that the Internet worm of 1988 infected 6000 computers. I was there when it was cooked up, and this was the recipe: someone guessed that there were about 60,000 computers attached to the Internet, and that the worm might have infected ten percent of them.

Actually no one knows how many computers the worm infected, because the remedy was to reboot them, and this destroyed all traces. But people like numbers. And so this one is now replicated all over the Internet, like a little worm of its own.

[5] Not all were necessarily supplied by the PR firm. Reporters sometimes call a few additional sources on their own, like someone adding a few fresh vegetables to a can of soup.

Thanks to Ingrid Basset, Trevor Blackwell, Sarah Harlin, Jessica Livingston, Jackie McDonough, Robert Morris, and Aaron Swartz (who also found the PRSA article) for reading drafts of this.

Correction: Earlier versions used a recent Business Week article mentioning del.icio.us as an example of a press hit, but Joshua Schachter tells me it was spontaneous."} {"title": "sun", "text": "

September 2017

The most valuable insights are both general and surprising. F = ma for example. But general and surprising is a hard combination to achieve. That territory tends to be picked clean, precisely because those insights are so valuable.

Ordinarily, the best that people can do is one without the other: either surprising without being general (e.g. gossip), or general without being surprising (e.g. platitudes).

Where things get interesting is the moderately valuable insights. You get those from small additions of whichever quality was missing. The more common case is a small addition of generality: a piece of gossip that's more than just gossip, because it teaches something interesting about the world.
But another less common approach is to focus on\nthe most general ideas and see if you can find something new\nto say about them. Because these start out so general, you\nonly need a small delta of novelty to produce a useful\ninsight.A small delta of novelty is all you'll be able to get most\nof the time. Which means if you take this route, your ideas\nwill seem a lot like ones that already exist. Sometimes\nyou'll find you've merely rediscovered an idea that did\nalready exist. But don't be discouraged. Remember the huge\nmultiplier that kicks in when you do manage to think of\nsomething even a little new.Corollary: the more general the ideas you're talking about,\nthe less you should worry about repeating yourself. If you\nwrite enough, it's inevitable you will. Your brain is much\nthe same from year to year and so are the stimuli that hit\nit. I feel slightly bad when I find I've said something\nclose to what I've said before, as if I were plagiarizing\nmyself. But rationally one shouldn't. You won't say\nsomething exactly the same way the second time, and that\nvariation increases the chance you'll get that tiny but\ncritical delta of novelty.And of course, ideas beget ideas. (That sounds \nfamiliar.)\nAn idea with a small amount of novelty could lead to one\nwith more. But only if you keep going. So it's doubly\nimportant not to let yourself be discouraged by people who\nsay there's not much new about something you've discovered.\n\"Not much new\" is a real achievement when you're talking\nabout the most general ideas. It's not true that there's nothing new under the sun. There\nare some domains where there's almost nothing new. But\nthere's a big difference between nothing and almost nothing,\nwhen it's multiplied by the area under the sun.\nThanks to Sam Altman, Patrick Collison, and Jessica\nLivingston for reading drafts of this."} {"title": "weird", "text": "August 2021When people say that in their experience all programming languages\nare basically equivalent, they're making a statement not about\nlanguages but about the kind of programming they've done.99.5% of programming consists of gluing together calls to library\nfunctions. All popular languages are equally good at this. So one\ncan easily spend one's whole career operating in the intersection\nof popular programming languages.But the other .5% of programming is disproportionately interesting.\nIf you want to learn what it consists of, the weirdness of weird\nlanguages is a good clue to follow.Weird languages aren't weird by accident. Not the good ones, at\nleast. The weirdness of the good ones usually implies the existence\nof some form of programming that's not just the usual gluing together\nof library calls.A concrete example: Lisp macros. Lisp macros seem weird even to\nmany Lisp programmers. They're not only not in the intersection of\npopular languages, but by their nature would be hard to implement\nproperly in a language without turning it into a dialect of\nLisp. And macros are definitely evidence of techniques that go\nbeyond glue programming. For example, solving problems by first\nwriting a language for problems of that type, and then writing\nyour specific application in it. Nor is this all you can do with\nmacros; it's just one region in a space of program-manipulating\ntechniques that even now is far from fully explored.So if you want to expand your concept of what programming can be,\none way to do it is by learning weird languages. 
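To give the macro example above a little more substance, here is a hedged sketch in Common Lisp of a tiny problem-specific construct; the macro and its name are invented for illustration:

    ;; WITH-RETRIES: a small language feature that a plain function
    ;; could not be, because it controls whether and how often its
    ;; body is evaluated.
    (defmacro with-retries ((n) &body body)
      (let ((i (gensym)) (done (gensym)))
        `(block ,done
           (dotimes (,i ,n)
             (ignore-errors (return-from ,done (progn ,@body)))))))

    ;; Hypothetical usage: try a flaky network call up to 3 times.
    ;; (with-retries (3) (fetch-url url))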
Pick a language\nthat most programmers consider weird but whose median user is smart,\nand then focus on the differences between this language and the\nintersection of popular languages. What can you say in this language\nthat would be impossibly inconvenient to say in others? In the\nprocess of learning how to say things you couldn't previously say,\nyou'll probably be learning how to think things you couldn't\npreviously think.\nThanks to Trevor Blackwell, Patrick Collison, Daniel Gackle, Amjad\nMasad, and Robert Morris for reading drafts of this.\n"} {"title": "diff", "text": "December 2001 (rev. May 2002)\n\n(This article came about in response to some questions on\nthe LL1 mailing list. It is now\nincorporated in Revenge of the Nerds.)When McCarthy designed Lisp in the late 1950s, it was\na radical departure from existing languages,\nthe most important of which was Fortran.Lisp embodied nine new ideas:\n1. Conditionals. A conditional is an if-then-else\nconstruct. We take these for granted now. They were \ninvented\nby McCarthy in the course of developing Lisp. \n(Fortran at that time only had a conditional\ngoto, closely based on the branch instruction in the \nunderlying hardware.) McCarthy, who was on the Algol committee, got\nconditionals into Algol, whence they spread to most other\nlanguages.2. A function type. In Lisp, functions are first class \nobjects-- they're a data type just like integers, strings,\netc, and have a literal representation, can be stored in variables,\ncan be passed as arguments, and so on.3. Recursion. Recursion existed as a mathematical concept\nbefore Lisp of course, but Lisp was the first programming language to support\nit. (It's arguably implicit in making functions first class\nobjects.)4. A new concept of variables. In Lisp, all variables\nare effectively pointers. Values are what\nhave types, not variables, and assigning or binding\nvariables means copying pointers, not what they point to.5. Garbage-collection.6. Programs composed of expressions. Lisp programs are \ntrees of expressions, each of which returns a value. \n(In some Lisps expressions\ncan return multiple values.) This is in contrast to Fortran\nand most succeeding languages, which distinguish between\nexpressions and statements.It was natural to have this\ndistinction in Fortran because (not surprisingly in a language\nwhere the input format was punched cards) the language was\nline-oriented. You could not nest statements. And\nso while you needed expressions for math to work, there was\nno point in making anything else return a value, because\nthere could not be anything waiting for it.This limitation\nwent away with the arrival of block-structured languages,\nbut by then it was too late. The distinction between\nexpressions and statements was entrenched. It spread from \nFortran into Algol and thence to both their descendants.When a language is made entirely of expressions, you can\ncompose expressions however you want. You can say either\n(using Arc syntax)(if foo (= x 1) (= x 2))or(= x (if foo 1 2))7. A symbol type. Symbols differ from strings in that\nyou can test equality by comparing a pointer.8. A notation for code using trees of symbols.9. The whole language always available. 
\nThere is\nno real distinction between read-time, compile-time, and runtime.\nYou can compile or run code while reading, read or run code\nwhile compiling, and read or compile code at runtime.Running code at read-time lets users reprogram Lisp's syntax;\nrunning code at compile-time is the basis of macros; compiling\nat runtime is the basis of Lisp's use as an extension\nlanguage in programs like Emacs; and reading at runtime\nenables programs to communicate using s-expressions, an\nidea recently reinvented as XML.\nWhen Lisp was first invented, all these ideas were far\nremoved from ordinary programming practice, which was\ndictated largely by the hardware available in the late 1950s.Over time, the default language, embodied\nin a succession of popular languages, has\ngradually evolved toward Lisp. 1-5 are now widespread.\n6 is starting to appear in the mainstream.\nPython has a form of 7, though there doesn't seem to be\nany syntax for it. \n8, which (with 9) is what makes Lisp macros\npossible, is so far still unique to Lisp,\nperhaps because (a) it requires those parens, or something \njust as bad, and (b) if you add that final increment of power, \nyou can no \nlonger claim to have invented a new language, but only\nto have designed a new dialect of Lisp ; -)Though useful to present-day programmers, it's\nstrange to describe Lisp in terms of its\nvariation from the random expedients other languages\nadopted. That was not, probably, how McCarthy\nthought of it. Lisp wasn't designed to fix the mistakes\nin Fortran; it came about more as the byproduct of an\nattempt to axiomatize computation."} {"title": "hubs", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nOctober 2011If you look at a list of US cities sorted by population, the number\nof successful startups per capita varies by orders of magnitude.\nSomehow it's as if most places were sprayed with startupicide.I wondered about this for years. I could see the average town was\nlike a roach motel for startup ambitions: smart, ambitious people\nwent in, but no startups came out. But I was never able to figure\nout exactly what happened inside the motel\u2014exactly what was\nkilling all the potential startups.\n[1]A couple weeks ago I finally figured it out. I was framing the\nquestion wrong. The problem is not that most towns kill startups.\nIt's that death is the default for startups,\nand most towns don't save them. Instead of thinking of most places\nas being sprayed with startupicide, it's more accurate to think of\nstartups as all being poisoned, and a few places being sprayed with\nthe antidote.Startups in other places are just doing what startups naturally do:\nfail. The real question is, what's saving startups in places\nlike Silicon Valley?\n[2]EnvironmentI think there are two components to the antidote: being in a place\nwhere startups are the cool thing to do, and chance meetings with\npeople who can help you. And what drives them both is the number\nof startup people around you.The first component is particularly helpful in the first stage of\na startup's life, when you go from merely having an interest in\nstarting a company to actually doing it. It's quite a leap to start\na startup. It's an unusual thing to do. But in Silicon Valley it\nseems normal.\n[3]In most places, if you start a startup, people treat you as if\nyou're unemployed. People in the Valley aren't automatically\nimpressed with you just because you're starting a company, but they\npay attention. 
Anyone who's been here any amount of time knows not\nto default to skepticism, no matter how inexperienced you seem or\nhow unpromising your idea sounds at first, because they've all seen\ninexperienced founders with unpromising sounding ideas who a few\nyears later were billionaires.Having people around you care about what you're doing is an\nextraordinarily powerful force. Even the\nmost willful people are susceptible to it. About a year after we\nstarted Y Combinator I said something to a partner at a well known\nVC firm that gave him the (mistaken) impression I was considering\nstarting another startup. He responded so eagerly that for about\nhalf a second I found myself considering doing it.In most other cities, the prospect of starting a startup just doesn't\nseem real. In the Valley it's not only real but fashionable. That\nno doubt causes a lot of people to start startups who shouldn't.\nBut I think that's ok. Few people are suited to running a startup,\nand it's very hard to predict beforehand which are (as I know all\ntoo well from being in the business of trying to predict beforehand),\nso lots of people starting startups who shouldn't is probably the\noptimal state of affairs. As long as you're at a point in your\nlife when you can bear the risk of failure, the best way to find\nout if you're suited to running a startup is to try\nit.ChanceThe second component of the antidote is chance meetings with people\nwho can help you. This force works in both phases: both in the\ntransition from the desire to start a startup to starting one, and\nthe transition from starting a company to succeeding. The power\nof chance meetings is more variable than people around you caring\nabout startups, which is like a sort of background radiation that\naffects everyone equally, but at its strongest it is far stronger.Chance meetings produce miracles to compensate for the disasters\nthat characteristically befall startups. In the Valley, terrible\nthings happen to startups all the time, just like they do to startups\neverywhere. The reason startups are more likely to make it here\nis that great things happen to them too. In the Valley, lightning\nhas a sign bit.For example, you start a site for college students and you decide\nto move to the Valley for the summer to work on it. And then on a\nrandom suburban street in Palo Alto you happen to run into Sean\nParker, who understands the domain really well because he started\na similar startup himself, and also knows all the investors. And\nmoreover has advanced views, for 2004, on founders retaining control of their companies.You can't say precisely what the miracle will be, or even for sure\nthat one will happen. The best one can say is: if you're in a\nstartup hub, unexpected good things will probably happen to you,\nespecially if you deserve them.I bet this is true even for startups we fund. Even with us working\nto make things happen for them on purpose rather than by accident,\nthe frequency of helpful chance meetings in the Valley is so high\nthat it's still a significant increment on what we can deliver.Chance meetings play a role like the role relaxation plays in having\nideas. Most people have had the experience of working hard on some\nproblem, not being able to solve it, giving up and going to bed,\nand then thinking of the answer in the shower in the morning. 
What\nmakes the answer appear is letting your thoughts drift a bit\u2014and thus drift off the wrong\npath you'd been pursuing last night and onto the right one adjacent\nto it.Chance meetings let your acquaintance drift in the same way taking\na shower lets your thoughts drift. The critical thing in both cases\nis that they drift just the right amount. The meeting between Larry\nPage and Sergey Brin was a good example. They let their acquaintance\ndrift, but only a little; they were both meeting someone they had\na lot in common with.For Larry Page the most important component of the antidote was\nSergey Brin, and vice versa. The antidote is \npeople. It's not the\nphysical infrastructure of Silicon Valley that makes it work, or\nthe weather, or anything like that. Those helped get it started,\nbut now that the reaction is self-sustaining what drives it is the\npeople.Many observers have noticed that one of the most distinctive things\nabout startup hubs is the degree to which people help one another\nout, with no expectation of getting anything in return. I'm not\nsure why this is so. Perhaps it's because startups are less of a\nzero sum game than most types of business; they are rarely killed\nby competitors. Or perhaps it's because so many startup founders\nhave backgrounds in the sciences, where collaboration is encouraged.A large part of YC's function is to accelerate that process. We're\na sort of Valley within the Valley, where the density of people\nworking on startups and their willingness to help one another are\nboth artificially amplified.NumbersBoth components of the antidote\u2014an environment that encourages\nstartups, and chance meetings with people who help you\u2014are\ndriven by the same underlying cause: the number of startup people\naround you. To make a startup hub, you need a lot of people\ninterested in startups.There are three reasons. The first, obviously, is that if you don't\nhave enough density, the chance meetings don't happen.\n[4]\nThe second is that different startups need such different things, so\nyou need a lot of people to supply each startup with what they need\nmost. Sean Parker was exactly what Facebook needed in 2004. Another\nstartup might have needed a database guy, or someone with connections\nin the movie business.This is one of the reasons we fund such a large number of companies,\nincidentally. The bigger the community, the greater the chance it\nwill contain the person who has that one thing you need most.The third reason you need a lot of people to make a startup hub is\nthat once you have enough people interested in the same problem,\nthey start to set the social norms. And it is a particularly\nvaluable thing when the atmosphere around you encourages you to do\nsomething that would otherwise seem too ambitious. In most places\nthe atmosphere pulls you back toward the mean.I flew into the Bay Area a few days ago. I notice this every time\nI fly over the Valley: somehow you can sense something is going on. \nObviously you can sense prosperity in how well kept a\nplace looks. But there are different kinds of prosperity. Silicon\nValley doesn't look like Boston, or New York, or LA, or DC. I tried\nasking myself what word I'd use to describe the feeling the Valley\nradiated, and the word that came to mind was optimism.Notes[1]\nI'm not saying it's impossible to succeed in a city with few\nother startups, just harder. If you're sufficiently good at\ngenerating your own morale, you can survive without external\nencouragement. 
Wufoo was based in Tampa and they succeeded. But\nthe Wufoos are exceptionally disciplined.[2]\nIncidentally, this phenomenon is not limited to startups. Most\nunusual ambitions fail, unless the person who has them manages to\nfind the right sort of community.[3]\nStarting a company is common, but starting a startup is rare.\nI've talked about the distinction between the two elsewhere, but\nessentially a startup is a new business designed for scale. Most\nnew businesses are service businesses and except in rare cases those\ndon't scale.[4]\nAs I was writing this, I had a demonstration of the density of\nstartup people in the Valley. Jessica and I bicycled to University\nAve in Palo Alto to have lunch at the fabulous Oren's Hummus. As\nwe walked in, we met Charlie Cheever sitting near the door. Selina\nTobaccowala stopped to say hello on her way out. Then Josh Wilson\ncame in to pick up a take out order. After lunch we went to get\nfrozen yogurt. On the way we met Rajat Suri. When we got to the\nyogurt place, we found Dave Shen there, and as we walked out we ran\ninto Yuri Sagalov. We walked with him for a block or so and we ran\ninto Muzzammil Zaveri, and then a block later we met Aydin Senkut.\nThis is everyday life in Palo Alto. I wasn't trying to meet people;\nI was just having lunch. And I'm sure for every startup founder\nor investor I saw that I knew, there were 5 more I didn't. If Ron\nConway had been with us he would have met 30 people he knew.Thanks to Sam Altman, Paul Buchheit, Jessica Livingston, and\nHarj Taggar for reading drafts of this."} {"title": "iflisp", "text": "May 2003If Lisp is so great, why don't more people use it? I was \nasked this question by a student in the audience at a \ntalk I gave recently. Not for the first time, either.In languages, as in so many things, there's not much \ncorrelation between popularity and quality. Why does \nJohn Grisham (King of Torts sales rank, 44) outsell\nJane Austen (Pride and Prejudice sales rank, 6191)?\nWould even Grisham claim that it's because he's a better\nwriter?Here's the first sentence of Pride and Prejudice:\n\nIt is a truth universally acknowledged, that a single man \nin possession of a good fortune must be in want of a\nwife.\n\n\"It is a truth universally acknowledged?\" Long words for\nthe first sentence of a love story.Like Jane Austen, Lisp looks hard. Its syntax, or lack\nof syntax, makes it look completely unlike \nthe languages\nmost people are used to. Before I learned Lisp, I was afraid\nof it too. I recently came across a notebook from 1983\nin which I'd written:\n\nI suppose I should learn Lisp, but it seems so foreign.\n\nFortunately, I was 19 at the time and not too resistant to learning\nnew things. I was so ignorant that learning\nalmost anything meant learning new things.People frightened by Lisp make up other reasons for not\nusing it. The standard\nexcuse, back when C was the default language, was that Lisp\nwas too slow. Now that Lisp dialects are among\nthe faster\nlanguages available, that excuse has gone away.\nNow the standard excuse is openly circular: that other languages\nare more popular.(Beware of such reasoning. It gets you Windows.)Popularity is always self-perpetuating, but it's especially\nso in programming languages. More libraries\nget written for popular languages, which makes them still\nmore popular. 
Programs often have to work with existing programs,\nand this is easier if they're written in the same language,\nso languages spread from program to program like a virus.\nAnd managers prefer popular languages, because they give them \nmore leverage over developers, who can more easily be replaced.Indeed, if programming languages were all more or less equivalent,\nthere would be little justification for using any but the most\npopular. But they aren't all equivalent, not by a long\nshot. And that's why less popular languages, like Jane Austen's \nnovels, continue to survive at all. When everyone else is reading \nthe latest John Grisham novel, there will always be a few people \nreading Jane Austen instead."} {"title": "know", "text": "December 2014I've read Villehardouin's chronicle of the Fourth Crusade at least\ntwo times, maybe three. And yet if I had to write down everything\nI remember from it, I doubt it would amount to much more than a\npage. Multiply this times several hundred, and I get an uneasy\nfeeling when I look at my bookshelves. What use is it to read all\nthese books if I remember so little from them?A few months ago, as I was reading Constance Reid's excellent\nbiography of Hilbert, I figured out if not the answer to this\nquestion, at least something that made me feel better about it.\nShe writes:\n\n Hilbert had no patience with mathematical lectures which filled\n the students with facts but did not teach them how to frame a\n problem and solve it. He often used to tell them that \"a perfect\n formulation of a problem is already half its solution.\"\n\nThat has always seemed to me an important point, and I was even\nmore convinced of it after hearing it confirmed by Hilbert.But how had I come to believe in this idea in the first place? A\ncombination of my own experience and other things I'd read. None\nof which I could at that moment remember! And eventually I'd forget\nthat Hilbert had confirmed it too. But my increased belief in the\nimportance of this idea would remain something I'd learned from\nthis book, even after I'd forgotten I'd learned it.Reading and experience train your model of the world. And even if\nyou forget the experience or what you read, its effect on your model\nof the world persists. Your mind is like a compiled program you've\nlost the source of. It works, but you don't know why.The place to look for what I learned from Villehardouin's chronicle\nis not what I remember from it, but my mental models of the crusades,\nVenice, medieval culture, siege warfare, and so on. Which doesn't\nmean I couldn't have read more attentively, but at least the harvest\nof reading is not so miserably small as it might seem.This is one of those things that seem obvious in retrospect. But\nit was a surprise to me and presumably would be to anyone else who\nfelt uneasy about (apparently) forgetting so much they'd read.Realizing it does more than make you feel a little better about\nforgetting, though. There are specific implications.For example, reading and experience are usually \"compiled\" at the\ntime they happen, using the state of your brain at that time. The\nsame book would get compiled differently at different points in\nyour life. Which means it is very much worth reading important\nbooks multiple times. I always used to feel some misgivings about\nrereading books. I unconsciously lumped reading together with work\nlike carpentry, where having to do something again is a sign you\ndid it wrong the first time. 
Whereas now the phrase \"already read\"\nseems almost ill-formed.Intriguingly, this implication isn't limited to books. Technology\nwill increasingly make it possible to relive our experiences. When\npeople do that today it's usually to enjoy them again (e.g. when\nlooking at pictures of a trip) or to find the origin of some bug in\ntheir compiled code (e.g. when Stephen Fry succeeded in remembering\nthe childhood trauma that prevented him from singing). But as\ntechnologies for recording and playing back your life improve, it\nmay become common for people to relive experiences without any goal\nin mind, simply to learn from them again as one might when rereading\na book.Eventually we may be able not just to play back experiences but\nalso to index and even edit them. So although not knowing how you\nknow things may seem part of being human, it may not be.\nThanks to Sam Altman, Jessica Livingston, and Robert Morris for reading \ndrafts of this."} {"title": "rss", "text": "Aaron Swartz created a scraped\nfeed\nof the essays page."} {"title": "todo", "text": "April 2012A palliative care nurse called Bronnie Ware made a list of the\nbiggest regrets\nof the dying. Her list seems plausible. I could see\nmyself \u2014 can see myself \u2014 making at least 4 of these\n5 mistakes.If you had to compress them into a single piece of advice, it might\nbe: don't be a cog. The 5 regrets paint a portrait of post-industrial\nman, who shrinks himself into a shape that fits his circumstances,\nthen turns dutifully till he stops.The alarming thing is, the mistakes that produce these regrets are\nall errors of omission. You forget your dreams, ignore your family,\nsuppress your feelings, neglect your friends, and forget to be\nhappy. Errors of omission are a particularly dangerous type of\nmistake, because you make them by default.I would like to avoid making these mistakes. But how do you avoid\nmistakes you make by default? Ideally you transform your life so\nit has other defaults. But it may not be possible to do that\ncompletely. As long as these mistakes happen by default, you probably\nhave to be reminded not to make them. So I inverted the 5 regrets,\nyielding a list of 5 commands\n\n Don't ignore your dreams; don't work too much; say what you\n think; cultivate friendships; be happy.\n\nwhich I then put at the top of the file I use as a todo list."} {"title": "vb", "text": "January 2016Life is short, as everyone knows. When I was a kid I used to wonder\nabout this. Is life actually short, or are we really complaining\nabout its finiteness? Would we be just as likely to feel life was\nshort if we lived 10 times as long?Since there didn't seem any way to answer this question, I stopped\nwondering about it. Then I had kids. That gave me a way to answer\nthe question, and the answer is that life actually is short.Having kids showed me how to convert a continuous quantity, time,\ninto discrete quantities. You only get 52 weekends with your 2 year\nold. If Christmas-as-magic lasts from say ages 3 to 10, you only\nget to watch your child experience it 8 times. And while it's\nimpossible to say what is a lot or a little of a continuous quantity\nlike time, 8 is not a lot of something. If you had a handful of 8\npeanuts, or a shelf of 8 books to choose from, the quantity would\ndefinitely seem limited, no matter what your lifespan was.Ok, so life actually is short. Does it make any difference to know\nthat?It has for me. It means arguments of the form \"Life is too short\nfor x\" have great force. 
It's not just a figure of speech to say\nthat life is too short for something. It's not just a synonym for\nannoying. If you find yourself thinking that life is too short for\nsomething, you should try to eliminate it if you can.\n\nWhen I ask myself what I've found life is too short for, the word\nthat pops into my head is \"bullshit.\" I realize that answer is\nsomewhat tautological. It's almost the definition of bullshit that\nit's the stuff that life is too short for. And yet bullshit does\nhave a distinctive character. There's something fake about it.\nIt's the junk food of experience.\n[1]\n\nIf you ask yourself what you spend your time on that's bullshit,\nyou probably already know the answer. Unnecessary meetings, pointless\ndisputes, bureaucracy, posturing, dealing with other people's\nmistakes, traffic jams, addictive but unrewarding pastimes.\n\nThere are two ways this kind of thing gets into your life: it's\neither forced on you, or it tricks you. To some extent you have to\nput up with the bullshit forced on you by circumstances. You need\nto make money, and making money consists mostly of errands. Indeed,\nthe law of supply and demand insures that: the more rewarding some\nkind of work is, the cheaper people will do it. It may be that\nless bullshit is forced on you than you think, though. There has\nalways been a stream of people who opt out of the default grind and\ngo live somewhere where opportunities are fewer in the conventional\nsense, but life feels more authentic. This could become more common.\n\nYou can do it on a smaller scale without moving. The amount of\ntime you have to spend on bullshit varies between employers. Most\nlarge organizations (and many small ones) are steeped in it. But\nif you consciously prioritize bullshit avoidance over other factors\nlike money and prestige, you can probably find employers that will\nwaste less of your time.\n\nIf you're a freelancer or a small company, you can do this at the\nlevel of individual customers. If you fire or avoid toxic customers,\nyou can decrease the amount of bullshit in your life by more than\nyou decrease your income.\n\nBut while some amount of bullshit is inevitably forced on you, the\nbullshit that sneaks into your life by tricking you is no one's\nfault but your own. And yet the bullshit you choose may be harder\nto eliminate than the bullshit that's forced on you. Things that\nlure you into wasting your time have to be really good at\ntricking you. An example that will be familiar to a lot of people\nis arguing online. When someone\ncontradicts you, they're in a sense attacking you. Sometimes pretty\novertly. Your instinct when attacked is to defend yourself. But\nlike a lot of instincts, this one wasn't designed for the world we\nnow live in. Counterintuitive as it feels, it's better most of\nthe time not to defend yourself. Otherwise these people are literally\ntaking your life.\n[2]\n\nArguing online is only incidentally addictive. There are more\ndangerous things than that. As I've written before, one byproduct\nof technical progress is that things we like tend to become more\naddictive. Which means we will increasingly have to make a conscious\neffort to avoid addictions \u2014 to stand outside ourselves and ask \"is\nthis how I want to be spending my time?\"\n\nAs well as avoiding bullshit, one should actively seek out things\nthat matter. But different things matter to different people, and\nmost have to learn what matters to them. 
A few are lucky and realize\nearly on that they love math or taking care of animals or writing,\nand then figure out a way to spend a lot of time doing it. But\nmost people start out with a life that's a mix of things that\nmatter and things that don't, and only gradually learn to distinguish\nbetween them.For the young especially, much of this confusion is induced by the\nartificial situations they find themselves in. In middle school and\nhigh school, what the other kids think of you seems the most important\nthing in the world. But when you ask adults what they got wrong\nat that age, nearly all say they cared too much what other kids\nthought of them.One heuristic for distinguishing stuff that matters is to ask\nyourself whether you'll care about it in the future. Fake stuff\nthat matters usually has a sharp peak of seeming to matter. That's\nhow it tricks you. The area under the curve is small, but its shape\njabs into your consciousness like a pin.The things that matter aren't necessarily the ones people would\ncall \"important.\" Having coffee with a friend matters. You won't\nfeel later like that was a waste of time.One great thing about having small children is that they make you\nspend time on things that matter: them. They grab your sleeve as\nyou're staring at your phone and say \"will you play with me?\" And\nodds are that is in fact the bullshit-minimizing option.If life is short, we should expect its shortness to take us by\nsurprise. And that is just what tends to happen. You take things\nfor granted, and then they're gone. You think you can always write\nthat book, or climb that mountain, or whatever, and then you realize\nthe window has closed. The saddest windows close when other people\ndie. Their lives are short too. After my mother died, I wished I'd\nspent more time with her. I lived as if she'd always be there.\nAnd in her typical quiet way she encouraged that illusion. But an\nillusion it was. I think a lot of people make the same mistake I\ndid.The usual way to avoid being taken by surprise by something is to\nbe consciously aware of it. Back when life was more precarious,\npeople used to be aware of death to a degree that would now seem a\nbit morbid. I'm not sure why, but it doesn't seem the right answer\nto be constantly reminding oneself of the grim reaper hovering at\neveryone's shoulder. Perhaps a better solution is to look at the\nproblem from the other end. Cultivate a habit of impatience about\nthe things you most want to do. Don't wait before climbing that\nmountain or writing that book or visiting your mother. You don't\nneed to be constantly reminding yourself why you shouldn't wait.\nJust don't wait.I can think of two more things one does when one doesn't have much\nof something: try to get more of it, and savor what one has. Both\nmake sense here.How you live affects how long you live. Most people could do better.\nMe among them.But you can probably get even more effect by paying closer attention\nto the time you have. It's easy to let the days rush by. The\n\"flow\" that imaginative people love so much has a darker cousin\nthat prevents you from pausing to savor life amid the daily slurry\nof errands and alarms. One of the most striking things I've read\nwas not in a book, but the title of one: James Salter's Burning\nthe Days.It is possible to slow time somewhat. I've gotten better at it.\nKids help. 
When you have small children, there are a lot of moments\nso perfect that you can't help noticing.\n\nIt does help too to feel that you've squeezed everything out of\nsome experience. The reason I'm sad about my mother is not just\nthat I miss her but that I think of all the things we could have\ndone that we didn't. My oldest son will be 7 soon. And while I\nmiss the 3 year old version of him, I at least don't have any regrets\nover what might have been. We had the best time a daddy and a 3\nyear old ever had.\n\nRelentlessly prune bullshit, don't wait to do things that matter,\nand savor the time you have. That's what you do when life is short.\n\nNotes\n\n[1]\nAt first I didn't like it that the word that came to mind was\none that had other meanings. But then I realized the other meanings\nare fairly closely related. Bullshit in the sense of things you\nwaste your time on is a lot like intellectual bullshit.\n\n[2]\nI chose this example deliberately as a note to self. I get\nattacked a lot online. People tell the craziest lies about me.\nAnd I have so far done a pretty mediocre job of suppressing the\nnatural human inclination to say \"Hey, that's not true!\"\n\nThanks to Jessica Livingston and Geoff Ralston for reading drafts\nof this."} {"title": "web20", "text": "November 2005\n\nDoes \"Web 2.0\" mean anything? Till recently I thought it didn't,\nbut the truth turns out to be more complicated. Originally, yes,\nit was meaningless. Now it seems to have acquired a meaning. And\nyet those who dislike the term are probably right, because if it\nmeans what I think it does, we don't need it.\n\nI first heard the phrase \"Web 2.0\" in the name of the Web 2.0\nconference in 2004. At the time it was supposed to mean using \"the\nweb as a platform,\" which I took to refer to web-based applications.\n[1]\n\nSo I was surprised at a conference this summer when Tim O'Reilly\nled a session intended to figure out a definition of \"Web 2.0.\"\nDidn't it already mean using the web as a platform? And if it\ndidn't already mean something, why did we need the phrase at all?\n\nOrigins\n\nTim says the phrase \"Web 2.0\" first\narose in \"a brainstorming session between\nO'Reilly and Medialive International.\" What is Medialive International?\n\"Producers of technology tradeshows and conferences,\" according to\ntheir site. So presumably that's what this brainstorming session\nwas about. O'Reilly wanted to organize a conference about the web,\nand they were wondering what to call it.\n\nI don't think there was any deliberate plan to suggest there was a\nnew version of the web. They just wanted to make the point\nthat the web mattered again. It was a kind of semantic deficit\nspending: they knew new things were coming, and the \"2.0\" referred\nto whatever those might turn out to be.\n\nAnd they were right. New things were coming. But the new version\nnumber led to some awkwardness in the short term. In the process\nof developing the pitch for the first conference, someone must have\ndecided they'd better take a stab at explaining what that \"2.0\"\nreferred to. Whatever it meant, \"the web as a platform\" was at\nleast not too constricting.\n\nThe story about \"Web 2.0\" meaning the web as a platform didn't live\nmuch past the first conference. By the second conference, what\n\"Web 2.0\" seemed to mean was something about democracy. At least,\nit did when people wrote about it online. The conference itself\ndidn't seem very grassroots. 
It cost $2800, so the only people who\ncould afford to go were VCs and people from big companies.\n\nAnd yet, oddly enough, Ryan Singel's article\nabout the conference in Wired News spoke of \"throngs of\ngeeks.\" When a friend of mine asked Ryan about this, it was news\nto him. He said he'd originally written something like \"throngs\nof VCs and biz dev guys\" but had later shortened it just to \"throngs,\"\nand that this must have in turn been expanded by the editors into\n\"throngs of geeks.\" After all, a Web 2.0 conference would presumably\nbe full of geeks, right?\n\nWell, no. There were about 7. Even Tim O'Reilly was wearing a\nsuit, a sight so alien I couldn't parse it at first. I saw\nhim walk by and said to one of the O'Reilly people \"that guy looks\njust like Tim.\"\n\n\"Oh, that's Tim. He bought a suit.\"\nI ran after him, and sure enough, it was. He explained that he'd\njust bought it in Thailand.\n\nThe 2005 Web 2.0 conference reminded me of Internet trade shows\nduring the Bubble, full of prowling VCs looking for the next hot\nstartup. There was that same odd atmosphere created by a large\nnumber of people determined not to miss out. Miss out on what?\nThey didn't know. Whatever was going to happen\u2014whatever Web 2.0\nturned out to be.\n\nI wouldn't quite call it \"Bubble 2.0\" just because VCs are eager\nto invest again. The Internet is a genuinely big deal. The bust\nwas as much an overreaction as\nthe boom. It's to be expected that once we started to pull out of\nthe bust, there would be a lot of growth in this area, just as there\nwas in the industries that spiked the sharpest before the Depression.\n\nThe reason this won't turn into a second Bubble is that the IPO\nmarket is gone. Venture investors\nare driven by exit strategies. The reason they were funding all\nthose laughable startups during the late 90s was that they hoped\nto sell them to gullible retail investors; they hoped to be laughing\nall the way to the bank. Now that route is closed. Now the default\nexit strategy is to get bought, and acquirers are less prone to\nirrational exuberance than IPO investors. The closest you'll get\nto Bubble valuations is Rupert Murdoch paying $580 million for\nMyspace. That's only off by a factor of 10 or so.\n\n1. Ajax\n\nDoes \"Web 2.0\" mean anything more than the name of a conference\nyet? I don't like to admit it, but it's starting to. When people\nsay \"Web 2.0\" now, I have some idea what they mean. And the fact\nthat I both despise the phrase and understand it is the surest proof\nthat it has started to mean something.\n\nOne ingredient of its meaning is certainly Ajax, which I can still\nonly just bear to use without scare quotes. Basically, what \"Ajax\"\nmeans is \"Javascript now works.\" And that in turn means that\nweb-based applications can now be made to work much more like desktop\nones.\n\nAs you read this, a whole new generation\nof software is being written to take advantage of Ajax. There\nhasn't been such a wave of new applications since microcomputers\nfirst appeared. Even Microsoft sees it, but it's too late for them\nto do anything more than leak \"internal\"\ndocuments designed to give the impression they're on top of this\nnew trend.\n\nIn fact the new generation of software is being written way too\nfast for Microsoft even to channel it, let alone write their own\nin house. Their only hope now is to buy all the best Ajax startups\nbefore Google does. 
And even that's going to be hard, because\nGoogle has as big a head start in buying microstartups as it did\nin search a few years ago. After all, Google Maps, the canonical\nAjax application, was the result of a startup they bought.\n\nSo ironically the original description of the Web 2.0 conference\nturned out to be partially right: web-based applications are a big\ncomponent of Web 2.0. But I'm convinced they got this right by\naccident. The Ajax boom didn't start till early 2005, when Google\nMaps appeared and the term \"Ajax\" was coined.\n\n2. Democracy\n\nThe second big element of Web 2.0 is democracy. We now have several\nexamples to prove that amateurs can\nsurpass professionals, when they have the right kind of system to\nchannel their efforts. Wikipedia\nmay be the most famous. Experts have given Wikipedia middling\nreviews, but they miss the critical point: it's good enough. And\nit's free, which means people actually read it. On the web, articles\nyou have to pay for might as well not exist. Even if you were\nwilling to pay to read them yourself, you can't link to them.\nThey're not part of the conversation.\n\nAnother place democracy seems to win is in deciding what counts as\nnews. I never look at any news site now except Reddit.\n[2]\n I know if something major\nhappens, or someone writes a particularly interesting article, it\nwill show up there. Why bother checking the front page of any\nspecific paper or magazine? Reddit's like an RSS feed for the whole\nweb, with a filter for quality. Similar sites include Digg, a technology news site that's\nrapidly approaching Slashdot in popularity, and del.icio.us, the collaborative\nbookmarking network that set off the \"tagging\" movement. And whereas\nWikipedia's main appeal is that it's good enough and free, these\nsites suggest that voters do a significantly better job than human\neditors.\n\nThe most dramatic example of Web 2.0 democracy is not in the selection\nof ideas, but their production.\nI've noticed for a while that the stuff I read on individual people's\nsites is as good as or better than the stuff I read in newspapers\nand magazines. And now I have independent evidence: the top links\non Reddit are generally links to individual people's sites rather\nthan to magazine articles or news stories.\n\nMy experience of writing\nfor magazines suggests an explanation. Editors. They control the\ntopics you can write about, and they can generally rewrite whatever\nyou produce. The result is to damp extremes. Editing yields 95th\npercentile writing\u201495% of articles are improved by it, but 5% are\ndragged down. 5% of the time you get \"throngs of geeks.\"\n\nOn the web, people can publish whatever they want. Nearly all of\nit falls short of the editor-damped writing in print publications.\nBut the pool of writers is very, very large. If it's large enough,\nthe lack of damping means the best writing online should surpass\nthe best in print.\n[3]\nAnd now that the web has evolved mechanisms\nfor selecting good stuff, the web wins net. Selection beats damping,\nfor the same reason market economies beat centrally planned ones.\n\nEven the startups are different this time around. They are to the\nstartups of the Bubble what bloggers are to the print media. During\nthe Bubble, a startup meant a company headed by an MBA that was\nblowing through several million dollars of VC money to \"get big\nfast\" in the most literal sense. Now it means a smaller, younger, more technical group that just\ndecided to make something great. 
They'll decide later if they want\nto raise VC-scale funding, and if they take it, they'll take it on\ntheir terms.\n\n3. Don't Maltreat Users\n\nI think everyone would agree that democracy and Ajax are elements\nof \"Web 2.0.\" I also see a third: not to maltreat users. During\nthe Bubble a lot of popular sites were quite high-handed with users.\nAnd not just in obvious ways, like making them register, or subjecting\nthem to annoying ads. The very design of the average site in the\nlate 90s was an abuse. Many of the most popular sites were loaded\nwith obtrusive branding that made them slow to load and sent the\nuser the message: this is our site, not yours. (There's a physical\nanalog in the Intel and Microsoft stickers that come on some\nlaptops.)\n\nI think the root of the problem was that sites felt they were giving\nsomething away for free, and till recently a company giving anything\naway for free could be pretty high-handed about it. Sometimes it\nreached the point of economic sadism: site owners assumed that the\nmore pain they caused the user, the more benefit it must be to them.\nThe most dramatic remnant of this model may be at salon.com, where\nyou can read the beginning of a story, but to get the rest you have\nto sit through a movie.\n\nAt Y Combinator we advise all the startups we fund never to lord\nit over users. Never make users register, unless you need to in\norder to store something for them. If you do make users register,\nnever make them wait for a confirmation link in an email; in fact,\ndon't even ask for their email address unless you need it for some\nreason. Don't ask them any unnecessary questions. Never send them\nemail unless they explicitly ask for it. Never frame pages you\nlink to, or open them in new windows. If you have a free version\nand a pay version, don't make the free version too restricted. And\nif you find yourself asking \"should we allow users to do x?\" just\nanswer \"yes\" whenever you're unsure. Err on the side of generosity.\n\nIn How to Start a Startup I advised startups\nnever to let anyone fly under them, meaning never to let any other\ncompany offer a cheaper, easier solution. Another way to fly low\nis to give users more power. Let users do what they want. If you\ndon't and a competitor does, you're in trouble.\n\niTunes is Web 2.0ish in this sense. Finally you can buy individual\nsongs instead of having to buy whole albums. The recording industry\nhated the idea and resisted it as long as possible. But it was\nobvious what users wanted, so Apple flew under the labels.\n[4]\nThough really it might be better to describe iTunes as Web 1.5.\nWeb 2.0 applied to music would probably mean individual bands giving\naway DRMless songs for free.\n\nThe ultimate way to be nice to users is to give them something for\nfree that competitors charge for. During the 90s a lot of people\nprobably thought we'd have some working system for micropayments\nby now. In fact things have gone in the other direction. The most\nsuccessful sites are the ones that figure out new ways to give stuff\naway for free. Craigslist has largely destroyed the classified ad\nsites of the 90s, and OkCupid looks likely to do the same to the\nprevious generation of dating sites.\n\nServing web pages is very, very cheap. If you can make even a\nfraction of a cent per page view, you can make a profit. And\ntechnology for targeting ads continues to improve. 
I wouldn't be\nsurprised if ten years from now eBay had been supplanted by an\nad-supported freeBay (or, more likely, gBay).\n\nOdd as it might sound, we tell startups that they should try to\nmake as little money as possible. If you can figure out a way to\nturn a billion dollar industry into a fifty million dollar industry,\nso much the better, if all fifty million go to you. Though indeed,\nmaking things cheaper often turns out to generate more money in the\nend, just as automating things often turns out to generate more\njobs.\n\nThe ultimate target is Microsoft. What a bang that balloon is going\nto make when someone pops it by offering a free web-based alternative\nto MS Office.\n[5]\nWho will? Google? They seem to be taking their\ntime. I suspect the pin will be wielded by a couple of 20 year old\nhackers who are too naive to be intimidated by the idea. (How hard\ncan it be?)\n\nThe Common Thread\n\nAjax, democracy, and not dissing users. What do they all have in\ncommon? I didn't realize they had anything in common till recently,\nwhich is one of the reasons I disliked the term \"Web 2.0\" so much.\nIt seemed that it was being used as a label for whatever happened\nto be new\u2014that it didn't predict anything.\n\nBut there is a common thread. Web 2.0 means using the web the way\nit's meant to be used. The \"trends\" we're seeing now are simply\nthe inherent nature of the web emerging from under the broken models\nthat got imposed on it during the Bubble.\n\nI realized this when I read an interview with\nJoe Kraus, the co-founder of Excite.\n[6]\n\n Excite really never got the business model right at all. We fell\n into the classic problem of how when a new medium comes out it\n adopts the practices, the content, the business models of the old\n medium\u2014which fails, and then the more appropriate models get\n figured out.\n\nIt may have seemed as if not much was happening during the years\nafter the Bubble burst. But in retrospect, something was happening:\nthe web was finding its natural angle of repose. The democracy\ncomponent, for example\u2014that's not an innovation, in the sense of\nsomething someone made happen. That's what the web naturally tends\nto produce.\n\nDitto for the idea of delivering desktop-like applications over the\nweb. That idea is almost as old as the web. But the first time\naround it was co-opted by Sun, and we got Java applets. Java has\nsince been remade into a generic replacement for C++, but in 1996\nthe story about Java was that it represented a new model of software.\nInstead of desktop applications, you'd run Java \"applets\" delivered\nfrom a server.\n\nThis plan collapsed under its own weight. Microsoft helped kill it,\nbut it would have died anyway. There was no uptake among hackers.\nWhen you find PR firms promoting\nsomething as the next development platform, you can be sure it's\nnot. If it were, you wouldn't need PR firms to tell you, because\nhackers would already be writing stuff on top of it, the way sites\nlike Busmonster used Google Maps as a\nplatform before Google even meant it to be one.\n\nThe proof that Ajax is the next hot platform is that thousands of\nhackers have spontaneously started building things on top\nof it. Mikey likes it.\n\nThere's another thing all three components of Web 2.0 have in common.\nHere's a clue. Suppose you approached investors with the following\nidea for a Web 2.0 startup:\n\n Sites like del.icio.us and flickr allow users to \"tag\" content\n with descriptive tokens. 
But there is also a huge source of\n implicit tags that they ignore: the text within web links.\n Moreover, these links represent a social network connecting the\n individuals and organizations who created the pages, and by using\n graph theory we can compute from this network an estimate of the\n reputation of each member. We plan to mine the web for these\n implicit tags, and use them together with the reputation hierarchy\n they embody to enhance web searches.\n\nHow long do you think it would take them on average to realize that\nit was a description of Google?\n\nGoogle was a pioneer in all three components of Web 2.0: their core\nbusiness sounds crushingly hip when described in Web 2.0 terms,\n\"Don't maltreat users\" is a subset of \"Don't be evil,\" and of course\nGoogle set off the whole Ajax boom with Google Maps.\n\nWeb 2.0 means using the web as it was meant to be used, and Google\ndoes. That's their secret. They're sailing with the wind, instead of sitting\nbecalmed praying for a business model, like the print media, or\ntrying to tack upwind by suing their customers, like Microsoft and\nthe record labels.\n[7]\n\nGoogle doesn't try to force things to happen their way. They try\nto figure out what's going to happen, and arrange to be standing\nthere when it does. That's the way to approach technology\u2014and\nas business includes an ever larger technological component, the\nright way to do business.\n\nThe fact that Google is a \"Web 2.0\" company shows that, while\nmeaningful, the term is also rather bogus. It's like the word\n\"allopathic.\" It just means doing things right, and it's a bad\nsign when you have a special word for that.\n\nNotes\n\n[1]\nFrom the conference\nsite, June 2004: \"While the first wave of the Web was closely\ntied to the browser, the second wave extends applications across\nthe web and enables a new generation of services and business\nopportunities.\" To the extent this means anything, it seems to be\nabout\nweb-based applications.\n\n[2]\nDisclosure: Reddit was funded by\nY Combinator. But although\nI started using it out of loyalty to the home team, I've become a\ngenuine addict. While we're at it, I'm also an investor in\n!MSFT, having sold all my shares earlier this year.\n\n[3]\nI'm not against editing. I spend more time editing than\nwriting, and I have a group of picky friends who proofread almost\neverything I write. What I dislike is editing done after the fact\nby someone else.\n\n[4]\nObvious is an understatement. Users had been climbing in through\nthe window for years before Apple finally moved the door.\n\n[5]\nHint: the way to create a web-based alternative to Office may\nnot be to write every component yourself, but to establish a protocol\nfor web-based apps to share a virtual home directory spread across\nmultiple servers. Or it may be to write it all yourself.\n\n[6]\nIn Jessica Livingston's\nFounders at\nWork.\n\n[7]\nMicrosoft didn't sue their customers directly, but they seem\nto have done all they could to help SCO sue them.\n\nThanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston, Peter\nNorvig, Aaron Swartz, and Jeff Weiner for reading drafts of this, and to the\nguys at O'Reilly and Adaptive Path for answering my questions."} {"title": "addiction", "text": "July 2010\n\nWhat hard liquor, cigarettes, heroin, and crack have in common is\nthat they're all more concentrated forms of less addictive predecessors.\nMost if not all the things we describe as addictive are. 
And the\nscary thing is, the process that created them is accelerating.We wouldn't want to stop it. It's the same process that cures\ndiseases: technological progress. Technological progress means\nmaking things do more of what we want. When the thing we want is\nsomething we want to want, we consider technological progress good.\nIf some new technique makes solar cells x% more efficient, that\nseems strictly better. When progress concentrates something we\ndon't want to want\u2014when it transforms opium into heroin\u2014it seems\nbad. But it's the same process at work.\n[1]No one doubts this process is accelerating, which means increasing\nnumbers of things we like will be transformed into things we like\ntoo much.\n[2]As far as I know there's no word for something we like too much.\nThe closest is the colloquial sense of \"addictive.\" That usage has\nbecome increasingly common during my lifetime. And it's clear why:\nthere are an increasing number of things we need it for. At the\nextreme end of the spectrum are crack and meth. Food has been\ntransformed by a combination of factory farming and innovations in\nfood processing into something with way more immediate bang for the\nbuck, and you can see the results in any town in America. Checkers\nand solitaire have been replaced by World of Warcraft and FarmVille.\nTV has become much more engaging, and even so it can't compete with Facebook.The world is more addictive than it was 40 years ago. And unless\nthe forms of technological progress that produced these things are\nsubject to different laws than technological progress in general,\nthe world will get more addictive in the next 40 years than it did\nin the last 40.The next 40 years will bring us some wonderful things. I don't\nmean to imply they're all to be avoided. Alcohol is a dangerous\ndrug, but I'd rather live in a world with wine than one without.\nMost people can coexist with alcohol; but you have to be careful.\nMore things we like will mean more things we have to be careful\nabout.Most people won't, unfortunately. Which means that as the world\nbecomes more addictive, the two senses in which one can live a\nnormal life will be driven ever further apart. One sense of \"normal\"\nis statistically normal: what everyone else does. The other is the\nsense we mean when we talk about the normal operating range of a\npiece of machinery: what works best.These two senses are already quite far apart. Already someone\ntrying to live well would seem eccentrically abstemious in most of\nthe US. That phenomenon is only going to become more pronounced.\nYou can probably take it as a rule of thumb from now on that if\npeople don't think you're weird, you're living badly.Societies eventually develop antibodies to addictive new things.\nI've seen that happen with cigarettes. When cigarettes first\nappeared, they spread the way an infectious disease spreads through\na previously isolated population. Smoking rapidly became a\n(statistically) normal thing. There were ashtrays everywhere. We\nhad ashtrays in our house when I was a kid, even though neither of\nmy parents smoked. You had to for guests.As knowledge spread about the dangers of smoking, customs changed.\nIn the last 20 years, smoking has been transformed from something\nthat seemed totally normal into a rather seedy habit: from something\nmovie stars did in publicity shots to something small huddles of\naddicts do outside the doors of office buildings. 
A lot of the\nchange was due to legislation, of course, but the legislation\ncouldn't have happened if customs hadn't already changed.It took a while though\u2014on the order of 100 years. And unless the\nrate at which social antibodies evolve can increase to match the\naccelerating rate at which technological progress throws off new\naddictions, we'll be increasingly unable to rely on customs to\nprotect us.\n[3]\nUnless we want to be canaries in the coal mine\nof each new addiction\u2014the people whose sad example becomes a\nlesson to future generations\u2014we'll have to figure out for ourselves\nwhat to avoid and how. It will actually become a reasonable strategy\n(or a more reasonable strategy) to suspect \neverything new.In fact, even that won't be enough. We'll have to worry not just\nabout new things, but also about existing things becoming more\naddictive. That's what bit me. I've avoided most addictions, but\nthe Internet got me because it became addictive while I was using\nit.\n[4]Most people I know have problems with Internet addiction. We're\nall trying to figure out our own customs for getting free of it.\nThat's why I don't have an iPhone, for example; the last thing I\nwant is for the Internet to follow me out into the world.\n[5]\nMy latest trick is taking long hikes. I used to think running was a\nbetter form of exercise than hiking because it took less time. Now\nthe slowness of hiking seems an advantage, because the longer I\nspend on the trail, the longer I have to think without interruption.Sounds pretty eccentric, doesn't it? It always will when you're\ntrying to solve problems where there are no customs yet to guide\nyou. Maybe I can't plead Occam's razor; maybe I'm simply eccentric.\nBut if I'm right about the acceleration of addictiveness, then this\nkind of lonely squirming to avoid it will increasingly be the fate\nof anyone who wants to get things done. We'll increasingly be\ndefined by what we say no to.\nNotes[1]\nCould you restrict technological progress to areas where you\nwanted it? Only in a limited way, without becoming a police state.\nAnd even then your restrictions would have undesirable side effects.\n\"Good\" and \"bad\" technological progress aren't sharply differentiated,\nso you'd find you couldn't slow the latter without also slowing the\nformer. And in any case, as Prohibition and the \"war on drugs\"\nshow, bans often do more harm than good.[2]\nTechnology has always been accelerating. By Paleolithic\nstandards, technology evolved at a blistering pace in the Neolithic\nperiod.[3]\nUnless we mass produce social customs. I suspect the recent\nresurgence of evangelical Christianity in the US is partly a reaction\nto drugs. In desperation people reach for the sledgehammer; if\ntheir kids won't listen to them, maybe they'll listen to God. But\nthat solution has broader consequences than just getting kids to\nsay no to drugs. You end up saying no to \nscience as well.\nI worry we may be heading for a future in which only a few people\nplot their own itinerary through no-land, while everyone else books\na package tour. Or worse still, has one booked for them by the\ngovernment.[4]\nPeople commonly use the word \"procrastination\" to describe\nwhat they do on the Internet. It seems to me too mild to describe\nwhat's happening as merely not-doing-work. 
We don't call it\nprocrastination when someone gets drunk instead of working.[5]\nSeveral people have told me they like the iPad because it\nlets them bring the Internet into situations where a laptop would\nbe too conspicuous. In other words, it's a hip flask. (This is\ntrue of the iPhone too, of course, but this advantage isn't as\nobvious because it reads as a phone, and everyone's used to those.)Thanks to Sam Altman, Patrick Collison, Jessica Livingston, and\nRobert Morris for reading drafts of this."} {"title": "philosophy", "text": "September 2007In high school I decided I was going to study philosophy in college.\nI had several motives, some more honorable than others. One of the\nless honorable was to shock people. College was regarded as job\ntraining where I grew up, so studying philosophy seemed an impressively\nimpractical thing to do. Sort of like slashing holes in your clothes\nor putting a safety pin through your ear, which were other forms\nof impressive impracticality then just coming into fashion.But I had some more honest motives as well. I thought studying\nphilosophy would be a shortcut straight to wisdom. All the people\nmajoring in other things would just end up with a bunch of domain\nknowledge. I would be learning what was really what.I'd tried to read a few philosophy books. Not recent ones; you\nwouldn't find those in our high school library. But I tried to\nread Plato and Aristotle. I doubt I believed I understood them,\nbut they sounded like they were talking about something important.\nI assumed I'd learn what in college.The summer before senior year I took some college classes. I learned\na lot in the calculus class, but I didn't learn much in Philosophy\n101. And yet my plan to study philosophy remained intact. It was\nmy fault I hadn't learned anything. I hadn't read the books we\nwere assigned carefully enough. I'd give Berkeley's Principles\nof Human Knowledge another shot in college. Anything so admired\nand so difficult to read must have something in it, if one could\nonly figure out what.Twenty-six years later, I still don't understand Berkeley. I have\na nice edition of his collected works. Will I ever read it? Seems\nunlikely.The difference between then and now is that now I understand why\nBerkeley is probably not worth trying to understand. I think I see\nnow what went wrong with philosophy, and how we might fix it.WordsI did end up being a philosophy major for most of college. It\ndidn't work out as I'd hoped. I didn't learn any magical truths\ncompared to which everything else was mere domain knowledge. But\nI do at least know now why I didn't. Philosophy doesn't really\nhave a subject matter in the way math or history or most other\nuniversity subjects do. There is no core of knowledge one must\nmaster. The closest you come to that is a knowledge of what various\nindividual philosophers have said about different topics over the\nyears. Few were sufficiently correct that people have forgotten\nwho discovered what they discovered.Formal logic has some subject matter. I took several classes in\nlogic. I don't know if I learned anything from them.\n[1]\nIt does seem to me very important to be able to flip ideas around in\none's head: to see when two ideas don't fully cover the space of\npossibilities, or when one idea is the same as another but with a\ncouple things changed. But did studying logic teach me the importance\nof thinking this way, or make me any better at it? I don't know.There are things I know I learned from studying philosophy. 
The\nmost dramatic I learned immediately, in the first semester of\nfreshman year, in a class taught by Sydney Shoemaker. I learned\nthat I don't exist. I am (and you are) a collection of cells that\nlurches around driven by various forces, and calls itself I. But\nthere's no central, indivisible thing that your identity goes with.\nYou could conceivably lose half your brain and live. Which means\nyour brain could conceivably be split into two halves and each\ntransplanted into different bodies. Imagine waking up after such\nan operation. You have to imagine being two people.The real lesson here is that the concepts we use in everyday life\nare fuzzy, and break down if pushed too hard. Even a concept as\ndear to us as I. It took me a while to grasp this, but when I\ndid it was fairly sudden, like someone in the nineteenth century\ngrasping evolution and realizing the story of creation they'd been\ntold as a child was all wrong. \n[2]\nOutside of math there's a limit\nto how far you can push words; in fact, it would not be a bad\ndefinition of math to call it the study of terms that have precise\nmeanings. Everyday words are inherently imprecise. They work well\nenough in everyday life that you don't notice. Words seem to work,\njust as Newtonian physics seems to. But you can always make them\nbreak if you push them far enough.I would say that this has been, unfortunately for philosophy, the\ncentral fact of philosophy. Most philosophical debates are not\nmerely afflicted by but driven by confusions over words. Do we\nhave free will? Depends what you mean by \"free.\" Do abstract ideas\nexist? Depends what you mean by \"exist.\"Wittgenstein is popularly credited with the idea that most philosophical\ncontroversies are due to confusions over language. I'm not sure\nhow much credit to give him. I suspect a lot of people realized\nthis, but reacted simply by not studying philosophy, rather than\nbecoming philosophy professors.How did things get this way? Can something people have spent\nthousands of years studying really be a waste of time? Those are\ninteresting questions. In fact, some of the most interesting\nquestions you can ask about philosophy. The most valuable way to\napproach the current philosophical tradition may be neither to get\nlost in pointless speculations like Berkeley, nor to shut them down\nlike Wittgenstein, but to study it as an example of reason gone\nwrong.HistoryWestern philosophy really begins with Socrates, Plato, and Aristotle.\nWhat we know of their predecessors comes from fragments and references\nin later works; their doctrines could be described as speculative\ncosmology that occasionally strays into analysis. Presumably they\nwere driven by whatever makes people in every other society invent\ncosmologies.\n[3]With Socrates, Plato, and particularly Aristotle, this tradition\nturned a corner. There started to be a lot more analysis. I suspect\nPlato and Aristotle were encouraged in this by progress in math.\nMathematicians had by then shown that you could figure things out\nin a much more conclusive way than by making up fine sounding stories\nabout them. \n[4]People talk so much about abstractions now that we don't realize\nwhat a leap it must have been when they first started to. It was\npresumably many thousands of years between when people first started\ndescribing things as hot or cold and when someone asked \"what is\nheat?\" No doubt it was a very gradual process. We don't know if\nPlato or Aristotle were the first to ask any of the questions they\ndid. 
But their works are the oldest we have that do this on a large\nscale, and there is a freshness (not to say naivete) about them\nthat suggests some of the questions they asked were new to them,\nat least.Aristotle in particular reminds me of the phenomenon that happens\nwhen people discover something new, and are so excited by it that\nthey race through a huge percentage of the newly discovered territory\nin one lifetime. If so, that's evidence of how new this kind of\nthinking was. \n[5]This is all to explain how Plato and Aristotle can be very impressive\nand yet naive and mistaken. It was impressive even to ask the\nquestions they did. That doesn't mean they always came up with\ngood answers. It's not considered insulting to say that ancient\nGreek mathematicians were naive in some respects, or at least lacked\nsome concepts that would have made their lives easier. So I hope\npeople will not be too offended if I propose that ancient philosophers\nwere similarly naive. In particular, they don't seem to have fully\ngrasped what I earlier called the central fact of philosophy: that\nwords break if you push them too far.\"Much to the surprise of the builders of the first digital computers,\"\nRod Brooks wrote, \"programs written for them usually did not work.\"\n[6]\nSomething similar happened when people first started trying\nto talk about abstractions. Much to their surprise, they didn't\narrive at answers they agreed upon. In fact, they rarely seemed\nto arrive at answers at all.They were in effect arguing about artifacts induced by sampling at\ntoo low a resolution.The proof of how useless some of their answers turned out to be is\nhow little effect they have. No one after reading Aristotle's\nMetaphysics does anything differently as a result.\n[7]Surely I'm not claiming that ideas have to have practical applications\nto be interesting? No, they may not have to. Hardy's boast that\nnumber theory had no use whatsoever wouldn't disqualify it. But\nhe turned out to be mistaken. In fact, it's suspiciously hard to\nfind a field of math that truly has no practical use. And Aristotle's\nexplanation of the ultimate goal of philosophy in Book A of the\nMetaphysics implies that philosophy should be useful too.Theoretical KnowledgeAristotle's goal was to find the most general of general principles.\nThe examples he gives are convincing: an ordinary worker builds\nthings a certain way out of habit; a master craftsman can do more\nbecause he grasps the underlying principles. The trend is clear:\nthe more general the knowledge, the more admirable it is. But then\nhe makes a mistake\u2014possibly the most important mistake in the\nhistory of philosophy. He has noticed that theoretical knowledge\nis often acquired for its own sake, out of curiosity, rather than\nfor any practical need. So he proposes there are two kinds of\ntheoretical knowledge: some that's useful in practical matters and\nsome that isn't. Since people interested in the latter are interested\nin it for its own sake, it must be more noble. So he sets as his\ngoal in the Metaphysics the exploration of knowledge that has no\npractical use. Which means no alarms go off when he takes on grand\nbut vaguely understood questions and ends up getting lost in a sea\nof words.His mistake was to confuse motive and result. Certainly, people\nwho want a deep understanding of something are often driven by\ncuriosity rather than any practical need. But that doesn't mean\nwhat they end up learning is useless. 
It's very valuable in practice\nto have a deep understanding of what you're doing; even if you're\nnever called on to solve advanced problems, you can see shortcuts\nin the solution of simple ones, and your knowledge won't break down\nin edge cases, as it would if you were relying on formulas you\ndidn't understand. Knowledge is power. That's what makes theoretical\nknowledge prestigious. It's also what causes smart people to be\ncurious about certain things and not others; our DNA is not so\ndisinterested as we might think.So while ideas don't have to have immediate practical applications\nto be interesting, the kinds of things we find interesting will\nsurprisingly often turn out to have practical applications.The reason Aristotle didn't get anywhere in the Metaphysics was\npartly that he set off with contradictory aims: to explore the most\nabstract ideas, guided by the assumption that they were useless.\nHe was like an explorer looking for a territory to the north of\nhim, starting with the assumption that it was located to the south.And since his work became the map used by generations of future\nexplorers, he sent them off in the wrong direction as well. \n[8]\nPerhaps worst of all, he protected them from both the criticism of\noutsiders and the promptings of their own inner compass by establishing\nthe principle that the most noble sort of theoretical knowledge had\nto be useless.The Metaphysics is mostly a failed experiment. A few ideas from\nit turned out to be worth keeping; the bulk of it has had no effect\nat all. The Metaphysics is among the least read of all famous\nbooks. It's not hard to understand the way Newton's Principia\nis, but the way a garbled message is.Arguably it's an interesting failed experiment. But unfortunately\nthat was not the conclusion Aristotle's successors derived from\nworks like the Metaphysics. \n[9]\nSoon after, the western world\nfell on intellectual hard times. Instead of version 1s to be\nsuperseded, the works of Plato and Aristotle became revered texts\nto be mastered and discussed. And so things remained for a shockingly\nlong time. It was not till around 1600 (in Europe, where the center\nof gravity had shifted by then) that one found people confident\nenough to treat Aristotle's work as a catalog of mistakes. And\neven then they rarely said so outright.If it seems surprising that the gap was so long, consider how little\nprogress there was in math between Hellenistic times and the\nRenaissance.In the intervening years an unfortunate idea took hold: that it\nwas not only acceptable to produce works like the Metaphysics,\nbut that it was a particularly prestigious line of work, done by a\nclass of people called philosophers. No one thought to go back and\ndebug Aristotle's motivating argument. And so instead of correcting\nthe problem Aristotle discovered by falling into it\u2014that you can\neasily get lost if you talk too loosely about very abstract ideas\u2014they \ncontinued to fall into it.The SingularityCuriously, however, the works they produced continued to attract\nnew readers. Traditional philosophy occupies a kind of singularity\nin this respect. If you write in an unclear way about big ideas,\nyou produce something that seems tantalizingly attractive to\ninexperienced but intellectually ambitious students. 
Till one knows\nbetter, it's hard to distinguish something that's hard to understand\nbecause the writer was unclear in his own mind from something like\na mathematical proof that's hard to understand because the ideas\nit represents are hard to understand. To someone who hasn't learned\nthe difference, traditional philosophy seems extremely attractive:\nas hard (and therefore impressive) as math, yet broader in scope.\nThat was what lured me in as a high school student.This singularity is even more singular in having its own defense\nbuilt in. When things are hard to understand, people who suspect\nthey're nonsense generally keep quiet. There's no way to prove a\ntext is meaningless. The closest you can get is to show that the\nofficial judges of some class of texts can't distinguish them from\nplacebos. \n[10]And so instead of denouncing philosophy, most people who suspected\nit was a waste of time just studied other things. That alone is\nfairly damning evidence, considering philosophy's claims. It's\nsupposed to be about the ultimate truths. Surely all smart people\nwould be interested in it, if it delivered on that promise.Because philosophy's flaws turned away the sort of people who might\nhave corrected them, they tended to be self-perpetuating. Bertrand\nRussell wrote in a letter in 1912:\n\n Hitherto the people attracted to philosophy have been mostly those\n who loved the big generalizations, which were all wrong, so that\n few people with exact minds have taken up the subject.\n[11]\n\nHis response was to launch Wittgenstein at it, with dramatic results.I think Wittgenstein deserves to be famous not for the discovery\nthat most previous philosophy was a waste of time, which judging\nfrom the circumstantial evidence must have been made by every smart\nperson who studied a little philosophy and declined to pursue it\nfurther, but for how he acted in response.\n[12]\nInstead of quietly\nswitching to another field, he made a fuss, from inside. He was\nGorbachev.The field of philosophy is still shaken from the fright Wittgenstein\ngave it. \n[13]\nLater in life he spent a lot of time talking about\nhow words worked. Since that seems to be allowed, that's what a\nlot of philosophers do now. Meanwhile, sensing a vacuum in the\nmetaphysical speculation department, the people who used to do\nliterary criticism have been edging Kantward, under new names like\n\"literary theory,\" \"critical theory,\" and when they're feeling\nambitious, plain \"theory.\" The writing is the familiar word salad:\n\n Gender is not like some of the other grammatical modes which\n express precisely a mode of conception without any reality that\n corresponds to the conceptual mode, and consequently do not express\n precisely something in reality by which the intellect could be\n moved to conceive a thing the way it does, even where that motive\n is not something in the thing as such.\n [14]\n\nThe singularity I've described is not going away. There's a market\nfor writing that sounds impressive and can't be disproven. There\nwill always be both supply and demand. So if one group abandons\nthis territory, there will always be others ready to occupy it.A ProposalWe may be able to do better. Here's an intriguing possibility.\nPerhaps we should do what Aristotle meant to do, instead of what\nhe did. The goal he announces in the Metaphysics seems one worth\npursuing: to discover the most general truths. 
That sounds good.\nBut instead of trying to discover them because they're useless,\nlet's try to discover them because they're useful.I propose we try again, but that we use that heretofore despised\ncriterion, applicability, as a guide to keep us from wandering\noff into a swamp of abstractions. Instead of trying to answer the\nquestion:\n\n What are the most general truths?\n\nlet's try to answer the question\n\n Of all the useful things we can say, which are the most general?\n\nThe test of utility I propose is whether we cause people who read\nwhat we've written to do anything differently afterward. Knowing\nwe have to give definite (if implicit) advice will keep us from\nstraying beyond the resolution of the words we're using.The goal is the same as Aristotle's; we just approach it from a\ndifferent direction.As an example of a useful, general idea, consider that of the\ncontrolled experiment. There's an idea that has turned out to be\nwidely applicable. Some might say it's part of science, but it's\nnot part of any specific science; it's literally meta-physics (in\nour sense of \"meta\"). The idea of evolution is another. It turns\nout to have quite broad applications\u2014for example, in genetic\nalgorithms and even product design. Frankfurt's distinction between\nlying and bullshitting seems a promising recent example.\n[15]These seem to me what philosophy should look like: quite general\nobservations that would cause someone who understood them to do\nsomething differently.Such observations will necessarily be about things that are imprecisely\ndefined. Once you start using words with precise meanings, you're\ndoing math. So starting from utility won't entirely solve the\nproblem I described above\u2014it won't flush out the metaphysical\nsingularity. But it should help. It gives people with good\nintentions a new roadmap into abstraction. And they may thereby\nproduce things that make the writing of the people with bad intentions\nlook bad by comparison.One drawback of this approach is that it won't produce the sort of\nwriting that gets you tenure. And not just because it's not currently\nthe fashion. In order to get tenure in any field you must not\narrive at conclusions that members of tenure committees can disagree\nwith. In practice there are two kinds of solutions to this problem.\nIn math and the sciences, you can prove what you're saying, or at\nany rate adjust your conclusions so you're not claiming anything\nfalse (\"6 of 8 subjects had lower blood pressure after the treatment\").\nIn the humanities you can either avoid drawing any definite conclusions\n(e.g. conclude that an issue is a complex one), or draw conclusions\nso narrow that no one cares enough to disagree with you.The kind of philosophy I'm advocating won't be able to take either\nof these routes. At best you'll be able to achieve the essayist's\nstandard of proof, not the mathematician's or the experimentalist's.\nAnd yet you won't be able to meet the usefulness test without\nimplying definite and fairly broadly applicable conclusions. Worse\nstill, the usefulness test will tend to produce results that annoy\npeople: there's no use in telling people things they already believe,\nand people are often upset to be told things they don't.Here's the exciting thing, though. Anyone can do this.
Getting\nto general plus useful by starting with useful and cranking up the\ngenerality may be unsuitable for junior professors trying to get\ntenure, but it's better for everyone else, including professors who\nalready have it. This side of the mountain is a nice gradual slope.\nYou can start by writing things that are useful but very specific,\nand then gradually make them more general. Joe's has good burritos.\nWhat makes a good burrito? What makes good food? What makes\nanything good? You can take as long as you want. You don't have\nto get all the way to the top of the mountain. You don't have to\ntell anyone you're doing philosophy.If it seems like a daunting task to do philosophy, here's an\nencouraging thought. The field is a lot younger than it seems.\nThough the first philosophers in the western tradition lived about\n2500 years ago, it would be misleading to say the field is 2500\nyears old, because for most of that time the leading practitioners\nweren't doing much more than writing commentaries on Plato or\nAristotle while watching over their shoulders for the next invading\narmy. In the times when they weren't, philosophy was hopelessly\nintermingled with religion. It didn't shake itself free till a\ncouple hundred years ago, and even then was afflicted by the\nstructural problems I've described above. If I say this, some will\nsay it's a ridiculously overbroad and uncharitable generalization,\nand others will say it's old news, but here goes: judging from their\nworks, most philosophers up to the present have been wasting their\ntime. So in a sense the field is still at the first step. \n[16]That sounds a preposterous claim to make. It won't seem so\npreposterous in 10,000 years. Civilization always seems old, because\nit's always the oldest it's ever been. The only way to say whether\nsomething is really old or not is by looking at structural evidence,\nand structurally philosophy is young; it's still reeling from the\nunexpected breakdown of words.Philosophy is as young now as math was in 1500. There is a lot\nmore to discover.Notes\n[1]\nIn practice formal logic is not much use, because despite\nsome progress in the last 150 years we're still only able to formalize\na small percentage of statements. We may never do that much better,\nfor the same reason 1980s-style \"knowledge representation\" could\nnever have worked; many statements may have no representation more\nconcise than a huge, analog brain state.[2]\nIt was harder for Darwin's contemporaries to grasp this than\nwe can easily imagine. The story of creation in the Bible is not\njust a Judeo-Christian concept; it's roughly what everyone must\nhave believed since before people were people. The hard part of\ngrasping evolution was to realize that species weren't, as they\nseem to be, unchanging, but had instead evolved from different,\nsimpler organisms over unimaginably long periods of time.Now we don't have to make that leap. No one in an industrialized\ncountry encounters the idea of evolution for the first time as an\nadult. Everyone's taught about it as a child, either as truth or\nheresy.[3]\nGreek philosophers before Plato wrote in verse. This must\nhave affected what they said. If you try to write about the nature\nof the world in verse, it inevitably turns into incantation. Prose\nlets you be more precise, and more tentative.[4]\nPhilosophy is like math's\nne'er-do-well brother. 
It was born when Plato and Aristotle looked\nat the works of their predecessors and said in effect \"why can't\nyou be more like your brother?\" Russell was still saying the same\nthing 2300 years later.Math is the precise half of the most abstract ideas, and philosophy\nthe imprecise half. It's probably inevitable that philosophy will\nsuffer by comparison, because there's no lower bound to its precision.\nBad math is merely boring, whereas bad philosophy is nonsense. And\nyet there are some good ideas in the imprecise half.[5]\nAristotle's best work was in logic and zoology, both of which\nhe can be said to have invented. But the most dramatic departure\nfrom his predecessors was a new, much more analytical style of\nthinking. He was arguably the first scientist.[6]\nBrooks, Rodney, Programming in Common Lisp, Wiley, 1985, p.\n94.[7]\nSome would say we depend on Aristotle more than we realize,\nbecause his ideas were one of the ingredients in our common culture.\nCertainly a lot of the words we use have a connection with Aristotle,\nbut it seems a bit much to suggest that we wouldn't have the concept\nof the essence of something or the distinction between matter and\nform if Aristotle hadn't written about them.One way to see how much we really depend on Aristotle would be to\ndiff European culture with Chinese: what ideas did European culture\nhave in 1800 that Chinese culture didn't, in virtue of Aristotle's\ncontribution?[8]\nThe meaning of the word \"philosophy\" has changed over time.\nIn ancient times it covered a broad range of topics, comparable in\nscope to our \"scholarship\" (though without the methodological\nimplications). Even as late as Newton's time it included what we\nnow call \"science.\" But the core of the subject today is still what\nseemed to Aristotle the core: the attempt to discover the most\ngeneral truths.Aristotle didn't call this \"metaphysics.\" That name got assigned\nto it because the books we now call the Metaphysics came after\n(meta = after) the Physics in the standard edition of Aristotle's\nworks compiled by Andronicus of Rhodes three centuries later. What\nwe call \"metaphysics\" Aristotle called \"first philosophy.\"[9]\nSome of Aristotle's immediate successors may have realized\nthis, but it's hard to say because most of their works are lost.[10]\nSokal, Alan, \"Transgressing the Boundaries: Toward a Transformative\nHermeneutics of Quantum Gravity,\" Social Text 46/47, pp. 217-252.Abstract-sounding nonsense seems to be most attractive when it's\naligned with some axe the audience already has to grind. If this\nis so we should find it's most popular with groups that are (or\nfeel) weak. The powerful don't need its reassurance.[11]\nLetter to Ottoline Morrell, December 1912. Quoted in:Monk, Ray, Ludwig Wittgenstein: The Duty of Genius, Penguin, 1991,\np. 75.[12]\nA preliminary result, that all metaphysics between Aristotle\nand 1783 had been a waste of time, is due to I.
Kant.[13]\nWittgenstein asserted a sort of mastery to which the inhabitants\nof early 20th century Cambridge seem to have been peculiarly\nvulnerable\u2014perhaps partly because so many had been raised religious\nand then stopped believing, so had a vacant space in their heads\nfor someone to tell them what to do (others chose Marx or Cardinal\nNewman), and partly because a quiet, earnest place like Cambridge\nin that era had no natural immunity to messianic figures, just as\nEuropean politics then had no natural immunity to dictators.[14]\nThis is actually from the Ordinatio of Duns Scotus (ca.\n1300), with \"number\" replaced by \"gender.\" Plus ca change.Wolter, Allan (trans), Duns Scotus: Philosophical Writings, Nelson,\n1963, p. 92.[15]\nFrankfurt, Harry, On Bullshit, Princeton University Press,\n2005.[16]\nSome introductions to philosophy now take the line that\nphilosophy is worth studying as a process rather than for any\nparticular truths you'll learn. The philosophers whose works they\ncover would be rolling in their graves at that. They hoped they\nwere doing more than serving as examples of how to argue: they hoped\nthey were getting results. Most were wrong, but it doesn't seem\nan impossible hope.This argument seems to me like someone in 1500 looking at the lack\nof results achieved by alchemy and saying its value was as a process.\nNo, they were going about it wrong. It turns out it is possible\nto transmute lead into gold (though not economically at current\nenergy prices), but the route to that knowledge was to\nbacktrack and try another approach.Thanks to Trevor Blackwell, Paul Buchheit, Jessica Livingston, \nRobert Morris, Mark Nitzberg, and Peter Norvig for reading drafts of this."} {"title": "unions", "text": "May 2007People who worry about the increasing gap between rich and poor\ngenerally look back on the mid twentieth century as a golden age.\nIn those days we had a large number of high-paying union manufacturing\njobs that boosted the median income. I wouldn't quite call the\nhigh-paying union job a myth, but I think people who dwell on it\nare reading too much into it.Oddly enough, it was working with startups that made me realize\nwhere the high-paying union job came from. In a rapidly growing\nmarket, you don't worry too much about efficiency. It's more\nimportant to grow fast. If there's some mundane problem getting\nin your way, and there's a simple solution that's somewhat expensive,\njust take it and get on with more important things. EBay didn't\nwin by paying less for servers than their competitors.Difficult though it may be to imagine now, manufacturing was a\ngrowth industry in the mid twentieth century. This was an era when\nsmall firms making everything from cars to candy were getting\nconsolidated into a new kind of corporation with national reach and\nhuge economies of scale. You had to grow fast or die. Workers\nwere for these companies what servers are for an Internet startup.\nA reliable supply was more important than low cost.If you looked in the head of a 1950s auto executive, the attitude\nmust have been: sure, give 'em whatever they ask for, so long as\nthe new model isn't delayed.In other words, those workers were not paid what their work was\nworth. Circumstances being what they were, companies would have\nbeen stupid to insist on paying them so little.If you want a less controversial example of this phenomenon, ask\nanyone who worked as a consultant building web sites during the\nInternet Bubble. 
In the late nineties you could get paid huge sums\nof money for building the most trivial things. And yet does anyone\nwho was there have any expectation those days will ever return? I\ndoubt it. Surely everyone realizes that was just a temporary\naberration.The era of labor unions seems to have been the same kind of aberration, \njust spread\nover a longer period, and mixed together with a lot of ideology\nthat prevents people from viewing it with as cold an eye as they\nwould something like consulting during the Bubble.Basically, unions were just Razorfish.People who think the labor movement was the creation of heroic union\norganizers have a problem to explain: why are unions shrinking now?\nThe best they can do is fall back on the default explanation of\npeople living in fallen civilizations. Our ancestors were giants.\nThe workers of the early twentieth century must have had a moral\ncourage that's lacking today.In fact there's a simpler explanation. The early twentieth century\nwas just a fast-growing startup overpaying for infrastructure. And\nwe in the present are not a fallen people, who have abandoned\nwhatever mysterious high-minded principles produced the high-paying\nunion job. We simply live in a time when the fast-growing companies\noverspend on different things."} {"title": "apple", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nNovember 2009I don't think Apple realizes how badly the App Store approval process\nis broken. Or rather, I don't think they realize how much it matters\nthat it's broken.The way Apple runs the App Store has harmed their reputation with\nprogrammers more than anything else they've ever done. \nTheir reputation with programmers used to be great.\nIt used to be the most common complaint you heard\nabout Apple was that their fans admired them too uncritically.\nThe App Store has changed that. Now a lot of programmers\nhave started to see Apple as evil.How much of the goodwill Apple once had with programmers have they\nlost over the App Store? A third? Half? And that's just so far.\nThe App Store is an ongoing karma leak.* * *How did Apple get into this mess? Their fundamental problem is\nthat they don't understand software.They treat iPhone apps the way they treat the music they sell through\niTunes. Apple is the channel; they own the user; if you want to\nreach users, you do it on their terms. The record labels agreed,\nreluctantly. But this model doesn't work for software. It doesn't\nwork for an intermediary to own the user. The software business\nlearned that in the early 1980s, when companies like VisiCorp showed\nthat although the words \"software\" and \"publisher\" fit together,\nthe underlying concepts don't. Software isn't like music or books.\nIt's too complicated for a third party to act as an intermediary\nbetween developer and user. And yet that's what Apple is trying\nto be with the App Store: a software publisher. And a particularly\noverreaching one at that, with fussy tastes and a rigidly enforced\nhouse style.If software publishing didn't work in 1980, it works even less now\nthat software development has evolved from a small number of big\nreleases to a constant stream of small ones. But Apple doesn't\nunderstand that either. Their model of product development derives\nfrom hardware. They work on something till they think it's finished,\nthen they release it. 
You have to do that with hardware, but because\nsoftware is so easy to change, its design can benefit from evolution.\nThe standard way to develop applications now is to launch fast and\niterate. Which means it's a disaster to have long, random delays\neach time you release a new version.Apparently Apple's attitude is that developers should be more careful\nwhen they submit a new version to the App Store. They would say\nthat. But powerful as they are, they're not powerful enough to\nturn back the evolution of technology. Programmers don't use\nlaunch-fast-and-iterate out of laziness. They use it because it\nyields the best results. By obstructing that process, Apple is\nmaking them do bad work, and programmers hate that as much as Apple\nwould.How would Apple like it if when they discovered a serious bug in\nOS\u00a0X, instead of releasing a software update immediately, they had\nto submit their code to an intermediary who sat on it for a month\nand then rejected it because it contained an icon they didn't like?By breaking software development, Apple gets the opposite of what\nthey intended: the version of an app currently available in the App\nStore tends to be an old and buggy one. One developer told me:\n\n As a result of their process, the App Store is full of half-baked\n applications. I make a new version almost every day that I release\n to beta users. The version on the App Store feels old and crappy.\n I'm sure that a lot of developers feel this way: One emotion is\n \"I'm not really proud about what's in the App Store\", and it's\n combined with the emotion \"Really, it's Apple's fault.\"\n\nAnother wrote:\n\n I believe that they think their approval process helps users by\n ensuring quality. In reality, bugs like ours get through all the\n time and then it can take 4-8 weeks to get that bug fix approved,\n leaving users to think that iPhone apps sometimes just don't work.\n Worse for Apple, these apps work just fine on other platforms\n that have immediate approval processes.\n\nActually I suppose Apple has a third misconception: that all the\ncomplaints about App Store approvals are not a serious problem.\nThey must hear developers complaining. But partners and suppliers\nare always complaining. It would be a bad sign if they weren't;\nit would mean you were being too easy on them. Meanwhile the iPhone\nis selling better than ever. So why do they need to fix anything?They get away with maltreating developers, in the short term, because\nthey make such great hardware. I just bought a new 27\" iMac a\ncouple days ago. It's fabulous. The screen's too shiny, and the\ndisk is surprisingly loud, but it's so beautiful that you can't\nmake yourself care.So I bought it, but I bought it, for the first time, with misgivings.\nI felt the way I'd feel buying something made in a country with a\nbad human rights record. That was new. In the past when I bought\nthings from Apple it was an unalloyed pleasure. Oh boy! They make\nsuch great stuff. This time it felt like a Faustian bargain. They\nmake such great stuff, but they're such assholes. Do I really want\nto support this company?* * *Should Apple care what people like me think? What difference does\nit make if they alienate a small minority of their users?There are a couple reasons they should care. One is that these\nusers are the people they want as employees. If your company seems\nevil, the best programmers won't work for you. That hurt Microsoft\na lot starting in the 90s. Programmers started to feel sheepish\nabout working there. 
It seemed like selling out. When people from\nMicrosoft were talking to other programmers and they mentioned where\nthey worked, there were a lot of self-deprecating jokes about having\ngone over to the dark side. But the real problem for Microsoft\nwasn't the embarrassment of the people they hired. It was the\npeople they never got. And you know who got them? Google and\nApple. If Microsoft was the Empire, they were the Rebel Alliance.\nAnd it's largely because they got more of the best people that\nGoogle and Apple are doing so much better than Microsoft today.Why are programmers so fussy about their employers' morals? Partly\nbecause they can afford to be. The best programmers can work\nwherever they want. They don't have to work for a company they\nhave qualms about.But the other reason programmers are fussy, I think, is that evil\nbegets stupidity. An organization that wins by exercising power\nstarts to lose the ability to win by doing better work. And it's\nnot fun for a smart person to work in a place where the best ideas\naren't the ones that win. I think the reason Google embraced \"Don't\nbe evil\" so eagerly was not so much to impress the outside world\nas to inoculate themselves against arrogance.\n[1]That has worked for Google so far. They've become more\nbureaucratic, but otherwise they seem to have held true to their\noriginal principles. With Apple that seems less the case. When you\nlook at the famous \n1984 ad \nnow, it's easier to imagine Apple as the\ndictator on the screen than the woman with the hammer.\n[2]\nIn fact, if you read the dictator's speech it sounds uncannily like a\nprophecy of the App Store.\n\n We have triumphed over the unprincipled dissemination of facts.We have created, for the first time in all history, a garden of\n pure ideology, where each worker may bloom secure from the pests\n of contradictory and confusing truths.\n\nThe other reason Apple should care what programmers think of them\nis that when you sell a platform, developers make or break you. If\nanyone should know this, Apple should. VisiCalc made the Apple II.And programmers build applications for the platforms they use. Most\napplications\u2014most startups, probably\u2014grow out of personal projects.\nApple itself did. Apple made microcomputers because that's what\nSteve Wozniak wanted for himself. He couldn't have afforded a\nminicomputer. \n[3]\n Microsoft likewise started out making interpreters\nfor little microcomputers because\nBill Gates and Paul Allen were interested in using them. It's a\nrare startup that doesn't build something the founders use.The main reason there are so many iPhone apps is that so many programmers\nhave iPhones. They may know, because they read it in an article,\nthat Blackberry has such and such market share. But in practice\nit's as if RIM didn't exist. If they're going to build something,\nthey want to be able to use it themselves, and that means building\nan iPhone app.So programmers continue to develop iPhone apps, even though Apple\ncontinues to maltreat them. They're like someone stuck in an abusive\nrelationship. They're so attracted to the iPhone that they can't\nleave. But they're looking for a way out. One wrote:\n\n While I did enjoy developing for the iPhone, the control they\n place on the App Store does not give me the drive to develop\n applications as I would like. In fact I don't intend to make any\n more iPhone applications unless absolutely necessary.\n[4]\n\nCan anything break this cycle? 
No device I've seen so far could.\nPalm and RIM haven't a hope. The only credible contender is Android.\nBut Android is an orphan; Google doesn't really care about it, not\nthe way Apple cares about the iPhone. Apple cares about the iPhone\nthe way Google cares about search.* * *Is the future of handheld devices one locked down by Apple? It's\na worrying prospect. It would be a bummer to have another grim\nmonoculture like we had in the 1990s. In 1995, writing software\nfor end users was effectively identical with writing Windows\napplications. Our horror at that prospect was the single biggest\nthing that drove us to start building web apps.At least we know now what it would take to break Apple's lock.\nYou'd have to get iPhones out of programmers' hands. If programmers\nused some other device for mobile web access, they'd start to develop\napps for that instead.How could you make a device programmers liked better than the iPhone?\nIt's unlikely you could make something better designed. Apple\nleaves no room there. So this alternative device probably couldn't\nwin on general appeal. It would have to win by virtue of some\nappeal it had to programmers specifically.One way to appeal to programmers is with software. If you\ncould think of an application programmers had to have, but that\nwould be impossible in the circumscribed world of the iPhone, \nyou could presumably get them to switch.That would definitely happen if programmers started to use handhelds\nas development machines\u2014if handhelds displaced laptops the\nway laptops displaced desktops. You need more control of a development\nmachine than Apple will let you have over an iPhone.Could anyone make a device that you'd carry around in your pocket\nlike a phone, and yet would also work as a development machine?\nIt's hard to imagine what it would look like. But I've learned\nnever to say never about technology. A phone-sized device that\nwould work as a development machine is no more miraculous by present\nstandards than the iPhone itself would have seemed by the standards\nof 1995.My current development machine is a MacBook Air, which I use with\nan external monitor and keyboard in my office, and by itself when\ntraveling. If there was a version half the size I'd prefer it.\nThat still wouldn't be small enough to carry around everywhere like\na phone, but we're within a factor of 4 or so. Surely that gap is\nbridgeable. In fact, let's make it an\nRFS. Wanted: \nWoman with hammer.Notes[1]\nWhen Google adopted \"Don't be evil,\" they were still so small\nthat no one would have expected them to be, yet.\n[2]\nThe dictator in the 1984 ad isn't Microsoft, incidentally;\nit's IBM. IBM seemed a lot more frightening in those days, but\nthey were friendlier to developers than Apple is now.[3]\nHe couldn't even afford a monitor. That's why the Apple\nI used a TV as a monitor.[4]\nSeveral people I talked to mentioned how much they liked the\niPhone SDK. The problem is not Apple's products but their policies.\nFortunately policies are software; Apple can change them instantly\nif they want to. Handy that, isn't it?Thanks to Sam Altman, Trevor Blackwell, Ross Boucher, \nJames Bracy, Gabor Cselle,\nPatrick Collison, Jason Freedman, John Gruber, Joe Hewitt, Jessica Livingston,\nRobert Morris, Teng Siong Ong, Nikhil Pandit, Savraj Singh, and Jared Tame for reading drafts of this."} {"title": "boss", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nMarch 2008, rev. 
June 2008Technology tends to separate normal from natural. Our bodies\nweren't designed to eat the foods that people in rich countries eat, or\nto get so little exercise. \nThere may be a similar problem with the way we work: \na normal job may be as bad for us intellectually as white flour\nor sugar is for us physically.I began to suspect this after spending several years working \nwith startup founders. I've now worked with over 200 of them, and I've\nnoticed a definite difference between programmers working on their\nown startups and those working for large organizations.\nI wouldn't say founders seem happier, necessarily;\nstarting a startup can be very stressful. Maybe the best way to put\nit is to say that they're happier in the sense that your body is\nhappier during a long run than sitting on a sofa eating\ndoughnuts.Though they're statistically abnormal, startup founders seem to be\nworking in a way that's more natural for humans.I was in Africa last year and saw a lot of animals in the wild that\nI'd only seen in zoos before. It was remarkable how different they\nseemed. Particularly lions. Lions in the wild seem about ten times\nmore alive. They're like different animals. I suspect that working\nfor oneself feels better to humans in much the same way that living\nin the wild must feel better to a wide-ranging predator like a lion.\nLife in a zoo is easier, but it isn't the life they were designed\nfor.\nTreesWhat's so unnatural about working for a big company? The root of\nthe problem is that humans weren't meant to work in such large\ngroups.Another thing you notice when you see animals in the wild is that\neach species thrives in groups of a certain size. A herd of impalas\nmight have 100 adults; baboons maybe 20; lions rarely 10. Humans\nalso seem designed to work in groups, and what I've read about\nhunter-gatherers accords with research on organizations and my own\nexperience to suggest roughly what the ideal size is: groups of 8\nwork well; by 20 they're getting hard to manage; and a group of 50\nis really unwieldy.\n[1]\nWhatever the upper limit is, we are clearly not meant to work in\ngroups of several hundred. And yet\u2014for reasons having more\nto do with technology than human nature\u2014a great many people\nwork for companies with hundreds or thousands of employees.Companies know groups that large wouldn't work, so they divide\nthemselves into units small enough to work together. But to\ncoordinate these they have to introduce something new: bosses.These smaller groups are always arranged in a tree structure. Your\nboss is the point where your group attaches to the tree. But when\nyou use this trick for dividing a large group into smaller ones,\nsomething strange happens that I've never heard anyone mention\nexplicitly. In the group one level up from yours, your boss\nrepresents your entire group. A group of 10 managers is not merely\na group of 10 people working together in the usual way. It's really\na group of groups. Which means for a group of 10 managers to work\ntogether as if they were simply a group of 10 individuals, the group\nworking for each manager would have to work as if they were a single\nperson\u2014the workers and manager would each share only one\nperson's worth of freedom between them.In practice a group of people are never able to act as if they were\none person. But in a large organization divided into groups in\nthis way, the pressure is always in that direction. 
Each group\ntries its best to work as if it were the small group of individuals\nthat humans were designed to work in. That was the point of creating\nit. And when you propagate that constraint, the result is that\neach person gets freedom of action in inverse proportion to the\nsize of the entire tree.\n[2]Anyone who's worked for a large organization has felt this. You\ncan feel the difference between working for a company with 100\nemployees and one with 10,000, even if your group has only 10 people.\nCorn SyrupA group of 10 people within a large organization is a kind of fake\ntribe. The number of people you interact with is about right. But\nsomething is missing: individual initiative. Tribes of hunter-gatherers\nhave much more freedom. The leaders have a little more power than other\nmembers of the tribe, but they don't generally tell them what to\ndo and when the way a boss can.It's not your boss's fault. The real problem is that in the group\nabove you in the hierarchy, your entire group is one virtual person.\nYour boss is just the way that constraint is imparted to you.So working in a group of 10 people within a large organization feels\nboth right and wrong at the same time. On the surface it feels\nlike the kind of group you're meant to work in, but something major\nis missing. A job at a big company is like high fructose corn\nsyrup: it has some of the qualities of things you're meant to like,\nbut is disastrously lacking in others.Indeed, food is an excellent metaphor to explain what's wrong with\nthe usual sort of job.For example, working for a big company is the default thing to do,\nat least for programmers. How bad could it be? Well, food shows\nthat pretty clearly. If you were dropped at a random point in\nAmerica today, nearly all the food around you would be bad for you.\nHumans were not designed to eat white flour, refined sugar, high\nfructose corn syrup, and hydrogenated vegetable oil. And yet if\nyou analyzed the contents of the average grocery store you'd probably\nfind these four ingredients accounted for most of the calories.\n\"Normal\" food is terribly bad for you. The only people who eat\nwhat humans were actually designed to eat are a few Birkenstock-wearing\nweirdos in Berkeley.If \"normal\" food is so bad for us, why is it so common? There are\ntwo main reasons. One is that it has more immediate appeal. You\nmay feel lousy an hour after eating that pizza, but eating the first\ncouple bites feels great. The other is economies of scale.\nProducing junk food scales; producing fresh vegetables doesn't.\nWhich means (a) junk food can be very cheap, and (b) it's worth\nspending a lot to market it.If people have to choose between something that's cheap, heavily\nmarketed, and appealing in the short term, and something that's\nexpensive, obscure, and appealing in the long term, which do you\nthink most will choose?It's the same with work. The average MIT graduate wants to work\nat Google or Microsoft, because it's a recognized brand, it's safe,\nand they'll get paid a good salary right away. It's the job\nequivalent of the pizza they had for lunch. 
The drawbacks will\nonly become apparent later, and then only in a vague sense of\nmalaise.And founders and early employees of startups, meanwhile, are like\nthe Birkenstock-wearing weirdos of Berkeley: though a tiny minority\nof the population, they're the ones living as humans are meant to.\nIn an artificial world, only extremists live naturally.\nProgrammersThe restrictiveness of big company jobs is particularly hard on\nprogrammers, because the essence of programming is to build new\nthings. Sales people make much the same pitches every day; support\npeople answer much the same questions; but once you've written a\npiece of code you don't need to write it again. So a programmer\nworking as programmers are meant to is always making new things.\nAnd when you're part of an organization whose structure gives each\nperson freedom in inverse proportion to the size of the tree, you're\ngoing to face resistance when you do something new.This seems an inevitable consequence of bigness. It's true even\nin the smartest companies. I was talking recently to a founder who\nconsidered starting a startup right out of college, but went to\nwork for Google instead because he thought he'd learn more there.\nHe didn't learn as much as he expected. Programmers learn by doing,\nand most of the things he wanted to do, he couldn't\u2014sometimes\nbecause the company wouldn't let him, but often because the company's\ncode wouldn't let him. Between the drag of legacy code, the overhead\nof doing development in such a large organization, and the restrictions\nimposed by interfaces owned by other groups, he could only try a\nfraction of the things he would have liked to. He said he has\nlearned much more in his own startup, despite the fact that he has\nto do all the company's errands as well as programming, because at\nleast when he's programming he can do whatever he wants.An obstacle downstream propagates upstream. If you're not allowed\nto implement new ideas, you stop having them. And vice versa: when\nyou can do whatever you want, you have more ideas about what to do.\nSo working for yourself makes your brain more powerful in the same\nway a low-restriction exhaust system makes an engine more powerful.Working for yourself doesn't have to mean starting a startup, of\ncourse. But a programmer deciding between a regular job at a big\ncompany and their own startup is probably going to learn more doing\nthe startup.You can adjust the amount of freedom you get by scaling the size\nof company you work for. If you start the company, you'll have the\nmost freedom. If you become one of the first 10 employees you'll\nhave almost as much freedom as the founders. Even a company with\n100 people will feel different from one with 1000.Working for a small company doesn't ensure freedom. The tree\nstructure of large organizations sets an upper bound on freedom,\nnot a lower bound. The head of a small company may still choose\nto be a tyrant. The point is that a large organization is compelled\nby its structure to be one.\nConsequencesThat has real consequences for both organizations and individuals.\nOne is that companies will inevitably slow down as they grow larger,\nno matter how hard they try to keep their startup mojo. It's a\nconsequence of the tree structure that every large organization is\nforced to adopt.Or rather, a large organization could only avoid slowing down if\nthey avoided tree structure. 
And since human nature limits the\nsize of group that can work together, the only way I can imagine\nfor larger groups to avoid tree structure would be to have no\nstructure: to have each group actually be independent, and to work\ntogether the way components of a market economy do.That might be worth exploring. I suspect there are already some\nhighly partitionable businesses that lean this way. But I don't\nknow any technology companies that have done it.There is one thing companies can do short of structuring themselves\nas sponges: they can stay small. If I'm right, then it really\npays to keep a company as small as it can be at every stage.\nParticularly a technology company. Which means it's doubly important\nto hire the best people. Mediocre hires hurt you twice: they get\nless done, but they also make you big, because you need more of\nthem to solve a given problem.For individuals the upshot is the same: aim small. It will always\nsuck to work for large organizations, and the larger the organization,\nthe more it will suck.In an essay I wrote a couple years ago \nI advised graduating seniors\nto work for a couple years for another company before starting their\nown. I'd modify that now. Work for another company if you want\nto, but only for a small one, and if you want to start your own\nstartup, go ahead.The reason I suggested college graduates not start startups immediately\nwas that I felt most would fail. And they will. But ambitious\nprogrammers are better off doing their own thing and failing than\ngoing to work at a big company. Certainly they'll learn more. They\nmight even be better off financially. A lot of people in their\nearly twenties get into debt, because their expenses grow even\nfaster than the salary that seemed so high when they left school.\nAt least if you start a startup and fail your net worth will be\nzero rather than negative. \n[3]We've now funded so many different types of founders that we have\nenough data to see patterns, and there seems to be no benefit from\nworking for a big company. The people who've worked for a few years\ndo seem better than the ones straight out of college, but only\nbecause they're that much older.The people who come to us from big companies often seem kind of\nconservative. It's hard to say how much is because big companies\nmade them that way, and how much is the natural conservatism that\nmade them work for the big companies in the first place. But\ncertainly a large part of it is learned. I know because I've seen\nit burn off.Having seen that happen so many times is one of the things that\nconvinces me that working for oneself, or at least for a small\ngroup, is the natural way for programmers to live. Founders arriving\nat Y Combinator often have the downtrodden air of refugees. Three\nmonths later they're transformed: they have so much more \nconfidence\nthat they seem as if they've grown several inches taller. \n[4]\nStrange as this sounds, they seem both more worried and happier at the same\ntime. Which is exactly how I'd describe the way lions seem in the\nwild.Watching employees get transformed into founders makes it clear\nthat the difference between the two is due mostly to environment\u2014and\nin particular that the environment in big companies is toxic to\nprogrammers. 
In the first couple weeks of working on their own\nstartup they seem to come to life, because finally they're working\nthe way people are meant to.Notes[1]\nWhen I talk about humans being meant or designed to live a\ncertain way, I mean by evolution.[2]\nIt's not only the leaves who suffer. The constraint propagates\nup as well as down. So managers are constrained too; instead of\njust doing things, they have to act through subordinates.[3]\nDo not finance your startup with credit cards. Financing a\nstartup with debt is usually a stupid move, and credit card debt\nstupidest of all. Credit card debt is a bad idea, period. It is\na trap set by evil companies for the desperate and the foolish.[4]\nThe founders we fund used to be younger (initially we encouraged\nundergrads to apply), and the first couple times I saw this I used\nto wonder if they were actually getting physically taller.Thanks to Trevor Blackwell, Ross Boucher, Aaron Iba, Abby\nKirigin, Ivan Kirigin, Jessica Livingston, and Robert Morris for\nreading drafts of this."} {"title": "desres", "text": "January 2003(This article is derived from a keynote talk at the fall 2002 meeting\nof NEPLS.)Visitors to this country are often surprised to find that\nAmericans like to begin a conversation by asking \"what do you do?\"\nI've never liked this question. I've rarely had a\nneat answer to it. But I think I have finally solved the problem.\nNow, when someone asks me what I do, I look them straight\nin the eye and say \"I'm designing a \nnew dialect of Lisp.\" \nI recommend this answer to anyone who doesn't like being asked what\nthey do. The conversation will turn immediately to other topics.I don't consider myself to be doing research on programming languages.\nI'm just designing one, in the same way that someone might design\na building or a chair or a new typeface.\nI'm not trying to discover anything new. I just want\nto make a language that will be good to program in. In some ways,\nthis assumption makes life a lot easier.The difference between design and research seems to be a question\nof new versus good. Design doesn't have to be new, but it has to \nbe good. Research doesn't have to be good, but it has to be new.\nI think these two paths converge at the top: the best design\nsurpasses its predecessors by using new ideas, and the best research\nsolves problems that are not only new, but actually worth solving.\nSo ultimately we're aiming for the same destination, just approaching\nit from different directions.What I'm going to talk about today is what your target looks like\nfrom the back. What do you do differently when you treat\nprogramming languages as a design problem instead of a research topic?The biggest difference is that you focus more on the user.\nDesign begins by asking, who is this\nfor and what do they need from it? A good architect,\nfor example, does not begin by creating a design that he then\nimposes on the users, but by studying the intended users and figuring\nout what they need.Notice I said \"what they need,\" not \"what they want.\" I don't mean\nto give the impression that working as a designer means working as \na sort of short-order cook, making whatever the client tells you\nto. This varies from field to field in the arts, but\nI don't think there is any field in which the best work is done by\nthe people who just make exactly what the customers tell them to.The customer is always right in\nthe sense that the measure of good design is how well it works\nfor the user. 
If you make a novel that bores everyone, or a chair\nthat's horribly uncomfortable to sit in, then you've done a bad\njob, period. It's no defense to say that the novel or the chair \nis designed according to the most advanced theoretical principles.And yet, making what works for the user doesn't mean simply making\nwhat the user tells you to. Users don't know what all the choices\nare, and are often mistaken about what they really want.The answer to the paradox, I think, is that you have to design\nfor the user, but you have to design what the user needs, not simply \nwhat he says he wants.\nIt's much like being a doctor. You can't just treat a patient's\nsymptoms. When a patient tells you his symptoms, you have to figure\nout what's actually wrong with him, and treat that.This focus on the user is a kind of axiom from which most of the\npractice of good design can be derived, and around which most design\nissues center.If good design must do what the user needs, who is the user? When\nI say that design must be for users, I don't mean to imply that good \ndesign aims at some kind of \nlowest common denominator. You can pick any group of users you\nwant. If you're designing a tool, for example, you can design it\nfor anyone from beginners to experts, and what's good design\nfor one group might be bad for another. The point\nis, you have to pick some group of users. I don't think you can\neven talk about good or bad design except with\nreference to some intended user.You're most likely to get good design if the intended users include\nthe designer himself. When you design something\nfor a group that doesn't include you, it tends to be for people\nyou consider to be less sophisticated than you, not more sophisticated.That's a problem, because looking down on the user, however benevolently,\nseems inevitably to corrupt the designer.\nI suspect that very few housing\nprojects in the US were designed by architects who expected to live\nin them. You can see the same thing\nin programming languages. C, Lisp, and Smalltalk were created for\ntheir own designers to use. Cobol, Ada, and Java were created \nfor other people to use.If you think you're designing something for idiots, the odds are\nthat you're not designing something good, even for idiots.\nEven if you're designing something for the most sophisticated\nusers, though, you're still designing for humans. It's different \nin research. In math you\ndon't choose abstractions because they're\neasy for humans to understand; you choose whichever make the\nproof shorter. I think this is true for the sciences generally.\nScientific ideas are not meant to be ergonomic.Over in the arts, things are very different. Design is\nall about people. The human body is a strange\nthing, but when you're designing a chair,\nthat's what you're designing for, and there's no way around it.\nAll the arts have to pander to the interests and limitations\nof humans. In painting, for example, all other things being\nequal a painting with people in it will be more interesting than\none without. It is not merely an accident of history that\nthe great paintings of the Renaissance are all full of people.\nIf they hadn't been, painting as a medium wouldn't have the prestige\nthat it does.Like it or not, programming languages are also for people,\nand I suspect the human brain is just as lumpy and idiosyncratic\nas the human body. Some ideas are easy for people to grasp\nand some aren't. For example, we seem to have a very limited\ncapacity for dealing with detail.
It's this fact that makes\nprogramming languages a good idea in the first place; if we\ncould handle the detail, we could just program in machine\nlanguage.Remember, too, that languages are not\nprimarily a form for finished programs, but something that\nprograms have to be developed in. Anyone in the arts could\ntell you that you might want different mediums for the\ntwo situations. Marble, for example, is a nice, durable\nmedium for finished ideas, but a hopelessly inflexible one\nfor developing new ideas.A program, like a proof,\nis a pruned version of a tree that in the past has had\nfalse starts branching off all over it. So the test of\na language is not simply how clean the finished program looks\nin it, but how clean the path to the finished program was.\nA design choice that gives you elegant finished programs\nmay not give you an elegant design process. For example, \nI've written a few macro-defining macros full of nested\nbackquotes that look now like little gems, but writing them\ntook hours of the ugliest trial and error, and frankly, I'm still\nnot entirely sure they're correct.We often act as if the test of a language were how good\nfinished programs look in it.\nIt seems so convincing when you see the same program\nwritten in two languages, and one version is much shorter.\nWhen you approach the problem from the direction of the\narts, you're less likely to depend on this sort of\ntest. You don't want to end up with a programming\nlanguage like marble.For example, it is a huge win in developing software to\nhave an interactive toplevel, what in Lisp is called a\nread-eval-print loop. And when you have one this has\nreal effects on the design of the language. It would not\nwork well for a language where you have to declare\nvariables before using them, for example. When you're\njust typing expressions into the toplevel, you want to be \nable to set x to some value and then start doing things\nto x. You don't want to have to declare the type of x\nfirst. You may dispute either of the premises, but if\na language has to have a toplevel to be convenient, and\nmandatory type declarations are incompatible with a\ntoplevel, then no language that makes type declarations \nmandatory could be convenient to program in.In practice, to get good design you have to get close, and stay\nclose, to your users. You have to calibrate your ideas on actual\nusers constantly, especially in the beginning. One of the reasons\nJane Austen's novels are so good is that she read them out loud to\nher family. That's why she never sinks into self-indulgently arty\ndescriptions of landscapes,\nor pretentious philosophizing. (The philosophy's there, but it's\nwoven into the story instead of being pasted onto it like a label.)\nIf you open an average \"literary\" novel and imagine reading it out loud\nto your friends as something you'd written, you'll feel all too\nkeenly what an imposition that kind of thing is upon the reader.In the software world, this idea is known as Worse is Better.\nActually, there are several ideas mixed together in the concept of\nWorse is Better, which is why people are still arguing about\nwhether worse\nis actually better or not.
In practice, to get good design you have to get close, and stay\nclose, to your users. You have to calibrate your ideas on actual\nusers constantly, especially in the beginning. One of the reasons\nJane Austen's novels are so good is that she read them out loud to\nher family. That's why she never sinks into self-indulgently arty\ndescriptions of landscapes,\nor pretentious philosophizing. (The philosophy's there, but it's\nwoven into the story instead of being pasted onto it like a label.)\nIf you open an average \"literary\" novel and imagine reading it out loud\nto your friends as something you'd written, you'll feel all too\nkeenly what an imposition that kind of thing is upon the reader.In the software world, this idea is known as Worse is Better.\nActually, there are several ideas mixed together in the concept of\nWorse is Better, which is why people are still arguing about\nwhether worse\nis actually better or not. But one of the main ideas in that\nmix is that if you're building something new, you should get a\nprototype in front of users as soon as possible.The alternative approach might be called the Hail Mary strategy.\nInstead of getting a prototype out quickly and gradually refining\nit, you try to create the complete, finished product in one long\ntouchdown pass. As far as I know, this is a\nrecipe for disaster. Countless startups destroyed themselves this\nway during the Internet bubble. I've never heard of a case\nwhere it worked.What people outside the software world may not realize is that\nWorse is Better is found throughout the arts.\nIn drawing, for example, the idea was discovered during the\nRenaissance. Now almost every drawing teacher will tell you that\nthe right way to get an accurate drawing is not to\nwork your way slowly around the contour of an object, because errors will\naccumulate and you'll find at the end that the lines don't meet.\nInstead you should draw a few quick lines in roughly the right place,\nand then gradually refine this initial sketch.In most fields, prototypes\nhave traditionally been made out of different materials.\nTypefaces to be cut in metal were initially designed\nwith a brush on paper. Statues to be cast in bronze\nwere modelled in wax. Patterns to be embroidered on tapestries\nwere drawn on paper with ink wash. Buildings to be\nconstructed from stone were tested on a smaller scale in wood.What made oil paint so exciting, when it\nfirst became popular in the fifteenth century, was that you\ncould actually make the finished work from the prototype.\nYou could make a preliminary drawing if you wanted to, but you\nweren't held to it; you could work out all the details, and\neven make major changes, as you finished the painting.You can do this in software too. A prototype doesn't have to\nbe just a model; you can refine it into the finished product.\nI think you should always do this when you can. It lets you\ntake advantage of new insights you have along the way. But\nperhaps even more important, it's good for morale.Morale is key in design. I'm surprised people\ndon't talk more about it. One of my first\ndrawing teachers told me: if you're bored when you're\ndrawing something, the drawing will look boring.\nFor example, suppose you have to draw a building, and you\ndecide to draw each brick individually. You can do this\nif you want, but if you get bored halfway through and start\nmaking the bricks mechanically instead of observing each one,\nthe drawing will look worse than if you had merely suggested\nthe bricks.Building something by gradually refining a prototype is good\nfor morale because it keeps you engaged. In software, my\nrule is: always have working code. If you're writing\nsomething that you'll be able to test in an hour, then you\nhave the prospect of an immediate reward to motivate you.\nThe same is true in the arts, and particularly in oil painting.\nMost painters start with a blurry sketch and gradually\nrefine it.\nIf you work this way, then in principle\nyou never have to end the day with something that actually\nlooks unfinished. Indeed, there is even a saying among\npainters: \"A painting is never finished, you just stop\nworking on it.\" This idea will be familiar to anyone who\nhas worked on software.Morale is another reason that it's hard to design something\nfor an unsophisticated user. It's hard to stay interested in\nsomething you don't like yourself.
To make something \ngood, you have to be thinking, \"wow, this is really great,\"\nnot \"what a piece of shit; those fools will love it.\"Design means making things for humans. But it's not just the\nuser who's human. The designer is human too.Notice all this time I've been talking about \"the designer.\"\nDesign usually has to be under the control of a single person to\nbe any good. And yet it seems to be possible for several people\nto collaborate on a research project. This seems to\nme one of the most interesting differences between research and\ndesign.There have been famous instances of collaboration in the arts,\nbut most of them seem to have been cases of molecular bonding rather\nthan nuclear fusion. In an opera it's common for one person to\nwrite the libretto and another to write the music. And during the Renaissance, \njourneymen from northern\nEurope were often employed to do the landscapes in the\nbackgrounds of Italian paintings. But these aren't true collaborations.\nThey're more like examples of Robert Frost's\n\"good fences make good neighbors.\" You can stick instances\nof good design together, but within each individual project,\none person has to be in control.I'm not saying that good design requires that one person think\nof everything. There's nothing more valuable than the advice\nof someone whose judgement you trust. But after the talking is\ndone, the decision about what to do has to rest with one person.Why is it that research can be done by collaborators and \ndesign can't? This is an interesting question. I don't \nknow the answer. Perhaps,\nif design and research converge, the best research is also\ngood design, and in fact can't be done by collaborators.\nA lot of the most famous scientists seem to have worked alone.\nBut I don't know enough to say whether there\nis a pattern here. It could be simply that many famous scientists\nworked when collaboration was less common.Whatever the story is in the sciences, true collaboration\nseems to be vanishingly rare in the arts. Design by committee is a\nsynonym for bad design. Why is that so? Is there some way to\nbeat this limitation?I'm inclined to think there isn't-- that good design requires\na dictator. One reason is that good design has to \nbe all of a piece. Design is not just for humans, but\nfor individual humans. If a design represents an idea that \nfits in one person's head, then the idea will fit in the user's\nhead too.Related:"} {"title": "founders", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nOctober 2010\n\n(I wrote this for Forbes, who asked me to write something\nabout the qualities we look for in founders. In print they had to cut\nthe last item because they didn't have room.)1. DeterminationThis has turned out to be the most important quality in startup\nfounders. We thought when we started Y Combinator that the most\nimportant quality would be intelligence. That's the myth in the\nValley. And certainly you don't want founders to be stupid. But\nas long as you're over a certain threshold of intelligence, what\nmatters most is determination. You're going to hit a lot of\nobstacles. You can't be the sort of person who gets demoralized\neasily.Bill Clerico and Rich Aberman of WePay \nare a good example. They're\ndoing a finance startup, which means endless negotiations with big,\nbureaucratic companies. When you're starting a startup that depends\non deals with big companies to exist, it often feels like they're\ntrying to ignore you out of existence. 
But when Bill Clerico starts\ncalling you, you may as well do what he asks, because he is not\ngoing away.\n2. FlexibilityYou do not however want the sort of determination implied by phrases\nlike \"don't give up on your dreams.\" The world of startups is so\nunpredictable that you need to be able to modify your dreams on the\nfly. The best metaphor I've found for the combination of determination\nand flexibility you need is a running back. \nHe's determined to get\ndownfield, but at any given moment he may need to go sideways or\neven backwards to get there.The current record holder for flexibility may be Daniel Gross of\nGreplin. He applied to YC with \nsome bad ecommerce idea. We told\nhim we'd fund him if he did something else. He thought for a second,\nand said ok. He then went through two more ideas before settling\non Greplin. He'd only been working on it for a couple days when\nhe presented to investors at Demo Day, but he got a lot of interest.\nHe always seems to land on his feet.\n3. ImaginationIntelligence does matter a lot of course. It seems like the type\nthat matters most is imagination. It's not so important to be able\nto solve predefined problems quickly as to be able to come up with\nsurprising new ideas. In the startup world, most good ideas \nseem\nbad initially. If they were obviously good, someone would already\nbe doing them. So you need the kind of intelligence that produces\nideas with just the right level of craziness.Airbnb is that kind of idea. \nIn fact, when we funded Airbnb, we\nthought it was too crazy. We couldn't believe large numbers of\npeople would want to stay in other people's places. We funded them\nbecause we liked the founders so much. As soon as we heard they'd\nbeen supporting themselves by selling Obama and McCain branded\nbreakfast cereal, they were in. And it turned out the idea was on\nthe right side of crazy after all.\n4. NaughtinessThough the most successful founders are usually good people, they\ntend to have a piratical gleam in their eye. They're not Goody\nTwo-Shoes type good. Morally, they care about getting the big\nquestions right, but not about observing proprieties. That's why\nI'd use the word naughty rather than evil. They delight in \nbreaking\nrules, but not rules that matter. This quality may be redundant\nthough; it may be implied by imagination.Sam Altman of Loopt \nis one of the most successful alumni, so we\nasked him what question we could put on the Y Combinator application\nthat would help us discover more people like him. He said to ask\nabout a time when they'd hacked something to their advantage\u2014hacked in the sense of beating the system, not breaking into\ncomputers. It has become one of the questions we pay most attention\nto when judging applications.\n5. FriendshipEmpirically it seems to be hard to start a startup with just \none\nfounder. Most of the big successes have two or three. And the\nrelationship between the founders has to be strong. They must\ngenuinely like one another, and work well together. Startups do\nto the relationship between the founders what a dog does to a sock:\nif it can be pulled apart, it will be.Emmett Shear and Justin Kan of Justin.tv \nare a good example of close\nfriends who work well together. They've known each other since\nsecond grade. They can practically read one another's minds. 
I'm\nsure they argue, like all founders, but I have never once sensed\nany unresolved tension between them.Thanks to Jessica Livingston and Chris Steiner for reading drafts of this."} {"title": "vw", "text": "January 2012A few hours before the Yahoo acquisition was announced in June 1998\nI took a snapshot of Viaweb's\nsite. I thought it might be interesting to look at one day.The first thing one notices is how tiny the pages are. Screens\nwere a lot smaller in 1998. If I remember correctly, our frontpage\nused to just fit in the size window people typically used then.Browsers then (IE 6 was still 3 years in the future) had few fonts\nand they weren't antialiased. If you wanted to make pages that\nlooked good, you had to render display text as images.You may notice a certain similarity between the Viaweb and Y Combinator logos. We did that\nas an inside joke when we started YC. Considering how basic a red\ncircle is, it seemed surprising to me when we started Viaweb how\nfew other companies used one as their logo. A bit later I realized\nwhy.On the Company\npage you'll notice a mysterious individual called John McArtyem.\nRobert Morris (aka Rtm) was so publicity averse after the\nWorm that he\ndidn't want his name on the site. I managed to get him to agree\nto a compromise: we could use his bio but not his name. He has\nsince relaxed a bit\non that point.Trevor graduated at about the same time the acquisition closed, so in the\ncourse of 4 days he went from impecunious grad student to millionaire\nPhD. The culmination of my career as a writer of press releases\nwas one celebrating\nhis graduation, illustrated with a drawing I did of him during\na meeting.(Trevor also appears as Trevino\nBagwell in our directory of web designers merchants could hire\nto build stores for them. We inserted him as a ringer in case some\ncompetitor tried to spam our web designers. We assumed his logo\nwould deter any actual customers, but it did not.)Back in the 90s, to get users you had to get mentioned in magazines\nand newspapers. There were not the same ways to get found online\nthat there are today. So we used to pay a PR\nfirm $16,000 a month to get us mentioned in the press. Fortunately\nreporters liked\nus.In our advice about\ngetting traffic from search engines (I don't think the term SEO\nhad been coined yet), we say there are only 7 that matter: Yahoo,\nAltaVista, Excite, WebCrawler, InfoSeek, Lycos, and HotBot. Notice\nanything missing? Google was incorporated that September.We supported online transactions via a company called\nCybercash,\nsince if we lacked that feature we'd have gotten beaten up in product\ncomparisons. But Cybercash was so bad and most stores' order volumes\nwere so low that it was better if merchants processed orders like phone orders. We had a page in our site trying to talk merchants\nout of doing real time authorizations.The whole site was organized like a funnel, directing people to the\ntest drive.\nIt was a novel thing to be able to try out software online. We put\ncgi-bin in our dynamic urls to fool competitors about how our\nsoftware worked.We had some well\nknown users. Needless to say, Frederick's of Hollywood got the\nmost traffic.
We charged a flat fee of $300/month for big stores,\nso it was a little alarming to have users who got lots of traffic.\nI once calculated how much Frederick's was costing us in bandwidth,\nand it was about $300/month.Since we hosted all the stores, which together were getting just\nover 10 million page views per month in June 1998, we consumed what\nat the time seemed a lot of bandwidth. We had 2 T1s (3 Mb/sec)\ncoming into our offices. In those days there was no AWS. Even\ncolocating servers seemed too risky, considering how often things\nwent wrong with them. So we had our servers in our offices. Or\nmore precisely, in Trevor's office. In return for the unique\nprivilege of sharing his office with no other humans, he had to\nshare it with 6 shrieking tower servers. His office was nicknamed\nthe Hot Tub on account of the heat they generated. Most days his\nstack of window air conditioners could keep up.For describing pages, we had a template language called RTML, which\nsupposedly stood for something, but which in fact I named after\nRtm. RTML was Common Lisp augmented by some macros and libraries,\nand concealed under a structure editor that made it look like it\nhad syntax.
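To give a flavor of what \"Common Lisp augmented by some macros\"\ncan mean, here is a sketch-- purely hypothetical, invented for\nillustration rather than taken from RTML:\n\n  ;; Hypothetical; shows the kind of thing, not the real thing.\n  ;; The macro turns a description of a page into an ordinary function.\n  (defmacro define-template (name &body pieces)\n    `(defun ,name ()\n       (concatenate 'string ,@pieces)))\n\n  (define-template hello-page\n    \"<html><body>\" \"Hello\" \"</body></html>\")\n\n  ;; (hello-page) => \"<html><body>Hello</body></html>\"\n\nA structure editor displaying forms like these could easily make\nthem look like they had syntax.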
Since we did continuous releases, our software didn't actually have\nversions. But in those days the trade press expected versions, so\nwe made them up. If we wanted to get lots of attention, we made\nthe version number an\ninteger. That \"version 4.0\" icon was generated by our own\nbutton generator, incidentally. The whole Viaweb site was made\nwith our software, even though it wasn't an online store, because\nwe wanted to experience what our users did.At the end of 1997, we released a general purpose shopping search\nengine called Shopfind. It\nwas pretty advanced for the time. It had a programmable crawler\nthat could crawl most of the different stores online and pick out\nthe products."} {"title": "want", "text": "November 2022Since I was about 9 I've been puzzled by the apparent contradiction\nbetween being made of matter that behaves in a predictable way, and\nthe feeling that I could choose to do whatever I wanted. At the\ntime I had a self-interested motive for exploring the question. At\nthat age (like most succeeding ages) I was always in trouble with\nthe authorities, and it seemed to me that there might possibly be\nsome way to get out of trouble by arguing that I wasn't responsible\nfor my actions. I gradually lost hope of that, but the puzzle\nremained: How do you reconcile being a machine made of matter with\nthe feeling that you're free to choose what you do?\n[1]The best way to explain the answer may be to start with a slightly\nwrong version, and then fix it. The wrong version is: You can do\nwhat you want, but you can't want what you want. Yes, you can control\nwhat you do, but you'll do what you want, and you can't control\nthat.The reason this is mistaken is that people do sometimes change what\nthey want. People who don't want to want something \u2014 drug addicts,\nfor example \u2014 can sometimes make themselves stop wanting it. And\npeople who want to want something \u2014 who want to like classical\nmusic, or broccoli \u2014 sometimes succeed.So we modify our initial statement: You can do what you want, but\nyou can't want to want what you want.That's still not quite true. It's possible to change what you want\nto want. I can imagine someone saying \"I decided to stop wanting\nto like classical music.\" But we're getting closer to the truth.\nIt's rare for people to change what they want to want, and the more\n\"want to\"s we add, the rarer it gets.We can get arbitrarily close to a true statement by adding more \"want\nto\"s in much the same way we can get arbitrarily close to 1 by adding\nmore 9s to a string of 9s following a decimal point. In practice\nthree or four \"want to\"s must surely be enough. It's hard even to\nenvision what it would mean to change what you want to want to want\nto want, let alone actually do it.So one way to express the correct answer is to use a regular\nexpression. You can do what you want, but there's some statement\nof the form \"you can't (want to)* want what you want\" that's true.\nUltimately you get back to a want that you don't control.\n[2]
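For the literal-minded, the pattern really is a regular expression.\nHere is a sketch in Common Lisp using the cl-ppcre library (the\nspacing in the pattern is adjusted so that it matches actual\nsentences, and the example string is mine, not the essay's):\n\n  ;; Matches for any number of \"want to\"s, including zero.\n  (cl-ppcre:scan \"^you can't (want to )*want what you want$\"\n                 \"you can't want to want to want what you want\")\n  ;; Returns match bounds rather than NIL, i.e. this sentence is in\n  ;; the language the pattern generates.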
Notes[1]\nI didn't know when I was 9 that matter might behave randomly,\nbut I don't think it affects the problem much. Randomness destroys\nthe ghost in the machine as effectively as determinism.[2]\nIf you don't like using an expression, you can make the same\npoint using higher-order desires: There is some n such that you\ndon't control your nth-order desires.\nThanks to Trevor Blackwell,\nJessica Livingston, Robert Morris, and\nMichael Nielsen for reading drafts of this."}
{"title": "goodtaste", "text": "November 2021(This essay is derived from a talk at the Cambridge Union.)When I was a kid, I'd have said there wasn't. My father told me so.\nSome people like some things, and other people like other things,\nand who's to say who's right?It seemed so obvious that there was no such thing as good taste\nthat it was only through indirect evidence that I realized my father\nwas wrong. And that's what I'm going to give you here: a proof by\nreductio ad absurdum. If we start from the premise that there's no\nsuch thing as good taste, we end up with conclusions that are\nobviously false, and therefore the premise must be wrong.We'd better start by saying what good taste is. There's a narrow\nsense in which it refers to aesthetic judgements and a broader one\nin which it refers to preferences of any kind. The strongest proof\nwould be to show that taste exists in the narrowest sense, so I'm\ngoing to talk about taste in art.
You have better taste than me if\nthe art you like is better than the art I like.If there's no such thing as good taste, then there's no such thing\nas good art. Because if there is such a\nthing as good art, it's\neasy to tell which of two people has better taste. Show them a lot\nof works by artists they've never seen before and ask them to\nchoose the best, and whoever chooses the better art has better\ntaste.So if you want to discard the concept of good taste, you also have\nto discard the concept of good art. And that means you have to\ndiscard the possibility of people being good at making it. Which\nmeans there's no way for artists to be good at their jobs. And not\njust visual artists, but anyone who is in any sense an artist. You\ncan't have good actors, or novelists, or composers, or dancers\neither. You can have popular novelists, but not good ones.We don't realize how far we'd have to go if we discarded the concept\nof good taste, because we don't even debate the most obvious cases.\nBut it doesn't just mean we can't say which of two famous painters\nis better. It means we can't say that any painter is better than a\nrandomly chosen eight year old.That was how I realized my father was wrong. I started studying\npainting. And it was just like other kinds of work I'd done: you\ncould do it well, or badly, and if you tried hard, you could get\nbetter at it. And it was obvious that Leonardo and Bellini were\nmuch better at it than me. That gap between us was not imaginary.\nThey were so good. And if they could be good, then art could be\ngood, and there was such a thing as good taste after all.Now that I've explained how to show there is such a thing as good\ntaste, I should also explain why people think there isn't. There\nare two reasons. One is that there's always so much disagreement\nabout taste. Most people's response to art is a tangle of unexamined\nimpulses. Is the artist famous? Is the subject attractive? Is this\nthe sort of art they're supposed to like? Is it hanging in a famous\nmuseum, or reproduced in a big, expensive book? In practice most\npeople's response to art is dominated by such extraneous factors.And the people who do claim to have good taste are so often mistaken.\nThe paintings admired by the so-called experts in one generation\nare often so different from those admired a few generations later.\nIt's easy to conclude there's nothing real there at all. It's only\nwhen you isolate this force, for example by trying to paint and\ncomparing your work to Bellini's, that you can see that it does in\nfact exist.The other reason people doubt that art can be good is that there\ndoesn't seem to be any room in the art for this goodness. The\nargument goes like this. Imagine several people looking at a work\nof art and judging how good it is. If being good art really is a\nproperty of objects, it should be in the object somehow. But it\ndoesn't seem to be; it seems to be something happening in the heads\nof each of the observers. And if they disagree, how do you choose\nbetween them?The solution to this puzzle is to realize that the purpose of art\nis to work on its human audience, and humans have a lot in common.\nAnd to the extent the things an object acts upon respond in the\nsame way, that's arguably what it means for the object to have the\ncorresponding property. If everything a particle interacts with\nbehaves as if the particle had a mass of m, then it has a mass of\nm. 
So the distinction between \"objective\" and \"subjective\" is not\nbinary, but a matter of degree, depending on how much the subjects\nhave in common. Particles interacting with one another are at one\npole, but people interacting with art are not all the way at the\nother; their reactions aren't random.Because people's responses to art aren't random, art can be designed\nto operate on people, and be good or bad depending on how effectively\nit does so. Much as a vaccine can be. If someone were talking about\nthe ability of a vaccine to confer immunity, it would seem very\nfrivolous to object that conferring immunity wasn't really a property\nof vaccines, because acquiring immunity is something that happens\nin the immune system of each individual person. Sure, people's\nimmune systems vary, and a vaccine that worked on one might not\nwork on another, but that doesn't make it meaningless to talk about\nthe effectiveness of a vaccine.The situation with art is messier, of course. You can't measure\neffectiveness by simply taking a vote, as you do with vaccines.\nYou have to imagine the responses of subjects with a deep knowledge\nof art, and enough clarity of mind to be able to ignore extraneous\ninfluences like the fame of the artist. And even then you'd still\nsee some disagreement. People do vary, and judging art is hard,\nespecially recent art. There is definitely not a total order either\nof works or of people's ability to judge them. But there is equally\ndefinitely a partial order of both. So while it's not possible to\nhave perfect taste, it is possible to have good taste.\nThanks to the Cambridge Union for inviting me, and to Trevor\nBlackwell, Jessica Livingston, and Robert Morris for reading drafts\nof this.\n"} {"title": "newideas", "text": "May 2021There's one kind of opinion I'd be very afraid to express publicly.\nIf someone I knew to be both a domain expert and a reasonable person\nproposed an idea that sounded preposterous, I'd be very reluctant\nto say \"That will never work.\"Anyone who has studied the history of ideas, and especially the\nhistory of science, knows that's how big things start. Someone\nproposes an idea that sounds crazy, most people dismiss it, then\nit gradually takes over the world.Most implausible-sounding ideas are in fact bad and could be safely\ndismissed. But not when they're proposed by reasonable domain\nexperts. If the person proposing the idea is reasonable, then they\nknow how implausible it sounds. And yet they're proposing it anyway.\nThat suggests they know something you don't. And if they have deep\ndomain expertise, that's probably the source of it.\n[1]Such ideas are not merely unsafe to dismiss, but disproportionately\nlikely to be interesting. When the average person proposes an\nimplausible-sounding idea, its implausibility is evidence of their\nincompetence. But when a reasonable domain expert does it, the\nsituation is reversed. There's something like an efficient market\nhere: on average the ideas that seem craziest will, if correct,\nhave the biggest effect. So if you can eliminate the theory that\nthe person proposing an implausible-sounding idea is incompetent,\nits implausibility switches from evidence that it's boring to\nevidence that it's exciting.\n[2]Such ideas are not guaranteed to work. But they don't have to be.\nThey just have to be sufficiently good bets \u2014 to have sufficiently\nhigh expected value. And I think on average they do. 
I think if you\nbet on the entire set of implausible-sounding ideas proposed by\nreasonable domain experts, you'd end up net ahead.The reason is that everyone is too conservative. The word \"paradigm\"\nis overused, but this is a case where it's warranted. Everyone is\ntoo much in the grip of the current paradigm. Even the people who\nhave the new ideas undervalue them initially. Which means that\nbefore they reach the stage of proposing them publicly, they've\nalready subjected them to an excessively strict filter.\n[3]The wise response to such an idea is not to make statements, but\nto ask questions, because there's a real mystery here. Why has this\nsmart and reasonable person proposed an idea that seems so wrong?\nAre they mistaken, or are you? One of you has to be. If you're the\none who's mistaken, that would be good to know, because it means\nthere's a hole in your model of the world. But even if they're\nmistaken, it should be interesting to learn why. A trap that an\nexpert falls into is one you have to worry about too.This all seems pretty obvious. And yet there are clearly a lot of\npeople who don't share my fear of dismissing new ideas. Why do they\ndo it? Why risk looking like a jerk now and a fool later, instead\nof just reserving judgement?One reason they do it is envy. If you propose a radical new idea\nand it succeeds, your reputation (and perhaps also your wealth)\nwill increase proportionally. Some people would be envious if that\nhappened, and this potential envy propagates back into a conviction\nthat you must be wrong.Another reason people dismiss new ideas is that it's an easy way\nto seem sophisticated. When a new idea first emerges, it usually\nseems pretty feeble. It's a mere hatchling. Received wisdom is a\nfull-grown eagle by comparison. So it's easy to launch a devastating\nattack on a new idea, and anyone who does will seem clever to those\nwho don't understand this asymmetry.This phenomenon is exacerbated by the difference between how those\nworking on new ideas and those attacking them are rewarded. The\nrewards for working on new ideas are weighted by the value of the\noutcome. So it's worth working on something that only has a 10%\nchance of succeeding if it would make things more than 10x better.\nWhereas the rewards for attacking new ideas are roughly constant;\nsuch attacks seem roughly equally clever regardless of the target.People will also attack new ideas when they have a vested interest\nin the old ones. It's not surprising, for example, that some of\nDarwin's harshest critics were churchmen. People build whole careers\non some ideas. When someone claims they're false or obsolete, they\nfeel threatened.The lowest form of dismissal is mere factionalism: to automatically\ndismiss any idea associated with the opposing faction. The lowest\nform of all is to dismiss an idea because of who proposed it.But the main thing that leads reasonable people to dismiss new ideas\nis the same thing that holds people back from proposing them: the\nsheer pervasiveness of the current paradigm. It doesn't just affect\nthe way we think; it is the Lego blocks we build thoughts out of.\nPopping out of the current paradigm is something only a few people\ncan do. And even they usually have to suppress their intuitions at\nfirst, like a pilot flying through cloud who has to trust his\ninstruments over his sense of balance.\n[4]Paradigms don't just define our present thinking. 
They also vacuum\nup the trail of crumbs that led to them, making our standards for\nnew ideas impossibly high. The current paradigm seems so perfect\nto us, its offspring, that we imagine it must have been accepted\ncompletely as soon as it was discovered \u2014 that whatever the church thought\nof the heliocentric model, astronomers must have been convinced as\nsoon as Copernicus proposed it. Far, in fact, from it. Copernicus\npublished the heliocentric model in 1543, but it wasn't till the\nmid seventeenth century that the balance of scientific opinion\nshifted in its favor.\n[5]Few understand how feeble new ideas look when they first appear.\nSo if you want to have new ideas yourself, one of the most valuable\nthings you can do is to learn what they look like when they're born.\nRead about how new ideas happened, and try to get yourself into the\nheads of people at the time. How did things look to them, when the\nnew idea was only half-finished, and even the person who had it was\nonly half-convinced it was right?But you don't have to stop at history. You can observe big new ideas\nbeing born all around you right now. Just look for a reasonable\ndomain expert proposing something that sounds wrong.If you're nice, as well as wise, you won't merely resist attacking\nsuch people, but encourage them. Having new ideas is a lonely\nbusiness. Only those who've tried it know how lonely. These people\nneed your help. And if you help them, you'll probably learn something\nin the process.Notes[1]\nThis domain expertise could be in another field. Indeed,\nsuch crossovers tend to be particularly promising.[2]\nI'm not claiming this principle extends much beyond math,\nengineering, and the hard sciences. In politics, for example,\ncrazy-sounding ideas generally are as bad as they sound. Though\narguably this is not an exception, because the people who propose\nthem are not in fact domain experts; politicians are domain experts\nin political tactics, like how to get elected and how to get\nlegislation passed, but not in the world that policy acts upon.\nPerhaps no one could be.[3]\nThis sense of \"paradigm\" was defined by Thomas Kuhn in his\nStructure of Scientific Revolutions, but I also recommend his\nCopernican Revolution, where you can see him at work developing the\nidea.[4]\nThis is one reason people with a touch of Asperger's may have\nan advantage in discovering new ideas. They're always flying on\ninstruments.[5]\nHall, Rupert. From Galileo to Newton. Collins, 1963. This\nbook is particularly good at getting into contemporaries' heads.Thanks to Trevor Blackwell, Patrick Collison, Suhail Doshi, Daniel\nGackle, Jessica Livingston, and Robert Morris for reading drafts of this."} {"title": "superangels", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nOctober 2010After barely changing at all for decades, the startup funding\nbusiness is now in what could, at least by comparison, be called\nturmoil. At Y Combinator we've seen dramatic changes in the funding\nenvironment for startups. Fortunately one of them is much higher\nvaluations.The trends we've been seeing are probably not YC-specific.
I wish\nI could say they were, but the main cause is probably just that we\nsee trends first\u2014partly because the startups we fund are very\nplugged into the Valley and are quick to take advantage of anything\nnew, and partly because we fund so many that we have enough data\npoints to see patterns clearly.What we're seeing now, everyone's probably going to be seeing in\nthe next couple years. So I'm going to explain what we're seeing,\nand what that will mean for you if you try to raise money.Super-AngelsLet me start by describing what the world of startup funding used\nto look like. There used to be two sharply differentiated types\nof investors: angels and venture capitalists. Angels are individual\nrich people who invest small amounts of their own money, while VCs\nare employees of funds that invest large amounts of other people's.For decades there were just those two types of investors, but now\na third type has appeared halfway between them: the so-called\nsuper-angels. \n[1]\n And VCs have been provoked by their arrival\ninto making a lot of angel-style investments themselves. So the\npreviously sharp line between angels and VCs has become hopelessly\nblurred.There used to be a no man's land between angels and VCs. Angels\nwould invest $20k to $50k apiece, and VCs usually a million or more.\nSo an angel round meant a collection of angel investments that\ncombined to maybe $200k, and a VC round meant a series A round in\nwhich a single VC fund (or occasionally two) invested $1-5 million.The no man's land between angels and VCs was a very inconvenient\none for startups, because it coincided with the amount many wanted\nto raise. Most startups coming out of Demo Day wanted to raise\naround $400k. But it was a pain to stitch together that much out\nof angel investments, and most VCs weren't interested in investments\nso small. That's the fundamental reason the super-angels have\nappeared. They're responding to the market.The arrival of a new type of investor is big news for startups,\nbecause there used to be only two and they rarely competed with one\nanother. Super-angels compete with both angels and VCs. That's\ngoing to change the rules about how to raise money. I don't know\nyet what the new rules will be, but it looks like most of the changes\nwill be for the better.A super-angel has some of the qualities of an angel, and some of\nthe qualities of a VC. They're usually individuals, like angels.\nIn fact many of the current super-angels were initially angels of\nthe classic type. But like VCs, they invest other people's money.\nThis allows them to invest larger amounts than angels: a typical\nsuper-angel investment is currently about $100k. They make investment\ndecisions quickly, like angels. And they make a lot more investments\nper partner than VCs\u2014up to 10 times as many.The fact that super-angels invest other people's money makes them\ndoubly alarming to VCs. They don't just compete for startups; they\nalso compete for investors. What super-angels really are is a new\nform of fast-moving, lightweight VC fund. And those of us in the\ntechnology world know what usually happens when something comes\nalong that can be described in terms like that. Usually it's the\nreplacement.Will it be? As of now, few of the startups that take money from\nsuper-angels are ruling out taking VC money. They're just postponing\nit. But that's still a problem for VCs. 
Some of the startups that\npostpone raising VC money may do so well on the angel money they\nraise that they never bother to raise more. And those who do raise\nVC rounds will be able to get higher valuations when they do. If\nthe best startups get 10x higher valuations when they raise series\nA rounds, that would cut VCs' returns from winners at least tenfold.\n[2]So I think VC funds are seriously threatened by the super-angels.\nBut one thing that may save them to some extent is the uneven\ndistribution of startup outcomes: practically all the returns are\nconcentrated in a few big successes. The expected value of a startup\nis the percentage chance it's Google. So to the extent that winning\nis a matter of absolute returns, the super-angels could win practically\nall the battles for individual startups and yet lose the war, if\nthey merely failed to get those few big winners. And there's a\nchance that could happen, because the top VC funds have better\nbrands, and can also do more for their portfolio companies. \n[3]Because super-angels make more investments per partner, they have\nless partner per investment. They can't pay as much attention to\nyou as a VC on your board could. How much is that extra attention\nworth? It will vary enormously from one partner to another. There's\nno consensus yet in the general case. So for now this is something\nstartups are deciding individually.Till now, VCs' claims about how much value they added were sort of\nlike the government's. Maybe they made you feel better, but you\nhad no choice in the matter, if you needed money on the scale only\nVCs could supply. Now that VCs have competitors, that's going to\nput a market price on the help they offer. The interesting thing\nis, no one knows yet what it will be.Do startups that want to get really big need the sort of advice and\nconnections only the top VCs can supply? Or would super-angel money\ndo just as well? The VCs will say you need them, and the super-angels\nwill say you don't. But the truth is, no one knows yet, not even\nthe VCs and super-angels themselves. All the super-angels know\nis that their new model seems promising enough to be worth trying,\nand all the VCs know is that it seems promising enough to worry\nabout.RoundsWhatever the outcome, the conflict between VCs and super-angels is\ngood news for founders. And not just for the obvious reason that\nmore competition for deals means better terms. The whole shape of\ndeals is changing.One of the biggest differences between angels and VCs is the amount\nof your company they want. VCs want a lot. In a series A round\nthey want a third of your company, if they can get it. They don't\ncare much how much they pay for it, but they want a lot because the\nnumber of series A investments they can do is so small. In a\ntraditional series A investment, at least one partner from the VC\nfund takes a seat on your board. \n[4]\n Since board seats last about\n5 years and each partner can't handle more than about 10 at once,\nthat means a VC fund can only do about 2 series A deals per partner\nper year. And that means they need to get as much of the company\nas they can in each one. You'd have to be a very promising startup\nindeed to get a VC to use up one of his 10 board seats for only a\nfew percent of you.Since angels generally don't take board seats, they don't have this\nconstraint. They're happy to buy only a few percent of you. And\nalthough the super-angels are in most respects mini VC funds, they've\nretained this critical property of angels. 
They don't take board\nseats, so they don't need a big percentage of your company.Though that means you'll get correspondingly less attention from\nthem, it's good news in other respects. Founders never really liked\ngiving up as much equity as VCs wanted. It was a lot of the company\nto give up in one shot. Most founders doing series A deals would\nprefer to take half as much money for half as much stock, and then\nsee what valuation they could get for the second half of the stock\nafter using the first half of the money to increase its value. But\nVCs never offered that option.Now startups have another alternative. Now it's easy to raise angel\nrounds about half the size of series A rounds. Many of the startups\nwe fund are taking this route, and I predict that will be true of\nstartups in general.A typical big angel round might be $600k on a convertible note with\na valuation cap of $4 million premoney. Meaning that when the note\nconverts into stock (in a later round, or upon acquisition), the\ninvestors in that round will get .6 / 4.6, or 13% of the company.\nThat's a lot less than the 30 to 40% of the company you usually\ngive up in a series A round if you do it so early. \n[5]But the advantage of these medium-sized rounds is not just that\nthey cause less dilution. You also lose less control. After an\nangel round, the founders almost always still have control of the\ncompany, whereas after a series A round they often don't. The\ntraditional board structure after a series A round is two founders,\ntwo VCs, and a (supposedly) neutral fifth person. Plus series A\nterms usually give the investors a veto over various kinds of\nimportant decisions, including selling the company. Founders usually\nhave a lot of de facto control after a series A, as long as things\nare going well. But that's not the same as just being able to do\nwhat you want, like you could before.A third and quite significant advantage of angel rounds is that\nthey're less stressful to raise. Raising a traditional series A\nround has in the past taken weeks, if not months. When a VC firm\ncan only do 2 deals per partner per year, they're careful about\nwhich they do. To get a traditional series A round you have to go\nthrough a series of meetings, culminating in a full partner meeting\nwhere the firm as a whole says yes or no. That's the really scary\npart for founders: not just that series A rounds take so long, but\nat the end of this long process the VCs might still say no. The\nchance of getting rejected after the full partner meeting averages\nabout 25%. At some firms it's over 50%.Fortunately for founders, VCs have been getting a lot faster.\nNowadays Valley VCs are more likely to take 2 weeks than 2 months.\nBut they're still not as fast as angels and super-angels, the most\ndecisive of whom sometimes decide in hours.Raising an angel round is not only quicker, but you get feedback\nas it progresses. An angel round is not an all or nothing thing\nlike a series A. It's composed of multiple investors with varying\ndegrees of seriousness, ranging from the upstanding ones who commit\nunequivocally to the jerks who give you lines like \"come back to\nme to fill out the round.\" You usually start collecting money from\nthe most committed investors and work your way out toward the\nambivalent ones, whose interest increases as the round fills up.But at each point you know how you're doing. 
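To check the note arithmetic in the $600k example above, here is a minimal sketch in Python. The function name is mine, and it deliberately ignores the complications of real notes (discounts, interest, option pools); it just prices the investors' stock off the cap, as the example does.

    def note_share_at_cap(invested, cap_premoney):
        """Fraction of the company the note holders get if the note
        converts at its valuation cap (amounts in the same units)."""
        # The investment is added to the pre-money cap to get the
        # post-money total, so the share is invested / (cap + invested).
        return invested / (cap_premoney + invested)

    # $600k on a $4 million pre-money cap: .6 / 4.6, or about 13%.
    print(f"{note_share_at_cap(0.6, 4.0):.0%}")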
If investors turn\ncold you may have to raise less, but when investors in an angel\nround turn cold the process at least degrades gracefully, instead\nof blowing up in your face and leaving you with nothing, as happens\nif you get rejected by a VC fund after a full partner meeting.\nWhereas if investors seem hot, you can not only close the round\nfaster, but now that convertible notes are becoming the norm,\nactually raise the price to reflect demand.ValuationHowever, the VCs have a weapon they can use against the super-angels,\nand they have started to use it. VCs have started making angel-sized\ninvestments too. The term \"angel round\" doesn't mean that all the\ninvestors in it are angels; it just describes the structure of the\nround. Increasingly the participants include VCs making investments\nof a hundred thousand or two. And when VCs invest in angel rounds\nthey can do things that super-angels don't like. VCs are quite\nvaluation-insensitive in angel rounds\u2014partly because they are\nin general, and partly because they don't care that much about the\nreturns on angel rounds, which they still view mostly as a way to\nrecruit startups for series A rounds later. So VCs who invest in\nangel rounds can blow up the valuations for angels and super-angels\nwho invest in them. \n[6]Some super-angels seem to care about valuations. Several turned\ndown YC-funded startups after Demo Day because their valuations\nwere too high. This was not a problem for the startups; by definition\na high valuation means enough investors were willing to accept it.\nBut it was mysterious to me that the super-angels would quibble\nabout valuations. Did they not understand that the big returns\ncome from a few big successes, and that it therefore mattered far\nmore which startups you picked than how much you paid for them?After thinking about it for a while and observing certain other\nsigns, I have a theory that explains why the super-angels may be\nsmarter than they seem. It would make sense for super-angels to\nwant low valuations if they're hoping to invest in startups that\nget bought early. If you're hoping to hit the next Google, you\nshouldn't care if the valuation is 20 million. But if you're looking\nfor companies that are going to get bought for 30 million, you care.\nIf you invest at 20 and the company gets bought for 30, you only\nget 1.5x. You might as well buy Apple.So if some of the super-angels were looking for companies that could\nget acquired quickly, that would explain why they'd care about\nvaluations. But why would they be looking for those? Because\ndepending on the meaning of \"quickly,\" it could actually be very\nprofitable. A company that gets acquired for 30 million is a failure\nto a VC, but it could be a 10x return for an angel, and moreover,\na quick 10x return. Rate of return is what matters in\ninvesting\u2014not the multiple you get, but the multiple per year.\nIf a super-angel gets 10x in one year, that's a higher rate of\nreturn than a VC could ever hope to get from a company that took 6\nyears to go public. To get the same rate of return, the VC would\nhave to get a multiple of 10^6\u2014one million x. Even Google\ndidn't come close to that.So I think at least some super-angels are looking for companies\nthat will get bought. That's the only rational explanation for\nfocusing on getting the right valuations, instead of the right\ncompanies. 
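The rate-of-return arithmetic above is easy to verify. A back-of-the-envelope sketch in Python (the function name is mine):

    def annualized_multiple(total_multiple, years):
        """Per-year return multiple implied by a total multiple
        over a holding period."""
        return total_multiple ** (1 / years)

    print(annualized_multiple(10, 1))       # angel: 10x in 1 year = 10x per year
    print(annualized_multiple(10, 6))       # VC: 10x in 6 years ~ 1.47x per year
    # To match 10x per year over a 6-year hold, the total multiple
    # must be 10 ** 6 -- one million x.
    print(annualized_multiple(10 ** 6, 6))  # back to 10.0x per year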
And if so they'll be different to deal with than VCs.\nThey'll be tougher on valuations, but more accommodating if you want\nto sell early.PrognosisWho will win, the super-angels or the VCs? I think the answer to\nthat is, some of each. They'll each become more like one another.\nThe super-angels will start to invest larger amounts, and the VCs\nwill gradually figure out ways to make more, smaller investments\nfaster. A decade from now the players will be hard to tell apart,\nand there will probably be survivors from each group.What does that mean for founders? One thing it means is that the\nhigh valuations startups are presently getting may not last forever.\nTo the extent that valuations are being driven up by price-insensitive\nVCs, they'll fall again if VCs become more like super-angels and\nstart to become more miserly about valuations. Fortunately if this\ndoes happen it will take years.The short term forecast is more competition between investors, which\nis good news for you. The super-angels will try to undermine the\nVCs by acting faster, and the VCs will try to undermine the\nsuper-angels by driving up valuations. Which for founders will\nresult in the perfect combination: funding rounds that close fast,\nwith high valuations.But remember that to get that combination, your startup will have\nto appeal to both super-angels and VCs. If you don't seem like you\nhave the potential to go public, you won't be able to use VCs to\ndrive up the valuation of an angel round.There is a danger of having VCs in an angel round: the so-called\nsignalling risk. If VCs are only doing it in the hope of investing\nmore later, what happens if they don't? That's a signal to everyone\nelse that they think you're lame.How much should you worry about that? The seriousness of signalling\nrisk depends on how far along you are. If by the next time you\nneed to raise money, you have graphs showing rising revenue or\ntraffic month after month, you don't have to worry about any signals\nyour existing investors are sending. Your results will speak for\nthemselves. \n[7]Whereas if the next time you need to raise money you won't yet have\nconcrete results, you may need to think more about the message your\ninvestors might send if they don't invest more. I'm not sure yet\nhow much you have to worry, because this whole phenomenon of VCs\ndoing angel investments is so new. But my instincts tell me you\ndon't have to worry much. Signalling risk smells like one of those\nthings founders worry about that's not a real problem. As a rule,\nthe only thing that can kill a good startup is the startup itself.\nStartups hurt themselves way more often than competitors hurt them,\nfor example. I suspect signalling risk is in this category too.One thing YC-funded startups have been doing to mitigate the risk\nof taking money from VCs in angel rounds is not to take too much\nfrom any one VC. Maybe that will help, if you have the luxury of\nturning down money.Fortunately, more and more startups will. After decades of competition\nthat could best be described as intramural, the startup funding\nbusiness is finally getting some real competition. That should\nlast several years at least, and maybe a lot longer. Unless there's\nsome huge market crash, the next couple years are going to be a\ngood time for startups to raise money. 
And that's exciting because\nit means lots more startups will happen.\nNotes[1]\nI've also heard them called \"Mini-VCs\" and \"Micro-VCs.\" I\ndon't know which name will stick.There were a couple predecessors. Ron Conway had angel funds\nstarting in the 1990s, and in some ways First Round Capital is closer to a\nsuper-angel than a VC fund.[2]\nIt wouldn't cut their overall returns tenfold, because investing\nlater would probably (a) cause them to lose less on investments\nthat failed, and (b) not allow them to get as large a percentage\nof startups as they do now. So it's hard to predict precisely what\nwould happen to their returns.[3]\nThe brand of an investor derives mostly from the success of\ntheir portfolio companies. The top VCs thus have a big brand\nadvantage over the super-angels. They could make it self-perpetuating\nif they used it to get all the best new startups. But I don't think\nthey'll be able to. To get all the best startups, you have to do\nmore than make them want you. You also have to want them; you have\nto recognize them when you see them, and that's much harder.\nSuper-angels will snap up stars that VCs miss. And that will cause\nthe brand gap between the top VCs and the super-angels gradually\nto erode.[4]\nThough in a traditional series A round VCs put two partners\non your board, there are signs now that VCs may begin to conserve\nboard seats by switching to what used to be considered an angel-round\nboard, consisting of two founders and one VC. Which is also to the\nfounders' advantage if it means they still control the company.[5]\nIn a series A round, you usually have to give up more than\nthe actual amount of stock the VCs buy, because they insist you\ndilute yourselves to set aside an \"option pool\" as well. I predict\nthis practice will gradually disappear though.[6]\nThe best thing for founders, if they can get it, is a convertible\nnote with no valuation cap at all. In that case the money invested\nin the angel round just converts into stock at the valuation of the\nnext round, no matter how large. Angels and super-angels tend not\nto like uncapped notes. They have no idea how much of the company\nthey're buying. If the company does well and the valuation of the\nnext round is high, they may end up with only a sliver of it. So\nby agreeing to uncapped notes, VCs who don't care about valuations\nin angel rounds can make offers that super-angels hate to match.[7]\nObviously signalling risk is also not a problem if you'll\nnever need to raise more money. But startups are often mistaken\nabout that.Thanks to Sam Altman, John Bautista, Patrick Collison, James\nLindenbaum, Reid Hoffman, Jessica Livingston and Harj Taggar\nfor reading drafts\nof this."} {"title": "useful", "text": "February 2020What should an essay be? Many people would say persuasive. That's\nwhat a lot of us were taught essays should be. But I think we can\naim for something more ambitious: that an essay should be useful.To start with, that means it should be correct. But it's not enough\nmerely to be correct. It's easy to make a statement correct by\nmaking it vague. That's a common flaw in academic writing, for\nexample. 
If you know nothing at all about an issue, you can't go\nwrong by saying that the issue is a complex one, that there are\nmany factors to be considered, that it's a mistake to take too\nsimplistic a view of it, and so on.Though no doubt correct, such statements tell the reader nothing.\nUseful writing makes claims that are as strong as they can be made\nwithout becoming false.For example, it's more useful to say that Pike's Peak is near the\nmiddle of Colorado than merely somewhere in Colorado. But if I say\nit's in the exact middle of Colorado, I've now gone too far, because\nit's a bit east of the middle.Precision and correctness are like opposing forces. It's easy to\nsatisfy one if you ignore the other. The converse of vaporous\nacademic writing is the bold, but false, rhetoric of demagogues.\nUseful writing is bold, but true.It's also two other things: it tells people something important,\nand that at least some of them didn't already know.Telling people something they didn't know doesn't always mean\nsurprising them. Sometimes it means telling them something they\nknew unconsciously but had never put into words. In fact those may\nbe the more valuable insights, because they tend to be more\nfundamental.Let's put them all together. Useful writing tells people something\ntrue and important that they didn't already know, and tells them\nas unequivocally as possible.Notice these are all a matter of degree. For example, you can't\nexpect an idea to be novel to everyone. Any insight that you have\nwill probably have already been had by at least one of the world's\n7 billion people. But it's sufficient if an idea is novel to a lot\nof readers.Ditto for correctness, importance, and strength. In effect the four\ncomponents are like numbers you can multiply together to get a score\nfor usefulness. Which I realize is almost awkwardly reductive, but\nnonetheless true._____\nHow can you ensure that the things you say are true and novel and\nimportant? Believe it or not, there is a trick for doing this. I\nlearned it from my friend Robert Morris, who has a horror of saying\nanything dumb. His trick is not to say anything unless he's sure\nit's worth hearing. This makes it hard to get opinions out of him,\nbut when you do, they're usually right.Translated into essay writing, what this means is that if you write\na bad sentence, you don't publish it. You delete it and try again.\nOften you abandon whole branches of four or five paragraphs. Sometimes\na whole essay.You can't ensure that every idea you have is good, but you can\nensure that every one you publish is, by simply not publishing the\nones that aren't.In the sciences, this is called publication bias, and is considered\nbad. When some hypothesis you're exploring gets inconclusive results,\nyou're supposed to tell people about that too. But with essay\nwriting, publication bias is the way to go.My strategy is loose, then tight. I write the first draft of an\nessay fast, trying out all kinds of ideas. Then I spend days rewriting\nit very carefully.I've never tried to count how many times I proofread essays, but\nI'm sure there are sentences I've read 100 times before publishing\nthem. When I proofread an essay, there are usually passages that\nstick out in an annoying way, sometimes because they're clumsily\nwritten, and sometimes because I'm not sure they're true. The\nannoyance starts out unconscious, but after the tenth reading or\nso I'm saying \"Ugh, that part\" each time I hit it. 
They become like\nbriars that catch your sleeve as you walk past. Usually I won't\npublish an essay till they're all gone \u2014 till I can read through\nthe whole thing without the feeling of anything catching.I'll sometimes let through a sentence that seems clumsy, if I can't\nthink of a way to rephrase it, but I will never knowingly let through\none that doesn't seem correct. You never have to. If a sentence\ndoesn't seem right, all you have to do is ask why it doesn't, and\nyou've usually got the replacement right there in your head.This is where essayists have an advantage over journalists. You\ndon't have a deadline. You can work for as long on an essay as you\nneed to get it right. You don't have to publish the essay at all,\nif you can't get it right. Mistakes seem to lose courage in the\nface of an enemy with unlimited resources. Or that's what it feels\nlike. What's really going on is that you have different expectations\nfor yourself. You're like a parent saying to a child \"we can sit\nhere all night till you eat your vegetables.\" Except you're the\nchild too.I'm not saying no mistake gets through. For example, I added condition\n(c) in \"A Way to Detect Bias\" \nafter readers pointed out that I'd\nomitted it. But in practice you can catch nearly all of them.There's a trick for getting importance too. It's like the trick I\nsuggest to young founders for getting startup ideas: to make something\nyou yourself want. You can use yourself as a proxy for the reader.\nThe reader is not completely unlike you, so if you write about\ntopics that seem important to you, they'll probably seem important\nto a significant number of readers as well.Importance has two factors. It's the number of people something\nmatters to, times how much it matters to them. Which means of course\nthat it's not a rectangle, but a sort of ragged comb, like a Riemann\nsum.The way to get novelty is to write about topics you've thought about\na lot. Then you can use yourself as a proxy for the reader in this\ndepartment too. Anything you notice that surprises you, who've\nthought about the topic a lot, will probably also surprise a\nsignificant number of readers. And here, as with correctness and\nimportance, you can use the Morris technique to ensure that you\nwill. If you don't learn anything from writing an essay, don't\npublish it.You need humility to measure novelty, because acknowledging the\nnovelty of an idea means acknowledging your previous ignorance of\nit. Confidence and humility are often seen as opposites, but in\nthis case, as in many others, confidence helps you to be humble.\nIf you know you're an expert on some topic, you can freely admit\nwhen you learn something you didn't know, because you can be confident\nthat most other people wouldn't know it either.The fourth component of useful writing, strength, comes from two\nthings: thinking well, and the skillful use of qualification. These\ntwo counterbalance each other, like the accelerator and clutch in\na car with a manual transmission. As you try to refine the expression\nof an idea, you adjust the qualification accordingly. Something\nyou're sure of, you can state baldly with no qualification at all,\nas I did the four components of useful writing. Whereas points that\nseem dubious have to be held at arm's length with perhapses.As you refine an idea, you're pushing in the direction of less\nqualification. But you can rarely get it down to zero.
Sometimes\nyou don't even want to, if it's a side point and a fully refined\nversion would be too long.Some say that qualifications weaken writing. For example, that you\nshould never begin a sentence in an essay with \"I think,\" because\nif you're saying it, then of course you think it. And it's true\nthat \"I think x\" is a weaker statement than simply \"x.\" Which is\nexactly why you need \"I think.\" You need it to express your degree\nof certainty.But qualifications are not scalars. They're not just experimental\nerror. There must be 50 things they can express: how broadly something\napplies, how you know it, how happy you are it's so, even how it\ncould be falsified. I'm not going to try to explore the structure\nof qualification here. It's probably more complex than the whole\ntopic of writing usefully. Instead I'll just give you a practical\ntip: Don't underestimate qualification. It's an important skill in\nits own right, not just a sort of tax you have to pay in order to\navoid saying things that are false. So learn and use its full range.\nIt may not be fully half of having good ideas, but it's part of\nhaving them.There's one other quality I aim for in essays: to say things as\nsimply as possible. But I don't think this is a component of\nusefulness. It's more a matter of consideration for the reader. And\nit's a practical aid in getting things right; a mistake is more\nobvious when expressed in simple language. But I'll admit that the\nmain reason I write simply is not for the reader's sake or because\nit helps get things right, but because it bothers me to use more\nor fancier words than I need to. It seems inelegant, like a program\nthat's too long.I realize florid writing works for some people. But unless you're\nsure you're one of them, the best advice is to write as simply as\nyou can._____\nI believe the formula I've given you, importance + novelty +\ncorrectness + strength, is the recipe for a good essay. But I should\nwarn you that it's also a recipe for making people mad.The root of the problem is novelty. When you tell people something\nthey didn't know, they don't always thank you for it. Sometimes the\nreason people don't know something is because they don't want to\nknow it. Usually because it contradicts some cherished belief. And\nindeed, if you're looking for novel ideas, popular but mistaken\nbeliefs are a good place to find them. Every popular mistaken belief\ncreates a dead zone of ideas around \nit that are relatively unexplored because they contradict it.The strength component just makes things worse. If there's anything\nthat annoys people more than having their cherished assumptions\ncontradicted, it's having them flatly contradicted.Plus if you've used the Morris technique, your writing will seem\nquite confident. Perhaps offensively confident, to people who\ndisagree with you. The reason you'll seem confident is that you are\nconfident: you've cheated, by only publishing the things you're\nsure of. It will seem to people who try to disagree with you that\nyou never admit you're wrong. In fact you constantly admit you're\nwrong. You just do it before publishing instead of after.And if your writing is as simple as possible, that just makes things\nworse. Brevity is the diction of command. If you watch someone\ndelivering unwelcome news from a position of inferiority, you'll\nnotice they tend to use lots of words, to soften the blow. 
Whereas\nto be short with someone is more or less to be rude to them.It can sometimes work to deliberately phrase statements more weakly\nthan you mean. To put \"perhaps\" in front of something you're actually\nquite sure of. But you'll notice that when writers do this, they\nusually do it with a wink.I don't like to do this too much. It's cheesy to adopt an ironic\ntone for a whole essay. I think we just have to face the fact that\nelegance and curtness are two names for the same thing.You might think that if you work sufficiently hard to ensure that\nan essay is correct, it will be invulnerable to attack. That's sort\nof true. It will be invulnerable to valid attacks. But in practice\nthat's little consolation.In fact, the strength component of useful writing will make you\nparticularly vulnerable to misrepresentation. If you've stated an\nidea as strongly as you could without making it false, all anyone\nhas to do is to exaggerate slightly what you said, and now it is\nfalse.Much of the time they're not even doing it deliberately. One of the\nmost surprising things you'll discover, if you start writing essays,\nis that people who disagree with you rarely disagree with what\nyou've actually written. Instead they make up something you said\nand disagree with that.For what it's worth, the countermove is to ask someone who does\nthis to quote a specific sentence or passage you wrote that they\nbelieve is false, and explain why. I say \"for what it's worth\"\nbecause they never do. So although it might seem that this could\nget a broken discussion back on track, the truth is that it was\nnever on track in the first place.Should you explicitly forestall likely misinterpretations? Yes, if\nthey're misinterpretations a reasonably smart and well-intentioned\nperson might make. In fact it's sometimes better to say something\nslightly misleading and then add the correction than to try to get\nan idea right in one shot. That can be more efficient, and can also\nmodel the way such an idea would be discovered.But I don't think you should explicitly forestall intentional\nmisinterpretations in the body of an essay. An essay is a place to\nmeet honest readers. You don't want to spoil your house by putting\nbars on the windows to protect against dishonest ones. The place\nto protect against intentional misinterpretations is in end-notes.\nBut don't think you can predict them all. People are as ingenious\nat misrepresenting you when you say something they don't want to\nhear as they are at coming up with rationalizations for things they\nwant to do but know they shouldn't. I suspect it's the same skill._____\nAs with most other things, the way to get better at writing essays\nis to practice. But how do you start? Now that we've examined the\nstructure of useful writing, we can rephrase that question more\nprecisely. Which constraint do you relax initially? The answer is,\nthe first component of importance: the number of people who care\nabout what you write.If you narrow the topic sufficiently, you can probably find something\nyou're an expert on. Write about that to start with. If you only\nhave ten readers who care, that's fine. You're helping them, and\nyou're writing. Later you can expand the breadth of topics you write\nabout.The other constraint you can relax is a little surprising: publication.\nWriting essays doesn't have to mean publishing them. That may seem\nstrange now that the trend is to publish every random thought, but\nit worked for me. 
I wrote what amounted to essays in notebooks for\nabout 15 years. I never published any of them and never expected\nto. I wrote them as a way of figuring things out. But when the web\ncame along I'd had a lot of practice.Incidentally, \nSteve \nWozniak did the same thing. In high school he\ndesigned computers on paper for fun. He couldn't build them because\nhe couldn't afford the components. But when Intel launched 4K DRAMs\nin 1975, he was ready._____\nHow many essays are there left to write though? The answer to that\nquestion is probably the most exciting thing I've learned about\nessay writing. Nearly all of them are left to write.Although the essay \nis an old form, it hasn't been assiduously\ncultivated. In the print era, publication was expensive, and there\nwasn't enough demand for essays to publish that many. You could\npublish essays if you were already well known for writing something\nelse, like novels. Or you could write book reviews that you took\nover to express your own ideas. But there was not really a direct\npath to becoming an essayist. Which meant few essays got written,\nand those that did tended to be about a narrow range of subjects.Now, thanks to the internet, there's a path. Anyone can publish\nessays online. You start in obscurity, perhaps, but at least you\ncan start. You don't need anyone's permission.It sometimes happens that an area of knowledge sits quietly for\nyears, till some change makes it explode. Cryptography did this to\nnumber theory. The internet is doing it to the essay.The exciting thing is not that there's a lot left to write, but\nthat there's a lot left to discover. There's a certain kind of idea\nthat's best discovered by writing essays. If most essays are still\nunwritten, most such ideas are still undiscovered.Notes[1] Put railings on the balconies, but don't put bars on the windows.[2] Even now I sometimes write essays that are not meant for\npublication. I wrote several to figure out what Y Combinator should\ndo, and they were really helpful.Thanks to Trevor Blackwell, Daniel Gackle, Jessica Livingston, and\nRobert Morris for reading drafts of this."} {"title": "aord", "text": "October 2015When I talk to a startup that's been operating for more than 8 or\n9 months, the first thing I want to know is almost always the same.\nAssuming their expenses remain constant and their revenue growth\nis what it has been over the last several months, do they make it to\nprofitability on the money they have left? Or to put it more\ndramatically, by default do they live or die?The startling thing is how often the founders themselves don't know.\nHalf the founders I talk to don't know whether they're default alive\nor default dead.If you're among that number, Trevor Blackwell has made a handy\ncalculator you can use to find out.The reason I want to know first whether a startup is default alive\nor default dead is that the rest of the conversation depends on the\nanswer. If the company is default alive, we can talk about ambitious\nnew things they could do. If it's default dead, we probably need\nto talk about how to save it. We know the current trajectory ends\nbadly. How can they get off that trajectory?Why do so few founders know whether they're default alive or default\ndead? Mainly, I think, because they're not used to asking that.\nIt's not a question that makes sense to ask early on, any more than\nit makes sense to ask a 3 year old how he plans to support\nhimself. 
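For readers who want to try the default alive question on their own numbers, here is a toy version in Python, in the spirit of Trevor Blackwell's calculator (this sketch is mine, not his). Per the framing above, it assumes expenses stay constant and revenue keeps growing at its recent monthly rate.

    def default_alive(cash, monthly_expenses, monthly_revenue, monthly_growth):
        """True if revenue catches up with expenses before the money
        runs out, on the current trajectory."""
        while monthly_revenue < monthly_expenses:
            cash += monthly_revenue - monthly_expenses  # this month's burn
            if cash < 0:
                return False  # ran out of money first: default dead
            monthly_revenue *= 1 + monthly_growth
        return True  # reached profitability: default alive

    # Made-up numbers for illustration: $400k in the bank, $50k/month
    # expenses, $10k/month revenue growing 10% a month.
    print(default_alive(400_000, 50_000, 10_000, 0.10))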
But as the company grows older, the question switches from\nmeaningless to critical. That kind of switch often takes people\nby surprise.I propose the following solution: instead of starting to ask too\nlate whether you're default alive or default dead, start asking too\nearly. It's hard to say precisely when the question switches\npolarity. But it's probably not that dangerous to start worrying\ntoo early that you're default dead, whereas it's very dangerous to\nstart worrying too late.The reason is a phenomenon I wrote about earlier: the\nfatal pinch.\nThe fatal pinch is default dead + slow growth + not enough\ntime to fix it. And the way founders end up in it is by not realizing\nthat's where they're headed.There is another reason founders don't ask themselves whether they're\ndefault alive or default dead: they assume it will be easy to raise\nmore money. But that assumption is often false, and worse still, the\nmore you depend on it, the falser it becomes.Maybe it will help to separate facts from hopes. Instead of thinking\nof the future with vague optimism, explicitly separate the components.\nSay \"We're default dead, but we're counting on investors to save\nus.\" Maybe as you say that, it will set off the same alarms in your\nhead that it does in mine. And if you set off the alarms sufficiently\nearly, you may be able to avoid the fatal pinch.It would be safe to be default dead if you could count on investors\nsaving you. As a rule their interest is a function of\ngrowth. If you have steep revenue growth, say over 5x a year, you\ncan start to count on investors being interested even if you're not\nprofitable.\n[1]\nBut investors are so fickle that you can never\ndo more than start to count on them. Sometimes something about your\nbusiness will spook investors even if your growth is great. So no\nmatter how good your growth is, you can never safely treat fundraising\nas more than a plan A. You should always have a plan B as well: you\nshould know (as in write down) precisely what you'll need to do to\nsurvive if you can't raise more money, and precisely when you'll \nhave to switch to plan B if plan A isn't working.In any case, growing fast versus operating cheaply is far from the\nsharp dichotomy many founders assume it to be. In practice there\nis surprisingly little connection between how much a startup spends\nand how fast it grows. When a startup grows fast, it's usually\nbecause the product hits a nerve, in the sense of hitting some big\nneed straight on. When a startup spends a lot, it's usually because\nthe product is expensive to develop or sell, or simply because\nthey're wasteful.If you're paying attention, you'll be asking at this point not just\nhow to avoid the fatal pinch, but how to avoid being default dead.\nThat one is easy: don't hire too fast. Hiring too fast is by far\nthe biggest killer of startups that raise money.\n[2]Founders tell themselves they need to hire in order to grow. But\nmost err on the side of overestimating this need rather than\nunderestimating it. Why? Partly because there's so much work to\ndo. Naive founders think that if they can just hire enough\npeople, it will all get done. Partly because successful startups have\nlots of employees, so it seems like that's what one does in order\nto be successful. In fact the large staffs of successful startups\nare probably more the effect of growth than the cause. 
And\npartly because when founders have slow growth they don't want to\nface what is usually the real reason: the product is not appealing\nenough.Plus founders who've just raised money are often encouraged to\noverhire by the VCs who funded them. Kill-or-cure strategies are\noptimal for VCs because they're protected by the portfolio effect.\nVCs want to blow you up, in one sense of the phrase or the other.\nBut as a founder your incentives are different. You want above all\nto survive.\n[3]Here's a common way startups die. They make something moderately\nappealing and have decent initial growth. They raise their first\nround fairly easily, because the founders seem smart and the idea\nsounds plausible. But because the product is only moderately\nappealing, growth is ok but not great. The founders convince\nthemselves that hiring a bunch of people is the way to boost growth.\nTheir investors agree. But (because the product is only moderately\nappealing) the growth never comes. Now they're rapidly running out\nof runway. They hope further investment will save them. But because\nthey have high expenses and slow growth, they're now unappealing\nto investors. They're unable to raise more, and the company dies.What the company should have done is address the fundamental problem:\nthat the product is only moderately appealing. Hiring people is\nrarely the way to fix that. More often than not it makes it harder.\nAt this early stage, the product needs to evolve more than to be\n\"built out,\" and that's usually easier with fewer people.\n[4]Asking whether you're default alive or default dead may save you\nfrom this. Maybe the alarm bells it sets off will counteract the\nforces that push you to overhire. Instead you'll be compelled to\nseek growth in other ways. For example, by doing\nthings that don't scale, or by redesigning the product in the\nway only founders can.\nAnd for many if not most startups, these paths to growth will be\nthe ones that actually work.Airbnb waited 4 months after raising money at the end of Y\u00a0Combinator\nbefore they hired their first employee. In the meantime the founders\nwere terribly overworked. But they were overworked evolving Airbnb\ninto the astonishingly successful organism it is now.Notes[1]\nSteep usage growth will also interest investors. Revenue\nwill ultimately be a constant multiple of usage, so x% usage growth\npredicts x% revenue growth. But in practice investors discount\nmerely predicted revenue, so if you're measuring usage you need a\nhigher growth rate to impress investors.[2]\nStartups that don't raise money are saved from hiring too\nfast because they can't afford to. But that doesn't mean you should\navoid raising money in order to avoid this problem, any more than\nthat total abstinence is the only way to avoid becoming an alcoholic.[3]\nI would not be surprised if VCs' tendency to push founders\nto overhire is not even in their own interest. They don't know how\nmany of the companies that get killed by overspending might have\ndone well if they'd survived. My guess is a significant number.[4]\nAfter reading a draft, Sam Altman wrote:\"I think you should make the hiring point more strongly. 
I think\nit's roughly correct to say that YC's most successful companies\nhave never been the fastest to hire, and one of the marks of a great\nfounder is being able to resist this urge.\"Paul Buchheit adds:\"A related problem that I see a lot is premature scaling\u2014founders\ntake a small business that isn't really working (bad unit economics,\ntypically) and then scale it up because they want impressive growth\nnumbers. This is similar to over-hiring in that it makes the business\nmuch harder to fix once it's big, plus they are bleeding cash really\nfast.\"\nThanks to Sam Altman, Paul Buchheit, Joe Gebbia, Jessica Livingston,\nand Geoff Ralston for reading drafts of this."} {"title": "before", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nOctober 2014(This essay is derived from a guest lecture in Sam Altman's startup class at\nStanford. It's intended for college students, but much of it is\napplicable to potential founders at other ages.)One of the advantages of having kids is that when you have to give\nadvice, you can ask yourself \"what would I tell my own kids?\" My\nkids are little, but I can imagine what I'd tell them about startups\nif they were in college, and that's what I'm going to tell you.Startups are very counterintuitive. I'm not sure why. Maybe it's\njust because knowledge about them hasn't permeated our culture yet.\nBut whatever the reason, starting a startup is a task where you\ncan't always trust your instincts.It's like skiing in that way. When you first try skiing and you\nwant to slow down, your instinct is to lean back. But if you lean\nback on skis you fly down the hill out of control. So part of\nlearning to ski is learning to suppress that impulse. Eventually\nyou get new habits, but at first it takes a conscious effort. At\nfirst there's a list of things you're trying to remember as you\nstart down the hill.Startups are as unnatural as skiing, so there's a similar list for\nstartups. Here I'm going to give you the first part of it \u2014 the things\nto remember if you want to prepare yourself to start a startup.\nCounterintuitiveThe first item on it is the fact I already mentioned: that startups\nare so weird that if you trust your instincts, you'll make a lot\nof mistakes. If you know nothing more than this, you may at least\npause before making them.When I was running Y Combinator I used to joke that our function\nwas to tell founders things they would ignore. It's really true.\nBatch after batch, the YC partners warn founders about mistakes\nthey're about to make, and the founders ignore them, and then come\nback a year later and say \"I wish we'd listened.\"Why do the founders ignore the partners' advice? Well, that's the\nthing about counterintuitive ideas: they contradict your intuitions.\nThey seem wrong. So of course your first impulse is to disregard\nthem. And in fact my joking description is not merely the curse\nof Y Combinator but part of its raison d'etre. If founders' instincts\nalready gave them the right answers, they wouldn't need us. You\nonly need other people to give you advice that surprises you. That's\nwhy there are a lot of ski instructors and not many running\ninstructors.\n[1]You can, however, trust your instincts about people. And in fact\none of the most common mistakes young founders make is not to\ndo that enough. They get involved with people who seem impressive,\nbut about whom they feel some misgivings personally. 
Later when\nthings blow up they say \"I knew there was something off about him,\nbut I ignored it because he seemed so impressive.\"If you're thinking about getting involved with someone \u2014 as a\ncofounder, an employee, an investor, or an acquirer \u2014 and you\nhave misgivings about them, trust your gut. If someone seems\nslippery, or bogus, or a jerk, don't ignore it.This is one case where it pays to be self-indulgent. Work with\npeople you genuinely like, and you've known long enough to be sure.\nExpertiseThe second counterintuitive point is that it's not that important\nto know a lot about startups. The way to succeed in a startup is\nnot to be an expert on startups, but to be an expert on your users\nand the problem you're solving for them.\nMark Zuckerberg didn't succeed because he was an expert on startups.\nHe succeeded despite being a complete noob at startups, because he\nunderstood his users really well.If you don't know anything about, say, how to raise an angel round,\ndon't feel bad on that account. That sort of thing you can learn\nwhen you need to, and forget after you've done it.In fact, I worry it's not merely unnecessary to learn in great\ndetail about the mechanics of startups, but possibly somewhat\ndangerous. If I met an undergrad who knew all about convertible\nnotes and employee agreements and (God forbid) class FF stock, I\nwouldn't think \"here is someone who is way ahead of their peers.\"\nIt would set off alarms. Because another of the characteristic\nmistakes of young founders is to go through the motions of starting\na startup. They make up some plausible-sounding idea, raise money\nat a good valuation, rent a cool office, hire a bunch of people.\nFrom the outside that seems like what startups do. But the next\nstep after rent a cool office and hire a bunch of people is: gradually\nrealize how completely fucked they are, because while imitating all\nthe outward forms of a startup they have neglected the one thing\nthat's actually essential: making something people want.\nGameWe saw this happen so often that we made up a name for it: playing\nhouse. Eventually I realized why it was happening. The reason\nyoung founders go through the motions of starting a startup is\nbecause that's what they've been trained to do for their whole lives\nup to that point. Think about what you have to do to get into\ncollege, for example. Extracurricular activities, check. Even in\ncollege classes most of the work is as artificial as running laps.I'm not attacking the educational system for being this way. There\nwill always be a certain amount of fakeness in the work you do when\nyou're being taught something, and if you measure their performance\nit's inevitable that people will exploit the difference to the point\nwhere much of what you're measuring is artifacts of the fakeness.I confess I did it myself in college. I found that in a lot of\nclasses there might only be 20 or 30 ideas that were the right shape\nto make good exam questions. The way I studied for exams in these\nclasses was not (except incidentally) to master the material taught\nin the class, but to make a list of potential exam questions and\nwork out the answers in advance. When I walked into the final, the\nmain thing I'd be feeling was curiosity about which of my questions\nwould turn up on the exam. 
It was like a game.It's not surprising that after being trained for their whole lives\nto play such games, young founders' first impulse on starting a\nstartup is to try to figure out the tricks for winning at this new\ngame. Since fundraising appears to be the measure of success for\nstartups (another classic noob mistake), they always want to know what the\ntricks are for convincing investors. We tell them the best way to\nconvince investors is to make a startup\nthat's actually doing well, meaning growing fast, and then simply\ntell investors so. Then they want to know what the tricks are for\ngrowing fast. And we have to tell them the best way to do that is\nsimply to make something people want.So many of the conversations YC partners have with young founders\nbegin with the founder asking \"How do we...\" and the partner replying\n\"Just...\"Why do the founders always make things so complicated? The reason,\nI realized, is that they're looking for the trick.So this is the third counterintuitive thing to remember about\nstartups: starting a startup is where gaming the system stops\nworking. Gaming the system may continue to work if you go to work\nfor a big company. Depending on how broken the company is, you can\nsucceed by sucking up to the right people, giving the impression\nof productivity, and so on. \n[2]\nBut that doesn't work with startups.\nThere is no boss to trick, only users, and all users care about is\nwhether your product does what they want. Startups are as impersonal\nas physics. You have to make something people want, and you prosper\nonly to the extent you do.The dangerous thing is, faking does work to some degree on investors.\nIf you're super good at sounding like you know what you're talking\nabout, you can fool investors for at least one and perhaps even two\nrounds of funding. But it's not in your interest to. The company\nis ultimately doomed. All you're doing is wasting your own time\nriding it down.So stop looking for the trick. There are tricks in startups, as\nthere are in any domain, but they are an order of magnitude less\nimportant than solving the real problem. A founder who knows nothing\nabout fundraising but has made something users love will have an\neasier time raising money than one who knows every trick in the\nbook but has a flat usage graph. And more importantly, the founder\nwho has made something users love is the one who will go on to\nsucceed after raising the money.Though in a sense it's bad news in that you're deprived of one of\nyour most powerful weapons, I think it's exciting that gaming the\nsystem stops working when you start a startup. It's exciting that\nthere even exist parts of the world where you win by doing good\nwork. Imagine how depressing the world would be if it were all\nlike school and big companies, where you either have to spend a lot\nof time on bullshit things or lose to people who do.\n[3]\nI would\nhave been delighted if I'd realized in college that there were parts\nof the real world where gaming the system mattered less than others,\nand a few where it hardly mattered at all. But there are, and this\nvariation is one of the most important things to consider when\nyou're thinking about your future. How do you win in each type of\nwork, and what would you like to win by doing?\n[4]\nAll-ConsumingThat brings us to our fourth counterintuitive point: startups are\nall-consuming. If you start a startup, it will take over your life\nto a degree you cannot imagine. 
And if your startup succeeds, it\nwill take over your life for a long time: for several years at the\nvery least, maybe for a decade, maybe for the rest of your working\nlife. So there is a real opportunity cost here.Larry Page may seem to have an enviable life, but there are aspects\nof it that are unenviable. Basically at 25 he started running as\nfast as he could and it must seem to him that he hasn't stopped to\ncatch his breath since. Every day new shit happens in the Google\nempire that only the CEO can deal with, and he, as CEO, has to deal\nwith it. If he goes on vacation for even a week, a whole week's\nbacklog of shit accumulates. And he has to bear this uncomplainingly,\npartly because as the company's daddy he can never show fear or\nweakness, and partly because billionaires get less than zero sympathy\nif they talk about having difficult lives. Which has the strange\nside effect that the difficulty of being a successful startup founder\nis concealed from almost everyone except those who've done it.Y Combinator has now funded several companies that can be called\nbig successes, and in every single case the founders say the same\nthing. It never gets any easier. The nature of the problems change.\nYou're worrying about construction delays at your London office\ninstead of the broken air conditioner in your studio apartment.\nBut the total volume of worry never decreases; if anything it\nincreases.Starting a successful startup is similar to having kids in that\nit's like a button you push that changes your life irrevocably.\nAnd while it's truly wonderful having kids, there are a lot of\nthings that are easier to do before you have them than after. Many\nof which will make you a better parent when you do have kids. And\nsince you can delay pushing the button for a while, most people in\nrich countries do.Yet when it comes to startups, a lot of people seem to think they're\nsupposed to start them while they're still in college. Are you\ncrazy? And what are the universities thinking? They go out of\ntheir way to ensure their students are well supplied with contraceptives,\nand yet they're setting up entrepreneurship programs and startup\nincubators left and right.To be fair, the universities have their hand forced here. A lot\nof incoming students are interested in startups. Universities are,\nat least de facto, expected to prepare them for their careers. So\nstudents who want to start startups hope universities can teach\nthem about startups. And whether universities can do this or not,\nthere's some pressure to claim they can, lest they lose applicants\nto other universities that do.Can universities teach students about startups? Yes and no. They\ncan teach students about startups, but as I explained before, this\nis not what you need to know. What you need to learn about are the\nneeds of your own users, and you can't do that until you actually\nstart the company.\n[5]\nSo starting a startup is intrinsically\nsomething you can only really learn by doing it. And it's impossible\nto do that in college, for the reason I just explained: startups\ntake over your life. You can't start a startup for real as a\nstudent, because if you start a startup for real you're not a student\nanymore. You may be nominally a student for a bit, but you won't even\nbe that for long.\n[6]Given this dichotomy, which of the two paths should you take? Be\na real student and not start a startup, or start a real startup and\nnot be a student? I can answer that one for you. 
Do not start a\nstartup in college. How to start a startup is just a subset of a\nbigger problem you're trying to solve: how to have a good life.\nAnd though starting a startup can be part of a good life for a lot\nof ambitious people, age 20 is not the optimal time to do it.\nStarting a startup is like a brutally fast depth-first search. Most\npeople should still be searching breadth-first at 20.You can do things in your early 20s that you can't do as well before\nor after, like plunge deeply into projects on a whim and travel\nsuper cheaply with no sense of a deadline. For unambitious people,\nthis sort of thing is the dreaded \"failure to launch,\" but for the\nambitious ones it can be an incomparably valuable sort of exploration.\nIf you start a startup at 20 and you're sufficiently successful,\nyou'll never get to do it.\n[7]Mark Zuckerberg will never get to bum around a foreign country. He\ncan do other things most people can't, like charter jets to fly him\nto foreign countries. But success has taken a lot of the serendipity\nout of his life. Facebook is running him as much as he's running\nFacebook. And while it can be very cool to be in the grip of a\nproject you consider your life's work, there are advantages to\nserendipity too, especially early in life. Among other things it\ngives you more options to choose your life's work from.There's not even a tradeoff here. You're not sacrificing anything\nif you forgo starting a startup at 20, because you're more likely\nto succeed if you wait. In the unlikely case that you're 20 and\none of your side projects takes off like Facebook did, you'll face\na choice of running with it or not, and it may be reasonable to run\nwith it. But the usual way startups take off is for the founders\nto make them take off, and it's gratuitously\nstupid to do that at 20.\nTryShould you do it at any age? I realize I've made startups sound\npretty hard. If I haven't, let me try again: starting a startup\nis really hard. What if it's too hard? How can you tell if you're\nup to this challenge?The answer is the fifth counterintuitive point: you can't tell. Your\nlife so far may have given you some idea what your prospects might\nbe if you tried to become a mathematician, or a professional football\nplayer. But unless you've had a very strange life you haven't done\nmuch that was like being a startup founder.\nStarting a startup will change you a lot. So what you're trying\nto estimate is not just what you are, but what you could grow into,\nand who can do that?For the past 9 years it was my job to predict whether people would\nhave what it took to start successful startups. It was easy to\ntell how smart they were, and most people reading this will be over\nthat threshold. The hard part was predicting how tough and ambitious they would become. There\nmay be no one who has more experience at trying to predict that,\nso I can tell you how much an expert can know about it, and the\nanswer is: not much. I learned to keep a completely open mind about\nwhich of the startups in each batch would turn out to be the stars.The founders sometimes think they know. Some arrive feeling sure\nthey will ace Y Combinator just as they've aced every one of the (few,\nartificial, easy) tests they've faced in life so far. Others arrive\nwondering how they got in, and hoping YC doesn't discover whatever\nmistake caused it to accept them. 
But there is little correlation\nbetween founders' initial attitudes and how well their companies\ndo.I've read that the same is true in the military \u2014 that the\nswaggering recruits are no more likely to turn out to be really\ntough than the quiet ones. And probably for the same reason: that\nthe tests involved are so different from the ones in their previous\nlives.If you're absolutely terrified of starting a startup, you probably\nshouldn't do it. But if you're merely unsure whether you're up to\nit, the only way to find out is to try. Just not now.\nIdeasSo if you want to start a startup one day, what should you do in\ncollege? There are only two things you need initially: an idea and\ncofounders. And the m.o. for getting both is the same. Which leads\nto our sixth and last counterintuitive point: that the way to get\nstartup ideas is not to try to think of startup ideas.I've written a whole essay on this,\nso I won't repeat it all here. But the short version is that if\nyou make a conscious effort to think of startup ideas, the ideas\nyou come up with will not merely be bad, but bad and plausible-sounding,\nmeaning you'll waste a lot of time on them before realizing they're\nbad.The way to come up with good startup ideas is to take a step back.\nInstead of making a conscious effort to think of startup ideas,\nturn your mind into the type that startup ideas form in without any\nconscious effort. In fact, so unconsciously that you don't even\nrealize at first that they're startup ideas.This is not only possible, it's how Apple, Yahoo, Google, and\nFacebook all got started. None of these companies were even meant\nto be companies at first. They were all just side projects. The\nbest startups almost have to start as side projects, because great\nideas tend to be such outliers that your conscious mind would reject\nthem as ideas for companies.Ok, so how do you turn your mind into the type that startup ideas\nform in unconsciously? (1) Learn a lot about things that matter,\nthen (2) work on problems that interest you (3) with people you\nlike and respect. The third part, incidentally, is how you get\ncofounders at the same time as the idea.The first time I wrote that paragraph, instead of \"learn a lot about\nthings that matter,\" I wrote \"become good at some technology.\" But\nthat prescription, though sufficient, is too narrow. What was\nspecial about Brian Chesky and Joe Gebbia was not that they were\nexperts in technology. They were good at design, and perhaps even\nmore importantly, they were good at organizing groups and making\nprojects happen. So you don't have to work on technology per se,\nso long as you work on problems demanding enough to stretch you.What kind of problems are those? That is very hard to answer in\nthe general case. History is full of examples of young people who\nwere working on important problems that no\none else at the time thought were important, and in particular\nthat their parents didn't think were important. On the other hand,\nhistory is even fuller of examples of parents who thought their\nkids were wasting their time and who were right. So how do you\nknow when you're working on real stuff?\n[8]I know how I know. 
Real problems are interesting, and I am\nself-indulgent in the sense that I always want to work on interesting\nthings, even if no one else cares about them (in fact, especially\nif no one else cares about them), and find it very hard to make\nmyself work on boring things, even if they're supposed to be\nimportant.My life is full of case after case where I worked on something just\nbecause it seemed interesting, and it turned out later to be useful\nin some worldly way. Y\nCombinator itself was something I only did because it seemed\ninteresting. So I seem to have some sort of internal compass that\nhelps me out. But I don't know what other people have in their\nheads. Maybe if I think more about this I can come up with heuristics\nfor recognizing genuinely interesting problems, but for the moment\nthe best I can offer is the hopelessly question-begging advice that\nif you have a taste for genuinely interesting problems, indulging\nit energetically is the best way to prepare yourself for a startup.\nAnd indeed, probably also the best way to live.\n[9]But although I can't explain in the general case what counts as an\ninteresting problem, I can tell you about a large subset of them.\nIf you think of technology as something that's spreading like a\nsort of fractal stain, every moving point on the edge represents\nan interesting problem. So one guaranteed way to turn your mind\ninto the type that has good startup ideas is to get yourself to the\nleading edge of some technology \u2014 to cause yourself, as Paul\nBuchheit put it, to \"live in the future.\" When you reach that point,\nideas that will seem to other people uncannily prescient will seem\nobvious to you. You may not realize they're startup ideas, but\nyou'll know they're something that ought to exist.For example, back at Harvard in the mid 90s a fellow grad student\nof my friends Robert and Trevor wrote his own voice over IP software.\nHe didn't mean it to be a startup, and he never tried to turn it\ninto one. He just wanted to talk to his girlfriend in Taiwan without\npaying for long distance calls, and since he was an expert on\nnetworks it seemed obvious to him that the way to do it was turn\nthe sound into packets and ship it over the Internet. He never did\nany more with his software than talk to his girlfriend, but this\nis exactly the way the best startups get started.So strangely enough the optimal thing to do in college if you want\nto be a successful startup founder is not some sort of new, vocational\nversion of college focused on \"entrepreneurship.\" It's the classic\nversion of college as education for its own sake. If you want to\nstart a startup after college, what you should do in college is\nlearn powerful things. And if you have genuine intellectual\ncuriosity, that's what you'll naturally tend to do if you just\nfollow your own inclinations.\n[10]The component of entrepreneurship that really matters is domain\nexpertise. The way to become Larry Page was to become an expert\non search. And the way to become an expert on search was to be\ndriven by genuine curiosity, not some ulterior motive.At its best, starting a startup is merely an ulterior motive for\ncuriosity. And you'll do it best if you introduce the ulterior\nmotive toward the end of the process.So here is the ultimate advice for young would-be startup founders,\nboiled down to two words: just learn.\nNotes[1]\nSome founders listen more than others, and this tends to be a\npredictor of success. 
One of the things I\nremember about the Airbnbs during YC is how intently they listened.[2]\nIn fact, this is one of the reasons startups are possible. If\nbig companies weren't plagued by internal inefficiencies, they'd\nbe proportionately more effective, leaving less room for startups.[3]\nIn a startup you have to spend a lot of time on schleps, but this sort of work is merely\nunglamorous, not bogus.[4]\nWhat should you do if your true calling is gaming the system?\nManagement consulting.[5]\nThe company may not be incorporated, but if you start to get\nsignificant numbers of users, you've started it, whether you realize\nit yet or not.[6]\nIt shouldn't be that surprising that colleges can't teach\nstudents how to be good startup founders, because they can't teach\nthem how to be good employees either.The way universities \"teach\" students how to be employees is to\nhand off the task to companies via internship programs. But you\ncouldn't do the equivalent thing for startups, because by definition\nif the students did well they would never come back.[7]\nCharles Darwin was 22 when he received an invitation to travel\naboard the HMS Beagle as a naturalist. It was only because he was\notherwise unoccupied, to a degree that alarmed his family, that he\ncould accept it. And yet if he hadn't we probably would not know\nhis name.[8]\nParents can sometimes be especially conservative in this\ndepartment. There are some whose definition of important problems\nincludes only those on the critical path to med school.[9]\nI did manage to think of a heuristic for detecting whether you\nhave a taste for interesting ideas: whether you find known boring\nideas intolerable. Could you endure studying literary theory, or\nworking in middle management at a large company?[10]\nIn fact, if your goal is to start a startup, you can stick\neven more closely to the ideal of a liberal education than past\ngenerations have. Back when students focused mainly on getting a\njob after college, they thought at least a little about how the\ncourses they took might look to an employer. And perhaps even\nworse, they might shy away from taking a difficult class lest they\nget a low grade, which would harm their all-important GPA. Good\nnews: users don't care what your GPA\nwas. And I've never heard of investors caring either. Y Combinator\ncertainly never asks what classes you took in college or what grades\nyou got in them.\nThanks to Sam Altman, Paul Buchheit, John Collison, Patrick\nCollison, Jessica Livingston, Robert Morris, Geoff Ralston, and\nFred Wilson for reading drafts of this."} {"title": "bias", "text": "October 2015This will come as a surprise to a lot of people, but in some cases\nit's possible to detect bias in a selection process without knowing\nanything about the applicant pool. Which is exciting because among\nother things it means third parties can use this technique to detect\nbias whether those doing the selecting want them to or not.You can use this technique whenever (a) you have at least\na random sample of the applicants that were selected, (b) their\nsubsequent performance is measured, and (c) the groups of\napplicants you're comparing have roughly equal distribution of ability.How does it work? Think about what it means to be biased. What\nit means for a selection process to be biased against applicants\nof type x is that it's harder for them to make it through. 
Which\nmeans applicants of type x have to be better to get selected than\napplicants not of type x.\n[1]\nWhich means applicants of type x\nwho do make it through the selection process will outperform other\nsuccessful applicants. And if the performance of all the successful\napplicants is measured, you'll know if they do.Of course, the test you use to measure performance must be a valid\none. And in particular it must not be invalidated by the bias you're\ntrying to measure.\nBut there are some domains where performance can be measured, and\nin those detecting bias is straightforward. Want to know if the\nselection process was biased against some type of applicant? Check\nwhether they outperform the others. This is not just a heuristic\nfor detecting bias. It's what bias means.For example, many suspect that venture capital firms are biased\nagainst female founders. This would be easy to detect: among their\nportfolio companies, do startups with female founders outperform\nthose without? A couple months ago, one VC firm (almost certainly\nunintentionally) published a study showing bias of this type. First\nRound Capital found that among its portfolio companies, startups\nwith female founders outperformed\nthose without by 63%. \n[2]The reason I began by saying that this technique would come as a\nsurprise to many people is that we so rarely see analyses of this\ntype. I'm sure it will come as a surprise to First Round that they\nperformed one. I doubt anyone there realized that by limiting their\nsample to their own portfolio, they were producing a study not of\nstartup trends but of their own biases when selecting companies.I predict we'll see this technique used more in the future. The\ninformation needed to conduct such studies is increasingly available.\nData about who applies for things is usually closely guarded by the\norganizations selecting them, but nowadays data about who gets\nselected is often publicly available to anyone who takes the trouble\nto aggregate it.\nNotes[1]\nThis technique wouldn't work if the selection process looked\nfor different things from different types of applicants\u2014for\nexample, if an employer hired men based on their ability but women\nbased on their appearance.[2]\nAs Paul Buchheit points out, First Round excluded their most \nsuccessful investment, Uber, from the study. And while it \nmakes sense to exclude outliers from some types of studies, \nstudies of returns from startup investing, which is all about \nhitting outliers, are not one of them.\nThanks to Sam Altman, Jessica Livingston, and Geoff Ralston for reading\ndrafts of this."}
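The check described in the essay above is mechanical enough to sketch in a few lines of code. Here is a minimal sketch in Python, on invented numbers: the applicants, the type labels x and y, the performance scores, and the 10,000-trial count are all hypothetical, and the sketch assumes the essay's three conditions hold (a random sample of selectees, a valid performance measure, and groups of roughly equal ability). It measures how much the selected type-x applicants outperform the other selected applicants, then shuffles the labels to estimate how often a gap that large would arise by chance. The permutation test is an addition for illustration, not something the essay prescribes.

    # Sketch of the bias check described above, on made-up data.
    # Each pair is (applicant_type, measured_performance) for an
    # applicant who made it through the selection process.
    import random

    selected = [
        ("x", 1.41), ("x", 1.25), ("x", 0.98), ("x", 1.63),
        ("y", 1.02), ("y", 0.77), ("y", 1.18), ("y", 0.95),
        ("y", 0.88), ("y", 1.01),
    ]

    def mean(values):
        return sum(values) / len(values)

    def gap(sample, group):
        """Mean performance of `group` minus mean of everyone else."""
        ours = [p for t, p in sample if t == group]
        rest = [p for t, p in sample if t != group]
        return mean(ours) - mean(rest)

    observed = gap(selected, "x")

    # Permutation test: shuffling the type labels simulates a world
    # where type is unrelated to performance; count how often the
    # shuffled gap matches or beats the observed one.
    labels = [t for t, _ in selected]
    scores = [p for _, p in selected]
    trials, hits = 10000, 0
    for _ in range(trials):
        random.shuffle(labels)
        if gap(list(zip(labels, scores)), "x") >= observed:
            hits += 1

    print(f"type x outperforms the rest by {observed:.2f}")
    print(f"chance of a gap this large under no bias: {hits / trials:.3f}")

A large positive gap that rarely appears under shuffling is exactly the signature the essay describes: the selected type-x applicants had to clear a higher bar to get in. The shuffling approach is used here because it needs no assumptions about how performance is distributed.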
{"title": "copy", "text": "July 2006\nWhen I was in high school I spent a lot of time imitating bad\nwriters. What we studied in English classes was mostly fiction,\nso I assumed that was the highest form of writing. Mistake number\none. The stories that seemed to be most admired were ones in which\npeople suffered in complicated ways. Anything funny or\ngripping was ipso facto suspect, unless it was old enough to be hard to\nunderstand, like Shakespeare or Chaucer. Mistake number two. The\nideal medium seemed the short story, which I've since learned had\nquite a brief life, roughly coincident with the peak of magazine\npublishing. But since their size made them perfect for use in\nhigh school classes, we read a lot of them, which gave us the\nimpression the short story was flourishing.
Mistake number three.\nAnd because they were so short, nothing really had to happen; you\ncould just show a randomly truncated slice of life, and that was\nconsidered advanced. Mistake number four. The result was that I\nwrote a lot of stories in which nothing happened except that someone\nwas unhappy in a way that seemed deep.For most of college I was a philosophy major. I was very impressed\nby the papers published in philosophy journals. They were so\nbeautifully typeset, and their tone was just captivating\u2014alternately\ncasual and buffer-overflowingly technical. A fellow would be walking\nalong a street and suddenly modality qua modality would spring upon\nhim. I didn't ever quite understand these papers, but I figured\nI'd get around to that later, when I had time to reread them more\nclosely. In the meantime I tried my best to imitate them. This\nwas, I can now see, a doomed undertaking, because they weren't\nreally saying anything. No philosopher ever refuted another, for\nexample, because no one said anything definite enough to refute.\nNeedless to say, my imitations didn't say anything either.In grad school I was still wasting time imitating the wrong things.\nThere was then a fashionable type of program called an expert system,\nat the core of which was something called an inference engine. I\nlooked at what these things did and thought \"I could write that in\na thousand lines of code.\" And yet eminent professors were writing\nbooks about them, and startups were selling them for a year's salary\na copy. What an opportunity, I thought; these impressive things\nseem easy to me; I must be pretty sharp. Wrong. It was simply a\nfad. The books the professors wrote about expert systems are now\nignored. They were not even on a path to anything interesting.\nAnd the customers paying so much for them were largely the same\ngovernment agencies that paid thousands for screwdrivers and toilet\nseats.How do you avoid copying the wrong things? Copy only what you\ngenuinely like. That would have saved me in all three cases. I\ndidn't enjoy the short stories we had to read in English classes;\nI didn't learn anything from philosophy papers; I didn't use expert\nsystems myself. I believed these things were good because they\nwere admired.It can be hard to separate the things you like from the things\nyou're impressed with. One trick is to ignore presentation. Whenever\nI see a painting impressively hung in a museum, I ask myself: how\nmuch would I pay for this if I found it at a garage sale, dirty and\nframeless, and with no idea who painted it? If you walk around a\nmuseum trying this experiment, you'll find you get some truly\nstartling results. Don't ignore this data point just because it's\nan outlier.Another way to figure out what you like is to look at what you enjoy\nas guilty pleasures. Many things people like, especially if they're\nyoung and ambitious, they like largely for the feeling of virtue\nin liking them. 99% of people reading Ulysses are thinking\n\"I'm reading Ulysses\" as they do it. A guilty pleasure is\nat least a pure one. What do you read when you don't feel up to being\nvirtuous? What kind of book do you read and feel sad that there's\nonly half of it left, instead of being impressed that you're halfway\nthrough? That's what you really like.Even when you find genuinely good things to copy, there's another\npitfall to be avoided. Be careful to copy what makes them good,\nrather than their flaws.
It's easy to be drawn into imitating\nflaws, because they're easier to see, and of course easier to copy\ntoo. For example, most painters in the eighteenth and nineteenth\ncenturies used brownish colors. They were imitating the great\npainters of the Renaissance, whose paintings by that time were brown\nwith dirt. Those paintings have since been cleaned, revealing\nbrilliant colors; their imitators are of course still brown.It was painting, incidentally, that cured me of copying the wrong\nthings. Halfway through grad school I decided I wanted to try being\na painter, and the art world was so manifestly corrupt that it\nsnapped the leash of credulity. These people made philosophy\nprofessors seem as scrupulous as mathematicians. It was so clearly\na choice of doing good work xor being an insider that I was forced\nto see the distinction. It's there to some degree in almost every\nfield, but I had till then managed to avoid facing it.That was one of the most valuable things I learned from painting:\nyou have to figure out for yourself what's \ngood. You can't trust\nauthorities. They'll lie to you on this one."} {"title": "ecw", "text": "December 2014If the world were static, we could have monotonically increasing\nconfidence in our beliefs. The more (and more varied) experience\na belief survived, the less likely it would be false. Most people\nimplicitly believe something like this about their opinions. And\nthey're justified in doing so with opinions about things that don't\nchange much, like human nature. But you can't trust your opinions\nin the same way about things that change, which could include\npractically everything else.When experts are wrong, it's often because they're experts on an\nearlier version of the world.Is it possible to avoid that? Can you protect yourself against\nobsolete beliefs? To some extent, yes. I spent almost a decade\ninvesting in early stage startups, and curiously enough protecting\nyourself against obsolete beliefs is exactly what you have to do\nto succeed as a startup investor. Most really good startup ideas\nlook like bad ideas at first, and many of those look bad specifically\nbecause some change in the world just switched them from bad to\ngood. I spent a lot of time learning to recognize such ideas, and\nthe techniques I used may be applicable to ideas in general.The first step is to have an explicit belief in change. People who\nfall victim to a monotonically increasing confidence in their\nopinions are implicitly concluding the world is static. If you\nconsciously remind yourself it isn't, you start to look for change.Where should one look for it? Beyond the moderately useful\ngeneralization that human nature doesn't change much, the unfortunate\nfact is that change is hard to predict. This is largely a tautology\nbut worth remembering all the same: change that matters usually\ncomes from an unforeseen quarter.So I don't even try to predict it. When I get asked in interviews\nto predict the future, I always have to struggle to come up with\nsomething plausible-sounding on the fly, like a student who hasn't\nprepared for an exam.\n[1]\nBut it's not out of laziness that I haven't\nprepared. It seems to me that beliefs about the future are so\nrarely correct that they usually aren't worth the extra rigidity\nthey impose, and that the best strategy is simply to be aggressively\nopen-minded.
Instead of trying to point yourself in the right\ndirection, admit you have no idea what the right direction is, and\ntry instead to be super sensitive to the winds of change.It's ok to have working hypotheses, even though they may constrain\nyou a bit, because they also motivate you. It's exciting to chase\nthings and exciting to try to guess answers. But you have to be\ndisciplined about not letting your hypotheses harden into anything\nmore.\n[2]I believe this passive m.o. works not just for evaluating new ideas\nbut also for having them. The way to come up with new ideas is not\nto try explicitly to, but to try to solve problems and simply not\ndiscount weird hunches you have in the process.The winds of change originate in the unconscious minds of domain\nexperts. If you're sufficiently expert in a field, any weird idea\nor apparently irrelevant question that occurs to you is ipso facto\nworth exploring. \n[3]\n Within Y Combinator, when an idea is described\nas crazy, it's a compliment\u2014in fact, on average probably a\nhigher compliment than when an idea is described as good.Startup investors have extraordinary incentives for correcting\nobsolete beliefs. If they can realize before other investors that\nsome apparently unpromising startup isn't, they can make a huge\namount of money. But the incentives are more than just financial.\nInvestors' opinions are explicitly tested: startups come to them\nand they have to say yes or no, and then, fairly quickly, they learn\nwhether they guessed right. The investors who say no to a Google\n(and there were several) will remember it for the rest of their\nlives.Anyone who must in some sense bet on ideas rather than merely\ncommenting on them has similar incentives. Which means anyone who\nwants such incentives can have them, by turning their comments into\nbets: if you write about a topic in some fairly durable and public\nform, you'll find you worry much more about getting things right\nthan most people would in a casual conversation.\n[4]Another trick I've found to protect myself against obsolete beliefs\nis to focus initially on people rather than ideas. Though the nature\nof future discoveries is hard to predict, I've found I can predict\nquite well what sort of people will make them. Good new ideas come\nfrom earnest, energetic, independent-minded people.Betting on people over ideas saved me countless times as an investor.\nWe thought Airbnb was a bad idea, for example. But we could tell\nthe founders were earnest, energetic, and independent-minded.\n(Indeed, almost pathologically so.) So we suspended disbelief and\nfunded them.This too seems a technique that should be generally applicable.\nSurround yourself with the sort of people new ideas come from. If\nyou want to notice quickly when your beliefs become obsolete, you\ncan't do better than to be friends with the people whose discoveries\nwill make them so.It's hard enough already not to become the prisoner of your own\nexpertise, but it will only get harder, because change is accelerating.\nThat's not a recent trend; change has been accelerating since the\npaleolithic era. Ideas beget ideas. I don't expect that to change.\nBut I could be wrong.\nNotes[1]\nMy usual trick is to talk about aspects of the present that\nmost people haven't noticed yet.[2]\nEspecially if they become well enough known that people start\nto identify them with you. 
You have to be extra skeptical about\nthings you want to believe, and once a hypothesis starts to be\nidentified with you, it will almost certainly start to be in that\ncategory.[3]\nIn practice \"sufficiently expert\" doesn't require one to be\nrecognized as an expert\u2014which is a trailing indicator in any\ncase. In many fields a year of focused work plus caring a lot would\nbe enough.[4]\nThough they are public and persist indefinitely, comments on\ne.g. forums and places like Twitter seem empirically to work like\ncasual conversation. The threshold may be whether what you write\nhas a title.\nThanks to Sam Altman, Patrick Collison, and Robert Morris\nfor reading drafts of this."} {"title": "foundervisa", "text": "\n\nApril 2009I usually avoid politics, but since we now seem to have an administration that's open to suggestions, I'm going to risk making one. The single biggest thing the government could do to increase the number of startups in this country is a policy that would cost nothing: establish a new class of visa for startup founders.The biggest constraint on the number of new startups that get created in the US is not tax policy or employment law or even Sarbanes-Oxley. It's that we won't let the people who want to start them into the country.Letting just 10,000 startup founders into the country each year could have a visible effect on the economy. If we assume 4 people per startup, which is probably an overestimate, that's 2500 new companies. Each year. They wouldn't all grow as big as Google, but out of 2500 some would come close.By definition these 10,000 founders wouldn't be taking jobs from Americans: it could be part of the terms of the visa that they couldn't work for existing companies, only new ones they'd founded. In fact they'd cause there to be \nmore jobs for Americans, because the companies they started would hire more employees as they grew.The tricky part might seem to be how one defined a startup. But that could be solved quite easily: let the market decide. Startup investors work hard to find the best startups. The government could not do better than to piggyback on their expertise, and use investment by recognized startup investors as the test of whether a company was a real startup.How would the government decide who's a startup investor? The same way they decide what counts as a university for student visas. We'll establish our own accreditation procedure. We know who one another are.10,000 people is a drop in the bucket by immigration standards, but would represent a huge increase in the pool of startup founders. I think this would have such a visible effect on the economy that it would make the legislator who introduced the bill famous. The only way to know for sure would be to try it, and that would cost practically nothing.\nThanks to Trevor Blackwell, Paul Buchheit, Jeff Clavier, David Hornik, Jessica Livingston, Greg Mcadoo, Aydin Senkut, and Fred Wilson for reading drafts of this."} {"title": "gap", "text": "May 2004When people care enough about something to do it well, those who\ndo it best tend to be far better than everyone else. There's a\nhuge gap between Leonardo and second-rate contemporaries like\nBorgognone. You see the same gap between Raymond Chandler and the\naverage writer of detective novels. A top-ranked professional chess\nplayer could play ten thousand games against an ordinary club player\nwithout losing once.Like chess or painting or writing novels, making money is a very\nspecialized skill.
But for some reason we treat this skill\ndifferently. No one complains when a few people surpass all the\nrest at playing chess or writing novels, but when a few people make\nmore money than the rest, we get editorials saying this is wrong.Why? The pattern of variation seems no different than for any other\nskill. What causes people to react so strongly when the skill is\nmaking money?I think there are three reasons we treat making money as different:\nthe misleading model of wealth we learn as children; the disreputable\nway in which, till recently, most fortunes were accumulated; and\nthe worry that great variations in income are somehow bad for\nsociety. As far as I can tell, the first is mistaken, the second\noutdated, and the third empirically false. Could it be that, in a\nmodern democracy, variation in income is actually a sign of health?The Daddy Model of WealthWhen I was five I thought electricity was created by electric\nsockets. I didn't realize there were power plants out there\ngenerating it. Likewise, it doesn't occur to most kids that wealth\nis something that has to be generated. It seems to be something\nthat flows from parents.Because of the circumstances in which they encounter it, children\ntend to misunderstand wealth. They confuse it with money. They\nthink that there is a fixed amount of it. And they think of it as\nsomething that's distributed by authorities (and so should be\ndistributed equally), rather than something that has to be created\n(and might be created unequally).In fact, wealth is not money. Money is just a convenient way of\ntrading one form of wealth for another. Wealth is the underlying\nstuff\u2014the goods and services we buy. When you travel to a\nrich or poor country, you don't have to look at people's bank\naccounts to tell which kind you're in. You can see\nwealth\u2014in buildings and streets, in the clothes and the health\nof the people.Where does wealth come from? People make it. This was easier to\ngrasp when most people lived on farms, and made many of the things\nthey wanted with their own hands. Then you could see in the house,\nthe herds, and the granary the wealth that each family created. It\nwas obvious then too that the wealth of the world was not a fixed\nquantity that had to be shared out, like slices of a pie. If you\nwanted more wealth, you could make it.This is just as true today, though few of us create wealth directly\nfor ourselves (except for a few vestigial domestic tasks). Mostly\nwe create wealth for other people in exchange for money, which we\nthen trade for the forms of wealth we want. \n[1]Because kids are unable to create wealth, whatever they have has\nto be given to them. And when wealth is something you're given,\nthen of course it seems that it should be distributed equally.\n[2]\nAs in most families it is. The kids see to that. \"Unfair,\" they\ncry, when one sibling gets more than another.In the real world, you can't keep living off your parents. If you\nwant something, you either have to make it, or do something of\nequivalent value for someone else, in order to get them to give you\nenough money to buy it. In the real world, wealth is (except for\na few specialists like thieves and speculators) something you have\nto create, not something that's distributed by Daddy. And since\nthe ability and desire to create it vary from person to person,\nit's not made equally.You get paid by doing or making something people want, and those\nwho make more money are often simply better at doing what people\nwant. 
Top actors make a lot more money than B-list actors. The\nB-list actors might be almost as charismatic, but when people go\nto the theater and look at the list of movies playing, they want\nthat extra oomph that the big stars have.Doing what people want is not the only way to get money, of course.\nYou could also rob banks, or solicit bribes, or establish a monopoly.\nSuch tricks account for some variation in wealth, and indeed for\nsome of the biggest individual fortunes, but they are not the root\ncause of variation in income. The root cause of variation in income,\nas Occam's Razor implies, is the same as the root cause of variation\nin every other human skill.In the United States, the CEO of a large public company makes about\n100 times as much as the average person. \n[3]\nBasketball players\nmake about 128 times as much, and baseball players 72 times as much.\nEditorials quote this kind of statistic with horror. But I have\nno trouble imagining that one person could be 100 times as productive\nas another. In ancient Rome the price of slaves varied by\na factor of 50 depending on their skills. \n[4]\nAnd that's without\nconsidering motivation, or the extra leverage in productivity that\nyou can get from modern technology.Editorials about athletes' or CEOs' salaries remind me of early\nChristian writers, arguing from first principles about whether the\nEarth was round, when they could just walk outside and check.\n[5]\nHow much someone's work is worth is not a policy question. It's\nsomething the market already determines.\"Are they really worth 100 of us?\" editorialists ask. Depends on\nwhat you mean by worth. If you mean worth in the sense of what\npeople will pay for their skills, the answer is yes, apparently.A few CEOs' incomes reflect some kind of wrongdoing. But are there\nnot others whose incomes really do reflect the wealth they generate?\nSteve Jobs saved a company that was in a terminal decline. And not\nmerely in the way a turnaround specialist does, by cutting costs;\nhe had to decide what Apple's next products should be. Few others\ncould have done it. And regardless of the case with CEOs, it's\nhard to see how anyone could argue that the salaries of professional\nbasketball players don't reflect supply and demand.It may seem unlikely in principle that one individual could really\ngenerate so much more wealth than another. The key to this mystery\nis to revisit that question, are they really worth 100 of us?\nWould a basketball team trade one of their players for 100\nrandom people? What would Apple's next product look like if you\nreplaced Steve Jobs with a committee of 100 random people? \n[6]\nThese\nthings don't scale linearly. Perhaps the CEO or the professional\nathlete has only ten times (whatever that means) the skill and\ndetermination of an ordinary person. But it makes all the difference\nthat it's concentrated in one individual.When we say that one kind of work is overpaid and another underpaid,\nwhat are we really saying? In a free market, prices are determined\nby what buyers want. People like baseball more than poetry, so\nbaseball players make more than poets. To say that a certain kind\nof work is underpaid is thus identical with saying that people want\nthe wrong things.Well, of course people want the wrong things. It seems odd to be\nsurprised by that. And it seems even odder to say that it's\nunjust that certain kinds of work are underpaid. 
\n[7]\nThen\nyou're saying that it's unjust that people want the wrong things.\nIt's lamentable that people prefer reality TV and corndogs to\nShakespeare and steamed vegetables, but unjust? That seems like\nsaying that blue is heavy, or that up is circular.The appearance of the word \"unjust\" here is the unmistakable spectral\nsignature of the Daddy Model. Why else would this idea occur in\nthis odd context? Whereas if the speaker were still operating on\nthe Daddy Model, and saw wealth as something that flowed from a\ncommon source and had to be shared out, rather than something\ngenerated by doing what other people wanted, this is exactly what\nyou'd get on noticing that some people made much more than others.When we talk about \"unequal distribution of income,\" we should\nalso ask, where does that income come from?\n[8]\nWho made the wealth\nit represents? Because to the extent that income varies simply\naccording to how much wealth people create, the distribution may\nbe unequal, but it's hardly unjust.Stealing ItThe second reason we tend to find great disparities of wealth\nalarming is that for most of human history the usual way to accumulate\na fortune was to steal it: in pastoral societies by cattle raiding;\nin agricultural societies by appropriating others' estates in times\nof war, and taxing them in times of peace.In conflicts, those on the winning side would receive the estates\nconfiscated from the losers. In England in the 1060s, when William\nthe Conqueror distributed the estates of the defeated Anglo-Saxon\nnobles to his followers, the conflict was military. By the 1530s,\nwhen Henry VIII distributed the estates of the monasteries to his\nfollowers, it was mostly political. \n[9]\nBut the principle was the\nsame. Indeed, the same principle is at work now in Zimbabwe.In more organized societies, like China, the ruler and his officials\nused taxation instead of confiscation. But here too we see the\nsame principle: the way to get rich was not to create wealth, but\nto serve a ruler powerful enough to appropriate it.This started to change in Europe with the rise of the middle class.\nNow we think of the middle class as people who are neither rich nor\npoor, but originally they were a distinct group. In a feudal\nsociety, there are just two classes: a warrior aristocracy, and the\nserfs who work their estates. The middle class were a new, third\ngroup who lived in towns and supported themselves by manufacturing\nand trade.Starting in the tenth and eleventh centuries, petty nobles and\nformer serfs banded together in towns that gradually became powerful\nenough to ignore the local feudal lords. \n[10]\nLike serfs, the middle\nclass made a living largely by creating wealth. (In port cities\nlike Genoa and Pisa, they also engaged in piracy.) But unlike serfs\nthey had an incentive to create a lot of it. Any wealth a serf\ncreated belonged to his master. There was not much point in making\nmore than you could hide. Whereas the independence of the townsmen\nallowed them to keep whatever wealth they created.Once it became possible to get rich by creating wealth, society as\na whole started to get richer very rapidly. Nearly everything we\nhave was created by the middle class. Indeed, the other two classes\nhave effectively disappeared in industrial societies, and their\nnames been given to either end of the middle class. 
(In the original\nsense of the word, Bill Gates is middle class.)But it was not till the Industrial Revolution that wealth creation\ndefinitively replaced corruption as the best way to get rich. In\nEngland, at least, corruption only became unfashionable (and in\nfact only started to be called \"corruption\") when there started to\nbe other, faster ways to get rich.Seventeenth-century England was much like the third world today,\nin that government office was a recognized route to wealth. The\ngreat fortunes of that time still derived more from what we would\nnow call corruption than from commerce. \n[11]\nBy the nineteenth\ncentury that had changed. There continued to be bribes, as there\nstill are everywhere, but politics had by then been left to men who\nwere driven more by vanity than greed. Technology had made it\npossible to create wealth faster than you could steal it. The\nprototypical rich man of the nineteenth century was not a courtier\nbut an industrialist.With the rise of the middle class, wealth stopped being a zero-sum\ngame. Jobs and Wozniak didn't have to make us poor to make themselves\nrich. Quite the opposite: they created things that made our lives\nmaterially richer. They had to, or we wouldn't have paid for them.But since for most of the world's history the main route to wealth\nwas to steal it, we tend to be suspicious of rich people. Idealistic\nundergraduates find their unconsciously preserved child's model of\nwealth confirmed by eminent writers of the past. It is a case of\nthe mistaken meeting the outdated.\"Behind every great fortune, there is a crime,\" Balzac wrote. Except\nhe didn't. What he actually said was that a great fortune with no\napparent cause was probably due to a crime well enough executed\nthat it had been forgotten. If we were talking about Europe in\n1000, or most of the third world today, the standard misquotation\nwould be spot on. But Balzac lived in nineteenth-century France,\nwhere the Industrial Revolution was well advanced. He knew you\ncould make a fortune without stealing it. After all, he did himself,\nas a popular novelist.\n[12]Only a few countries (by no coincidence, the richest ones) have\nreached this stage. In most, corruption still has the upper hand.\nIn most, the fastest way to get wealth is by stealing it. And so\nwhen we see increasing differences in income in a rich country,\nthere is a tendency to worry that it's sliding back toward becoming\nanother Venezuela. I think the opposite is happening. I think\nyou're seeing a country a full step ahead of Venezuela.The Lever of TechnologyWill technology increase the gap between rich and poor? It will\ncertainly increase the gap between the productive and the unproductive.\nThat's the whole point of technology. With a tractor an energetic\nfarmer could plow six times as much land in a day as he could with\na team of horses. But only if he mastered a new kind of farming.I've seen the lever of technology grow visibly in my own time. In\nhigh school I made money by mowing lawns and scooping ice cream at\nBaskin-Robbins. This was the only kind of work available at the\ntime. Now high school kids could write software or design web\nsites. But only some of them will; the rest will still be scooping\nice cream.I remember very vividly when in 1985 improved technology made it\npossible for me to buy a computer of my own. Within months I was\nusing it to make money as a freelance programmer. A few years\nbefore, I couldn't have done this. 
A few years before, there was\nno such thing as a freelance programmer. But Apple created\nwealth, in the form of powerful, inexpensive computers, and programmers\nimmediately set to work using it to create more.As this example suggests, the rate at which technology increases\nour productive capacity is probably exponential, rather than linear.\nSo we should expect to see ever-increasing variation in individual\nproductivity as time goes on. Will that increase the gap between\nrich and poor? Depends which gap you mean.Technology should increase the gap in income, but it seems to\ndecrease other gaps. A hundred years ago, the rich led a different\nkind of life from ordinary people. They lived in houses\nfull of servants, wore elaborately uncomfortable clothes, and\ntravelled about in carriages drawn by teams of horses which themselves\nrequired their own houses and servants. Now, thanks to technology,\nthe rich live more like the average person.Cars are a good example of why. It's possible to buy expensive,\nhandmade cars that cost hundreds of thousands of dollars. But there\nis not much point. Companies make more money by building a large\nnumber of ordinary cars than a small number of expensive ones. So\na company making a mass-produced car can afford to spend a lot more\non its design. If you buy a custom-made car, something will always\nbe breaking. The only point of buying one now is to advertise that\nyou can.Or consider watches. Fifty years ago, by spending a lot of money\non a watch you could get better performance. When watches had\nmechanical movements, expensive watches kept better time. Not any\nmore. Since the invention of the quartz movement, an ordinary Timex\nis more accurate than a Patek Philippe costing hundreds of thousands\nof dollars.\n[13]\nIndeed, as with expensive cars, if you're determined\nto spend a lot of money on a watch, you have to put up with some\ninconvenience to do it: as well as keeping worse time, mechanical\nwatches have to be wound.The only thing technology can't cheapen is brand. Which is precisely\nwhy we hear ever more about it. Brand is the residue left as the\nsubstantive differences between rich and poor evaporate. But what\nlabel you have on your stuff is a much smaller matter than having\nit versus not having it. In 1900, if you kept a carriage, no one\nasked what year or brand it was. If you had one, you were rich.\nAnd if you weren't rich, you took the omnibus or walked. Now even\nthe poorest Americans drive cars, and it is only because we're so\nwell trained by advertising that we can even recognize the especially\nexpensive ones.\n[14]The same pattern has played out in industry after industry. If\nthere is enough demand for something, technology will make it cheap\nenough to sell in large volumes, and the mass-produced versions\nwill be, if not better, at least more convenient.\n[15]\nAnd there\nis nothing the rich like more than convenience. The rich people I\nknow drive the same cars, wear the same clothes, have the same kind\nof furniture, and eat the same foods as my other friends. Their\nhouses are in different neighborhoods, or if in the same neighborhood\nare different sizes, but within them life is similar. The houses\nare made using the same construction techniques and contain much\nthe same objects. It's inconvenient to do something expensive and\ncustom.The rich spend their time more like everyone else too. Bertie\nWooster seems long gone. Now, most people who are rich enough not\nto work do anyway.
It's not just social pressure that makes them;\nidleness is lonely and demoralizing.Nor do we have the social distinctions there were a hundred years\nago. The novels and etiquette manuals of that period read now\nlike descriptions of some strange tribal society. \"With respect\nto the continuance of friendships...\" hints Mrs. Beeton's Book\nof Household Management (1880), \"it may be found necessary, in\nsome cases, for a mistress to relinquish, on assuming the responsibility\nof a household, many of those commenced in the earlier part of her\nlife.\" A woman who married a rich man was expected to drop friends\nwho didn't. You'd seem a barbarian if you behaved that way today.\nYou'd also have a very boring life. People still tend to segregate\nthemselves somewhat, but much more on the basis of education than\nwealth.\n[16]Materially and socially, technology seems to be decreasing the gap\nbetween the rich and the poor, not increasing it. If Lenin walked\naround the offices of a company like Yahoo or Intel or Cisco, he'd\nthink communism had won. Everyone would be wearing the same clothes,\nhave the same kind of office (or rather, cubicle) with the same\nfurnishings, and address one another by their first names instead\nof by honorifics. Everything would seem exactly as he'd predicted,\nuntil he looked at their bank accounts. Oops.Is it a problem if technology increases that gap? It doesn't seem\nto be so far. As it increases the gap in income, it seems to\ndecrease most other gaps.Alternative to an AxiomOne often hears a policy criticized on the grounds that it would\nincrease the income gap between rich and poor. As if it were an\naxiom that this would be bad. It might be true that increased\nvariation in income would be bad, but I don't see how we can say\nit's axiomatic.Indeed, it may even be false, in industrial democracies. In a\nsociety of serfs and warlords, certainly, variation in income is a\nsign of an underlying problem. But serfdom is not the only cause\nof variation in income. A 747 pilot doesn't make 40 times as much\nas a checkout clerk because he is a warlord who somehow holds her\nin thrall. His skills are simply much more valuable.I'd like to propose an alternative idea: that in a modern society,\nincreasing variation in income is a sign of health. Technology\nseems to increase the variation in productivity at faster than\nlinear rates. If we don't see corresponding variation in income,\nthere are three possible explanations: (a) that technical innovation\nhas stopped, (b) that the people who would create the most wealth\naren't doing it, or (c) that they aren't getting paid for it.I think we can safely say that (a) and (b) would be bad. If you\ndisagree, try living for a year using only the resources available\nto the average Frankish nobleman in 800, and report back to us.\n(I'll be generous and not send you back to the stone age.)The only option, if you're going to have an increasingly prosperous\nsociety without increasing variation in income, seems to be (c),\nthat people will create a lot of wealth without being paid for it.\nThat Jobs and Wozniak, for example, will cheerfully work 20-hour\ndays to produce the Apple computer for a society that allows them,\nafter taxes, to keep just enough of their income to match what they\nwould have made working 9 to 5 at a big company.Will people create wealth if they can't get paid for it? Only if\nit's fun. People will write operating systems for free. 
But they\nwon't install them, or take support calls, or train customers to\nuse them. And at least 90% of the work that even the highest tech\ncompanies do is of this second, unedifying kind.All the unfun kinds of wealth creation slow dramatically in a society\nthat confiscates private fortunes. We can confirm this empirically.\nSuppose you hear a strange noise that you think may be due to a\nnearby fan. You turn the fan off, and the noise stops. You turn\nthe fan back on, and the noise starts again. Off, quiet. On,\nnoise. In the absence of other information, it would seem the noise\nis caused by the fan.At various times and places in history, whether you could accumulate\na fortune by creating wealth has been turned on and off. Northern\nItaly in 800, off (warlords would steal it). Northern Italy in\n1100, on. Central France in 1100, off (still feudal). England in\n1800, on. England in 1974, off (98% tax on investment income).\nUnited States in 1974, on. We've even had a twin study: West\nGermany, on; East Germany, off. In every case, the creation of\nwealth seems to appear and disappear like the noise of a fan as you\nswitch on and off the prospect of keeping it.There is some momentum involved. It probably takes at least a\ngeneration to turn people into East Germans (luckily for England).\nBut if it were merely a fan we were studying, without all the extra\nbaggage that comes from the controversial topic of wealth, no one\nwould have any doubt that the fan was causing the noise.If you suppress variations in income, whether by stealing private\nfortunes, as feudal rulers used to do, or by taxing them away, as\nsome modern governments have done, the result always seems to be\nthe same. Society as a whole ends up poorer.If I had a choice of living in a society where I was materially\nmuch better off than I am now, but was among the poorest, or in one\nwhere I was the richest, but much worse off than I am now, I'd take\nthe first option. If I had children, it would arguably be immoral\nnot to. It's absolute poverty you want to avoid, not relative\npoverty. If, as the evidence so far implies, you have to have one\nor the other in your society, take relative poverty.You need rich people in your society not so much because in spending\ntheir money they create jobs, but because of what they have to do\nto get rich. I'm not talking about the trickle-down effect\nhere. I'm not saying that if you let Henry Ford get rich, he'll\nhire you as a waiter at his next party. I'm saying that he'll make\nyou a tractor to replace your horse.Notes[1]\nPart of the reason this subject is so contentious is that some\nof those most vocal on the subject of wealth\u2014university\nstudents, heirs, professors, politicians, and journalists\u2014have\nthe least experience creating it. (This phenomenon will be familiar\nto anyone who has overheard conversations about sports in a bar.)Students are mostly still on the parental dole, and have not stopped\nto think about where that money comes from. Heirs will be on the\nparental dole for life. Professors and politicians live within\nsocialist eddies of the economy, at one remove from the creation\nof wealth, and are paid a flat rate regardless of how hard they\nwork. And journalists as part of their professional code segregate\nthemselves from the revenue-collecting half of the businesses they\nwork for (the ad sales department). 
Many of these people never\ncome face to face with the fact that the money they receive represents\nwealth\u2014wealth that, except in the case of journalists, someone\nelse created earlier. They live in a world in which income is\ndoled out by a central authority according to some abstract notion\nof fairness (or randomly, in the case of heirs), rather than given\nby other people in return for something they wanted, so it may seem\nto them unfair that things don't work the same in the rest of the\neconomy.(Some professors do create a great deal of wealth for\nsociety. But the money they're paid isn't a quid pro quo.\nIt's more in the nature of an investment.)[2]\nWhen one reads about the origins of the Fabian Society, it\nsounds like something cooked up by the high-minded Edwardian\nchild-heroes of Edith Nesbit's The Wouldbegoods.[3]\nAccording to a study by the Corporate Library, the median total\ncompensation, including salary, bonus, stock grants, and the exercise\nof stock options, of S&P 500 CEOs in 2002 was $3.65 million.\nAccording to Sports Illustrated, the average NBA player's\nsalary during the 2002-03 season was $4.54 million, and the average\nmajor league baseball player's salary at the start of the 2003\nseason was $2.56 million. According to the Bureau of Labor\nStatistics, the mean annual wage in the US in 2002 was $35,560.[4]\nIn the early empire the price of an ordinary adult slave seems\nto have been about 2,000 sestertii (e.g. Horace, Sat. ii.7.43).\nA servant girl cost 600 (Martial vi.66), while Columella (iii.3.8)\nsays that a skilled vine-dresser was worth 8,000. A doctor, P.\nDecimus Eros Merula, paid 50,000 sestertii for his freedom (Dessau,\nInscriptiones 7812). Seneca (Ep. xxvii.7) reports\nthat one Calvisius Sabinus paid 100,000 sestertii apiece for slaves\nlearned in the Greek classics. Pliny (Hist. Nat. vii.39)\nsays that the highest price paid for a slave up to his time was\n700,000 sestertii, for the linguist (and presumably teacher) Daphnis,\nbut that this had since been exceeded by actors buying their own\nfreedom.Classical Athens saw a similar variation in prices. An ordinary\nlaborer was worth about 125 to 150 drachmae. Xenophon (Mem.\nii.5) mentions prices ranging from 50 to 6,000 drachmae (for the\nmanager of a silver mine).For more on the economics of ancient slavery see:Jones, A. H. M., \"Slavery in the Ancient World,\" Economic History\nReview, 2:9 (1956), 185-199, reprinted in Finley, M. I. (ed.),\nSlavery in Classical Antiquity, Heffer, 1964.[5]\nEratosthenes (276\u2014195 BC) used shadow lengths in different\ncities to estimate the Earth's circumference. He was off by only\nabout 2%.[6]\nNo, and Windows, respectively.[7]\nOne of the biggest divergences between the Daddy Model and\nreality is the valuation of hard work. In the Daddy Model, hard\nwork is in itself deserving. In reality, wealth is measured by\nwhat one delivers, not how much effort it costs. If I paint someone's\nhouse, the owner shouldn't pay me extra for doing it with a toothbrush.It will seem to someone still implicitly operating on the Daddy\nModel that it is unfair when someone works hard and doesn't get\npaid much. To help clarify the matter, get rid of everyone else\nand put our worker on a desert island, hunting and gathering fruit.\nIf he's bad at it he'll work very hard and not end up with much\nfood. Is this unfair? 
Who is being unfair to him?[8]\nPart of the reason for the tenacity of the Daddy Model may be\nthe dual meaning of \"distribution.\" When economists talk about\n\"distribution of income,\" they mean statistical distribution. But\nwhen you use the phrase frequently, you can't help associating it\nwith the other sense of the word (as in e.g. \"distribution of alms\"),\nand thereby subconsciously seeing wealth as something that flows\nfrom some central tap. The word \"regressive\" as applied to tax\nrates has a similar effect, at least on me; how can anything\nregressive be good?[9]\n\"From the beginning of the reign Thomas Lord Roos was an assiduous\ncourtier of the young Henry VIII and was soon to reap the rewards.\nIn 1525 he was made a Knight of the Garter and given the Earldom\nof Rutland. In the thirties his support of the breach with Rome,\nhis zeal in crushing the Pilgrimage of Grace, and his readiness to\nvote the death-penalty in the succession of spectacular treason\ntrials that punctuated Henry's erratic matrimonial progress made\nhim an obvious candidate for grants of monastic property.\"Stone, Lawrence, Family and Fortune: Studies in Aristocratic\nFinance in the Sixteenth and Seventeenth Centuries, Oxford\nUniversity Press, 1973, p. 166.[10]\nThere is archaeological evidence for large settlements earlier,\nbut it's hard to say what was happening in them.Hodges, Richard and David Whitehouse, Mohammed, Charlemagne and\nthe Origins of Europe, Cornell University Press, 1983.[11]\nWilliam Cecil and his son Robert were each in turn the most\npowerful minister of the crown, and both used their position to\namass fortunes among the largest of their times. Robert in particular\ntook bribery to the point of treason. \"As Secretary of State and\nthe leading advisor to King James on foreign policy, [he] was a\nspecial recipient of favour, being offered large bribes by the Dutch\nnot to make peace with Spain, and large bribes by Spain to make\npeace.\" (Stone, op. cit., p. 17.)[12]\nThough Balzac made a lot of money from writing, he was notoriously\nimprovident and was troubled by debts all his life.[13]\nA Timex will gain or lose about .5 seconds per day. The most\naccurate mechanical watch, the Patek Philippe 10 Day Tourbillon,\nis rated at -1.5 to +2 seconds. Its retail price is about $220,000.[14]\nIf asked to choose which was more expensive, a well-preserved\n1989 Lincoln Town Car ten-passenger limousine ($5,000) or a 2004\nMercedes S600 sedan ($122,000), the average Edwardian might well\nguess wrong.[15]\nTo say anything meaningful about income trends, you have to\ntalk about real income, or income as measured in what it can buy.\nBut the usual way of calculating real income ignores much of the\ngrowth in wealth over time, because it depends on a consumer price\nindex created by bolting end to end a series of numbers that are\nonly locally accurate, and that don't include the prices of new\ninventions until they become so common that their prices stabilize.So while we might think it was very much better to live in a world\nwith antibiotics or air travel or an electric power grid than\nwithout, real income statistics calculated in the usual way will\nprove to us that we are only slightly richer for having these things.Another approach would be to ask, if you were going back to the\nyear x in a time machine, how much would you have to spend on trade\ngoods to make your fortune? 
For example, if you were going back\nto 1970 it would certainly be less than $500, because the processing\npower you can get for $500 today would have been worth at least\n$150 million in 1970. The function goes asymptotic fairly quickly,\nbecause for times over a hundred years or so you could get all you\nneeded in present-day trash. In 1800 an empty plastic drink bottle\nwith a screw top would have seemed a miracle of workmanship.[16]\nSome will say this amounts to the same thing, because the rich\nhave better opportunities for education. That's a valid point. It\nis still possible, to a degree, to buy your kids' way into top\ncolleges by sending them to private schools that in effect hack the\ncollege admissions process.According to a 2002 report by the National Center for Education\nStatistics, about 1.7% of American kids attend private, non-sectarian\nschools. At Princeton, 36% of the class of 2007 came from such\nschools. (Interestingly, the number at Harvard is significantly\nlower, about 28%.) Obviously this is a huge loophole. It does at\nleast seem to be closing, not widening.Perhaps the designers of admissions processes should take a lesson\nfrom the example of computer security, and instead of just assuming\nthat their system can't be hacked, measure the degree to which it\nis."} {"title": "gh", "text": "July 2004(This essay is derived from a talk at Oscon 2004.)\nA few months ago I finished a new \nbook, \nand in reviews I keep\nnoticing words like \"provocative'' and \"controversial.'' To say\nnothing of \"idiotic.''I didn't mean to make the book controversial. I was trying to make\nit efficient. I didn't want to waste people's time telling them\nthings they already knew. It's more efficient just to give them\nthe diffs. But I suppose that's bound to yield an alarming book.EdisonsThere's no controversy about which idea is most controversial:\nthe suggestion that variation in wealth might not be as big a\nproblem as we think.I didn't say in the book that variation in wealth was in itself a\ngood thing. I said in some situations it might be a sign of good\nthings. A throbbing headache is not a good thing, but it can be\na sign of a good thing-- for example, that you're recovering\nconsciousness after being hit on the head.Variation in wealth can be a sign of variation in productivity.\n(In a society of one, they're identical.) And that\nis almost certainly a good thing: if your society has no variation\nin productivity, it's probably not because everyone is Thomas\nEdison. It's probably because you have no Thomas Edisons.In a low-tech society you don't see much variation in productivity.\nIf you have a tribe of nomads collecting sticks for a fire, how\nmuch more productive is the best stick gatherer going to be than\nthe worst? A factor of two? Whereas when you hand people a complex tool\nlike a computer, the variation in what they can do with\nit is enormous.That's not a new idea. Fred Brooks wrote about it in 1974, and\nthe study he quoted was published in 1968. But I think he\nunderestimated the variation between programmers. He wrote about productivity in lines\nof code: the best programmers can solve a given problem in a tenth\nthe time. But what if the problem isn't given? In programming, as\nin many fields, the hard part isn't solving problems, but deciding\nwhat problems to solve.
Imagination is hard to measure, but\nin practice it dominates the kind of productivity that's measured\nin lines of code.Productivity varies in any field, but there are few in which it\nvaries so much. The variation between programmers\nis so great that it becomes a difference in kind. I don't\nthink this is something intrinsic to programming, though. In every field,\ntechnology magnifies differences in productivity. I think what's\nhappening in programming is just that we have a lot of technological\nleverage. But in every field the lever is getting longer, so the\nvariation we see is something that more and more fields will see\nas time goes on. And the success of companies, and countries, will\ndepend increasingly on how they deal with it.If variation in productivity increases with technology, then the\ncontribution of the most productive individuals will not only be\ndisproportionately large, but will actually grow with time. When\nyou reach the point where 90% of a group's output is created by 1%\nof its members, you lose big if something (whether Viking raids,\nor central planning) drags their productivity down to the average.If we want to get the most out of them, we need to understand these\nespecially productive people. What motivates them? What do they\nneed to do their jobs? How do you recognize them? How do you\nget them to come and work for you? And then of course there's the\nquestion, how do you become one?More than MoneyI know a handful of super-hackers, so I sat down and thought about\nwhat they have in common. Their defining quality is probably that\nthey really love to program. Ordinary programmers write code to pay\nthe bills. Great hackers think of it as something they do for fun,\nand which they're delighted to find people will pay them for.Great programmers are sometimes said to be indifferent to money.\nThis isn't quite true. It is true that all they really care about\nis doing interesting work. But if you make enough money, you get\nto work on whatever you want, and for that reason hackers are\nattracted by the idea of making really large amounts of money.\nBut as long as they still have to show up for work every day, they\ncare more about what they do there than how much they get paid for\nit.Economically, this is a fact of the greatest importance, because\nit means you don't have to pay great hackers anything like what\nthey're worth. A great programmer might be ten or a hundred times\nas productive as an ordinary one, but he'll consider himself lucky\nto get paid three times as much. As I'll explain later, this is\npartly because great hackers don't know how good they are. But\nit's also because money is not the main thing they want.What do hackers want? Like all craftsmen, hackers like good tools.\nIn fact, that's an understatement. Good hackers find it unbearable\nto use bad tools. They'll simply refuse to work on projects with\nthe wrong infrastructure.At a startup I once worked for, one of the things pinned up on our\nbulletin board was an ad from IBM. It was a picture of an AS400,\nand the headline read, I think, \"hackers despise\nit.'' [1]When you decide what infrastructure to use for a project, you're\nnot just making a technical decision. You're also making a social\ndecision, and this may be the more important of the two. For\nexample, if your company wants to write some software, it might\nseem a prudent choice to write it in Java. But when you choose a\nlanguage, you're also choosing a community. 
The programmers you'll\nbe able to hire to work on a Java project won't be as\nsmart as the\nones you could get to work on a project written in Python.\nAnd the quality of your hackers probably matters more than the\nlanguage you choose. Though, frankly, the fact that good hackers\nprefer Python to Java should tell you something about the relative\nmerits of those languages.Business types prefer the most popular languages because they view\nlanguages as standards. They don't want to bet the company on\nBetamax. The thing about languages, though, is that they're not\njust standards. If you have to move bits over a network, by all\nmeans use TCP/IP. But a programming language isn't just a format.\nA programming language is a medium of expression.I've read that Java has just overtaken Cobol as the most popular\nlanguage. As a standard, you couldn't wish for more. But as a\nmedium of expression, you could do a lot better. Of all the great\nprogrammers I can think of, I know of only one who would voluntarily\nprogram in Java. And of all the great programmers I can think of\nwho don't work for Sun, on Java, I know of zero.Great hackers also generally insist on using open source software.\nNot just because it's better, but because it gives them more control.\nGood hackers insist on control. This is part of what makes them\ngood hackers: when something's broken, they need to fix it. You\nwant them to feel this way about the software they're writing for\nyou. You shouldn't be surprised when they feel the same way about\nthe operating system.A couple years ago a venture capitalist friend told me about a new\nstartup he was involved with. It sounded promising. But the next\ntime I talked to him, he said they'd decided to build their software\non Windows NT, and had just hired a very experienced NT developer\nto be their chief technical officer. When I heard this, I thought,\nthese guys are doomed. One, the CTO couldn't be a first rate\nhacker, because to become an eminent NT developer he would have\nhad to use NT voluntarily, multiple times, and I couldn't imagine\na great hacker doing that; and two, even if he was good, he'd have\na hard time hiring anyone good to work for him if the project had\nto be built on NT. [2]The Final FrontierAfter software, the most important tool to a hacker is probably\nhis office. Big companies think the function of office space is to express\nrank. But hackers use their offices for more than that: they\nuse their office as a place to think in. And if you're a technology\ncompany, their thoughts are your product. So making hackers work\nin a noisy, distracting environment is like having a paint factory\nwhere the air is full of soot.The cartoon strip Dilbert has a lot to say about cubicles, and with\ngood reason. All the hackers I know despise them. The mere prospect\nof being interrupted is enough to prevent hackers from working on\nhard problems. If you want to get real work done in an office with\ncubicles, you have two options: work at home, or come in early or\nlate or on a weekend, when no one else is there. Don't companies\nrealize this is a sign that something is broken? An office\nenvironment is supposed to be something that helps\nyou work, not something you work despite.Companies like Cisco are proud that everyone there has a cubicle,\neven the CEO. But they're not so advanced as they think; obviously\nthey still view office space as a badge of rank. 
Note too that\nCisco is famous for doing very little product development in house.\nThey get new technology by buying the startups that created it-- where\npresumably the hackers did have somewhere quiet to work.One big company that understands what hackers need is Microsoft.\nI once saw a recruiting ad for Microsoft with a big picture of a\ndoor. Work for us, the premise was, and we'll give you a place to\nwork where you can actually get work done. And you know, Microsoft\nis remarkable among big companies in that they are able to develop\nsoftware in house. Not well, perhaps, but well enough.If companies want hackers to be productive, they should look at\nwhat they do at home. At home, hackers can arrange things themselves\nso they can get the most done. And when they work at home, hackers\ndon't work in noisy, open spaces; they work in rooms with doors. They\nwork in cosy, neighborhoody places with people around and somewhere\nto walk when they need to mull something over, instead of in glass\nboxes set in acres of parking lots. They have a sofa they can take\na nap on when they feel tired, instead of sitting in a coma at\ntheir desk, pretending to work. There's no crew of people with\nvacuum cleaners that roars through every evening during the prime\nhacking hours. There are no meetings or, God forbid, corporate\nretreats or team-building exercises. And when you look at what\nthey're doing on that computer, you'll find it reinforces what I\nsaid earlier about tools. They may have to use Java and Windows\nat work, but at home, where they can choose for themselves, you're\nmore likely to find them using Perl and Linux.Indeed, these statistics about Cobol or Java being the most popular\nlanguage can be misleading. What we ought to look at, if we want\nto know what tools are best, is what hackers choose when they can\nchoose freely-- that is, in projects of their own. When you ask\nthat question, you find that open source operating systems already\nhave a dominant market share, and the number one language is probably\nPerl.InterestingAlong with good tools, hackers want interesting projects. What\nmakes a project interesting? Well, obviously overtly sexy\napplications like stealth planes or special effects software would\nbe interesting to work on. But any application can be interesting\nif it poses novel technical challenges. So it's hard to predict\nwhich problems hackers will like, because some become\ninteresting only when the people working on them discover a new\nkind of solution. Before ITA\n(who wrote the software inside Orbitz),\nthe people working on airline fare searches probably thought it\nwas one of the most boring applications imaginable. But ITA made\nit interesting by \nredefining the problem in a more ambitious way.I think the same thing happened at Google. When Google was founded,\nthe conventional wisdom among the so-called portals was that search\nwas boring and unimportant. But the guys at Google didn't think\nsearch was boring, and that's why they do it so well.This is an area where managers can make a difference. Like a parent\nsaying to a child, I bet you can't clean up your whole room in\nten minutes, a good manager can sometimes redefine a problem as a\nmore interesting one. Steve Jobs seems to be particularly good at\nthis, in part simply by having high standards. There were a lot\nof small, inexpensive computers before the Mac. He redefined the\nproblem as: make one that's beautiful. 
And that probably drove\nthe developers harder than any carrot or stick could.They certainly delivered. When the Mac first appeared, you didn't\neven have to turn it on to know it would be good; you could tell\nfrom the case. A few weeks ago I was walking along the street in\nCambridge, and in someone's trash I saw what appeared to be a Mac\ncarrying case. I looked inside, and there was a Mac SE. I carried\nit home and plugged it in, and it booted. The happy Macintosh\nface, and then the finder. My God, it was so simple. It was just\nlike ... Google.Hackers like to work for people with high standards. But it's not\nenough just to be exacting. You have to insist on the right things.\nWhich usually means that you have to be a hacker yourself. I've\nseen occasional articles about how to manage programmers. Really\nthere should be two articles: one about what to do if\nyou are yourself a programmer, and one about what to do if you're not. And the \nsecond could probably be condensed into two words: give up.The problem is not so much the day to day management. Really good\nhackers are practically self-managing. The problem is, if you're\nnot a hacker, you can't tell who the good hackers are. A similar\nproblem explains why American cars are so ugly. I call it the\ndesign paradox. You might think that you could make your products\nbeautiful just by hiring a great designer to design them. But if\nyou yourself don't have good taste, \nhow are you going to recognize\na good designer? By definition you can't tell from his portfolio.\nAnd you can't go by the awards he's won or the jobs he's had,\nbecause in design, as in most fields, those tend to be driven by\nfashion and schmoozing, with actual ability a distant third.\nThere's no way around it: you can't manage a process intended to\nproduce beautiful things without knowing what beautiful is. American\ncars are ugly because American car companies are run by people with\nbad taste.Many people in this country think of taste as something elusive,\nor even frivolous. It is neither. To drive design, a manager must\nbe the most demanding user of a company's products. And if you\nhave really good taste, you can, as Steve Jobs does, make satisfying\nyou the kind of problem that good people like to work on.Nasty Little ProblemsIt's pretty easy to say what kinds of problems are not interesting:\nthose where instead of solving a few big, clear, problems, you have\nto solve a lot of nasty little ones. One of the worst kinds of\nprojects is writing an interface to a piece of software that's\nfull of bugs. Another is when you have to customize\nsomething for an individual client's complex and ill-defined needs.\nTo hackers these kinds of projects are the death of a thousand\ncuts.The distinguishing feature of nasty little problems is that you\ndon't learn anything from them. Writing a compiler is interesting\nbecause it teaches you what a compiler is. But writing an interface\nto a buggy piece of software doesn't teach you anything, because the\nbugs are random. [3] So it's not just fastidiousness that makes good\nhackers avoid nasty little problems. It's more a question of\nself-preservation. Working on nasty little problems makes you\nstupid. Good hackers avoid it for the same reason models avoid\ncheeseburgers.Of course some problems inherently have this character. And because\nof supply and demand, they pay especially well. So a company that\nfound a way to get great hackers to work on tedious problems would\nbe very successful. 
How would you do it?One place this happens is in startups. At our startup we had \nRobert Morris working as a system administrator. That's like having the\nRolling Stones play at a bar mitzvah. You can't hire that kind of\ntalent. But people will do any amount of drudgery for companies\nof which they're the founders. [4]Bigger companies solve the problem by partitioning the company.\nThey get smart people to work for them by establishing a separate\nR&D department where employees don't have to work directly on\ncustomers' nasty little problems. [5] In this model, the research\ndepartment functions like a mine. They produce new ideas; maybe\nthe rest of the company will be able to use them.You may not have to go to this extreme. \nBottom-up programming\nsuggests another way to partition the company: have the smart people\nwork as toolmakers. If your company makes software to do x, have\none group that builds tools for writing software of that type, and\nanother that uses these tools to write the applications. This way\nyou might be able to get smart people to write 99% of your code,\nbut still keep them almost as insulated from users as they would\nbe in a traditional research department. The toolmakers would have\nusers, but they'd only be the company's own developers. [6]If Microsoft used this approach, their software wouldn't be so full\nof security holes, because the less smart people writing the actual\napplications wouldn't be doing low-level stuff like allocating\nmemory. Instead of writing Word directly in C, they'd be plugging\ntogether big Lego blocks of Word-language. (Duplo, I believe, is\nthe technical term.)ClumpingAlong with interesting problems, what good hackers like is other\ngood hackers. Great hackers tend to clump together-- sometimes\nspectacularly so, as at Xerox Parc. So you won't attract good\nhackers in linear proportion to how good an environment you create\nfor them. The tendency to clump means it's more like the square\nof the environment. So it's winner take all. At any given time,\nthere are only about ten or twenty places where hackers most want to\nwork, and if you aren't one of them, you won't just have fewer\ngreat hackers, you'll have zero.Having great hackers is not, by itself, enough to make a company\nsuccessful. It works well for Google and ITA, which are two of\nthe hot spots right now, but it didn't help Thinking Machines or\nXerox. Sun had a good run for a while, but their business model\nis a down elevator. In that situation, even the best hackers can't\nsave you.I think, though, that all other things being equal, a company that\ncan attract great hackers will have a huge advantage. There are\npeople who would disagree with this. When we were making the rounds\nof venture capital firms in the 1990s, several told us that software\ncompanies didn't win by writing great software, but through brand,\nand dominating channels, and doing the right deals.They really seemed to believe this, and I think I know why. I\nthink what a lot of VCs are looking for, at least unconsciously,\nis the next Microsoft. And of course if Microsoft is your model,\nyou shouldn't be looking for companies that hope to win by writing\ngreat software. But VCs are mistaken to look for the next Microsoft,\nbecause no startup can be the next Microsoft unless some other\ncompany is prepared to bend over at just the right moment and be\nthe next IBM.It's a mistake to use Microsoft as a model, because their whole\nculture derives from that one lucky break. 
Microsoft is a bad data\npoint. If you throw them out, you find that good products do tend\nto win in the market. What VCs should be looking for is the next\nApple, or the next Google.I think Bill Gates knows this. What worries him about Google is\nnot the power of their brand, but the fact that they have\nbetter hackers. [7]\nRecognitionSo who are the great hackers? How do you know when you meet one?\nThat turns out to be very hard. Even hackers can't tell. I'm\npretty sure now that my friend Trevor Blackwell is a great hacker.\nYou may have read on Slashdot how he made his \nown Segway. The\nremarkable thing about this project was that he wrote all the\nsoftware in one day (in Python, incidentally).For Trevor, that's\npar for the course. But when I first met him, I thought he was a\ncomplete idiot. He was standing in Robert Morris's office babbling\nat him about something or other, and I remember standing behind\nhim making frantic gestures at Robert to shoo this nut out of his\noffice so we could go to lunch. Robert says he misjudged Trevor\nat first too. Apparently when Robert first met him, Trevor had\njust begun a new scheme that involved writing down everything about\nevery aspect of his life on a stack of index cards, which he carried\nwith him everywhere. He'd also just arrived from Canada, and had\na strong Canadian accent and a mullet.The problem is compounded by the fact that hackers, despite their\nreputation for social obliviousness, sometimes put a good deal of\neffort into seeming smart. When I was in grad school I used to\nhang around the MIT AI Lab occasionally. It was kind of intimidating\nat first. Everyone there spoke so fast. But after a while I\nlearned the trick of speaking fast. You don't have to think any\nfaster; just use twice as many words to say everything. With this amount of noise in the signal, it's hard to tell good\nhackers when you meet them. I can't tell, even now. You also\ncan't tell from their resumes. It seems like the only way to judge\na hacker is to work with him on something.And this is the reason that high-tech areas \nonly happen around universities. The active ingredient\nhere is not so much the professors as the students. Startups grow up\naround universities because universities bring together promising young\npeople and make them work on the same projects. The\nsmart ones learn who the other smart ones are, and together\nthey cook up new projects of their own.Because you can't tell a great hacker except by working with him,\nhackers themselves can't tell how good they are. This is true to\na degree in most fields. I've found that people who\nare great at something are not so much convinced of their own\ngreatness as mystified at why everyone else seems so incompetent.\nBut it's particularly hard for hackers to know how good they are,\nbecause it's hard to compare their work. This is easier in most\nother fields. In the hundred meters, you know in 10 seconds who's\nfastest. Even in math there seems to be a general consensus about\nwhich problems are hard to solve, and what constitutes a good\nsolution. But hacking is like writing. Who can say which of two\nnovels is better? Certainly not the authors.With hackers, at least, other hackers can tell. That's because,\nunlike novelists, hackers collaborate on projects. When you get\nto hit a few difficult problems over the net at someone, you learn\npretty quickly how hard they hit them back. But hackers can't\nwatch themselves at work. 
So if you ask a great hacker how good\nhe is, he's almost certain to reply, I don't know. He's not just\nbeing modest. He really doesn't know.And none of us know, except about people we've actually worked\nwith. Which puts us in a weird situation: we don't know who our\nheroes should be. The hackers who become famous tend to become\nfamous by random accidents of PR. Occasionally I need to give an\nexample of a great hacker, and I never know who to use. The first\nnames that come to mind always tend to be people I know personally,\nbut it seems lame to use them. So, I think, maybe I should say\nRichard Stallman, or Linus Torvalds, or Alan Kay, or someone famous\nlike that. But I have no idea if these guys are great hackers.\nI've never worked with them on anything.If there is a Michael Jordan of hacking, no one knows, including\nhim.CultivationFinally, the question the hackers have all been wondering about:\nhow do you become a great hacker? I don't know if it's possible\nto make yourself into one. But it's certainly possible to do things\nthat make you stupid, and if you can make yourself stupid, you\ncan probably make yourself smart too.The key to being a good hacker may be to work on what you like.\nWhen I think about the great hackers I know, one thing they have\nin common is the extreme \ndifficulty of making them work \non anything they\ndon't want to. I don't know if this is cause or effect; it may be\nboth.To do something well you have to love it. \nSo to the extent you\ncan preserve hacking as something you love, you're likely to do it\nwell. Try to keep the sense of wonder you had about programming at\nage 14. If you're worried that your current job is rotting your\nbrain, it probably is.The best hackers tend to be smart, of course, but that's true in\na lot of fields. Is there some quality that's unique to hackers?\nI asked some friends, and the number one thing they mentioned was\ncuriosity. \nI'd always supposed that all smart people were curious--\nthat curiosity was simply the first derivative of knowledge. But\napparently hackers are particularly curious, especially about how\nthings work. That makes sense, because programs are in effect\ngiant descriptions of how things work.Several friends mentioned hackers' ability to concentrate-- their\nability, as one put it, to \"tune out everything outside their own\nheads.'' I've certainly noticed this. And I've heard several \nhackers say that after drinking even half a beer they can't program at\nall. So maybe hacking does require some special ability to focus.\nPerhaps great hackers can load a large amount of context into their\nhead, so that when they look at a line of code, they see not just\nthat line but the whole program around it. John McPhee\nwrote that Bill Bradley's success as a basketball player was due\npartly to his extraordinary peripheral vision. \"Perfect'' eyesight\nmeans about 47 degrees of vertical peripheral vision. Bill Bradley\nhad 70; he could see the basket when he was looking at the floor.\nMaybe great hackers have some similar inborn ability. (I cheat by\nusing a very dense language, \nwhich shrinks the court.)This could explain the disconnect over cubicles. Maybe the people\nin charge of facilities, not having any concentration to shatter,\nhave no idea that working in a cubicle feels to a hacker like having\none's brain in a blender. 
(Whereas Bill, if the rumors of autism\nare true, knows all too well.)One difference I've noticed between great hackers and smart people\nin general is that hackers are more \npolitically incorrect. To the\nextent there is a secret handshake among good hackers, it's when they\nknow one another well enough to express opinions that would get\nthem stoned to death by the general public. And I can see why\npolitical incorrectness would be a useful quality in programming.\nPrograms are very complex and, at least in the hands of good\nprogrammers, very fluid. In such situations it's helpful to have\na habit of questioning assumptions.Can you cultivate these qualities? I don't know. But you can at\nleast not repress them. So here is my best shot at a recipe. If\nit is possible to make yourself into a great hacker, the way to do\nit may be to make the following deal with yourself: you never have\nto work on boring projects (unless your family will starve otherwise),\nand in return, you'll never allow yourself to do a half-assed job.\nAll the great hackers I know seem to have made that deal, though\nperhaps none of them had any choice in the matter.Notes\n[1] In fairness, I have to say that IBM makes decent hardware. I\nwrote this on an IBM laptop.[2] They did turn out to be doomed. They shut down a few months\nlater.[3] I think this is what people mean when they talk\nabout the \"meaning of life.\" On the face of it, this seems an \nodd idea. Life isn't an expression; how could it have meaning?\nBut it can have a quality that feels a lot like meaning. In a project\nlike a compiler, you have to solve a lot of problems, but the problems\nall fall into a pattern, as in a signal. Whereas when the problems\nyou have to solve are random, they seem like noise.\n[4] Einstein at one point worked designing refrigerators. (He had equity.)[5] It's hard to say exactly what constitutes research in the\ncomputer world, but as a first approximation, it's software that\ndoesn't have users.I don't think it's publication that makes the best hackers want to work\nin research departments. I think it's mainly not having to have a\nthree hour meeting with a product manager about problems integrating\nthe Korean version of Word 13.27 with the talking paperclip.[6] Something similar has been happening for a long time in the\nconstruction industry. When you had a house built a couple hundred\nyears ago, the local builders built everything in it. But increasingly\nwhat builders do is assemble components designed and manufactured\nby someone else. This has, like the arrival of desktop publishing,\ngiven people the freedom to experiment in disastrous ways, but it\nis certainly more efficient.[7] Google is much more dangerous to Microsoft than Netscape was.\nProbably more dangerous than any other company has ever been. Not\nleast because they're determined to fight. On their job listing\npage, they say that one of their \"core values'' is \"Don't be evil.''\nFrom a company selling soybean oil or mining equipment, such a\nstatement would merely be eccentric. But I think all of us in the\ncomputer world recognize who that is a declaration of war on.Thanks to Jessica Livingston, Robert Morris, and Sarah Harlin\nfor reading earlier versions of this talk."} {"title": "mod", "text": "December 2019There are two distinct ways to be politically moderate: on purpose\nand by accident. 
Intentional moderates are trimmers, deliberately\nchoosing a position mid-way between the extremes of right and left.\nAccidental moderates end up in the middle, on average, because they\nmake up their own minds about each question, and the far right and\nfar left are roughly equally wrong.You can distinguish intentional from accidental moderates by the\ndistribution of their opinions. If the far left opinion on some\nmatter is 0 and the far right opinion 100, an intentional moderate's\nopinion on every question will be near 50. Whereas an accidental\nmoderate's opinions will be scattered over a broad range, but will,\nlike those of the intentional moderate, average to about 50.Intentional moderates are similar to those on the far left and the\nfar right in that their opinions are, in a sense, not their own.\nThe defining quality of an ideologue, whether on the left or the\nright, is to acquire one's opinions in bulk. You don't get to pick\nand choose. Your opinions about taxation can be predicted from your\nopinions about sex. And although intentional moderates\nmight seem to be the opposite of ideologues, their beliefs (though\nin their case the word \"positions\" might be more accurate) are also\nacquired in bulk. If the median opinion shifts to the right or left,\nthe intentional moderate must shift with it. Otherwise they stop\nbeing moderate.Accidental moderates, on the other hand, not only choose their own\nanswers, but choose their own questions. They may not care at all\nabout questions that the left and right both think are terribly\nimportant. So you can only even measure the politics of an accidental\nmoderate from the intersection of the questions they care about and\nthose the left and right care about, and this can\nsometimes be vanishingly small.It is not merely a manipulative rhetorical trick to say \"if you're\nnot with us, you're against us,\" but often simply false.Moderates are sometimes derided as cowards, particularly by \nthe extreme left. But while it may be accurate to call intentional\nmoderates cowards, openly being an accidental moderate requires the\nmost courage of all, because you get attacked from both right and\nleft, and you don't have the comfort of being an orthodox member\nof a large group to sustain you.Nearly all the most impressive people I know are accidental moderates.\nIf I knew a lot of professional athletes, or people in the entertainment\nbusiness, that might be different. Being on the far left or far\nright doesn't affect how fast you run or how well you sing. But\nsomeone who works with ideas has to be independent-minded to do it\nwell.Or more precisely, you have to be independent-minded about the ideas\nyou work with. You could be mindlessly doctrinaire in your politics\nand still be a good mathematician. In the 20th century, a lot of\nvery smart people were Marxists \u2014 just no one who was smart about\nthe subjects Marxism involves. But if the ideas you use in your\nwork intersect with the politics of your time, you have two choices:\nbe an accidental moderate, or be mediocre.Notes[1] It's possible in theory for one side to be entirely right and\nthe other to be entirely wrong. Indeed, ideologues must always\nbelieve this is the case. But historically it rarely has been.[2] For some reason the far right tend to ignore moderates rather\nthan despise them as backsliders. I'm not sure why. Perhaps it\nmeans that the far right is less ideological than the far left.
Or\nperhaps that they are more confident, or more resigned, or simply\nmore disorganized. I just don't know.[3]\nHaving heretical opinions doesn't mean you have to express\nthem openly. It may be\neasier to have them if you don't.\nThanks to Austen Allred, Trevor Blackwell, Patrick Collison, Jessica Livingston,\nAmjad Masad, Ryan Petersen, and Harj Taggar for reading drafts of this."} {"title": "rootsoflisp", "text": "May 2001\n\n(I wrote this article to help myself understand exactly\nwhat McCarthy discovered. You don't need to know this stuff\nto program in Lisp, but it should be helpful to \nanyone who wants to\nunderstand the essence of Lisp \u2014 both in the sense of its\norigins and its semantic core. The fact that it has such a core\nis one of Lisp's distinguishing features, and the reason why,\nunlike other languages, Lisp has dialects.)In 1960, John \nMcCarthy published a remarkable paper in\nwhich he did for programming something like what Euclid did for\ngeometry. He showed how, given a handful of simple\noperators and a notation for functions, you can\nbuild a whole programming language.\nHe called this language Lisp, for \"List Processing,\"\nbecause one of his key ideas was to use a simple\ndata structure called a list for both\ncode and data.It's worth understanding what McCarthy discovered, not\njust as a landmark in the history of computers, but as\na model for what programming is tending to become in\nour own time. It seems to me that there have been\ntwo really clean, consistent models of programming so\nfar: the C model and the Lisp model.\nThese two seem points of high ground, with swampy lowlands\nbetween them. As computers have grown more powerful,\nthe new languages being developed have been moving\nsteadily toward the Lisp model. A popular recipe\nfor new programming languages in the past 20 years \nhas been to take the C model of computing and add to\nit, piecemeal, parts taken from the Lisp model,\nlike runtime typing and garbage collection.In this article I'm going to try to explain in the\nsimplest possible terms what McCarthy discovered.\nThe point is not just to learn about an interesting\ntheoretical result someone figured out forty years ago,\nbut to show where languages are heading.\nThe unusual thing about Lisp \u2014 in fact, the defining\nquality of Lisp \u2014 is that it can be written in\nitself. To understand what McCarthy meant by this,\nwe're going to retrace his steps, with his mathematical\nnotation translated into running Common Lisp code."} {"title": "siliconvalley", "text": "May 2006(This essay is derived from a keynote at Xtech.)Could you reproduce Silicon Valley elsewhere, or is there something\nunique about it?It wouldn't be surprising if it were hard to reproduce in other\ncountries, because you couldn't reproduce it in most of the US\neither. What does it take to make a silicon valley even here?What it takes is the right people. If you could get the right ten\nthousand people to move from Silicon Valley to Buffalo, Buffalo\nwould become Silicon Valley. \n[1]That's a striking departure from the past. Up till a couple decades\nago, geography was destiny for cities. All great cities were located\non waterways, because cities made money by trade, and water was the\nonly economical way to ship.Now you could make a great city anywhere, if you could get the right\npeople to move there.
So the question of how to make a silicon\nvalley becomes: who are the right people, and how do you get them\nto move?Two TypesI think you only need two kinds of people to create a technology\nhub: rich people and nerds. They're the limiting reagents in the\nreaction that produces startups, because they're the only ones\npresent when startups get started. Everyone else will move.Observation bears this out: within the US, towns have become startup\nhubs if and only if they have both rich people and nerds. Few\nstartups happen in Miami, for example, because although it's full\nof rich people, it has few nerds. It's not the kind of place nerds\nlike.Whereas Pittsburgh has the opposite problem: plenty of nerds, but\nno rich people. The top US Computer Science departments are said\nto be MIT, Stanford, Berkeley, and Carnegie-Mellon. MIT yielded\nRoute 128. Stanford and Berkeley yielded Silicon Valley. But\nCarnegie-Mellon? The record skips at that point. Lower down the\nlist, the University of Washington yielded a high-tech community\nin Seattle, and the University of Texas at Austin yielded one in\nAustin. But what happened in Pittsburgh? And in Ithaca, home of\nCornell, which is also high on the list?I grew up in Pittsburgh and went to college at Cornell, so I can\nanswer for both. The weather is terrible, particularly in winter,\nand there's no interesting old city to make up for it, as there is\nin Boston. Rich people don't want to live in Pittsburgh or Ithaca.\nSo while there are plenty of hackers who could start startups,\nthere's no one to invest in them.Not BureaucratsDo you really need the rich people? Wouldn't it work to have the\ngovernment invest in the nerds? No, it would not. Startup investors\nare a distinct type of rich people. They tend to have a lot of\nexperience themselves in the technology business. This (a) helps\nthem pick the right startups, and (b) means they can supply advice\nand connections as well as money. And the fact that they have a\npersonal stake in the outcome makes them really pay attention.Bureaucrats by their nature are the exact opposite sort of people\nfrom startup investors. The idea of them making startup investments\nis comic. It would be like mathematicians running Vogue-- or\nperhaps more accurately, Vogue editors running a math journal.\n[2]Though indeed, most things bureaucrats do, they do badly. We just\ndon't notice usually, because they only have to compete against\nother bureaucrats. But as startup investors they'd have to compete\nagainst pros with a great deal more experience and motivation.Even corporations that have in-house VC groups generally forbid\nthem to make their own investment decisions. Most are only allowed\nto invest in deals where some reputable private VC firm is willing\nto act as lead investor.Not BuildingsIf you go to see Silicon Valley, what you'll see are buildings.\nBut it's the people that make it Silicon Valley, not the buildings.\nI read occasionally about attempts to set up \"technology\nparks\" in other places, as if the active ingredient of Silicon\nValley were the office space. An article about Sophia Antipolis\nbragged that companies there included Cisco, Compaq, IBM, NCR, and\nNortel. Don't the French realize these aren't startups?Building office buildings for technology companies won't get you a\nsilicon valley, because the key stage in the life of a startup\nhappens before they want that kind of space. The key stage is when\nthey're three guys operating out of an apartment. 
Wherever the\nstartup is when it gets funded, it will stay. The defining quality\nof Silicon Valley is not that Intel or Apple or Google have offices\nthere, but that they were started there.So if you want to reproduce Silicon Valley, what you need to reproduce\nis those two or three founders sitting around a kitchen table\ndeciding to start a company. And to reproduce that you need those\npeople.UniversitiesThe exciting thing is, all you need are the people. If you could\nattract a critical mass of nerds and investors to live somewhere,\nyou could reproduce Silicon Valley. And both groups are highly\nmobile. They'll go where life is good. So what makes a place good\nto them?What nerds like is other nerds. Smart people will go wherever other\nsmart people are. And in particular, to great universities. In\ntheory there could be other ways to attract them, but so far\nuniversities seem to be indispensable. Within the US, there are\nno technology hubs without first-rate universities-- or at least,\nfirst-rate computer science departments.So if you want to make a silicon valley, you not only need a\nuniversity, but one of the top handful in the world. It has to be\ngood enough to act as a magnet, drawing the best people from thousands\nof miles away. And that means it has to stand up to existing magnets\nlike MIT and Stanford.This sounds hard. Actually it might be easy. My professor friends,\nwhen they're deciding where they'd like to work, consider one thing\nabove all: the quality of the other faculty. What attracts professors\nis good colleagues. So if you managed to recruit, en masse, a\nsignificant number of the best young researchers, you could create\na first-rate university from nothing overnight. And you could do\nthat for surprisingly little. If you paid 200 people hiring bonuses\nof $3 million apiece, you could put together a faculty that would\nbear comparison with any in the world. And from that point the\nchain reaction would be self-sustaining. So whatever it costs to\nestablish a mediocre university, for an additional half billion or\nso you could have a great one. \n[3]PersonalityHowever, merely creating a new university would not be enough to\nstart a silicon valley. The university is just the seed. It has\nto be planted in the right soil, or it won't germinate. Plant it\nin the wrong place, and you just create Carnegie-Mellon.To spawn startups, your university has to be in a town that has\nattractions other than the university. It has to be a place where\ninvestors want to live, and students want to stay after they graduate.The two like much the same things, because most startup investors\nare nerds themselves. So what do nerds look for in a town? Their\ntastes aren't completely different from other people's, because a\nlot of the towns they like most in the US are also big tourist\ndestinations: San Francisco, Boston, Seattle. But their tastes\ncan't be quite mainstream either, because they dislike other big\ntourist destinations, like New York, Los Angeles, and Las Vegas.There has been a lot written lately about the \"creative class.\" The\nthesis seems to be that as wealth derives increasingly from ideas,\ncities will prosper only if they attract those who have them. 
That\nis certainly true; in fact it was the basis of Amsterdam's prosperity\n400 years ago.A lot of nerd tastes they share with the creative class in general.\nFor example, they like well-preserved old neighborhoods instead of\ncookie-cutter suburbs, and locally-owned shops and restaurants\ninstead of national chains. Like the rest of the creative class,\nthey want to live somewhere with personality.What exactly is personality? I think it's the feeling that each\nbuilding is the work of a distinct group of people. A town with\npersonality is one that doesn't feel mass-produced. So if you want\nto make a startup hub-- or any town to attract the \"creative class\"--\nyou probably have to ban large development projects.\nWhen a large tract has been developed by a single organization, you\ncan always tell. \n[4]Most towns with personality are old, but they don't have to be.\nOld towns have two advantages: they're denser, because they were\nlaid out before cars, and they're more varied, because they were\nbuilt one building at a time. You could have both now. Just have\nbuilding codes that ensure density, and ban large scale developments.A corollary is that you have to keep out the biggest developer of\nall: the government. A government that asks \"How can we build a\nsilicon valley?\" has probably ensured failure by the way they framed\nthe question. You don't build a silicon valley; you let one grow.NerdsIf you want to attract nerds, you need more than a town with\npersonality. You need a town with the right personality. Nerds\nare a distinct subset of the creative class, with different tastes\nfrom the rest. You can see this most clearly in New York, which\nattracts a lot of creative people, but few nerds. \n[5]What nerds like is the kind of town where people walk around smiling.\nThis excludes LA, where no one walks at all, and also New York,\nwhere people walk, but not smiling. When I was in grad school in\nBoston, a friend came to visit from New York. On the subway back\nfrom the airport she asked \"Why is everyone smiling?\" I looked and\nthey weren't smiling. They just looked like they were compared to\nthe facial expressions she was used to.If you've lived in New York, you know where these facial expressions\ncome from. It's the kind of place where your mind may be excited,\nbut your body knows it's having a bad time. People don't so much\nenjoy living there as endure it for the sake of the excitement.\nAnd if you like certain kinds of excitement, New York is incomparable.\nIt's a hub of glamour, a magnet for all the shorter half-life\nisotopes of style and fame.Nerds don't care about glamour, so to them the appeal of New York\nis a mystery. People who like New York will pay a fortune for a\nsmall, dark, noisy apartment in order to live in a town where the\ncool people are really cool. A nerd looks at that deal and sees\nonly: pay a fortune for a small, dark, noisy apartment.Nerds will pay a premium to live in a town where the smart people\nare really smart, but you don't have to pay as much for that. It's\nsupply and demand: glamour is popular, so you have to pay a lot for\nit.Most nerds like quieter pleasures. They like cafes instead of\nclubs; used bookshops instead of fashionable clothing shops; hiking\ninstead of dancing; sunlight instead of tall buildings. A nerd's\nidea of paradise is Berkeley or Boulder.YouthIt's the young nerds who start startups, so it's those specifically\nthe city has to appeal to. The startup hubs in the US are all\nyoung-feeling towns. 
This doesn't mean they have to be new.\nCambridge has the oldest town plan in America, but it feels young\nbecause it's full of students.What you can't have, if you want to create a silicon valley, is a\nlarge, existing population of stodgy people. It would be a waste\nof time to try to reverse the fortunes of a declining industrial town\nlike Detroit or Philadelphia by trying to encourage startups. Those\nplaces have too much momentum in the wrong direction. You're better\noff starting with a blank slate in the form of a small town. Or\nbetter still, if there's a town young people already flock to, that\none.The Bay Area was a magnet for the young and optimistic for decades\nbefore it was associated with technology. It was a place people\nwent in search of something new. And so it became synonymous with\nCalifornia nuttiness. There's still a lot of that there. If you\nwanted to start a new fad-- a new way to focus one's \"energy,\" for\nexample, or a new category of things not to eat-- the Bay Area would\nbe the place to do it. But a place that tolerates oddness in the\nsearch for the new is exactly what you want in a startup hub, because\neconomically that's what startups are. Most good startup ideas\nseem a little crazy; if they were obviously good ideas, someone\nwould have done them already.(How many people are going to want computers in their houses?\nWhat, another search engine?)That's the connection between technology and liberalism. Without\nexception the high-tech cities in the US are also the most liberal.\nBut it's not because liberals are smarter that this is so. It's\nbecause liberal cities tolerate odd ideas, and smart people by\ndefinition have odd ideas.Conversely, a town that gets praised for being \"solid\" or representing\n\"traditional values\" may be a fine place to live, but it's never\ngoing to succeed as a startup hub. The 2004 presidential election,\nthough a disaster in other respects, conveniently supplied us with\na county-by-county \nmap of such places. \n[6]To attract the young, a town must have an intact center. In most\nAmerican cities the center has been abandoned, and the growth, if\nany, is in the suburbs. Most American cities have been turned\ninside out. But none of the startup hubs has: not San Francisco,\nor Boston, or Seattle. They all have intact centers.\n[7]\nMy guess is that no city with a dead center could be turned into a\nstartup hub. Young people don't want to live in the suburbs.Within the US, the two cities I think could most easily be turned\ninto new silicon valleys are Boulder and Portland. Both have the\nkind of effervescent feel that attracts the young. They're each\nonly a great university short of becoming a silicon valley, if they\nwanted to.TimeA great university near an attractive town. Is that all it takes?\nThat was all it took to make the original Silicon Valley. Silicon\nValley traces its origins to William Shockley, one of the inventors\nof the transistor. He did the research that won him the Nobel Prize\nat Bell Labs, but when he started his own company in 1956 he moved\nto Palo Alto to do it. At the time that was an odd thing to do.\nWhy did he? Because he had grown up there and remembered how nice\nit was. Now Palo Alto is suburbia, but then it was a charming\ncollege town-- a charming college town with perfect weather and San\nFrancisco only an hour away.The companies that rule Silicon Valley now are all descended in\nvarious ways from Shockley Semiconductor. 
Shockley was a difficult\nman, and in 1957 his top people-- \"the traitorous eight\"-- left to\nstart a new company, Fairchild Semiconductor. Among them were\nGordon Moore and Robert Noyce, who went on to found Intel, and\nEugene Kleiner, who founded the VC firm Kleiner Perkins. Forty-two\nyears later, Kleiner Perkins funded Google, and the partner responsible\nfor the deal was John Doerr, who came to Silicon Valley in 1974 to\nwork for Intel.So although a lot of the newest companies in Silicon Valley don't\nmake anything out of silicon, there always seem to be multiple links\nback to Shockley. There's a lesson here: startups beget startups.\nPeople who work for startups start their own. People who get rich\nfrom startups fund new ones. I suspect this kind of organic growth\nis the only way to produce a startup hub, because it's the only way\nto grow the expertise you need.That has two important implications. The first is that you need\ntime to grow a silicon valley. The university you could create in\na couple years, but the startup community around it has to grow\norganically. The cycle time is limited by the time it takes a\ncompany to succeed, which probably averages about five years.The other implication of the organic growth hypothesis is that you\ncan't be somewhat of a startup hub. You either have a self-sustaining\nchain reaction, or not. Observation confirms this too: cities\neither have a startup scene, or they don't. There is no middle\nground. Chicago has the third largest metropolitan area in America.\nAs a source of startups it's negligible compared to Seattle, number 15.The good news is that the initial seed can be quite small. Shockley\nSemiconductor, though itself not very successful, was big enough.\nIt brought a critical mass of experts in an important new technology\ntogether in a place they liked enough to stay.CompetingOf course, a would-be silicon valley faces an obstacle the original\none didn't: it has to compete with Silicon Valley. Can that be\ndone? Probably.One of Silicon Valley's biggest advantages is its venture capital\nfirms. This was not a factor in Shockley's day, because VC funds\ndidn't exist. In fact, Shockley Semiconductor and Fairchild\nSemiconductor were not startups at all in our sense. They were\nsubsidiaries-- of Beckman Instruments and Fairchild Camera and\nInstrument respectively. Those companies were apparently willing\nto establish subsidiaries wherever the experts wanted to live.Venture investors, however, prefer to fund startups within an hour's\ndrive. For one, they're more likely to notice startups nearby.\nBut when they do notice startups in other towns they prefer them\nto move. They don't want to have to travel to attend board meetings,\nand in any case the odds of succeeding are higher in a startup hub.The centralizing effect of venture firms is a double one: they cause\nstartups to form around them, and those draw in more startups through\nacquisitions. And although the first may be weakening because it's\nnow so cheap to start some startups, the second seems as strong as ever.\nThree of the most admired\n\"Web 2.0\" companies were started outside the usual startup hubs,\nbut two of them have already been reeled in through acquisitions.Such centralizing forces make it harder for new silicon valleys to\nget started. But by no means impossible. Ultimately power rests\nwith the founders. A startup with the best people will beat one\nwith funding from famous VCs, and a startup that was sufficiently\nsuccessful would never have to move.
So a town that\ncould exert enough pull over the right people could resist and\nperhaps even surpass Silicon Valley.For all its power, Silicon Valley has a great weakness: the paradise\nShockley found in 1956 is now one giant parking lot. San Francisco\nand Berkeley are great, but they're forty miles away. Silicon\nValley proper is soul-crushing suburban sprawl. It\nhas fabulous weather, which makes it significantly better than the\nsoul-crushing sprawl of most other American cities. But a competitor\nthat managed to avoid sprawl would have real leverage. All a city\nneeds is to be the kind of place the next traitorous eight look at\nand say \"I want to stay here,\" and that would be enough to get the\nchain reaction started.Notes[1]\nIt's interesting to consider how low this number could be\nmade. I suspect five hundred would be enough, even if they could\nbring no assets with them. Probably just thirty, if I could pick them, \nwould be enough to turn Buffalo into a significant startup hub.[2]\nBureaucrats manage to allocate research funding moderately\nwell, but only because (like an in-house VC fund) they outsource\nmost of the work of selection. A professor at a famous university\nwho is highly regarded by his peers will get funding, pretty much\nregardless of the proposal. That wouldn't work for startups, whose\nfounders aren't sponsored by organizations, and are often unknowns.[3]\nYou'd have to do it all at once, or at least a whole department\nat a time, because people would be more likely to come if they\nknew their friends were. And you should probably start from scratch,\nrather than trying to upgrade an existing university, or much energy\nwould be lost in friction.[4]\nHypothesis: Any plan in which multiple independent buildings\nare gutted or demolished to be \"redeveloped\" as a single project\nis a net loss of personality for the city, with the exception of\nthe conversion of buildings not previously public, like warehouses.[5]\nA few startups get started in New York, but less\nthan a tenth as many per capita as in Boston, and mostly\nin less nerdy fields like finance and media.[6]\nSome blue counties are false positives (reflecting the\nremaining power of Democratic party machines), but there are no\nfalse negatives. You can safely write off all the red counties.[7]\nSome \"urban renewal\" experts took a shot at destroying Boston's\nin the 1960s, leaving the area around city hall a bleak wasteland,\nbut most neighborhoods successfully resisted them.Thanks to Chris Anderson, Trevor Blackwell, Marc Hedlund,\nJessica Livingston, Robert Morris, Greg Mcadoo, Fred Wilson,\nand Stephen Wolfram for\nreading drafts of this, and to Ed Dumbill for inviting me to speak.(The second part of this talk became Why Startups\nCondense in America.)"} {"title": "corpdev", "text": "January 2015Corporate Development, aka corp dev, is the group within companies\nthat buys other companies. If you're talking to someone from corp\ndev, that's why, whether you realize it yet or not.It's usually a mistake to talk to corp dev unless (a) you want to\nsell your company right now and (b) you're sufficiently likely to\nget an offer at an acceptable price. In practice that means startups\nshould only talk to corp dev when they're either doing really well\nor really badly. If you're doing really badly, meaning the company\nis about to die, you may as well talk to them, because you have\nnothing to lose. 
And if you're doing really well, you can safely\ntalk to them, because you both know the price will have to be high,\nand if they show the slightest sign of wasting your time, you'll\nbe confident enough to tell them to get lost.The danger is to companies in the middle. Particularly to young\ncompanies that are growing fast, but haven't been doing it for long\nenough to have grown big yet. It's usually a mistake for a promising\ncompany less than a year old even to talk to corp dev.But it's a mistake founders constantly make. When someone from\ncorp dev wants to meet, the founders tell themselves they should\nat least find out what they want. Besides, they don't want to\noffend Big Company by refusing to meet.Well, I'll tell you what they want. They want to talk about buying\nyou. That's what the title \"corp dev\" means. So before agreeing\nto meet with someone from corp dev, ask yourselves, \"Do we want to\nsell the company right now?\" And if the answer is no, tell them\n\"Sorry, but we're focusing on growing the company.\" They won't be\noffended. And certainly the founders of Big Company won't be\noffended. If anything they'll think more highly of you. You'll\nremind them of themselves. They didn't sell either; that's why\nthey're in a position now to buy other companies.\n[1]Most founders who get contacted by corp dev already know what it\nmeans. And yet even when they know what corp dev does and know\nthey don't want to sell, they take the meeting. Why do they do it?\nThe same mix of denial and wishful thinking that underlies most\nmistakes founders make. It's flattering to talk to someone who wants\nto buy you. And who knows, maybe their offer will be surprisingly\nhigh. You should at least see what it is, right?No. If they were going to send you an offer immediately by email,\nsure, you might as well open it. But that is not how conversations\nwith corp dev work. If you get an offer at all, it will be at the\nend of a long and unbelievably distracting process. And if the\noffer is surprising, it will be surprisingly low.Distractions are the thing you can least afford in a startup. And\nconversations with corp dev are the worst sort of distraction,\nbecause as well as consuming your attention they undermine your\nmorale. One of the tricks to surviving a grueling process is not\nto stop and think how tired you are. Instead you get into a sort\nof flow. \n[2]\nImagine what it would do to you if at mile 20 of a\nmarathon, someone ran up beside you and said \"You must feel really\ntired. Would you like to stop and take a rest?\" Conversations\nwith corp dev are like that but worse, because the suggestion of\nstopping gets combined in your mind with the imaginary high price\nyou think they'll offer.And then you're really in trouble. If they can, corp dev people\nlike to turn the tables on you. They like to get you to the point\nwhere you're trying to convince them to buy instead of them trying\nto convince you to sell. And surprisingly often they succeed.This is a very slippery slope, greased with some of the most powerful\nforces that can work on founders' minds, and attended by an experienced\nprofessional whose full time job is to push you down it.Their tactics in pushing you down that slope are usually fairly\nbrutal. Corp dev people's whole job is to buy companies, and they\ndon't even get to choose which. The only way their performance is\nmeasured is by how cheaply they can buy you, and the more ambitious\nones will stop at nothing to achieve that. 
For example, they'll\nalmost always start with a lowball offer, just to see if you'll\ntake it. Even if you don't, a low initial offer will demoralize you\nand make you easier to manipulate.And that is the most innocent of their tactics. Just wait till\nyou've agreed on a price and think you have a done deal, and then\nthey come back and say their boss has vetoed the deal and won't do\nit for more than half the agreed upon price. Happens all the time.\nIf you think investors can behave badly, it's nothing compared to\nwhat corp dev people can do. Even corp dev people at companies\nthat are otherwise benevolent.I remember once complaining to a\nfriend at Google about some nasty trick their corp dev people had\npulled on a YC startup.\"What happened to Don't be Evil?\" I asked.\"I don't think corp dev got the memo,\" he replied.The tactics you encounter in M&A conversations can be like nothing\nyou've experienced in the otherwise comparatively \nupstanding world\nof Silicon Valley. It's as if a chunk of genetic material from the\nold-fashioned robber baron business world got incorporated into the\nstartup world.\n[3]The simplest way to protect yourself is to use the trick that John\nD. Rockefeller, whose grandfather was an alcoholic, used to protect\nhimself from becoming one. He once told a Sunday school class\n\n Boys, do you know why I never became a drunkard? Because I never\n took the first drink.\n\nDo you want to sell your company right now? Not eventually, right\nnow. If not, just don't take the first meeting. They won't be\noffended. And you in turn will be guaranteed to be spared one of\nthe worst experiences that can happen to a startup.If you do want to sell, there's another set of \ntechniques\n for doing\nthat. But the biggest mistake founders make in dealing with corp\ndev is not doing a bad job of talking to them when they're ready\nto, but talking to them before they are. So if you remember only\nthe title of this essay, you already know most of what you need to\nknow about M&A in the first year.Notes[1]\nI'm not saying you should never sell. I'm saying you should\nbe clear in your own mind about whether you want to sell or not,\nand not be led by manipulation or wishful thinking into trying to\nsell earlier than you otherwise would have.[2]\nIn a startup, as in most competitive sports, the task at hand\nalmost does this for you; you're too busy to feel tired. But when\nyou lose that protection, e.g. at the final whistle, the fatigue\nhits you like a wave. To talk to corp dev is to let yourself feel\nit mid-game.[3]\nTo be fair, the apparent misdeeds of corp dev people are magnified\nby the fact that they function as the face of a large organization\nthat often doesn't know its own mind. Acquirers can be surprisingly\nindecisive about acquisitions, and their flakiness is indistinguishable\nfrom dishonesty by the time it filters down to you.Thanks to Marc Andreessen, Jessica Livingston, Geoff\nRalston, and Qasar Younis for reading drafts of this."} {"title": "langdes", "text": "May 2001\n\n(These are some notes I made\nfor a panel discussion on programming language design\nat MIT on May 10, 2001.)1. Programming Languages Are for People.Programming languages\nare how people talk to computers. The computer would be just as\nhappy speaking any language that was unambiguous. The reason we\nhave high level languages is because people can't deal with\nmachine language. 
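To make that concrete, here is the same trivial task written twice in Common Lisp -- once at the high level, and once managing the machinery yourself, closer to what the machine wants. (A sketch added for illustration; it isn't from the original notes.)

 ;; high level: say what you want
 (reduce #'+ '(1 2 3 4))          ; => 10

 ;; lower level: spell out the machinery
 (let ((total 0))                 ; an explicit accumulator
   (dolist (x '(1 2 3 4) total)   ; walk the list, then return total
     (incf total x)))             ; => 10
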
The point of programming\nlanguages is to prevent our poor frail human brains from being \noverwhelmed by a mass of detail.Architects know that some kinds of design problems are more personal\nthan others. One of the cleanest, most abstract design problems\nis designing bridges. There your job is largely a matter of spanning\na given distance with the least material. The other end of the\nspectrum is designing chairs. Chair designers have to spend their\ntime thinking about human butts.Software varies in the same way. Designing algorithms for routing\ndata through a network is a nice, abstract problem, like designing\nbridges. Whereas designing programming languages is like designing\nchairs: it's all about dealing with human weaknesses.Most of us hate to acknowledge this. Designing systems of great\nmathematical elegance sounds a lot more appealing to most of us\nthan pandering to human weaknesses. And there is a role for mathematical\nelegance: some kinds of elegance make programs easier to understand.\nBut elegance is not an end in itself.And when I say languages have to be designed to suit human weaknesses,\nI don't mean that languages have to be designed for bad programmers.\nIn fact I think you ought to design for the \nbest programmers, but\neven the best programmers have limitations. I don't think anyone\nwould like programming in a language where all the variables were\nthe letter x with integer subscripts.2. Design for Yourself and Your Friends.If you look at the history of programming languages, a lot of the best\nones were languages designed for their own authors to use, and a\nlot of the worst ones were designed for other people to use.When languages are designed for other people, it's always a specific\ngroup of other people: people not as smart as the language designer.\nSo you get a language that talks down to you. Cobol is the most\nextreme case, but a lot of languages are pervaded by this spirit.It has nothing to do with how abstract the language is. C is pretty\nlow-level, but it was designed for its authors to use, and that's\nwhy hackers like it.The argument for designing languages for bad programmers is that\nthere are more bad programmers than good programmers. That may be\nso. But those few good programmers write a disproportionately\nlarge percentage of the software.I'm interested in the question, how do you design a language that\nthe very best hackers will like? I happen to think this is\nidentical to the question, how do you design a good programming\nlanguage?, but even if it isn't, it is at least an interesting\nquestion.3. Give the Programmer as Much Control as Possible.Many languages\n(especially the ones designed for other people) have the attitude\nof a governess: they try to prevent you from\ndoing things that they think aren't good for you. I like the \nopposite approach: give the programmer as much\ncontrol as you can.When I first learned Lisp, what I liked most about it was\nthat it considered me an equal partner. In the other languages\nI had learned up till then, there was the language and there was my \nprogram, written in the language, and the two were very separate.\nBut in Lisp the functions and macros I wrote were just like those\nthat made up the language itself. I could rewrite the language\nif I wanted. It had the same appeal as open-source software.4. Aim for Brevity.Brevity is underestimated and even scorned.\nBut if you look into the hearts of hackers, you'll see that they\nreally love it. 
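A classic example of the kind of thing they love (a sketch chosen for illustration; it isn't from the original notes) is a function that makes accumulator functions, in two lines of Common Lisp:

 (defun make-accumulator (n)      ; n is captured by the returned closure
   (lambda (i) (incf n i)))       ; each call adds i and returns the new total

 ;; e.g. (funcall (make-accumulator 100) 10) => 110
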
How many times have you heard hackers speak fondly\nof how in, say, APL, they could do amazing things with just a couple\nlines of code? I think anything that really smart people really\nlove is worth paying attention to.I think almost anything\nyou can do to make programs shorter is good. There should be lots\nof library functions; anything that can be implicit should be;\nthe syntax should be terse to a fault; even the names of things\nshould be short.And it's not only programs that should be short. The manual should\nbe thin as well. A good part of manuals is taken up with clarifications\nand reservations and warnings and special cases. If you force \nyourself to shorten the manual, in the best case you do it by fixing\nthe things in the language that required so much explanation.5. Admit What Hacking Is.A lot of people wish that hacking was\nmathematics, or at least something like a natural science. I think\nhacking is more like architecture. Architecture is\nrelated to physics, in the sense that architects have to design\nbuildings that don't fall down, but the actual goal of architects\nis to make great buildings, not to make discoveries about statics.What hackers like to do is make great programs.\nAnd I think, at least in our own minds, we have to remember that it's\nan admirable thing to write great programs, even when this work \ndoesn't translate easily into the conventional intellectual\ncurrency of research papers. Intellectually, it is just as\nworthwhile to design a language programmers will love as it is to design a\nhorrible one that embodies some idea you can publish a paper\nabout.1. How to Organize Big Libraries?Libraries are becoming an\nincreasingly important component of programming languages. They're\nalso getting bigger, and this can be dangerous. If it takes longer\nto find the library function that will do what you want than it\nwould take to write it yourself, then all that code is doing nothing\nbut make your manual thick. (The Symbolics manuals were a case in \npoint.) So I think we will have to work on ways to organize\nlibraries. The ideal would be to design them so that the programmer\ncould guess what library call would do the right thing.2. Are People Really Scared of Prefix Syntax?This is an open\nproblem in the sense that I have wondered about it for years and\nstill don't know the answer. Prefix syntax seems perfectly natural\nto me, except possibly for math. But it could be that a lot of \nLisp's unpopularity is simply due to having an unfamiliar syntax. \nWhether to do anything about it, if it is true, is another question. \n\n3. What Do You Need for Server-Based Software?\n\nI think a lot of the most exciting new applications that get written\nin the next twenty years will be Web-based applications, meaning\nprograms that sit on the server and talk to you through a Web\nbrowser. And to write these kinds of programs we may need some\nnew things.One thing we'll need is support for the new way that server-based \napps get released. Instead of having one or two big releases a\nyear, like desktop software, server-based apps get released as a\nseries of small changes. You may have as many as five or ten\nreleases a day. And as a rule everyone will always use the latest\nversion.You know how you can design programs to be debuggable?\nWell, server-based software likewise has to be designed to be\nchangeable. 
You have to be able to change it easily, or at least\nto know what is a small change and what is a momentous one.Another thing that might turn out to be useful for server based\nsoftware, surprisingly, is continuations. In Web-based software\nyou can use something like continuation-passing style to get the\neffect of subroutines in the inherently \nstateless world of a Web\nsession. Maybe it would be worthwhile having actual continuations,\nif it was not too expensive.4. What New Abstractions Are Left to Discover?I'm not sure how\nreasonable a hope this is, but one thing I would really love to \ndo, personally, is discover a new abstraction-- something that would\nmake as much of a difference as having first class functions or\nrecursion or even keyword parameters. This may be an impossible\ndream. These things don't get discovered that often. But I am always\nlooking.1. You Can Use Whatever Language You Want.Writing application\nprograms used to mean writing desktop software. And in desktop\nsoftware there is a big bias toward writing the application in the\nsame language as the operating system. And so ten years ago,\nwriting software pretty much meant writing software in C.\nEventually a tradition evolved:\napplication programs must not be written in unusual languages. \nAnd this tradition had so long to develop that nontechnical people\nlike managers and venture capitalists also learned it.Server-based software blows away this whole model. With server-based\nsoftware you can use any language you want. Almost nobody understands\nthis yet (especially not managers and venture capitalists).\nA few hackers understand it, and that's why we even hear\nabout new, indy languages like Perl and Python. We're not hearing\nabout Perl and Python because people are using them to write Windows\napps.What this means for us, as people interested in designing programming\nlanguages, is that there is now potentially an actual audience for\nour work.2. Speed Comes from Profilers.Language designers, or at least\nlanguage implementors, like to write compilers that generate fast\ncode. But I don't think this is what makes languages fast for users.\nKnuth pointed out long ago that speed only matters in a few critical\nbottlenecks. And anyone who's tried it knows that you can't guess\nwhere these bottlenecks are. Profilers are the answer.Language designers are solving the wrong problem. Users don't need\nbenchmarks to run fast. What they need is a language that can show\nthem what parts of their own programs need to be rewritten. That's\nwhere speed comes from in practice. So maybe it would be a net \nwin if language implementors took half the time they would\nhave spent doing compiler optimizations and spent it writing a\ngood profiler instead.3. You Need an Application to Drive the Design of a Language.This may not be an absolute rule, but it seems like the best languages\nall evolved together with some application they were being used to\nwrite. C was written by people who needed it for systems programming.\nLisp was developed partly to do symbolic differentiation, and\nMcCarthy was so eager to get started that he was writing differentiation\nprograms even in the first paper on Lisp, in 1960.It's especially good if your application solves some new problem.\nThat will tend to drive your language to have new features that \nprogrammers need. 
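Imagine, for example, a language that grew up alongside web applications: it might sprout a form for defining pages, just as Lisp has forms for defining functions. A sketch in Common Lisp (defpage and its expansion are invented here for illustration, not taken from the notes):

 (defmacro defpage (name (&rest params) &body body)
   `(defun ,name (,@params)       ; a page is just a function...
      (format nil \"<html><body>~a</body></html>\"
              (progn ,@body))))   ; ...whose result gets wrapped in a page

 (defpage greet (user)
   (format nil \"Hello, ~a!\" user))

 ;; (greet \"world\") => \"<html><body>Hello, world!</body></html>\"
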
I personally am interested in writing\na language that will be good for writing server-based applications.[During the panel, Guy Steele also made this point, with the\nadditional suggestion that the application should not consist of\nwriting the compiler for your language, unless your language\nhappens to be intended for writing compilers.]4. A Language Has to Be Good for Writing Throwaway Programs.You know what a throwaway program is: something you write quickly for\nsome limited task. I think if you looked around you'd find that \na lot of big, serious programs started as throwaway programs. I\nwould not be surprised if most programs started as throwaway\nprograms. And so if you want to make a language that's good for\nwriting software in general, it has to be good for writing throwaway\nprograms, because that is the larval stage of most software.5. Syntax Is Connected to Semantics.It's traditional to think of\nsyntax and semantics as being completely separate. This will\nsound shocking, but it may be that they aren't.\nI think that what you want in your language may be related\nto how you express it.I was talking recently to Robert Morris, and he pointed out that\noperator overloading is a bigger win in languages with infix\nsyntax. In a language with prefix syntax, any function you define\nis effectively an operator. If you want to define a plus for a\nnew type of number you've made up, you can just define a new function\nto add them. If you do that in a language with infix syntax,\nthere's a big difference in appearance between the use of an\noverloaded operator and a function call.1. New Programming Languages.Back in the 1970s\nit was fashionable to design new programming languages. Recently\nit hasn't been. But I think server-based software will make new \nlanguages fashionable again. With server-based software, you can\nuse any language you want, so if someone does design a language that\nactually seems better than others that are available, there will be\npeople who take a risk and use it.2. Time-Sharing.Richard Kelsey gave this as an idea whose time\nhas come again in the last panel, and I completely agree with him.\nMy guess (and Microsoft's guess, it seems) is that much computing\nwill move from the desktop onto remote servers. In other words, \ntime-sharing is back. And I think there will need to be support\nfor it at the language level. For example, I know that Richard\nand Jonathan Rees have done a lot of work implementing process \nscheduling within Scheme 48.3. Efficiency.Recently it was starting to seem that computers\nwere finally fast enough. More and more we were starting to hear\nabout byte code, which implies to me at least that we feel we have\ncycles to spare. But I don't think we will, with server-based\nsoftware. Someone is going to have to pay for the servers that\nthe software runs on, and the number of users they can support per\nmachine will be the divisor of their capital cost.So I think efficiency will matter, at least in computational\nbottlenecks. It will be especially important to do i/o fast,\nbecause server-based applications do a lot of i/o.It may turn out that byte code is not a win, in the end. Sun and\nMicrosoft seem to be facing off in a kind of a battle of the byte\ncodes at the moment. But they're doing it because byte code is a\nconvenient place to insert themselves into the process, not because\nbyte code is in itself a good idea. It may turn out that this\nwhole battleground gets bypassed. That would be kind of amusing.1. 
Clients.This is just a guess, but my guess is that\nthe winning model for most applications will be purely server-based.\nDesigning software that works on the assumption that everyone will \nhave your client is like designing a society on the assumption that\neveryone will just be honest. It would certainly be convenient, but\nyou have to assume it will never happen.I think there will be a proliferation of devices that have some\nkind of Web access, and all you'll be able to assume about them is\nthat they can support simple html and forms. Will you have a\nbrowser on your cell phone? Will there be a phone in your palm \npilot? Will your blackberry get a bigger screen? Will you be able\nto browse the Web on your gameboy? Your watch? I don't know. \nAnd I don't have to know if I bet on\neverything just being on the server. It's\njust so much more robust to have all the \nbrains on the server.2. Object-Oriented Programming.I realize this is a\ncontroversial one, but I don't think object-oriented programming\nis such a big deal. I think it is a fine model for certain kinds\nof applications that need that specific kind of data structure, \nlike window systems, simulations, and cad programs. But I don't\nsee why it ought to be the model for all programming.I think part of the reason people in big companies like object-oriented\nprogramming is because it yields a lot of what looks like work.\nSomething that might naturally be represented as, say, a list of\nintegers, can now be represented as a class with all kinds of\nscaffolding and hustle and bustle.Another attraction of\nobject-oriented programming is that methods give you some of the\neffect of first class functions. But this is old news to Lisp\nprogrammers. When you have actual first class functions, you can\njust use them in whatever way is appropriate to the task at hand,\ninstead of forcing everything into a mold of classes and methods.What this means for language design, I think, is that you shouldn't\nbuild object-oriented programming in too deeply. Maybe the\nanswer is to offer more general, underlying stuff, and let people design\nwhatever object systems they want as libraries.3. Design by Committee.Having your language designed by a committee is a big pitfall, \nand not just for the reasons everyone knows about. Everyone\nknows that committees tend to yield lumpy, inconsistent designs. \nBut I think a greater danger is that they won't take risks.\nWhen one person is in charge he can take risks\nthat a committee would never agree on.Is it necessary to take risks to design a good language though?\nMany people might suspect\nthat language design is something where you should stick fairly\nclose to the conventional wisdom. I bet this isn't true.\nIn everything else people do, reward is proportionate to risk.\nWhy should language design be any different?"} {"title": "laundry", "text": "October 2004\nAs E. B. White said, \"good writing is rewriting.\" I didn't\nrealize this when I was in school. In writing, as in math and \nscience, they only show you the finished product.\nYou don't see all the false starts. This gives students a\nmisleading view of how things get made.Part of the reason it happens is that writers don't want \npeople to see their mistakes. 
But I'm willing to let people\nsee an early draft if it will show how much you have\nto rewrite to beat an essay into shape.Below is the oldest version I can find of\nThe Age of the Essay \n(probably the second or third day), with\ntext that ultimately survived in \nred and text that later\ngot deleted in gray.\nThere seem to be several categories of cuts: things I got wrong,\nthings that seem like bragging, flames,\ndigressions, stretches of awkward prose, and unnecessary words.I discarded more from the beginning. That's\nnot surprising; it takes a while to hit your stride. There\nare more digressions at the start, because I'm not sure where\nI'm heading.The amount of cutting is about average. I probably write\nthree to four words for every one that appears in the final\nversion of an essay.(Before anyone gets mad at me for opinions expressed here, remember\nthat anything you see here that's not in the final version is obviously\nsomething I chose not to publish, often because I disagree\nwith it.)\nRecently a friend said that what he liked about\nmy essays was that they weren't written the way\nwe'd been taught to write essays in school. You\nremember: topic sentence, introductory paragraph,\nsupporting paragraphs, conclusion. It hadn't\noccurred to me till then that those horrible things\nwe had to write in school were even connected to\nwhat I was doing now. But sure enough, I thought,\nthey did call them \"essays,\" didn't they?Well, they're not. Those things you have to write\nin school are not only not essays, they're one of the\nmost pointless of all the pointless hoops you have\nto jump through in school. And I worry that they\nnot only teach students the wrong things about writing,\nbut put them off writing entirely.So I'm going to give the other side of the story: what\nan essay really is, and how you write one. Or at least,\nhow I write one. Students be forewarned: if you actually write\nthe kind of essay I describe, you'll probably get bad\ngrades. But knowing how it's really done should\nat least help you to understand the feeling of futility\nyou have when you're writing the things they tell you to.\nThe most obvious difference between real essays and\nthe things one has to write in school is that real\nessays are not exclusively about English literature.\nIt's a fine thing for schools to\n\nteach students how to\nwrite. But for some bizarre reason (actually, a very specific bizarre\nreason that I'll explain in a moment),\n\nthe teaching of\nwriting has gotten mixed together with the study\nof literature. And so all over the country, students are\nwriting not about how a baseball team with a small budget \nmight compete with the Yankees, or the role of color in\nfashion, or what constitutes a good dessert, but about\nsymbolism in Dickens.With obvious \nresults. Only a few people really\n\ncare about\nsymbolism in Dickens. The teacher doesn't.\nThe students don't. Most of the people who've had to write PhD\ndisserations about Dickens don't. And certainly\n\nDickens himself would be more interested in an essay\nabout color or baseball.How did things get this way? To answer that we have to go back\nalmost a thousand years. Between about 500 and 1000, life was\nnot very good in Europe. The term \"dark ages\" is presently\nout of fashion as too judgemental (the period wasn't dark; \nit was just different), but if this label didn't already\nexist, it would seem an inspired metaphor. 
What little\noriginal thought there was took place in lulls between\nconstant wars and had something of the character of\nthe thoughts of parents with a new baby.\nThe most amusing thing written during this\nperiod, Liudprand of Cremona's Embassy to Constantinople, is,\nI suspect, mostly inadvertantly so.Around 1000 Europe began to catch its breath.\nAnd once they\nhad the luxury of curiosity, one of the first things they discovered\nwas what we call \"the classics.\"\nImagine if we were visited \nby aliens. If they could even get here they'd presumably know a\nfew things we don't. Immediately Alien Studies would become\nthe most dynamic field of scholarship: instead of painstakingly\ndiscovering things for ourselves, we could simply suck up\neverything they'd discovered. So it was in Europe in 1200.\nWhen classical texts began to circulate in Europe, they contained\nnot just new answers, but new questions. (If anyone proved\na theorem in christian Europe before 1200, for example, there\nis no record of it.)For a couple centuries, some of the most important work\nbeing done was intellectual archaelogy. Those were also\nthe centuries during which schools were first established.\nAnd since reading ancient texts was the essence of what\nscholars did then, it became the basis of the curriculum.By 1700, someone who wanted to learn about\nphysics didn't need to start by mastering Greek in order to read Aristotle. But schools\nchange slower than scholarship: the study of\nancient texts\nhad such prestige that it remained the backbone of \neducation\nuntil the late 19th century. By then it was merely a tradition.\nIt did serve some purposes: reading a foreign language was difficult,\nand thus taught discipline, or at least, kept students busy;\nit introduced students to\ncultures quite different from their own; and its very uselessness\nmade it function (like white gloves) as a social bulwark.\nBut it certainly wasn't\ntrue, and hadn't been true for centuries, that students were\nserving apprenticeships in the hottest area of scholarship.Classical scholarship had also changed. In the early era, philology\nactually mattered. The texts that filtered into Europe were\nall corrupted to some degree by the errors of translators and\ncopyists. Scholars had to figure out what Aristotle said\nbefore they could figure out what he meant. But by the modern\nera such questions were answered as well as they were ever\ngoing to be. And so the study of ancient texts became less\nabout ancientness and more about texts.The time was then ripe for the question: if the study of\nancient texts is a valid field for scholarship, why not modern\ntexts? The answer, of course, is that the raison d'etre\nof classical scholarship was a kind of intellectual archaelogy that\ndoes not need to be done in the case of contemporary authors.\nBut for obvious reasons no one wanted to give that answer.\nThe archaeological work being mostly done, it implied that\nthe people studying the classics were, if not wasting their\ntime, at least working on problems of minor importance.And so began the study of modern literature. There was some\ninitial resistance, but it didn't last long.\nThe limiting\nreagent in the growth of university departments is what\nparents will let undergraduates study. If parents will let\ntheir children major in x, the rest follows straightforwardly.\nThere will be jobs teaching x, and professors to fill them.\nThe professors will establish scholarly journals and publish\none another's papers. 
Universities with x departments will\nsubscribe to the journals. Graduate students who want jobs\nas professors of x will write dissertations about it. It may\ntake a good long while for the more prestigious universities\nto cave in and establish departments in cheesier xes, but\nat the other end of the scale there are so many universities\ncompeting to attract students that the mere establishment of\na discipline requires little more than the desire to do it.High schools imitate universities.\nAnd so once university\nEnglish departments were established in the late nineteenth century,\nthe 'riting component of the 3 Rs \nwas morphed into English.\nWith the bizarre consequence that high school students now\nhad to write about English literature-- to write, without\neven realizing it, imitations of whatever\nEnglish professors had been publishing in their journals a\nfew decades before. It's no wonder if this seems to the\nstudent a pointless exercise, because we're now three steps\nremoved from real work: the students are imitating English\nprofessors, who are imitating classical scholars, who are\nmerely the inheritors of a tradition growing out of what\nwas, 700 years ago, fascinating and urgently needed work.Perhaps high schools should drop English and just teach writing.\nThe valuable part of English classes is learning to write, and\nthat could be taught better by itself. Students learn better\nwhen they're interested in what they're doing, and it's hard\nto imagine a topic less interesting than symbolism in Dickens.\nMost of the people who write about that sort of thing professionally\nare not really interested in it. (Though indeed, it's been a\nwhile since they were writing about symbolism; now they're\nwriting about gender.)I have no illusions about how eagerly this suggestion will \nbe adopted. Public schools probably couldn't stop teaching\nEnglish even if they wanted to; they're probably required to by\nlaw. But here's a related suggestion that goes with the grain\ninstead of against it: that universities establish a\nwriting major. Many of the students who now major in English\nwould major in writing if they could, and most would\nbe better off.It will be argued that it is a good thing for students to be\nexposed to their literary heritage. Certainly. But is that\nmore important than that they learn to write well? And are\nEnglish classes even the place to do it? After all,\nthe average public high school student gets zero exposure to \nhis artistic heritage. No disaster results.\nThe people who are interested in art learn about it for\nthemselves, and those who aren't don't. I find that American\nadults are no better or worse informed about literature than\nart, despite the fact that they spent years studying literature\nin high school and no time at all studying art. Which presumably\nmeans that what they're taught in school is rounding error \ncompared to what they pick up on their own.Indeed, English classes may even be harmful. In my case they\nwere effectively aversion therapy. Want to make someone dislike\na book? Force him to read it and write an essay about it.\nAnd make the topic so intellectually bogus that you\ncould not, if asked, explain why one ought to write about it.\nI love to read more than anything, but by the end of high school\nI never read the books we were assigned. 
I was so disgusted with\nwhat we were doing that it became a point of honor\nwith me to write nonsense at least as good at the other students'\nwithout having more than glanced over the book to learn the names\nof the characters and a few random events in it.I hoped this might be fixed in college, but I found the same\nproblem there. It was not the teachers. It was English. \nWe were supposed to read novels and write essays about them.\nAbout what, and why? That no one seemed to be able to explain.\nEventually by trial and error I found that what the teacher \nwanted us to do was pretend that the story had really taken\nplace, and to analyze based on what the characters said and did (the\nsubtler clues, the better) what their motives must have been.\nOne got extra credit for motives having to do with class,\nas I suspect one must now for those involving gender and \nsexuality. I learned how to churn out such stuff well enough\nto get an A, but I never took another English class.And the books we did these disgusting things to, like those\nwe mishandled in high school, I find still have black marks\nagainst them in my mind. The one saving grace was that \nEnglish courses tend to favor pompous, dull writers like\nHenry James, who deserve black marks against their names anyway.\nOne of the principles the IRS uses in deciding whether to\nallow deductions is that, if something is fun, it isn't work.\nFields that are intellectually unsure of themselves rely on\na similar principle. Reading P.G. Wodehouse or Evelyn Waugh or\nRaymond Chandler is too obviously pleasing to seem like\nserious work, as reading Shakespeare would have been before \nEnglish evolved enough to make it an effort to understand him. [sh]\nAnd so good writers (just you wait and see who's still in\nprint in 300 years) are less likely to have readers turned \nagainst them by clumsy, self-appointed tour guides.\nThe other big difference between a real essay and the \nthings\nthey make you write in school is that a real essay doesn't \ntake a position and then defend it. That principle,\nlike the idea that we ought to be writing about literature, \nturns out to be another intellectual hangover of long\nforgotten origins. It's often mistakenly believed that\nmedieval universities were mostly seminaries. In fact they\nwere more law schools. And at least in our tradition\nlawyers are advocates: they are\ntrained to be able to\ntake\neither side of an argument and make as good a case for it \nas they can. Whether or not this is a good idea (in the case of prosecutors,\nit probably isn't), it tended to pervade\nthe atmosphere of\nearly universities. After the lecture the most common form\nof discussion was the disputation. This idea\nis at least\nnominally preserved in our present-day thesis defense-- indeed,\nin the very word thesis. Most people treat the words \nthesis\nand dissertation as interchangeable, but originally, at least,\na thesis was a position one took and the dissertation was\nthe argument by which one defended it.I'm not complaining that we blur these two words together.\nAs far as I'm concerned, the sooner we lose the original\nsense of the word thesis, the better. For many, perhaps most, \ngraduate students, it is stuffing a square peg into a round\nhole to try to recast one's work as a single thesis. 
And\nas for the disputation, that seems clearly a net lose.\nArguing two sides of a case may be a necessary evil in a\nlegal dispute, but it's not the best way to get at the truth,\nas I think lawyers would be the first to admit.\nAnd yet this principle is built into the very structure of \nthe essays\nthey teach you to write in high school. The topic\nsentence is your thesis, chosen in advance, the supporting \nparagraphs the blows you strike in the conflict, and the\nconclusion--- uh, what it the conclusion? I was never sure \nabout that in high school. If your thesis was well expressed,\nwhat need was there to restate it? In theory it seemed that\nthe conclusion of a really good essay ought not to need to \nsay any more than QED.\nBut when you understand the origins\nof this sort of \"essay\", you can see where the\nconclusion comes from. It's the concluding remarks to the \njury.\nWhat other alternative is there? To answer that\nwe have to\nreach back into history again, though this time not so far.\nTo Michel de Montaigne, inventor of the essay.\nHe was\ndoing something quite different from what a\nlawyer does,\nand\nthe difference is embodied in the name. Essayer is the French\nverb meaning \"to try\" (the cousin of our word assay),\n\nand an \"essai\" is an effort.\nAn essay is something you\nwrite in order\nto figure something out.Figure out what? You don't know yet. And so you can't begin with a\nthesis, because you don't have one, and may never have \none. An essay doesn't begin with a statement, but with a \nquestion. In a real essay, you don't take a position and\ndefend it. You see a door that's ajar, and you open it and\nwalk in to see what's inside.If all you want to do is figure things out, why do you need\nto write anything, though? Why not just sit and think? Well,\nthere precisely is Montaigne's great discovery. Expressing\nideas helps to form them. Indeed, helps is far too weak a\nword. 90%\nof what ends up in my essays was stuff\nI only\nthought of when I sat down to write them. That's why I\nwrite them.So there's another difference between essays and\nthe things\nyou have to write in school. In school\n\nyou are, in theory,\nexplaining yourself to someone else. In the best case---if\nyou're really organized---you're just writing it down.\nIn a real essay you're writing for yourself. You're\nthinking out loud.But not quite. Just as inviting people over forces you to\nclean up your apartment, writing something that you know\n\nother people will read forces you to think well. So it\ndoes matter to have an audience. The things I've written\njust for myself are no good. Indeed, they're bad in\na particular way:\nthey tend to peter out. When I run into\ndifficulties, I notice that I\ntend to conclude with a few vague\nquestions and then drift off to get a cup of tea.This seems a common problem.\nIt's practically the standard\nending in blog entries--- with the addition of a \"heh\" or an \nemoticon, prompted by the all too accurate sense that\nsomething is missing.And indeed, a lot of\npublished essays peter out in this\nsame way.\nParticularly the sort written by the staff writers of newsmagazines. Outside writers tend to supply\neditorials of the defend-a-position variety, which\nmake a beeline toward a rousing (and\nforeordained) conclusion. 
But the staff writers feel\nobliged to write something more\nbalanced, which in\npractice ends up meaning blurry.\nSince they're\nwriting for a popular magazine, they start with the\nmost radioactively controversial questions, from which\n(because they're writing for a popular magazine)\nthey then proceed to recoil from\nin terror.\nGay marriage, for or\nagainst? This group says one thing. That group says\nanother. One thing is certain: the question is a\ncomplex one. (But don't get mad at us. We didn't\ndraw any conclusions.)Questions aren't enough. An essay has to come up with answers.\nThey don't always, of course. Sometimes you start with a \npromising question and get nowhere. But those you don't\npublish. Those are like experiments that get inconclusive\nresults. Something you publish ought to tell the reader \nsomething he didn't already know.\nBut what you tell him doesn't matter, so long as \nit's interesting. I'm sometimes accused of meandering.\nIn defend-a-position writing that would be a flaw.\nThere you're not concerned with truth. You already\nknow where you're going, and you want to go straight there,\nblustering through obstacles, and hand-waving\nyour way across swampy ground. But that's not what\nyou're trying to do in an essay. An essay is supposed to\nbe a search for truth. It would be suspicious if it didn't\nmeander.The Meander is a river in Asia Minor (aka\nTurkey).\nAs you might expect, it winds all over the place.\nBut does it\ndo this out of frivolity? Quite the opposite.\nLike all rivers, it's rigorously following the laws of physics.\nThe path it has discovered,\nwinding as it is, represents\nthe most economical route to the sea.The river's algorithm is simple. At each step, flow down.\nFor the essayist this translates to: flow interesting.\nOf all the places to go next, choose\nwhichever seems\nmost interesting.I'm pushing this metaphor a bit. An essayist\ncan't have\nquite as little foresight as a river. In fact what you do\n(or what I do) is somewhere between a river and a roman\nroad-builder. I have a general idea of the direction\nI want to go in, and\nI choose the next topic with that in mind. This essay is\nabout writing, so I do occasionally yank it back in that\ndirection, but it is not all the sort of essay I\nthought I was going to write about writing.Note too that hill-climbing (which is what this algorithm is\ncalled) can get you in trouble.\nSometimes, just\nlike a river,\nyou\nrun up against a blank wall. What\nI do then is just \nwhat the river does: backtrack.\nAt one point in this essay\nI found that after following a certain thread I ran out\nof ideas. I had to go back n\nparagraphs and start over\nin another direction. For illustrative purposes I've left\nthe abandoned branch as a footnote.\nErr on the side of the river. An essay is not a reference\nwork. It's not something you read looking for a specific\nanswer, and feel cheated if you don't find it. I'd much\nrather read an essay that went off in an unexpected but\ninteresting direction than one that plodded dutifully along\na prescribed course.So what's interesting? For me, interesting means surprise.\nDesign, as Matz\nhas said, should follow the principle of\nleast surprise.\nA button that looks like it will make a\nmachine stop should make it stop, not speed up. Essays\nshould do the opposite. Essays should aim for maximum\nsurprise.I was afraid of flying for a long time and could only travel\nvicariously. 
When friends came back from faraway places,\nit wasn't just out of politeness that I asked them about\ntheir trip.\nI really wanted to know. And I found that\nthe best way to get information out of them was to ask\nwhat surprised them. How was the place different from what\nthey expected? This is an extremely useful question.\nYou can ask it of even\nthe most unobservant people, and it will\nextract information they didn't even know they were\nrecording. Indeed, you can ask it in real time. Now when I go somewhere\nnew, I make a note of what surprises me about it. Sometimes I\neven make a conscious effort to visualize the place beforehand,\nso I'll have a detailed image to diff with reality.\nSurprises are facts\nyou didn't already \nknow.\nBut they're\nmore than that. They're facts\nthat contradict things you\nthought you knew. And so they're the most valuable sort of\nfact you can get. They're like a food that's not merely\nhealthy, but counteracts the unhealthy effects of things\nyou've already eaten.\nHow do you find surprises? Well, therein lies half\nthe work of essay writing. (The other half is expressing\nyourself well.) You can at least\nuse yourself as a\nproxy for the reader. You should only write about things\nyou've thought about a lot. And anything you come across\nthat surprises you, who've thought about the topic a lot,\nwill probably surprise most readers.For example, in a recent essay I pointed out that because\nyou can only judge computer programmers by working with\nthem, no one knows in programming who the heroes should\nbe.\nI\ncertainly\ndidn't realize this when I started writing\nthe \nessay, and even now I find it kind of weird. That's\nwhat you're looking for.So if you want to write essays, you need two ingredients:\nyou need\na few topics that you think about a lot, and you\nneed some ability to ferret out the unexpected.What should you think about? My guess is that it\ndoesn't matter. Almost everything is\ninteresting if you get deeply\nenough into it. The one possible exception\nare\nthings\nlike working in fast food, which\nhave deliberately had all\nthe variation sucked out of them.\nIn retrospect, was there\nanything interesting about working in Baskin-Robbins?\nWell, it was interesting to notice\nhow important color was\nto the customers. Kids a certain age would point into\nthe case and say that they wanted yellow. Did they want\nFrench Vanilla or Lemon? They would just look at you\nblankly. They wanted yellow. And then there was the\nmystery of why the perennial favorite Pralines n' Cream\nwas so appealing. I'm inclined now to\nthink it was the salt.\nAnd the mystery of why Passion Fruit tasted so disgusting.\nPeople would order it because of the name, and were always\ndisappointed. It should have been called In-sink-erator\nFruit.\nAnd there was\nthe difference in the way fathers and\nmothers bought ice cream for their kids.\nFathers tended to\nadopt the attitude of\nbenevolent kings bestowing largesse,\nand mothers that of\nharried bureaucrats,\ngiving in to\npressure against their better judgement.\nSo, yes, there does seem to be material, even in\nfast food.What about the other half, ferreting out the unexpected?\nThat may require some natural ability. I've noticed for\na long time that I'm pathologically observant. 
....[That was as far as I'd gotten at the time.]Notes[sh] In Shakespeare's own time, serious writing meant theological\ndiscourses, not the bawdy plays acted over on the other \nside of the river among the bear gardens and whorehouses.The other extreme, the work that seems formidable from the moment\nit's created (indeed, is deliberately intended to be)\nis represented by Milton. Like the Aeneid, Paradise Lost is a\nrock imitating a butterfly that happened to get fossilized.\nEven Samuel Johnson seems to have balked at this, on the one \nhand paying Milton the compliment of an extensive biography,\nand on the other writing of Paradise Lost that \"none who read it\never wished it longer.\""} {"title": "love", "text": "January 2006To do something well you have to like it. That idea is not exactly\nnovel. We've got it down to four words: \"Do what you love.\" But\nit's not enough just to tell people that. Doing what you love is\ncomplicated.The very idea is foreign to what most of us learn as kids. When I\nwas a kid, it seemed as if work and fun were opposites by definition.\nLife had two states: some of the time adults were making you do\nthings, and that was called work; the rest of the time you could\ndo what you wanted, and that was called playing. Occasionally the\nthings adults made you do were fun, just as, occasionally, playing\nwasn't\u2014for example, if you fell and hurt yourself. But except\nfor these few anomalous cases, work was pretty much defined as\nnot-fun.And it did not seem to be an accident. School, it was implied, was\ntedious because it was preparation for grownup work.The world then was divided into two groups, grownups and kids.\nGrownups, like some kind of cursed race, had to work. Kids didn't,\nbut they did have to go to school, which was a dilute version of\nwork meant to prepare us for the real thing. Much as we disliked\nschool, the grownups all agreed that grownup work was worse, and\nthat we had it easy.Teachers in particular all seemed to believe implicitly that work\nwas not fun. Which is not surprising: work wasn't fun for most of\nthem. Why did we have to memorize state capitals instead of playing\ndodgeball? For the same reason they had to watch over a bunch of\nkids instead of lying on a beach. You couldn't just do what you\nwanted.I'm not saying we should let little kids do whatever they want.\nThey may have to be made to work on certain things. But if we make\nkids work on dull stuff, it might be wise to tell them that tediousness\nis not the defining quality of work, and indeed that the reason\nthey have to work on dull stuff now is so they can work on more\ninteresting stuff later.\n[1]Once, when I was about 9 or 10, my father told me I could be whatever\nI wanted when I grew up, so long as I enjoyed it. I remember that\nprecisely because it seemed so anomalous. It was like being told\nto use dry water. Whatever I thought he meant, I didn't think he\nmeant work could literally be fun\u2014fun like playing. It\ntook me years to grasp that.JobsBy high school, the prospect of an actual job was on the horizon.\nAdults would sometimes come to speak to us about their work, or we\nwould go to see them at work. It was always understood that they\nenjoyed what they did. In retrospect I think one may have: the\nprivate jet pilot. 
But I don't think the bank manager really did.The main reason they all acted as if they enjoyed their work was\npresumably the upper-middle class convention that you're supposed\nto. It would not merely be bad for your career to say that you\ndespised your job, but a social faux-pas.Why is it conventional to pretend to like what you do? The first\nsentence of this essay explains that. If you have to like something\nto do it well, then the most successful people will all like what\nthey do. That's where the upper-middle class tradition comes from.\nJust as houses all over America are full of \nchairs\nthat are, without\nthe owners even knowing it, nth-degree imitations of chairs designed\n250 years ago for French kings, conventional attitudes about work\nare, without the owners even knowing it, nth-degree imitations of\nthe attitudes of people who've done great things.What a recipe for alienation. By the time they reach an age to\nthink about what they'd like to do, most kids have been thoroughly\nmisled about the idea of loving one's work. School has trained\nthem to regard work as an unpleasant duty. Having a job is said\nto be even more onerous than schoolwork. And yet all the adults\nclaim to like what they do. You can't blame kids for thinking \"I\nam not like these people; I am not suited to this world.\"Actually they've been told three lies: the stuff they've been taught\nto regard as work in school is not real work; grownup work is not\n(necessarily) worse than schoolwork; and many of the adults around\nthem are lying when they say they like what they do.The most dangerous liars can be the kids' own parents. If you take\na boring job to give your family a high standard of living, as so\nmany people do, you risk infecting your kids with the idea that\nwork is boring. \n[2]\nMaybe it would be better for kids in this one\ncase if parents were not so unselfish. A parent who set an example\nof loving their work might help their kids more than an expensive\nhouse.\n[3]It was not till I was in college that the idea of work finally broke\nfree from the idea of making a living. Then the important question\nbecame not how to make money, but what to work on. Ideally these\ncoincided, but some spectacular boundary cases (like Einstein in\nthe patent office) proved they weren't identical.The definition of work was now to make some original contribution\nto the world, and in the process not to starve. But after the habit\nof so many years my idea of work still included a large component\nof pain. Work still seemed to require discipline, because only\nhard problems yielded grand results, and hard problems couldn't\nliterally be fun. Surely one had to force oneself to work on them.If you think something's supposed to hurt, you're less likely to\nnotice if you're doing it wrong. That about sums up my experience\nof graduate school.BoundsHow much are you supposed to like what you do? Unless you\nknow that, you don't know when to stop searching. And if, like most\npeople, you underestimate it, you'll tend to stop searching too\nearly. You'll end up doing something chosen for you by your parents,\nor the desire to make money, or prestige\u2014or sheer inertia.Here's an upper bound: Do what you love doesn't mean, do what you\nwould like to do most this second. 
Even Einstein probably\nhad moments when he wanted to have a cup of coffee, but told himself\nhe ought to finish what he was working on first.It used to perplex me when I read about people who liked what they\ndid so much that there was nothing they'd rather do. There didn't\nseem to be any sort of work I liked that much. If I had a\nchoice of (a) spending the next hour working on something or (b)\nbeing teleported to Rome and spending the next hour wandering about, was\nthere any sort of work I'd prefer? Honestly, no.But the fact is, almost anyone would rather, at any given moment,\nfloat about in the Caribbean, or have sex, or eat some delicious\nfood, than work on hard problems. The rule about doing what you\nlove assumes a certain length of time. It doesn't mean, do what\nwill make you happiest this second, but what will make you happiest\nover some longer period, like a week or a month.Unproductive pleasures pall eventually. After a while you get tired\nof lying on the beach. If you want to stay happy, you have to do\nsomething.As a lower bound, you have to like your work more than any unproductive\npleasure. You have to like what you do enough that the concept of\n\"spare time\" seems mistaken. Which is not to say you have to spend\nall your time working. You can only work so much before you get\ntired and start to screw up. Then you want to do something else\u2014even something mindless. But you don't regard this time as the\nprize and the time you spend working as the pain you endure to earn\nit.I put the lower bound there for practical reasons. If your work\nis not your favorite thing to do, you'll have terrible problems\nwith procrastination. You'll have to force yourself to work, and\nwhen you resort to that the results are distinctly inferior.To be happy I think you have to be doing something you not only\nenjoy, but admire. You have to be able to say, at the end, wow,\nthat's pretty cool. This doesn't mean you have to make something.\nIf you learn how to hang glide, or to speak a foreign language\nfluently, that will be enough to make you say, for a while at least,\nwow, that's pretty cool. What there has to be is a test.So one thing that falls just short of the standard, I think, is\nreading books. Except for some books in math and the hard sciences,\nthere's no test of how well you've read a book, and that's why\nmerely reading books doesn't quite feel like work. You have to do\nsomething with what you've read to feel productive.I think the best test is one Gino Lee taught me: to try to do things\nthat would make your friends say wow. But it probably wouldn't\nstart to work properly till about age 22, because most people haven't\nhad a big enough sample to pick friends from before then.SirensWhat you should not do, I think, is worry about the opinion of\nanyone beyond your friends. You shouldn't worry about prestige.\nPrestige is the opinion of the rest of the world. When you can ask\nthe opinions of people whose judgement you respect, what does it\nadd to consider the opinions of people you don't even know? \n[4]This is easy advice to give. It's hard to follow, especially when\nyou're young. \n[5]\nPrestige is like a powerful magnet that warps\neven your beliefs about what you enjoy. It causes you to work not\non what you like, but what you'd like to like.That's what leads people to try to write novels, for example. They\nlike reading novels. They notice that people who write them win\nNobel prizes. What could be more wonderful, they think, than to\nbe a novelist?
But liking the idea of being a novelist is not\nenough; you have to like the actual work of novel-writing if you're\ngoing to be good at it; you have to like making up elaborate lies.Prestige is just fossilized inspiration. If you do anything well\nenough, you'll make it prestigious. Plenty of things we now\nconsider prestigious were anything but at first. Jazz comes to\nmind\u2014though almost any established art form would do. So just\ndo what you like, and let prestige take care of itself.Prestige is especially dangerous to the ambitious. If you want to\nmake ambitious people waste their time on errands, the way to do\nit is to bait the hook with prestige. That's the recipe for getting\npeople to give talks, write forewords, serve on committees, be\ndepartment heads, and so on. It might be a good rule simply to\navoid any prestigious task. If it didn't suck, they wouldn't have\nhad to make it prestigious.Similarly, if you admire two kinds of work equally, but one is more\nprestigious, you should probably choose the other. Your opinions\nabout what's admirable are always going to be slightly influenced\nby prestige, so if the two seem equal to you, you probably have\nmore genuine admiration for the less prestigious one.The other big force leading people astray is money. Money by itself\nis not that dangerous. When something pays well but is regarded\nwith contempt, like telemarketing, or prostitution, or personal\ninjury litigation, ambitious people aren't tempted by it. That\nkind of work ends up being done by people who are \"just trying to\nmake a living.\" (Tip: avoid any field whose practitioners say\nthis.) The danger is when money is combined with prestige, as in,\nsay, corporate law, or medicine. A comparatively safe and prosperous\ncareer with some automatic baseline prestige is dangerously tempting\nto someone young, who hasn't thought much about what they really\nlike.The test of whether people love what they do is whether they'd do\nit even if they weren't paid for it\u2014even if they had to work at\nanother job to make a living. How many corporate lawyers would do\ntheir current work if they had to do it for free, in their spare\ntime, and take day jobs as waiters to support themselves?This test is especially helpful in deciding between different kinds\nof academic work, because fields vary greatly in this respect. Most\ngood mathematicians would work on math even if there were no jobs\nas math professors, whereas in the departments at the other end of\nthe spectrum, the availability of teaching jobs is the driver:\npeople would rather be English professors than work in ad agencies,\nand publishing papers is the way you compete for such jobs. Math\nwould happen without math departments, but it is the existence of\nEnglish majors, and therefore jobs teaching them, that calls into\nbeing all those thousands of dreary papers about gender and identity\nin the novels of Conrad. No one does \nthat \nkind of thing for fun.The advice of parents will tend to err on the side of money. It\nseems safe to say there are more undergrads who want to be novelists\nand whose parents want them to be doctors than who want to be doctors\nand whose parents want them to be novelists. The kids think their\nparents are \"materialistic.\" Not necessarily. All parents tend to\nbe more conservative for their kids than they would for themselves,\nsimply because, as parents, they share risks more than rewards. 
If\nyour eight-year-old son decides to climb a tall tree, or your teenage\ndaughter decides to date the local bad boy, you won't get a share\nin the excitement, but if your son falls, or your daughter gets\npregnant, you'll have to deal with the consequences.DisciplineWith such powerful forces leading us astray, it's not surprising\nwe find it so hard to discover what we like to work on. Most people\nare doomed in childhood by accepting the axiom that work = pain.\nThose who escape this are nearly all lured onto the rocks by prestige\nor money. How many even discover something they love to work on?\nA few hundred thousand, perhaps, out of billions.It's hard to find work you love; it must be, if so few do. So don't\nunderestimate this task. And don't feel bad if you haven't succeeded\nyet. In fact, if you admit to yourself that you're discontented,\nyou're a step ahead of most people, who are still in denial. If\nyou're surrounded by colleagues who claim to enjoy work that you\nfind contemptible, odds are they're lying to themselves. Not\nnecessarily, but probably.Although doing great work takes less discipline than people think\u2014because the way to do great work is to find something you like so\nmuch that you don't have to force yourself to do it\u2014finding\nwork you love does usually require discipline. Some people are\nlucky enough to know what they want to do when they're 12, and just\nglide along as if they were on railroad tracks. But this seems the\nexception. More often people who do great things have careers with\nthe trajectory of a ping-pong ball. They go to school to study A,\ndrop out and get a job doing B, and then become famous for C after\ntaking it up on the side.Sometimes jumping from one sort of work to another is a sign of\nenergy, and sometimes it's a sign of laziness. Are you dropping\nout, or boldly carving a new path? You often can't tell yourself.\nPlenty of people who will later do great things seem to be disappointments\nearly on, when they're trying to find their niche.Is there some test you can use to keep yourself honest? One is to\ntry to do a good job at whatever you're doing, even if you don't\nlike it. Then at least you'll know you're not using dissatisfaction\nas an excuse for being lazy. Perhaps more importantly, you'll get\ninto the habit of doing things well.Another test you can use is: always produce. For example, if you\nhave a day job you don't take seriously because you plan to be a\nnovelist, are you producing? Are you writing pages of fiction,\nhowever bad? As long as you're producing, you'll know you're not\nmerely using the hazy vision of the grand novel you plan to write\none day as an opiate. The view of it will be obstructed by the all\ntoo palpably flawed one you're actually writing.\"Always produce\" is also a heuristic for finding the work you love.\nIf you subject yourself to that constraint, it will automatically\npush you away from things you think you're supposed to work on,\ntoward things you actually like. \"Always produce\" will discover\nyour life's work the way water, with the aid of gravity, finds the\nhole in your roof.Of course, figuring out what you like to work on doesn't mean you\nget to work on it. That's a separate question. And if you're\nambitious you have to keep them separate: you have to make a conscious\neffort to keep your ideas about what you want from being contaminated\nby what seems possible. \n[6]It's painful to keep them apart, because it's painful to observe\nthe gap between them.
So most people pre-emptively lower their\nexpectations. For example, if you asked random people on the street\nif they'd like to be able to draw like Leonardo, you'd find most\nwould say something like \"Oh, I can't draw.\" This is more a statement\nof intention than fact; it means, I'm not going to try. Because\nthe fact is, if you took a random person off the street and somehow\ngot them to work as hard as they possibly could at drawing for the\nnext twenty years, they'd get surprisingly far. But it would require\na great moral effort; it would mean staring failure in the eye every\nday for years. And so to protect themselves people say \"I can't.\"Another related line you often hear is that not everyone can do\nwork they love\u2014that someone has to do the unpleasant jobs. Really?\nHow do you make them? In the US the only mechanism for forcing\npeople to do unpleasant jobs is the draft, and that hasn't been\ninvoked for over 30 years. All we can do is encourage people to\ndo unpleasant work, with money and prestige.If there's something people still won't do, it seems as if society\njust has to make do without. That's what happened with domestic\nservants. For millennia that was the canonical example of a job\n\"someone had to do.\" And yet in the mid twentieth century servants\npractically disappeared in rich countries, and the rich have just\nhad to do without.So while there may be some things someone has to do, there's a good\nchance anyone saying that about any particular job is mistaken.\nMost unpleasant jobs would either get automated or go undone if no\none were willing to do them.Two RoutesThere's another sense of \"not everyone can do work they love\"\nthat's all too true, however. One has to make a living, and it's\nhard to get paid for doing work you love. There are two routes to\nthat destination:\n\n The organic route: as you become more eminent, gradually to\n increase the parts of your job that you like at the expense of\n those you don't.The two-job route: to work at things you don't like to get money\n to work on things you do.\n\nThe organic route is more common. It happens naturally to anyone\nwho does good work. A young architect has to take whatever work\nhe can get, but if he does well he'll gradually be in a position\nto pick and choose among projects. The disadvantage of this route\nis that it's slow and uncertain. Even tenure is not real freedom.The two-job route has several variants depending on how long you\nwork for money at a time. At one extreme is the \"day job,\" where\nyou work regular hours at one job to make money, and work on what\nyou love in your spare time. At the other extreme you work at\nsomething till you make enough not to \nhave to work for money again.The two-job route is less common than the organic route, because\nit requires a deliberate choice. It's also more dangerous. Life\ntends to get more expensive as you get older, so it's easy to get\nsucked into working longer than you expected at the money job.\nWorse still, anything you work on changes you. If you work too\nlong on tedious stuff, it will rot your brain. And the best paying\njobs are most dangerous, because they require your full attention.The advantage of the two-job route is that it lets you jump over\nobstacles. The landscape of possible jobs isn't flat; there are\nwalls of varying heights between different kinds of work. 
\n[7]\nThe trick of maximizing the parts of your job that you like can get you\nfrom architecture to product design, but not, probably, to music.\nIf you make money doing one thing and then work on another, you\nhave more freedom of choice.Which route should you take? That depends on how sure you are of\nwhat you want to do, how good you are at taking orders, how much\nrisk you can stand, and the odds that anyone will pay (in your\nlifetime) for what you want to do. If you're sure of the general\narea you want to work in and it's something people are likely to\npay you for, then you should probably take the organic route. But\nif you don't know what you want to work on, or don't like to take\norders, you may want to take the two-job route, if you can stand\nthe risk.Don't decide too soon. Kids who know early what they want to do\nseem impressive, as if they got the answer to some math question\nbefore the other kids. They have an answer, certainly, but odds\nare it's wrong.A friend of mine who is a quite successful doctor complains constantly\nabout her job. When people applying to medical school ask her for\nadvice, she wants to shake them and yell \"Don't do it!\" (But she\nnever does.) How did she get into this fix? In high school she\nalready wanted to be a doctor. And she is so ambitious and determined\nthat she overcame every obstacle along the way\u2014including,\nunfortunately, not liking it.Now she has a life chosen for her by a high-school kid.When you're young, you're given the impression that you'll get\nenough information to make each choice before you need to make it.\nBut this is certainly not so with work. When you're deciding what\nto do, you have to operate on ridiculously incomplete information.\nEven in college you get little idea what various types of work are\nlike. At best you may have a couple internships, but not all jobs\noffer internships, and those that do don't teach you much more about\nthe work than being a batboy teaches you about playing baseball.In the design of lives, as in the design of most other things, you\nget better results if you use flexible media. So unless you're\nfairly sure what you want to do, your best bet may be to choose a\ntype of work that could turn into either an organic or two-job\ncareer. That was probably part of the reason I chose computers.\nYou can be a professor, or make a lot of money, or morph it into\nany number of other kinds of work.It's also wise, early on, to seek jobs that let you do many different\nthings, so you can learn faster what various kinds of work are like.\nConversely, the extreme version of the two-job route is dangerous\nbecause it teaches you so little about what you like. If you work\nhard at being a bond trader for ten years, thinking that you'll\nquit and write novels when you have enough money, what happens when\nyou quit and then discover that you don't actually like writing\nnovels?Most people would say, I'd take that problem. Give me a million\ndollars and I'll figure out what to do. But it's harder than it\nlooks. Constraints give your life shape. Remove them and most\npeople have no idea what to do: look at what happens to those who\nwin lotteries or inherit money. Much as everyone thinks they want\nfinancial security, the happiest people are not those who have it,\nbut those who like what they do. So a plan that promises freedom\nat the expense of knowing what to do with it may not be as good as\nit seems.Whichever route you take, expect a struggle. Finding work you love\nis very difficult. 
Most people fail. Even if you succeed, it's\nrare to be free to work on what you want till your thirties or\nforties. But if you have the destination in sight you'll be more\nlikely to arrive at it. If you know you can love work, you're in\nthe home stretch, and if you know what work you love, you're\npractically there.Notes[1]\nCurrently we do the opposite: when we make kids do boring work,\nlike arithmetic drills, instead of admitting frankly that it's\nboring, we try to disguise it with superficial decorations.[2]\nOne father told me about a related phenomenon: he found himself\nconcealing from his family how much he liked his work. When he\nwanted to go to work on a Saturday, he found it easier to say that\nit was because he \"had to\" for some reason, rather than admitting\nhe preferred to work than stay home with them.[3]\nSomething similar happens with suburbs. Parents move to suburbs\nto raise their kids in a safe environment, but suburbs are so dull\nand artificial that by the time they're fifteen the kids are convinced\nthe whole world is boring.[4]\nI'm not saying friends should be the only audience for your\nwork. The more people you can help, the better. But friends should\nbe your compass.[5]\nDonald Hall said young would-be poets were mistaken to be so\nobsessed with being published. But you can imagine what it would\ndo for a 24-year-old to get a poem published in The New Yorker.\nNow to people he meets at parties he's a real poet. Actually he's\nno better or worse than he was before, but to a clueless audience\nlike that, the approval of an official authority makes all the\ndifference. So it's a harder problem than Hall realizes. The\nreason the young care so much about prestige is that the people\nthey want to impress are not very discerning.[6]\nThis is isomorphic to the principle that you should prevent\nyour beliefs about how things are from being contaminated by how\nyou wish they were. Most people let them mix pretty promiscuously.\nThe continuing popularity of religion is the most visible index of\nthat.[7]\nA more accurate metaphor would be to say that the graph of jobs\nis not very well connected.Thanks to Trevor Blackwell, Dan Friedman, Sarah Harlin,\nJessica Livingston, Jackie McDonough, Robert Morris, Peter Norvig, \nDavid Sloo, and Aaron Swartz\nfor reading drafts of this."} {"title": "nft", "text": "May 2021Noora Health, a nonprofit I've \nsupported for years, just launched\na new NFT. It has a dramatic name, Save Thousands of Lives,\nbecause that's what the proceeds will do.Noora has been saving lives for 7 years. They run programs in\nhospitals in South Asia to teach new mothers how to take care of\ntheir babies once they get home. They're in 165 hospitals now. And\nbecause they know the numbers before and after they start at a new\nhospital, they can measure the impact they have. It is massive.\nFor every 1000 live births, they save 9 babies.This number comes from a study\nof 133,733 families at 28 different\nhospitals that Noora conducted in collaboration with the Better\nBirth team at Ariadne Labs, a joint center for health systems\ninnovation at Brigham and Women's Hospital and Harvard T.H.
Chan\nSchool of Public Health.Noora is so effective that even if you measure their costs in the\nmost conservative way, by dividing their entire budget by the number\nof lives saved, the cost of saving a life is the lowest I've seen.\n$1,235.For this NFT, they're going to issue a public report tracking how\nthis specific tranche of money is spent, and estimating the number\nof lives saved as a result.NFTs are a new territory, and this way of using them is especially\nnew, but I'm excited about its potential. And I'm excited to see\nwhat happens with this particular auction, because unlike an NFT\nrepresenting something that has already happened,\nthis NFT gets better as the price gets higher.The reserve price was about $2.5 million, because that's what it\ntakes for the name to be accurate: that's what it costs to save\n2000 lives. But the higher the price of this NFT goes, the more\nlives will be saved. What a sentence to be able to write."} {"title": "startuplessons", "text": "April 2006(This essay is derived from a talk at the 2006 \nStartup School.)The startups we've funded so far are pretty quick, but they seem\nquicker to learn some lessons than others. I think it's because\nsome things about startups are kind of counterintuitive.We've now \ninvested \nin enough companies that I've learned a trick\nfor determining which points are the counterintuitive ones:\nthey're the ones I have to keep repeating.So I'm going to number these points, and maybe with future startups\nI'll be able to pull off a form of Huffman coding. I'll make them\nall read this, and then instead of nagging them in detail, I'll\njust be able to say: number four!\n1. Release Early.The thing I probably repeat most is this recipe for a startup: get\na version 1 out fast, then improve it based on users' reactions.By \"release early\" I don't mean you should release something full\nof bugs, but that you should release something minimal. Users hate\nbugs, but they don't seem to mind a minimal version 1, if there's\nmore coming soon.There are several reasons it pays to get version 1 done fast. One\nis that this is simply the right way to write software, whether for\na startup or not. I've been repeating that since 1993, and I haven't seen much since to\ncontradict it. I've seen a lot of startups die because they were\ntoo slow to release stuff, and none because they were too quick.\n[1]One of the things that will surprise you if you build something\npopular is that you won't know your users. Reddit now has almost half a million\nunique visitors a month. Who are all those people? They have no\nidea. No web startup does. And since you don't know your users,\nit's dangerous to guess what they'll like. Better to release\nsomething and let them tell you.Wufoo took this to heart and released\ntheir form-builder before the underlying database. You can't even\ndrive the thing yet, but 83,000 people came to sit in the driver's\nseat and hold the steering wheel. And Wufoo got valuable feedback\nfrom it: Linux users complained they used too much Flash, so they\nrewrote their software not to. If they'd waited to release everything\nat once, they wouldn't have discovered this problem till it was\nmore deeply wired in.Even if you had no users, it would still be important to release\nquickly, because for a startup the initial release acts as a shakedown\ncruise. If anything major is broken-- if the idea's no good,\nfor example, or the founders hate one another-- the stress of getting\nthat first version out will expose it. 
And if you have such problems\nyou want to find them early.Perhaps the most important reason to release early, though, is that\nit makes you work harder. When you're working on something that\nisn't released, problems are intriguing. In something that's out\nthere, problems are alarming. There is a lot more urgency once you\nrelease. And I think that's precisely why people put it off. They\nknow they'll have to work a lot harder once they do. \n[2]\n2. Keep Pumping Out Features.Of course, \"release early\" has a second component, without which\nit would be bad advice. If you're going to start with something\nthat doesn't do much, you better improve it fast.What I find myself repeating is \"pump out features.\" And this rule\nisn't just for the initial stages. This is something all startups\nshould do for as long as they want to be considered startups.I don't mean, of course, that you should make your application ever\nmore complex. By \"feature\" I mean one unit of hacking-- one quantum\nof making users' lives better.As with exercise, improvements beget improvements. If you run every\nday, you'll probably feel like running tomorrow. But if you skip\nrunning for a couple weeks, it will be an effort to drag yourself\nout. So it is with hacking: the more ideas you implement, the more\nideas you'll have. You should make your system better at least in\nsome small way every day or two.This is not just a good way to get development done; it is also a\nform of marketing. Users love a site that's constantly improving.\nIn fact, users expect a site to improve. Imagine if you visited a\nsite that seemed very good, and then returned two months later and\nnot one thing had changed. Wouldn't it start to seem lame? \n[3]They'll like you even better when you improve in response to their\ncomments, because customers are used to companies ignoring them.\nIf you're the rare exception-- a company that actually listens--\nyou'll generate fanatical loyalty. You won't need to advertise,\nbecause your users will do it for you.This seems obvious too, so why do I have to keep repeating it? I\nthink the problem here is that people get used to how things are.\nOnce a product gets past the stage where it has glaring flaws, you\nstart to get used to it, and gradually whatever features it happens\nto have become its identity. For example, I doubt many people at\nYahoo (or Google for that matter) realized how much better web mail\ncould be till Paul Buchheit showed them.I think the solution is to assume that anything you've made is far\nshort of what it could be. Force yourself, as a sort of intellectual\nexercise, to keep thinking of improvements. Ok, sure, what you\nhave is perfect. But if you had to change something, what would\nit be?If your product seems finished, there are two possible explanations:\n(a) it is finished, or (b) you lack imagination. Experience suggests\n(b) is a thousand times more likely.\n3. Make Users Happy.Improving constantly is an instance of a more general rule: make\nusers happy. One thing all startups have in common is that they\ncan't force anyone to do anything. They can't force anyone to use\ntheir software, and they can't force anyone to do deals with them.\nA startup has to sing for its supper. That's why the successful\nones make great things. They have to, or die.When you're running a startup you feel like a little bit of debris\nblown about by powerful winds. 
The most powerful wind is users.\nThey can either catch you and loft you up into the sky, as they did\nwith Google, or leave you flat on the pavement, as they do with\nmost startups. Users are a fickle wind, but more powerful than any\nother. If they take you up, no competitor can keep you down.As a little piece of debris, the rational thing for you to do is\nnot to lie flat, but to curl yourself into a shape the wind will\ncatch.I like the wind metaphor because it reminds you how impersonal the\nstream of traffic is. The vast majority of people who visit your\nsite will be casual visitors. It's them you have to design your\nsite for. The people who really care will find what they want by\nthemselves.The median visitor will arrive with their finger poised on the Back\nbutton. Think about your own experience: most links you\nfollow lead to something lame. Anyone who has used the web for\nmore than a couple weeks has been trained to click on Back after\nfollowing a link. So your site has to say \"Wait! Don't click on\nBack. This site isn't lame. Look at this, for example.\"There are two things you have to do to make people pause. The most\nimportant is to explain, as concisely as possible, what the hell\nyour site is about. How often have you visited a site that seemed\nto assume you already knew what they did? For example, the corporate\nsite that says the\ncompany makes\n\n enterprise content management solutions for business that enable\n organizations to unify people, content and processes to minimize\n business risk, accelerate time-to-value and sustain lower total\n cost of ownership.\n\nAn established company may get away with such an opaque description,\nbut no startup can. A startup\nshould be able to explain in one or two sentences exactly what it\ndoes. \n[4]\nAnd not just to users. You need this for everyone:\ninvestors, acquirers, partners, reporters, potential employees, and\neven current employees. You probably shouldn't even start a company\nto do something that can't be described compellingly in one or two\nsentences.The other thing I repeat is to give people everything you've got,\nright away. If you have something impressive, try to put it on the\nfront page, because that's the only one most visitors will see.\nThough indeed there's a paradox here: the more you push the good\nstuff toward the front, the more likely visitors are to explore\nfurther. \n[5]In the best case these two suggestions get combined: you tell\nvisitors what your site is about by showing them. One of the\nstandard pieces of advice in fiction writing is \"show, don't tell.\"\nDon't say that a character's angry; have him grind his teeth, or\nbreak his pencil in half. Nothing will explain what your site does\nso well as using it.The industry term here is \"conversion.\" The job of your site is\nto convert casual visitors into users-- whatever your definition\nof a user is. You can measure this in your growth rate. Either\nyour site is catching on, or it isn't, and you must know which. If\nyou have decent growth, you'll win in the end, no matter how obscure\nyou are now. And if you don't, you need to fix something.\n4. Fear the Right Things.Another thing I find myself saying a lot is \"don't worry.\" Actually,\nit's more often \"don't worry about this; worry about that instead.\"\nStartups are right to be paranoid, but they sometimes fear the wrong\nthings.Most visible disasters are not so alarming as they seem. 
Disasters\nare normal in a startup: a founder quits, you discover a patent\nthat covers what you're doing, your servers keep crashing, you run\ninto an insoluble technical problem, you have to change your name,\na deal falls through-- these are all par for the course. They won't\nkill you unless you let them.Nor will most competitors. A lot of startups worry \"what if Google\nbuilds something like us?\" Actually big companies are not the ones\nyou have to worry about-- not even Google. The people at Google\nare smart, but no smarter than you; they're not as motivated, because\nGoogle is not going to go out of business if this one product fails;\nand even at Google they have a lot of bureaucracy to slow them down.What you should fear, as a startup, is not the established players,\nbut other startups you don't know exist yet. They're way more\ndangerous than Google because, like you, they're cornered animals.Looking just at existing competitors can give you a false sense of\nsecurity. You should compete against what someone else could be\ndoing, not just what you can see people doing. A corollary is that\nyou shouldn't relax just because you have no visible competitors\nyet. No matter what your idea, there's someone else out there\nworking on the same thing.That's the downside of it being easier to start a startup: more people\nare doing it. But I disagree with Caterina Fake when she says that\nmakes this a bad time to start a startup. More people are starting\nstartups, but not as many more as could. Most college graduates\nstill think they have to get a job. The average person can't ignore\nsomething that's been beaten into their head since they were three\njust because serving web pages recently got a lot cheaper.And in any case, competitors are not the biggest threat. Way more\nstartups hose themselves than get crushed by competitors. There\nare a lot of ways to do it, but the three main ones are internal\ndisputes, inertia, and ignoring users. Each is, by itself, enough\nto kill you. But if I had to pick the worst, it would be ignoring\nusers. If you want a recipe for a startup that's going to die,\nhere it is: a couple of founders who have some great idea they know\neveryone is going to love, and that's what they're going to build,\nno matter what.Almost everyone's initial plan is broken. If companies stuck to\ntheir initial plans, Microsoft would be selling programming languages,\nand Apple would be selling printed circuit boards. In both cases\ntheir customers told them what their business should be-- and they\nwere smart enough to listen.As Richard Feynman said, the imagination of nature is greater than\nthe imagination of man. You'll find more interesting things by\nlooking at the world than you could ever produce just by thinking.\nThis principle is very powerful. It's why the best abstract painting\nstill falls short of Leonardo, for example. And it applies to\nstartups too. No idea for a product could ever be so clever as the\nones you can discover by smashing a beam of prototypes into a beam\nof users.\n5. Commitment Is a Self-Fulfilling Prophecy.I now have enough experience with startups to be able to say what\nthe most important quality is in a startup founder, and it's not\nwhat you might think. The most important quality in a startup\nfounder is determination. Not intelligence-- determination.This is a little depressing. I'd like to believe Viaweb succeeded\nbecause we were smart, not merely determined. A lot of people in\nthe startup world want to believe that. 
Not just founders, but\ninvestors too. They like the idea of inhabiting a world ruled by\nintelligence. And you can tell they really believe this, because\nit affects their investment decisions.Time after time VCs invest in startups founded by eminent professors.\nThis may work in biotech, where a lot of startups simply commercialize\nexisting research, but in software you want to invest in students,\nnot professors. Microsoft, Yahoo, and Google were all founded by\npeople who dropped out of school to do it. What students lack in\nexperience they more than make up in dedication.Of course, if you want to get rich, it's not enough merely to be\ndetermined. You have to be smart too, right? I'd like to think\nso, but I've had an experience that convinced me otherwise: I spent\nseveral years living in New York.You can lose quite a lot in the brains department and it won't kill\nyou. But lose even a little bit in the commitment department, and\nthat will kill you very rapidly.Running a startup is like walking on your hands: it's possible, but\nit requires extraordinary effort. If an ordinary employee were\nasked to do the things a startup founder has to, he'd be very\nindignant. Imagine if you were hired at some big company, and in\naddition to writing software ten times faster than you'd ever had\nto before, they expected you to answer support calls, administer\nthe servers, design the web site, cold-call customers, find the\ncompany office space, and go out and get everyone lunch.And to do all this not in the calm, womb-like atmosphere of a big\ncompany, but against a backdrop of constant disasters. That's the\npart that really demands determination. In a startup, there's\nalways some disaster happening. So if you're the least bit inclined\nto find an excuse to quit, there's always one right there.But if you lack commitment, chances are it will have been hurting\nyou long before you actually quit. Everyone who deals with startups\nknows how important commitment is, so if they sense you're ambivalent,\nthey won't give you much attention. If you lack commitment, you'll\njust find that for some mysterious reason good things happen to\nyour competitors but not to you. If you lack commitment, it will\nseem to you that you're unlucky.Whereas if you're determined to stick around, people will pay\nattention to you, because odds are they'll have to deal with you\nlater. You're a local, not just a tourist, so everyone has to come\nto terms with you.At Y Combinator we sometimes mistakenly fund teams who have the\nattitude that they're going to give this startup thing a shot for\nthree months, and if something great happens, they'll stick with\nit-- \"something great\" meaning either that someone wants to buy\nthem or invest millions of dollars in them. But if this is your\nattitude, \"something great\" is very unlikely to happen to you,\nbecause both acquirers and investors judge you by your level of\ncommitment.If an acquirer thinks you're going to stick around no matter what,\nthey'll be more likely to buy you, because if they don't and you\nstick around, you'll probably grow, your price will go up, and\nthey'll be left wishing they'd bought you earlier. Ditto for\ninvestors. What really motivates investors, even big VCs, is not\nthe hope of good returns, but the fear of missing out. \n[6]\nSo if\nyou make it clear you're going to succeed no matter what, and the only\nreason you need them is to make it happen a little faster, you're\nmuch more likely to get money.You can't fake this. 
The only way to convince everyone that you're\nready to fight to the death is actually to be ready to.You have to be the right kind of determined, though. I carefully\nchose the word determined rather than stubborn, because stubbornness\nis a disastrous quality in a startup. You have to be determined,\nbut flexible, like a running back. A successful running back doesn't\njust put his head down and try to run through people. He improvises:\nif someone appears in front of him, he runs around them; if someone\ntries to grab him, he spins out of their grip; he'll even run in\nthe wrong direction briefly if that will help. The one thing he'll\nnever do is stand still. \n[7]\n6. There Is Always Room.I was talking recently to a startup founder about whether it might\nbe good to add a social component to their software. He said he\ndidn't think so, because the whole social thing was tapped out.\nReally? So in a hundred years the only social networking sites\nwill be the Facebook, MySpace, Flickr, and Del.icio.us? Not likely.There is always room for new stuff. At every point in history,\neven the darkest bits of the dark ages, people were discovering\nthings that made everyone say \"why didn't anyone think of that\nbefore?\" We know this continued to be true up till 2004, when the\nFacebook was founded-- though strictly speaking someone else did\nthink of that.The reason we don't see the opportunities all around us is that we\nadjust to however things are, and assume that's how things have to\nbe. For example, it would seem crazy to most people to try to make\na better search engine than Google. Surely that field, at least,\nis tapped out. Really? In a hundred years-- or even twenty-- are\npeople still going to search for information using something like\nthe current Google? Even Google probably doesn't think that.In particular, I don't think there's any limit to the number of\nstartups. Sometimes you hear people saying \"All these guys starting\nstartups now are going to be disappointed. How many little startups\nare Google and Yahoo going to buy, after all?\" That sounds cleverly\nskeptical, but I can prove it's mistaken. No one proposes that\nthere's some limit to the number of people who can be employed in\nan economy consisting of big, slow-moving companies with a couple\nthousand people each. Why should there be any limit to the number\nwho could be employed by small, fast-moving companies with ten each?\nIt seems to me the only limit would be the number of people who\nwant to work that hard.The limit on the number of startups is not the number that can get\nacquired by Google and Yahoo-- though it seems even that should\nbe unlimited, if the startups were actually worth buying-- but the\namount of wealth that can be created. And I don't think there's\nany limit on that, except cosmological ones.So for all practical purposes, there is no limit to the number of\nstartups. Startups make wealth, which means they make things people\nwant, and if there's a limit on the number of things people want,\nwe are nowhere near it. I still don't even have a flying car.\n7. Don't Get Your Hopes Up.This is another one I've been repeating since long before Y Combinator.\nIt was practically the corporate motto at Viaweb.Startup founders are naturally optimistic. They wouldn't do it\notherwise. But you should treat your optimism the way you'd treat\nthe core of a nuclear reactor: as a source of power that's also\nvery dangerous. 
You have to build a shield around it, or it will\nfry you.The shielding of a reactor is not uniform; the reactor would be\nuseless if it were. It's pierced in a few places to let pipes in.\nAn optimism shield has to be pierced too. I think the place to\ndraw the line is between what you expect of yourself, and what you\nexpect of other people. It's ok to be optimistic about what you\ncan do, but assume the worst about machines and other people.This is particularly necessary in a startup, because you tend to\nbe pushing the limits of whatever you're doing. So things don't\nhappen in the smooth, predictable way they do in the rest of the\nworld. Things change suddenly, and usually for the worse.Shielding your optimism is nowhere more important than with deals.\nIf your startup is doing a deal, just assume it's not going to\nhappen. The VCs who say they're going to invest in you aren't.\nThe company that says they're going to buy you isn't. The big\ncustomer who wants to use your system in their whole company won't.\nThen if things work out you can be pleasantly surprised.The reason I warn startups not to get their hopes up is not to save\nthem from being disappointed when things fall through. It's\nfor a more practical reason: to prevent them from leaning their\ncompany against something that's going to fall over, taking them\nwith it.For example, if someone says they want to invest in you, there's a\nnatural tendency to stop looking for other investors. That's why\npeople proposing deals seem so positive: they want you to\nstop looking. And you want to stop too, because doing deals is a\npain. Raising money, in particular, is a huge time sink. So you\nhave to consciously force yourself to keep looking.Even if you ultimately do the first deal, it will be to your advantage\nto have kept looking, because you'll get better terms. Deals are\ndynamic; unless you're negotiating with someone unusually honest,\nthere's not a single point where you shake hands and the deal's\ndone. There are usually a lot of subsidiary questions to be cleared\nup after the handshake, and if the other side senses weakness-- if\nthey sense you need this deal-- they will be very tempted to screw\nyou in the details.VCs and corp dev guys are professional negotiators. They're trained\nto take advantage of weakness. \n[8]\nSo while they're often nice\nguys, they just can't help it. And as pros they do this more than\nyou. So don't even try to bluff them. The only way a startup can\nhave any leverage in a deal is genuinely not to need it. And if\nyou don't believe in a deal, you'll be less likely to depend on it.So I want to plant a hypnotic suggestion in your heads: when you\nhear someone say the words \"we want to invest in you\" or \"we want\nto acquire you,\" I want the following phrase to appear automatically\nin your head: don't get your hopes up. Just continue running\nyour company as if this deal didn't exist. Nothing is more likely\nto make it close.The way to succeed in a startup is to focus on the goal of getting\nlots of users, and keep walking swiftly toward it while investors\nand acquirers scurry alongside trying to wave money in your face.\nSpeed, not MoneyThe way I've described it, starting a startup sounds pretty stressful.\nIt is. When I talk to the founders of the companies we've funded,\nthey all say the same thing: I knew it would be hard, but I didn't\nrealize it would be this hard.So why do it? 
It would be worth enduring a lot of pain and stress\nto do something grand or heroic, but just to make money? Is making\nmoney really that important?No, not really. It seems ridiculous to me when people take business\ntoo seriously. I regard making money as a boring errand to be got\nout of the way as soon as possible. There is nothing grand or\nheroic about starting a startup per se.So why do I spend so much time thinking about startups? I'll tell\nyou why. Economically, a startup is best seen not as a way to get\nrich, but as a way to work faster. You have to make a living, and\na startup is a way to get that done quickly, instead of letting it\ndrag on through your whole life.\n[9]We take it for granted most of the time, but human life is fairly\nmiraculous. It is also palpably short. You're given this marvellous\nthing, and then poof, it's taken away. You can see why people\ninvent gods to explain it. But even to people who don't believe\nin gods, life commands respect. There are times in most of our\nlives when the days go by in a blur, and almost everyone has a\nsense, when this happens, of wasting something precious. As Ben\nFranklin said, if you love life, don't waste time, because time is\nwhat life is made of.So no, there's nothing particularly grand about making money. That's\nnot what makes startups worth the trouble. What's important about\nstartups is the speed. By compressing the dull but necessary task\nof making a living into the smallest possible time, you show respect\nfor life, and there is something grand about that.Notes[1]\nStartups can die from releasing something full of bugs, and not\nfixing them fast enough, but I don't know of any that died from\nreleasing something stable but minimal very early, then promptly\nimproving it.[2]\nI know this is why I haven't released Arc. The moment I do,\nI'll have people nagging me for features.[3]\nA web site is different from a book or movie or desktop application\nin this respect. Users judge a site not as a single snapshot, but\nas an animation with multiple frames. Of the two, I'd say the rate of\nimprovement is more important to users than where you currently\nare.[4]\nIt should not always tell this to users, however. For example,\nMySpace is basically a replacement mall for mallrats. But it was\nwiser for them, initially, to pretend that the site was about bands.[5]\nSimilarly, don't make users register to try your site. Maybe\nwhat you have is so valuable that visitors should gladly register\nto get at it. But they've been trained to expect the opposite.\nMost of the things they've tried on the web have sucked-- and\nprobably especially those that made them register.[6]\nVCs have rational reasons for behaving this way. They don't\nmake their money (if they make money) off their median investments.\nIn a typical fund, half the companies fail, most of the rest generate\nmediocre returns, and one or two \"make the fund\" by succeeding\nspectacularly. 
So if they miss just a few of the most promising\nopportunities, it could hose the whole fund.[7]\nThe attitude of a running back doesn't translate to soccer.\nThough it looks great when a forward dribbles past multiple defenders,\na player who persists in trying such things will do worse in the\nlong term than one who passes.[8]\nThe reason Y Combinator never negotiates valuations\nis that we're not professional negotiators, and don't want to turn\ninto them.[9]\nThere are two ways to do \nwork you love: (a) to make money, then work\non what you love, or (b) to get a job where you get paid to work on\nstuff you love. In practice the first phases of both\nconsist mostly of unedifying schleps, and in (b) the second phase is less\nsecure.Thanks to Sam Altman, Trevor Blackwell, Beau Hartshorne, Jessica \nLivingston, and Robert Morris for reading drafts of this."} {"title": "gba", "text": "April 2004To the popular press, \"hacker\" means someone who breaks\ninto computers. Among programmers it means a good programmer.\nBut the two meanings are connected. To programmers,\n\"hacker\" connotes mastery in the most literal sense: someone\nwho can make a computer do what he wants\u2014whether the computer\nwants to or not.To add to the confusion, the noun \"hack\" also has two senses. It can\nbe either a compliment or an insult. It's called a hack when\nyou do something in an ugly way. But when you do something\nso clever that you somehow beat the system, that's also\ncalled a hack. The word is used more often in the former than\nthe latter sense, probably because ugly solutions are more\ncommon than brilliant ones.Believe it or not, the two senses of \"hack\" are also\nconnected. Ugly and imaginative solutions have something in\ncommon: they both break the rules. And there is a gradual\ncontinuum between rule breaking that's merely ugly (using\nduct tape to attach something to your bike) and rule breaking\nthat is brilliantly imaginative (discarding Euclidean space).Hacking predates computers. When he\nwas working on the Manhattan Project, Richard Feynman used to\namuse himself by breaking into safes containing secret documents.\nThis tradition continues today.\nWhen we were in grad school, a hacker friend of mine who spent too much\ntime around MIT had\nhis own lock picking kit.\n(He now runs a hedge fund, a not unrelated enterprise.)It is sometimes hard to explain to authorities why one would\nwant to do such things.\nAnother friend of mine once got in trouble with the government for\nbreaking into computers. This had only recently been declared\na crime, and the FBI found that their usual investigative\ntechnique didn't work. Police investigation apparently begins with\na motive. The usual motives are few: drugs, money, sex,\nrevenge. Intellectual curiosity was not one of the motives on\nthe FBI's list. Indeed, the whole concept seemed foreign to\nthem.Those in authority tend to be annoyed by hackers'\ngeneral attitude of disobedience. But that disobedience is\na byproduct of the qualities that make them good programmers.\nThey may laugh at the CEO when he talks in generic corporate\nnewspeech, but they also laugh at someone who tells them\na certain problem can't be solved.\nSuppress one, and you suppress the other.This attitude is sometimes affected. 
Sometimes young programmers\nnotice the eccentricities of eminent hackers and decide to\nadopt some of their own in order to seem smarter.\nThe fake version is not merely\nannoying; the prickly attitude of these posers\ncan actually slow the process of innovation.But even factoring in their annoying eccentricities,\nthe disobedient attitude of hackers is a net win. I wish its\nadvantages were better understood.For example, I suspect people in Hollywood are\nsimply mystified by\nhackers' attitudes toward copyrights. They are a perennial\ntopic of heated discussion on Slashdot.\nBut why should people who program computers\nbe so concerned about copyrights, of all things?Partly because some companies use mechanisms to prevent\ncopying. Show any hacker a lock and his first thought is\nhow to pick it. But there is a deeper reason that\nhackers are alarmed by measures like copyrights and patents.\nThey see increasingly aggressive measures to protect\n\"intellectual property\"\nas a threat to the intellectual\nfreedom they need to do their job.\nAnd they are right.It is by poking about inside current technology that\nhackers get ideas for the next generation. No thanks,\nintellectual homeowners may say, we don't need any\noutside help. But they're wrong.\nThe next generation of computer technology has\noften\u2014perhaps more often than not\u2014been developed by outsiders.In 1977 there was no doubt some group within IBM developing\nwhat they expected to be\nthe next generation of business computer. They were mistaken.\nThe next generation of business computer was\nbeing developed on entirely different lines by two long-haired\nguys called Steve in a garage in Los Altos. At about the\nsame time, the powers that be\nwere cooperating to develop the\nofficial next generation operating system, Multics.\nBut two guys who thought Multics excessively complex went off\nand wrote their own. They gave it a name that\nwas a joking reference to Multics: Unix.The latest intellectual property laws impose\nunprecedented restrictions on the sort of poking around that\nleads to new ideas. In the past, a competitor might use patents\nto prevent you from selling a copy of something they\nmade, but they couldn't prevent you from\ntaking one apart to see how it worked. The latest\nlaws make this a crime. How are we\nto develop new technology if we can't study current\ntechnology to figure out how to improve it?Ironically, hackers have brought this on themselves.\nComputers are responsible for the problem. The control systems\ninside machines used to be physical: gears and levers and cams.\nIncreasingly, the brains (and thus the value) of products is\nin software. And by this I mean software in the general sense:\ni.e. data. A song on an LP is physically stamped into the\nplastic. A song on an iPod's disk is merely stored on it.Data is by definition easy to copy. And the Internet\nmakes copies easy to distribute. So it is no wonder\ncompanies are afraid. But, as so often happens, fear has\nclouded their judgement. The government has responded\nwith draconian laws to protect intellectual property.\nThey probably mean well. But\nthey may not realize that such laws will do more harm\nthan good.Why are programmers so violently opposed to these laws?\nIf I were a legislator, I'd be interested in this\nmystery\u2014for the same reason that, if I were a farmer and suddenly\nheard a lot of squawking coming from my hen house one night,\nI'd want to go out and investigate. 
Hackers are not stupid,\nand unanimity is very rare in this world.\nSo if they're all squawking, \nperhaps there is something amiss.Could it be that such laws, though intended to protect America,\nwill actually harm it? Think about it. There is something\nvery American about Feynman breaking into safes during\nthe Manhattan Project. It's hard to imagine the authorities\nhaving a sense of humor about such things over\nin Germany at that time. Maybe it's not a coincidence.Hackers are unruly. That is the essence of hacking. And it\nis also the essence of Americanness. It is no accident\nthat Silicon Valley\nis in America, and not France, or Germany,\nor England, or Japan. In those countries, people color inside\nthe lines.I lived for a while in Florence. But after I'd been there\na few months I realized that what I'd been unconsciously hoping\nto find there was back in the place I'd just left.\nThe reason Florence is famous is that in 1450, it was New York.\nIn 1450 it was filled with the kind of turbulent and ambitious\npeople you find now in America. (So I went back to America.)It is greatly to America's advantage that it is\na congenial atmosphere for the right sort of unruliness\u2014that\nit is a home not just for the smart, but for smart-alecks.\nAnd hackers are invariably smart-alecks. If we had a national\nholiday, it would be April 1st. It says a great deal about\nour work that we use the same word for a brilliant or a\nhorribly cheesy solution. When we cook one up we're not\nalways 100% sure which kind it is. But as long as it has\nthe right sort of wrongness, that's a promising sign.\nIt's odd that people\nthink of programming as precise and methodical. Computers\nare precise and methodical. Hacking is something you do\nwith a gleeful laugh.In our world some of the most characteristic solutions\nare not far removed from practical\njokes. IBM was no doubt rather surprised by the consequences\nof the licensing deal for DOS, just as the hypothetical\n\"adversary\" must be when Michael Rabin solves a problem by\nredefining it as one that's easier to solve.Smart-alecks have to develop a keen sense of how much they\ncan get away with. And lately hackers \nhave sensed a change\nin the atmosphere.\nLately hackerliness seems rather frowned upon.To hackers the recent contraction in civil liberties seems\nespecially ominous. That must also mystify outsiders. \nWhy should we care especially about civil\nliberties? Why programmers, more than\ndentists or salesmen or landscapers?Let me put the case in terms a government official would appreciate.\nCivil liberties are not just an ornament, or a quaint\nAmerican tradition. Civil liberties make countries rich.\nIf you made a graph of\nGNP per capita vs. civil liberties, you'd notice a definite\ntrend. Could civil liberties really be a cause, rather\nthan just an effect? I think so. I think a society in which\npeople can do and say what they want will also tend to\nbe one in which the most efficient solutions win, rather than\nthose sponsored by the most influential people.\nAuthoritarian countries become corrupt;\ncorrupt countries become poor; and poor countries are weak. \nIt seems to me there is\na Laffer curve for government power, just as for\ntax revenues. At least, it seems likely enough that it\nwould be stupid to try the experiment and find out. Unlike\nhigh tax rates, you can't repeal totalitarianism if it\nturns out to be a mistake.This is why hackers worry. 
The government spying on people doesn't\nliterally make programmers write worse code. It just leads\neventually to a world in which bad ideas win. And because\nthis is so important to hackers, they're especially sensitive\nto it. They can sense totalitarianism approaching from a\ndistance, as animals can sense an approaching \nthunderstorm.It would be ironic if, as hackers fear, recent measures\nintended to protect national security and intellectual property\nturned out to be a missile aimed right at what makes \nAmerica successful. But it would not be the first time that\nmeasures taken in an atmosphere of panic had\nthe opposite of the intended effect.There is such a thing as Americanness.\nThere's nothing like living abroad to teach you that. \nAnd if you want to know whether something will nurture or squash\nthis quality, it would be hard to find a better focus\ngroup than hackers, because they come closest of any group\nI know to embodying it. Closer, probably, than\nthe men running our government,\nwho for all their talk of patriotism\nremind me more of Richelieu or Mazarin\nthan Thomas Jefferson or George Washington.When you read what the founding fathers had to say for\nthemselves, they sound more like hackers.\n\"The spirit of resistance to government,\"\nJefferson wrote, \"is so valuable on certain occasions, that I wish\nit always to be kept alive.\"Imagine an American president saying that today.\nLike the remarks of an outspoken old grandmother, the sayings of\nthe founding fathers have embarrassed generations of\ntheir less confident successors. They remind us where we come from.\nThey remind us that it is the people who break rules that are\nthe source of America's wealth and power.Those in a position to impose rules naturally want them to be\nobeyed. But be careful what you ask for. You might get it.Thanks to Ken Anderson, Trevor Blackwell, Daniel Giffin, \nSarah Harlin, Shiro Kawai, Jessica Livingston, Matz, \nJackie McDonough, Robert Morris, Eric Raymond, Guido van Rossum,\nDavid Weinberger, and\nStephen Wolfram for reading drafts of this essay.\n(The image shows Steves Jobs and Wozniak \nwith a \"blue box.\"\nPhoto by Margret Wozniak. Reproduced by permission of Steve\nWozniak.)"} {"title": "island", "text": "July 2006I've discovered a handy test for figuring out what you're addicted\nto. Imagine you were going to spend the weekend at a friend's house\non a little island off the coast of Maine. There are no shops on\nthe island and you won't be able to leave while you're there. Also,\nyou've never been to this house before, so you can't assume it will\nhave more than any house might.What, besides clothes and toiletries, do you make a point of packing?\nThat's what you're addicted to. For example, if you find yourself\npacking a bottle of vodka (just in case), you may want to stop and\nthink about that.For me the list is four things: books, earplugs, a notebook, and a\npen.There are other things I might bring if I thought of it, like music,\nor tea, but I can live without them. I'm not so addicted to caffeine\nthat I wouldn't risk the house not having any tea, just for a\nweekend.Quiet is another matter. I realize it seems a bit eccentric to\ntake earplugs on a trip to an island off the coast of Maine. If\nanywhere should be quiet, that should. But what if the person in\nthe next room snored? What if there was a kid playing basketball?\n(Thump, thump, thump... thump.) Why risk it? Earplugs are small.Sometimes I can think with noise. 
If I already have momentum on\nsome project, I can work in noisy places. I can edit an essay or\ndebug code in an airport. But airports are not so bad: most of the\nnoise is whitish. I couldn't work with the sound of a sitcom coming\nthrough the wall, or a car in the street playing thump-thump music.And of course there's another kind of thinking, when you're starting\nsomething new, that requires complete quiet. You never\nknow when this will strike. It's just as well to carry plugs.The notebook and pen are professional equipment, as it were. Though\nactually there is something druglike about them, in the sense that\ntheir main purpose is to make me feel better. I hardly ever go\nback and read stuff I write down in notebooks. It's just that if\nI can't write things down, worrying about remembering one idea gets\nin the way of having the next. Pen and paper wick ideas.The best notebooks I've found are made by a company called Miquelrius.\nI use their smallest size, which is about 2.5 x 4 in.\nThe secret to writing on such\nnarrow pages is to break words only when you run out of space, like\na Latin inscription. I use the cheapest plastic Bic ballpoints,\npartly because their gluey ink doesn't seep through pages, and\npartly so I don't worry about losing them.I only started carrying a notebook about three years ago. Before\nthat I used whatever scraps of paper I could find. But the problem\nwith scraps of paper is that they're not ordered. In a notebook\nyou can guess what a scribble means by looking at the pages\naround it. In the scrap era I was constantly finding notes I'd\nwritten years before that might say something I needed to remember,\nif I could only figure out what.As for books, I know the house would probably have something to\nread. On the average trip I bring four books and only read one of\nthem, because I find new books to read en route. Really bringing\nbooks is insurance.I realize this dependence on books is not entirely good\u2014that what\nI need them for is distraction. The books I bring on trips are\noften quite virtuous, the sort of stuff that might be assigned\nreading in a college class. But I know my motives aren't virtuous.\nI bring books because if the world gets boring I need to be able\nto slip into another distilled by some writer. It's like eating\njam when you know you should be eating fruit.There is a point where I'll do without books. I was walking in\nsome steep mountains once, and decided I'd rather just think, if I\nwas bored, rather than carry a single unnecessary ounce. It wasn't\nso bad. I found I could entertain myself by having ideas instead\nof reading other people's. If you stop eating jam, fruit starts\nto taste better.So maybe I'll try not bringing books on some future trip. They're\ngoing to have to pry the plugs out of my cold, dead ears, however."} {"title": "vcsqueeze", "text": "November 2005In the next few years, venture capital funds will find themselves\nsqueezed from four directions. They're already stuck with a seller's\nmarket, because of the huge amounts they raised at the end of the\nBubble and still haven't invested. This by itself is not the end\nof the world. In fact, it's just a more extreme version of the\nnorm\nin the VC business: too much money chasing too few deals.Unfortunately, those few deals now want less and less money, because\nit's getting so cheap to start a startup. 
The four causes: open\nsource, which makes software free; Moore's law, which makes hardware\ngeometrically closer to free; the Web, which makes promotion free\nif you're good; and better languages, which make development a lot\ncheaper.When we started our startup in 1995, the first three were our biggest\nexpenses. We had to pay $5000 for the Netscape Commerce Server,\nthe only software that then supported secure http connections. We\npaid $3000 for a server with a 90 MHz processor and 32 meg of\nmemory. And we paid a PR firm about $30,000 to promote our launch.Now you could get all three for nothing. You can get the software\nfor free; people throw away computers more powerful than our first\nserver; and if you make something good you can generate ten times\nas much traffic by word of mouth online as our first PR firm got\nthrough the print media.And of course another big change for the average startup is that\nprogramming languages have improved\u2014or rather, the median language has. At most startups ten years\nago, software development meant ten programmers writing code in\nC++. Now the same work might be done by one or two using Python\nor Ruby.During the Bubble, a lot of people predicted that startups would\noutsource their development to India. I think a better model for\nthe future is David Heinemeier Hansson, who outsourced his development\nto a more powerful language instead. A lot of well-known applications,\nlike BaseCamp, are now written by just one programmer. And one\nguy is more than 10x cheaper than ten, because (a) he won't waste\nany time in meetings, and (b) since he's probably a founder, he can\npay himself nothing. 
Most of what the VCs add, acquirers don't\nwant anyway. The acquirers already have brand recognition and HR\ndepartments. What they really want is the software and the developers,\nand that's what the startup is in the early phase: concentrated\nsoftware and developers.Google, typically, seems to have been the first to figure this out.\n\"Bring us your startups early,\" said Google's speaker at the Startup School. They're quite\nexplicit about it: they like to acquire startups at just the point\nwhere they would do a Series A round. (The Series A round is the\nfirst round of real VC funding; it usually happens in the first\nyear.) It is a brilliant strategy, and one that other big technology\ncompanies will no doubt try to duplicate. Unless they want to have \nstill more of their lunch eaten by Google.Of course, Google has an advantage in buying startups: a lot of the\npeople there are rich, or expect to be when their options vest.\nOrdinary employees find it very hard to recommend an acquisition;\nit's just too annoying to see a bunch of twenty year olds get rich\nwhen you're still working for salary. Even if it's the right thing \nfor your company to do.The Solution(s)Bad as things look now, there is a way for VCs to save themselves.\nThey need to do two things, one of which won't surprise them, and \nanother that will seem anathema.Let's start with the obvious one: lobby to get Sarbanes-Oxley \nloosened. This law was created to prevent future Enrons, not to\ndestroy the IPO market. Since the IPO market was practically dead\nwhen it passed, few saw what bad effects it would have. But now \nthat technology has recovered from the last bust, we can see clearly\nwhat a bottleneck Sarbanes-Oxley has become.Startups are fragile plants\u2014seedlings, in fact. These seedlings\nare worth protecting, because they grow into the trees of the\neconomy. Much of the economy's growth is their growth. I think\nmost politicians realize that. But they don't realize just how \nfragile startups are, and how easily they can become collateral\ndamage of laws meant to fix some other problem.Still more dangerously, when you destroy startups, they make very\nlittle noise. If you step on the toes of the coal industry, you'll\nhear about it. But if you inadvertently squash the startup industry,\nall that happens is that the founders of the next Google stay in \ngrad school instead of starting a company.My second suggestion will seem shocking to VCs: let founders cash \nout partially in the Series A round. At the moment, when VCs invest\nin a startup, all the stock they get is newly issued and all the \nmoney goes to the company. They could buy some stock directly from\nthe founders as well.Most VCs have an almost religious rule against doing this. They\ndon't want founders to get a penny till the company is sold or goes\npublic. VCs are obsessed with control, and they worry that they'll\nhave less leverage over the founders if the founders have any money.This is a dumb plan. In fact, letting the founders sell a little stock\nearly would generally be better for the company, because it would\ncause the founders' attitudes toward risk to be aligned with the\nVCs'. 
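A back-of-the-envelope sketch of that alignment claim, in Python. To be clear, this is my illustration, not the essay's: the log-utility model for the founder and the risk-neutral model for the VC are standard textbook assumptions, and the dollar figures simply anticipate the example in the next sentence.

    import math

    # Assumed model of a founder with essentially no wealth: utility is
    # roughly logarithmic, so the first million matters far more than the
    # next nine. The small base wealth just keeps log(0) out of the picture.
    def founder_utility(wealth, base=10_000):
        return math.log(base + wealth)

    sure_thing = founder_utility(1_000_000)
    gamble = 0.2 * founder_utility(10_000_000) + 0.8 * founder_utility(0)
    print(sure_thing > gamble)  # True: the founder takes the certain $1M

    # Assumed model of a VC: a fund spread over many such bets is close
    # to risk-neutral on any one of them, so expected value decides.
    print(0.2 * 10_000_000 > 1.0 * 1_000_000)  # True: the VC takes the gamble

Selling a little stock early moves the founder off the steep part of that curve, which is what aligns the two attitudes. With that picture in mind, the mismatch is easy to state.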
As things currently work, their attitudes toward risk tend\nto be diametrically opposed: the founders, who have nothing, would\nprefer a 100% chance of $1 million to a 20% chance of $10 million,\nwhile the VCs can afford to be \"rational\" and prefer the latter.Whatever they say, the reason founders are selling their companies\nearly instead of doing Series A rounds is that they get paid up\nfront. That first million is just worth so much more than the\nsubsequent ones. If founders could sell a little stock early,\nthey'd be happy to take VC money and bet the rest on a bigger\noutcome.So why not let the founders have that first million, or at least\na half million? The VCs would get the same number of shares for the \nmoney. So what if some of the money would go to the \nfounders instead of the company?Some VCs will say this is\nunthinkable\u2014that they want all their money to be put to work\ngrowing the company. But the fact is, the huge size of current VC\ninvestments is dictated by the structure\nof VC funds, not the needs of startups. Often as not these large \ninvestments go to work destroying the company rather than growing\nit.The angel investors who funded our startup let the founders sell\nsome stock directly to them, and it was a good deal for everyone. \nThe angels made a huge return on that investment, so they're happy.\nAnd for us founders it blunted the terrifying all-or-nothingness\nof a startup, which in its raw form is more a distraction than a\nmotivator.If VCs are frightened at the idea of letting founders partially\ncash out, let me tell them something still more frightening: you\nare now competing directly with Google.\nThanks to Trevor Blackwell, Sarah Harlin, Jessica\nLivingston, and Robert Morris for reading drafts of this."} {"title": "wisdom", "text": "February 2007A few days ago I finally figured out something I've wondered about\nfor 25 years: the relationship between wisdom and intelligence.\nAnyone can see they're not the same by the number of people who are\nsmart, but not very wise. And yet intelligence and wisdom do seem\nrelated. How?What is wisdom? I'd say it's knowing what to do in a lot of\nsituations. I'm not trying to make a deep point here about the\ntrue nature of wisdom, just to figure out how we use the word. A\nwise person is someone who usually knows the right thing to do.And yet isn't being smart also knowing what to do in certain\nsituations? For example, knowing what to do when the teacher tells\nyour elementary school class to add all the numbers from 1 to 100?\n[1]Some say wisdom and intelligence apply to different types of\nproblems\u2014wisdom to human problems and intelligence to abstract\nones. But that isn't true. Some wisdom has nothing to do with\npeople: for example, the wisdom of the engineer who knows certain\nstructures are less prone to failure than others. And certainly\nsmart people can find clever solutions to human problems as well\nas abstract ones. \n[2]Another popular explanation is that wisdom comes from experience\nwhile intelligence is innate. But people are not simply wise in\nproportion to how much experience they have. Other things must\ncontribute to wisdom besides experience, and some may be innate: a\nreflective disposition, for example.Neither of the conventional explanations of the difference between\nwisdom and intelligence stands up to scrutiny. So what is the\ndifference? 
If we look at how people use the words \"wise\" and\n\"smart,\" what they seem to mean is different shapes of performance.Curve\"Wise\" and \"smart\" are both ways of saying someone knows what to\ndo. The difference is that \"wise\" means one has a high average\noutcome across all situations, and \"smart\" means one does spectacularly\nwell in a few. That is, if you had a graph in which the x axis\nrepresented situations and the y axis the outcome, the graph of the\nwise person would be high overall, and the graph of the smart person\nwould have high peaks.The distinction is similar to the rule that one should judge talent\nat its best and character at its worst. Except you judge intelligence\nat its best, and wisdom by its average. That's how the two are\nrelated: they're the two different senses in which the same curve\ncan be high.So a wise person knows what to do in most situations, while a smart\nperson knows what to do in situations where few others could. We\nneed to add one more qualification: we should ignore cases where\nsomeone knows what to do because they have inside information. \n[3]\nBut aside from that, I don't think we can get much more specific\nwithout starting to be mistaken.Nor do we need to. Simple as it is, this explanation predicts, or\nat least accords with, both of the conventional stories about the\ndistinction between wisdom and intelligence. Human problems are\nthe most common type, so being good at solving those is key in\nachieving a high average outcome. And it seems natural that a\nhigh average outcome depends mostly on experience, but that dramatic\npeaks can only be achieved by people with certain rare, innate\nqualities; nearly anyone can learn to be a good swimmer, but to be\nan Olympic swimmer you need a certain body type.This explanation also suggests why wisdom is such an elusive concept:\nthere's no such thing. \"Wise\" means something\u2014that one is\non average good at making the right choice. But giving the name\n\"wisdom\" to the supposed quality that enables one to do that doesn't\nmean such a thing exists. To the extent \"wisdom\" means anything,\nit refers to a grab-bag of qualities as various as self-discipline,\nexperience, and empathy. \n[4]Likewise, though \"intelligent\" means something, we're asking for\ntrouble if we insist on looking for a single thing called \"intelligence.\"\nAnd whatever its components, they're not all innate. We use the\nword \"intelligent\" as an indication of ability: a smart person can\ngrasp things few others could. It does seem likely there's some\ninborn predisposition to intelligence (and wisdom too), but this\npredisposition is not itself intelligence.One reason we tend to think of intelligence as inborn is that people\ntrying to measure it have concentrated on the aspects of it that\nare most measurable. A quality that's inborn will obviously be\nmore convenient to work with than one that's influenced by experience,\nand thus might vary in the course of a study. The problem comes\nwhen we drag the word \"intelligence\" over onto what they're measuring.\nIf they're measuring something inborn, they can't be measuring\nintelligence. Three year olds aren't smart. When we describe one\nas smart, it's shorthand for \"smarter than other three year olds.\"SplitPerhaps it's a technicality to point out that a predisposition to\nintelligence is not the same as intelligence. 
But it's an important\ntechnicality, because it reminds us that we can become smarter,\njust as we can become wiser.The alarming thing is that we may have to choose between the two.If wisdom and intelligence are the average and peaks of the same\ncurve, then they converge as the number of points on the curve\ndecreases. If there's just one point, they're identical: the average\nand maximum are the same. But as the number of points increases,\nwisdom and intelligence diverge. And historically the number of\npoints on the curve seems to have been increasing: our ability is\ntested in an ever wider range of situations.In the time of Confucius and Socrates, people seem to have regarded\nwisdom, learning, and intelligence as more closely related than we\ndo. Distinguishing between \"wise\" and \"smart\" is a modern habit.\n[5]\nAnd the reason we do is that they've been diverging. As knowledge\ngets more specialized, there are more points on the curve, and the\ndistinction between the spikes and the average becomes sharper,\nlike a digital image rendered with more pixels.One consequence is that some old recipes may have become obsolete.\nAt the very least we have to go back and figure out if they were\nreally recipes for wisdom or intelligence. But the really striking\nchange, as intelligence and wisdom drift apart, is that we may have\nto decide which we prefer. We may not be able to optimize for both\nsimultaneously.Society seems to have voted for intelligence. We no longer admire\nthe sage\u2014not the way people did two thousand years ago. Now\nwe admire the genius. Because in fact the distinction we began\nwith has a rather brutal converse: just as you can be smart without\nbeing very wise, you can be wise without being very smart. That\ndoesn't sound especially admirable. That gets you James Bond, who\nknows what to do in a lot of situations, but has to rely on Q for\nthe ones involving math.Intelligence and wisdom are obviously not mutually exclusive. In\nfact, a high average may help support high peaks. But there are\nreasons to believe that at some point you have to choose between\nthem. One is the example of very smart people, who are so often\nunwise that in popular culture this now seems to be regarded as the\nrule rather than the exception. Perhaps the absent-minded professor\nis wise in his way, or wiser than he seems, but he's not wise in\nthe way Confucius or Socrates wanted people to be. \n[6]NewFor both Confucius and Socrates, wisdom, virtue, and happiness were\nnecessarily related. The wise man was someone who knew what the\nright choice was and always made it; to be the right choice, it had\nto be morally right; he was therefore always happy, knowing he'd\ndone the best he could. I can't think of many ancient philosophers\nwho would have disagreed with that, so far as it goes.\"The superior man is always happy; the small man sad,\" said Confucius.\n[7]Whereas a few years ago I read an interview with a mathematician\nwho said that most nights he went to bed discontented, feeling he\nhadn't made enough progress. \n[8]\nThe Chinese and Greek words we\ntranslate as \"happy\" didn't mean exactly what we do by it, but\nthere's enough overlap that this remark contradicts them.Is the mathematician a small man because he's discontented? No;\nhe's just doing a kind of work that wasn't very common in Confucius's\nday.Human knowledge seems to grow fractally. 
Time after time, something\nthat seemed a small and uninteresting area\u2014experimental error,\neven\u2014turns out, when examined up close, to have as much in\nit as all knowledge up to that point. Several of the fractal buds\nthat have exploded since ancient times involve inventing and\ndiscovering new things. Math, for example, used to be something a\nhandful of people did part-time. Now it's the career of thousands.\nAnd in work that involves making new things, some old rules don't\napply.Recently I've spent some time advising people, and there I find the\nancient rule still works: try to understand the situation as well\nas you can, give the best advice you can based on your experience,\nand then don't worry about it, knowing you did all you could. But\nI don't have anything like this serenity when I'm writing an essay.\nThen I'm worried. What if I run out of ideas? And when I'm writing,\nfour nights out of five I go to bed discontented, feeling I didn't\nget enough done.Advising people and writing are fundamentally different types of\nwork. When people come to you with a problem and you have to figure\nout the right thing to do, you don't (usually) have to invent\nanything. You just weigh the alternatives and try to judge which\nis the prudent choice. But prudence can't tell me what sentence\nto write next. The search space is too big.Someone like a judge or a military officer can in much of his work\nbe guided by duty, but duty is no guide in making things. Makers\ndepend on something more precarious: inspiration. And like most\npeople who lead a precarious existence, they tend to be worried,\nnot contented. In that respect they're more like the small man of\nConfucius's day, always one bad harvest (or ruler) away from\nstarvation. Except instead of being at the mercy of weather and\nofficials, they're at the mercy of their own imagination.LimitsTo me it was a relief just to realize it might be ok to be discontented.\nThe idea that a successful person should be happy has thousands of\nyears of momentum behind it. If I was any good, why didn't I have\nthe easy confidence winners are supposed to have? But that, I now\nbelieve, is like a runner asking \"If I'm such a good athlete, why\ndo I feel so tired?\" Good runners still get tired; they just get\ntired at higher speeds.People whose work is to invent or discover things are in the same\nposition as the runner. There's no way for them to do the best\nthey can, because there's no limit to what they could do. The\nclosest you can come is to compare yourself to other people. But\nthe better you do, the less this matters. An undergrad who gets\nsomething published feels like a star. But for someone at the top\nof the field, what's the test of doing well? Runners can at least\ncompare themselves to others doing exactly the same thing; if you\nwin an Olympic gold medal, you can be fairly content, even if you\nthink you could have run a bit faster. But what is a novelist to\ndo?Whereas if you're doing the kind of work in which problems are\npresented to you and you have to choose between several alternatives,\nthere's an upper bound on your performance: choosing the best every\ntime. In ancient societies, nearly all work seems to have been of\nthis type. The peasant had to decide whether a garment was worth\nmending, and the king whether or not to invade his neighbor, but\nneither was expected to invent anything. In principle they could\nhave; the king could have invented firearms, then invaded his\nneighbor. 
But in practice innovations were so rare that they weren't\nexpected of you, any more than goalkeepers are expected to score\ngoals. \n[9]\nIn practice, it seemed as if there was a correct decision\nin every situation, and if you made it you'd done your job perfectly,\njust as a goalkeeper who prevents the other team from scoring is\nconsidered to have played a perfect game.In this world, wisdom seemed paramount. \n[10]\nEven now, most people\ndo work in which problems are put before them and they have to\nchoose the best alternative. But as knowledge has grown more\nspecialized, there are more and more types of work in which people\nhave to make up new things, and in which performance is therefore\nunbounded. Intelligence has become increasingly important relative\nto wisdom because there is more room for spikes.RecipesAnother sign we may have to choose between intelligence and wisdom\nis how different their recipes are. Wisdom seems to come largely\nfrom curing childish qualities, and intelligence largely from\ncultivating them.Recipes for wisdom, particularly ancient ones, tend to have a\nremedial character. To achieve wisdom one must cut away all the\ndebris that fills one's head on emergence from childhood, leaving\nonly the important stuff. Both self-control and experience have\nthis effect: to eliminate the random biases that come from your own\nnature and from the circumstances of your upbringing respectively.\nThat's not all wisdom is, but it's a large part of it. Much of\nwhat's in the sage's head is also in the head of every twelve year\nold. The difference is that in the head of the twelve year old\nit's mixed together with a lot of random junk.The path to intelligence seems to be through working on hard problems.\nYou develop intelligence as you might develop muscles, through\nexercise. But there can't be too much compulsion here. No amount\nof discipline can replace genuine curiosity. So cultivating\nintelligence seems to be a matter of identifying some bias in one's\ncharacter\u2014some tendency to be interested in certain types of\nthings\u2014and nurturing it. Instead of obliterating your\nidiosyncrasies in an effort to make yourself a neutral vessel for\nthe truth, you select one and try to grow it from a seedling into\na tree.The wise are all much alike in their wisdom, but very smart people\ntend to be smart in distinctive ways.Most of our educational traditions aim at wisdom. So perhaps one\nreason schools work badly is that they're trying to make intelligence\nusing recipes for wisdom. Most recipes for wisdom have an element\nof subjection. At the very least, you're supposed to do what the\nteacher says. The more extreme recipes aim to break down your\nindividuality the way basic training does. But that's not the route\nto intelligence. Whereas wisdom comes through humility, it may\nactually help, in cultivating intelligence, to have a mistakenly\nhigh opinion of your abilities, because that encourages you to keep\nworking. Ideally till you realize how mistaken you were.(The reason it's hard to learn new skills late in life is not just\nthat one's brain is less malleable. Another probably even worse\nobstacle is that one has higher standards.)I realize we're on dangerous ground here. I'm not proposing the\nprimary goal of education should be to increase students' \"self-esteem.\"\nThat just breeds laziness. And in any case, it doesn't really fool\nthe kids, not the smart ones. 
They can tell at a young age that a\ncontest where everyone wins is a fraud.A teacher has to walk a narrow path: you want to encourage kids to\ncome up with things on their own, but you can't simply applaud\neverything they produce. You have to be a good audience: appreciative,\nbut not too easily impressed. And that's a lot of work. You have\nto have a good enough grasp of kids' capacities at different ages\nto know when to be surprised.That's the opposite of traditional recipes for education. Traditionally\nthe student is the audience, not the teacher; the student's job is\nnot to invent, but to absorb some prescribed body of material. (The\nuse of the term \"recitation\" for sections in some colleges is a\nfossil of this.) The problem with these old traditions is that\nthey're too much influenced by recipes for wisdom.DifferentI deliberately gave this essay a provocative title; of course it's\nworth being wise. But I think it's important to understand the\nrelationship between intelligence and wisdom, and particularly what\nseems to be the growing gap between them. That way we can avoid\napplying rules and standards to intelligence that are really meant\nfor wisdom. These two senses of \"knowing what to do\" are more\ndifferent than most people realize. The path to wisdom is through\ndiscipline, and the path to intelligence through carefully selected\nself-indulgence. Wisdom is universal, and intelligence idiosyncratic.\nAnd while wisdom yields calmness, intelligence much of the time\nleads to discontentment.That's particularly worth remembering. A physicist friend recently\ntold me half his department was on Prozac. Perhaps if we acknowledge\nthat some amount of frustration is inevitable in certain kinds\nof work, we can mitigate its effects. Perhaps we can box it up and\nput it away some of the time, instead of letting it flow together\nwith everyday sadness to produce what seems an alarmingly large\npool. At the very least, we can avoid being discontented about\nbeing discontented.If you feel exhausted, it's not necessarily because there's something\nwrong with you. Maybe you're just running fast.Notes[1]\nGauss was supposedly asked this when he was 10. Instead of\nlaboriously adding together the numbers like the other students,\nhe saw that they consisted of 50 pairs that each summed to 101 (100\n+ 1, 99 + 2, etc), and that he could just multiply 101 by 50 to get\nthe answer, 5050.[2]\nA variant is that intelligence is the ability to solve problems,\nand wisdom the judgement to know how to use those solutions. But\nwhile this is certainly an important relationship between wisdom\nand intelligence, it's not the distinction between them. Wisdom\nis useful in solving problems too, and intelligence can help in\ndeciding what to do with the solutions.[3]\nIn judging both intelligence and wisdom we have to factor out\nsome knowledge. People who know the combination of a safe will be\nbetter at opening it than people who don't, but no one would say\nthat was a test of intelligence or wisdom.But knowledge overlaps with wisdom and probably also intelligence.\nA knowledge of human nature is certainly part of wisdom. So where\ndo we draw the line?Perhaps the solution is to discount knowledge that at some point\nhas a sharp drop in utility. For example, understanding French\nwill help you in a large number of situations, but its value drops\nsharply as soon as no one else involved knows French. 
Whereas the\nvalue of understanding vanity would decline more gradually.The knowledge whose utility drops sharply is the kind that has\nlittle relation to other knowledge. This includes mere conventions,\nlike languages and safe combinations, and also what we'd call\n\"random\" facts, like movie stars' birthdays, or how to distinguish\n1956 from 1957 Studebakers.[4]\nPeople seeking some single thing called \"wisdom\" have been\nfooled by grammar. Wisdom is just knowing the right thing to do,\nand there are a hundred and one different qualities that help in\nthat. Some, like selflessness, might come from meditating in an\nempty room, and others, like a knowledge of human nature, might\ncome from going to drunken parties.Perhaps realizing this will help dispel the cloud of semi-sacred\nmystery that surrounds wisdom in so many people's eyes. The mystery\ncomes mostly from looking for something that doesn't exist. And\nthe reason there have historically been so many different schools\nof thought about how to achieve wisdom is that they've focused on\ndifferent components of it.When I use the word \"wisdom\" in this essay, I mean no more than\nwhatever collection of qualities helps people make the right choice\nin a wide variety of situations.[5]\nEven in English, our sense of the word \"intelligence\" is\nsurprisingly recent. Predecessors like \"understanding\" seem to\nhave had a broader meaning.[6]\nThere is of course some uncertainty about how closely the remarks\nattributed to Confucius and Socrates resemble their actual opinions.\nI'm using these names as we use the name \"Homer,\" to mean the\nhypothetical people who said the things attributed to them.[7]\nAnalects VII:36, Fung trans.Some translators use \"calm\" instead of \"happy.\" One source of\ndifficulty here is that present-day English speakers have a different\nidea of happiness from many older societies. Every language probably\nhas a word meaning \"how one feels when things are going well,\" but\ndifferent cultures react differently when things go well. We react\nlike children, with smiles and laughter. But in a more reserved\nsociety, or in one where life was tougher, the reaction might be a\nquiet contentment.[8]\nIt may have been Andrew Wiles, but I'm not sure. If anyone\nremembers such an interview, I'd appreciate hearing from you.[9]\nConfucius claimed proudly that he had never invented\nanything\u2014that he had simply passed on an accurate account of\nancient traditions. [Analects VII:1] It's hard for us now to\nappreciate how important a duty it must have been in preliterate\nsocieties to remember and pass on the group's accumulated knowledge.\nEven in Confucius's time it still seems to have been the first duty\nof the scholar.[10]\nThe bias toward wisdom in ancient philosophy may be exaggerated\nby the fact that, in both Greece and China, many of the first\nphilosophers (including Confucius and Plato) saw themselves as\nteachers of administrators, and so thought disproportionately about\nsuch matters. The few people who did invent things, like storytellers,\nmust have seemed an outlying data point that could be ignored.Thanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston,\nand Robert Morris for reading drafts of this."} {"title": "worked", "text": "February 2021Before college the two main things I worked on, outside of school,\nwere writing and programming. I didn't write essays. I wrote what\nbeginning writers were supposed to write then, and probably still\nare: short stories. My stories were awful. 
They had hardly any plot,\njust characters with strong feelings, which I imagined made them\ndeep.The first programs I tried writing were on the IBM 1401 that our\nschool district used for what was then called \"data processing.\"\nThis was in 9th grade, so I was 13 or 14. The school district's\n1401 happened to be in the basement of our junior high school, and\nmy friend Rich Draves and I got permission to use it. It was like\na mini Bond villain's lair down there, with all these alien-looking\nmachines \u2014 CPU, disk drives, printer, card reader \u2014 sitting up\non a raised floor under bright fluorescent lights.The language we used was an early version of Fortran. You had to\ntype programs on punch cards, then stack them in the card reader\nand press a button to load the program into memory and run it. The\nresult would ordinarily be to print something on the spectacularly\nloud printer.I was puzzled by the 1401. I couldn't figure out what to do with\nit. And in retrospect there's not much I could have done with it.\nThe only form of input to programs was data stored on punched cards,\nand I didn't have any data stored on punched cards. The only other\noption was to do things that didn't rely on any input, like calculate\napproximations of pi, but I didn't know enough math to do anything\ninteresting of that type. So I'm not surprised I can't remember any\nprograms I wrote, because they can't have done much. My clearest\nmemory is of the moment I learned it was possible for programs not\nto terminate, when one of mine didn't. On a machine without\ntime-sharing, this was a social as well as a technical error, as\nthe data center manager's expression made clear.With microcomputers, everything changed. Now you could have a\ncomputer sitting right in front of you, on a desk, that could respond\nto your keystrokes as it was running instead of just churning through\na stack of punch cards and then stopping. \n[1]The first of my friends to get a microcomputer built it himself.\nIt was sold as a kit by Heathkit. I remember vividly how impressed\nand envious I felt watching him sitting in front of it, typing\nprograms right into the computer.Computers were expensive in those days and it took me years of\nnagging before I convinced my father to buy one, a TRS-80, in about\n1980. The gold standard then was the Apple II, but a TRS-80 was\ngood enough. This was when I really started programming. I wrote\nsimple games, a program to predict how high my model rockets would\nfly, and a word processor that my father used to write at least one\nbook. There was only room in memory for about 2 pages of text, so\nhe'd write 2 pages at a time and then print them out, but it was a\nlot better than a typewriter.Though I liked programming, I didn't plan to study it in college.\nIn college I was going to study philosophy, which sounded much more\npowerful. It seemed, to my naive high school self, to be the study\nof the ultimate truths, compared to which the things studied in\nother fields would be mere domain knowledge. What I discovered when\nI got to college was that the other fields took up so much of the\nspace of ideas that there wasn't much left for these supposed\nultimate truths. All that seemed left for philosophy were edge cases\nthat people in other fields felt could safely be ignored.I couldn't have put this into words when I was 18. All I knew at\nthe time was that I kept taking philosophy courses and they kept\nbeing boring. 
So I decided to switch to AI.AI was in the air in the mid 1980s, but there were two things\nespecially that made me want to work on it: a novel by Heinlein\ncalled The Moon is a Harsh Mistress, which featured an intelligent\ncomputer called Mike, and a PBS documentary that showed Terry\nWinograd using SHRDLU. I haven't tried rereading The Moon is a Harsh\nMistress, so I don't know how well it has aged, but when I read it\nI was drawn entirely into its world. It seemed only a matter of\ntime before we'd have Mike, and when I saw Winograd using SHRDLU,\nit seemed like that time would be a few years at most. All you had\nto do was teach SHRDLU more words.There weren't any classes in AI at Cornell then, not even graduate\nclasses, so I started trying to teach myself. Which meant learning\nLisp, since in those days Lisp was regarded as the language of AI.\nThe commonly used programming languages then were pretty primitive,\nand programmers' ideas correspondingly so. The default language at\nCornell was a Pascal-like language called PL/I, and the situation\nwas similar elsewhere. Learning Lisp expanded my concept of a program\nso fast that it was years before I started to have a sense of where\nthe new limits were. This was more like it; this was what I had\nexpected college to do. It wasn't happening in a class, like it was\nsupposed to, but that was ok. For the next couple years I was on a\nroll. I knew what I was going to do.For my undergraduate thesis, I reverse-engineered SHRDLU. My God\ndid I love working on that program. It was a pleasing bit of code,\nbut what made it even more exciting was my belief \u0097 hard to imagine\nnow, but not unique in 1985 \u0097 that it was already climbing the\nlower slopes of intelligence.I had gotten into a program at Cornell that didn't make you choose\na major. You could take whatever classes you liked, and choose\nwhatever you liked to put on your degree. I of course chose \"Artificial\nIntelligence.\" When I got the actual physical diploma, I was dismayed\nto find that the quotes had been included, which made them read as\nscare-quotes. At the time this bothered me, but now it seems amusingly\naccurate, for reasons I was about to discover.I applied to 3 grad schools: MIT and Yale, which were renowned for\nAI at the time, and Harvard, which I'd visited because Rich Draves\nwent there, and was also home to Bill Woods, who'd invented the\ntype of parser I used in my SHRDLU clone. Only Harvard accepted me,\nso that was where I went.I don't remember the moment it happened, or if there even was a\nspecific moment, but during the first year of grad school I realized\nthat AI, as practiced at the time, was a hoax. By which I mean the\nsort of AI in which a program that's told \"the dog is sitting on\nthe chair\" translates this into some formal representation and adds\nit to the list of things it knows.What these programs really showed was that there's a subset of\nnatural language that's a formal language. But a very proper subset.\nIt was clear that there was an unbridgeable gap between what they\ncould do and actually understanding natural language. It was not,\nin fact, simply a matter of teaching SHRDLU more words. That whole\nway of doing AI, with explicit data structures representing concepts,\nwas not going to work. 
Its brokenness did, as so often happens,\ngenerate a lot of opportunities to write papers about various\nband-aids that could be applied to it, but it was never going to\nget us Mike.So I looked around to see what I could salvage from the wreckage\nof my plans, and there was Lisp. I knew from experience that Lisp\nwas interesting for its own sake and not just for its association\nwith AI, even though that was the main reason people cared about\nit at the time. So I decided to focus on Lisp. In fact, I decided\nto write a book about Lisp hacking. It's scary to think how little\nI knew about Lisp hacking when I started writing that book. But\nthere's nothing like writing a book about something to help you\nlearn it. The book, On Lisp, wasn't published till 1993, but I wrote\nmuch of it in grad school.Computer Science is an uneasy alliance between two halves, theory\nand systems. The theory people prove things, and the systems people\nbuild things. I wanted to build things. I had plenty of respect for\ntheory \u2014 indeed, a sneaking suspicion that it was the more admirable\nof the two halves \u2014 but building things seemed so much more exciting.The problem with systems work, though, was that it didn't last.\nAny program you wrote today, no matter how good, would be obsolete\nin a couple decades at best. People might mention your software in\nfootnotes, but no one would actually use it. And indeed, it would\nseem very feeble work. Only people with a sense of the history of\nthe field would even realize that, in its time, it had been good.There were some surplus Xerox Dandelions floating around the computer\nlab at one point. Anyone who wanted one to play around with could\nhave one. I was briefly tempted, but they were so slow by present\nstandards; what was the point? No one else wanted one either, so\noff they went. That was what happened to systems work.I wanted not just to build things, but to build things that would\nlast.In this dissatisfied state I went in 1988 to visit Rich Draves at\nCMU, where he was in grad school. One day I went to visit the\nCarnegie Institute, where I'd spent a lot of time as a kid. While\nlooking at a painting there I realized something that might seem\nobvious, but was a big surprise to me. There, right on the wall,\nwas something you could make that would last. Paintings didn't\nbecome obsolete. Some of the best ones were hundreds of years old.And moreover this was something you could make a living doing. Not\nas easily as you could by writing software, of course, but I thought\nif you were really industrious and lived really cheaply, it had to\nbe possible to make enough to survive. And as an artist you could\nbe truly independent. You wouldn't have a boss, or even need to get\nresearch funding.I had always liked looking at paintings. Could I make them? I had\nno idea. I'd never imagined it was even possible. I knew intellectually\nthat people made art \u2014 that it didn't just appear spontaneously\n\u2014 but it was as if the people who made it were a different species.\nThey either lived long ago or were mysterious geniuses doing strange\nthings in profiles in Life magazine. The idea of actually being\nable to make art, to put that verb before that noun, seemed almost\nmiraculous.That fall I started taking art classes at Harvard. Grad students\ncould take classes in any department, and my advisor, Tom Cheatham,\nwas very easy going. 
If he even knew about the strange classes I\nwas taking, he never said anything.So now I was in a PhD program in computer science, yet planning to\nbe an artist, yet also genuinely in love with Lisp hacking and\nworking away at On Lisp. In other words, like many a grad student,\nI was working energetically on multiple projects that were not my\nthesis.I didn't see a way out of this situation. I didn't want to drop out\nof grad school, but how else was I going to get out? I remember\nwhen my friend Robert Morris got kicked out of Cornell for writing\nthe internet worm of 1988, I was envious that he'd found such a\nspectacular way to get out of grad school.Then one day in April 1990 a crack appeared in the wall. I ran into\nprofessor Cheatham and he asked if I was far enough along to graduate\nthat June. I didn't have a word of my dissertation written, but in\nwhat must have been the quickest bit of thinking in my life, I\ndecided to take a shot at writing one in the 5 weeks or so that\nremained before the deadline, reusing parts of On Lisp where I\ncould, and I was able to respond, with no perceptible delay \"Yes,\nI think so. I'll give you something to read in a few days.\"I picked applications of continuations as the topic. In retrospect\nI should have written about macros and embedded languages. There's\na whole world there that's barely been explored. But all I wanted\nwas to get out of grad school, and my rapidly written dissertation\nsufficed, just barely.Meanwhile I was applying to art schools. I applied to two: RISD in\nthe US, and the Accademia di Belli Arti in Florence, which, because\nit was the oldest art school, I imagined would be good. RISD accepted\nme, and I never heard back from the Accademia, so off to Providence\nI went.I'd applied for the BFA program at RISD, which meant in effect that\nI had to go to college again. This was not as strange as it sounds,\nbecause I was only 25, and art schools are full of people of different\nages. RISD counted me as a transfer sophomore and said I had to do\nthe foundation that summer. The foundation means the classes that\neveryone has to take in fundamental subjects like drawing, color,\nand design.Toward the end of the summer I got a big surprise: a letter from\nthe Accademia, which had been delayed because they'd sent it to\nCambridge England instead of Cambridge Massachusetts, inviting me\nto take the entrance exam in Florence that fall. This was now only\nweeks away. My nice landlady let me leave my stuff in her attic. I\nhad some money saved from consulting work I'd done in grad school;\nthere was probably enough to last a year if I lived cheaply. Now\nall I had to do was learn Italian.Only stranieri (foreigners) had to take this entrance exam. In\nretrospect it may well have been a way of excluding them, because\nthere were so many stranieri attracted by the idea of studying\nart in Florence that the Italian students would otherwise have been\noutnumbered. I was in decent shape at painting and drawing from the\nRISD foundation that summer, but I still don't know how I managed\nto pass the written exam. I remember that I answered the essay\nquestion by writing about Cezanne, and that I cranked up the\nintellectual level as high as I could to make the most of my limited\nvocabulary. \n[2]I'm only up to age 25 and already there are such conspicuous patterns.\nHere I was, yet again about to attend some august institution in\nthe hopes of learning about some prestigious subject, and yet again\nabout to be disappointed. 
The students and faculty in the painting\ndepartment at the Accademia were the nicest people you could imagine,\nbut they had long since arrived at an arrangement whereby the\nstudents wouldn't require the faculty to teach anything, and in\nreturn the faculty wouldn't require the students to learn anything.\nAnd at the same time all involved would adhere outwardly to the\nconventions of a 19th century atelier. We actually had one of those\nlittle stoves, fed with kindling, that you see in 19th century\nstudio paintings, and a nude model sitting as close to it as possible\nwithout getting burned. Except hardly anyone else painted her besides\nme. The rest of the students spent their time chatting or occasionally\ntrying to imitate things they'd seen in American art magazines.Our model turned out to live just down the street from me. She made\na living from a combination of modelling and making fakes for a\nlocal antique dealer. She'd copy an obscure old painting out of a\nbook, and then he'd take the copy and maltreat it to make it look\nold. \n[3]While I was a student at the Accademia I started painting still\nlives in my bedroom at night. These paintings were tiny, because\nthe room was, and because I painted them on leftover scraps of\ncanvas, which was all I could afford at the time. Painting still\nlives is different from painting people, because the subject, as\nits name suggests, can't move. People can't sit for more than about\n15 minutes at a time, and when they do they don't sit very still.\nSo the traditional m.o. for painting people is to know how to paint\na generic person, which you then modify to match the specific person\nyou're painting. Whereas a still life you can, if you want, copy\npixel by pixel from what you're seeing. You don't want to stop\nthere, of course, or you get merely photographic accuracy, and what\nmakes a still life interesting is that it's been through a head.\nYou want to emphasize the visual cues that tell you, for example,\nthat the reason the color changes suddenly at a certain point is\nthat it's the edge of an object. By subtly emphasizing such things\nyou can make paintings that are more realistic than photographs not\njust in some metaphorical sense, but in the strict information-theoretic\nsense. \n[4]I liked painting still lives because I was curious about what I was\nseeing. In everyday life, we aren't consciously aware of much we're\nseeing. Most visual perception is handled by low-level processes\nthat merely tell your brain \"that's a water droplet\" without telling\nyou details like where the lightest and darkest points are, or\n\"that's a bush\" without telling you the shape and position of every\nleaf. This is a feature of brains, not a bug. In everyday life it\nwould be distracting to notice every leaf on every bush. But when\nyou have to paint something, you have to look more closely, and\nwhen you do there's a lot to see. You can still be noticing new\nthings after days of trying to paint something people usually take\nfor granted, just as you can after\ndays of trying to write an essay about something people usually\ntake for granted.This is not the only way to paint. I'm not 100% sure it's even a\ngood way to paint. But it seemed a good enough bet to be worth\ntrying.Our teacher, professor Ulivi, was a nice guy. He could see I worked\nhard, and gave me a good grade, which he wrote down in a sort of\npassport each student had. 
But the Accademia wasn't teaching me\nanything except Italian, and my money was running out, so at the\nend of the first year I went back to the US.I wanted to go back to RISD, but I was now broke and RISD was very\nexpensive, so I decided to get a job for a year and then return to\nRISD the next fall. I got one at a company called Interleaf, which\nmade software for creating documents. You mean like Microsoft Word?\nExactly. That was how I learned that low end software tends to eat\nhigh end software. But Interleaf still had a few years to live yet.\n[5]Interleaf had done something pretty bold. Inspired by Emacs, they'd\nadded a scripting language, and even made the scripting language a\ndialect of Lisp. Now they wanted a Lisp hacker to write things in\nit. This was the closest thing I've had to a normal job, and I\nhereby apologize to my boss and coworkers, because I was a bad\nemployee. Their Lisp was the thinnest icing on a giant C cake, and\nsince I didn't know C and didn't want to learn it, I never understood\nmost of the software. Plus I was terribly irresponsible. This was\nback when a programming job meant showing up every day during certain\nworking hours. That seemed unnatural to me, and on this point the\nrest of the world is coming around to my way of thinking, but at\nthe time it caused a lot of friction. Toward the end of the year I\nspent much of my time surreptitiously working on On Lisp, which I\nhad by this time gotten a contract to publish.The good part was that I got paid huge amounts of money, especially\nby art student standards. In Florence, after paying my part of the\nrent, my budget for everything else had been $7 a day. Now I was\ngetting paid more than 4 times that every hour, even when I was\njust sitting in a meeting. By living cheaply I not only managed to\nsave enough to go back to RISD, but also paid off my college loans.I learned some useful things at Interleaf, though they were mostly\nabout what not to do. I learned that it's better for technology\ncompanies to be run by product people than sales people (though\nsales is a real skill and people who are good at it are really good\nat it), that it leads to bugs when code is edited by too many people,\nthat cheap office space is no bargain if it's depressing, that\nplanned meetings are inferior to corridor conversations, that big,\nbureaucratic customers are a dangerous source of money, and that\nthere's not much overlap between conventional office hours and the\noptimal time for hacking, or conventional offices and the optimal\nplace for it.But the most important thing I learned, and which I used in both\nViaweb and Y Combinator, is that the low end eats the high end:\nthat it's good to be the \"entry level\" option, even though that\nwill be less prestigious, because if you're not, someone else will\nbe, and will squash you against the ceiling. Which in turn means\nthat prestige is a danger sign.When I left to go back to RISD the next fall, I arranged to do\nfreelance work for the group that did projects for customers, and\nthis was how I survived for the next several years. When I came\nback to visit for a project later on, someone told me about a new\nthing called HTML, which was, as he described it, a derivative of\nSGML. 
Markup language enthusiasts were an occupational hazard at Interleaf and I ignored him, but this HTML thing later became a big part of my life.

In the fall of 1992 I moved back to Providence to continue at RISD. The foundation had merely been intro stuff, and the Accademia had been a (very civilized) joke. Now I was going to see what real art school was like. But alas it was more like the Accademia than not. Better organized, certainly, and a lot more expensive, but it was now becoming clear that art school did not bear the same relationship to art that medical school bore to medicine. At least not the painting department. The textile department, which my next door neighbor belonged to, seemed to be pretty rigorous. No doubt illustration and architecture were too. But painting was post-rigorous. Painting students were supposed to express themselves, which to the more worldly ones meant to try to cook up some sort of distinctive signature style.

A signature style is the visual equivalent of what in show business is known as a "schtick": something that immediately identifies the work as yours and no one else's. For example, when you see a painting that looks like a certain kind of cartoon, you know it's by Roy Lichtenstein. So if you see a big painting of this type hanging in the apartment of a hedge fund manager, you know he paid millions of dollars for it. That's not always why artists have a signature style, but it's usually why buyers pay a lot for such work. [6]

There were plenty of earnest students too: kids who "could draw" in high school, and now had come to what was supposed to be the best art school in the country, to learn to draw even better. They tended to be confused and demoralized by what they found at RISD, but they kept going, because painting was what they did. I was not one of the kids who could draw in high school, but at RISD I was definitely closer to their tribe than the tribe of signature style seekers.

I learned a lot in the color class I took at RISD, but otherwise I was basically teaching myself to paint, and I could do that for free. So in 1993 I dropped out. I hung around Providence for a bit, and then my college friend Nancy Parmet did me a big favor. A rent-controlled apartment in a building her mother owned in New York was becoming vacant. Did I want it? It wasn't much more than my current place, and New York was supposed to be where the artists were. So yes, I wanted it! [7]

Asterix comics begin by zooming in on a tiny corner of Roman Gaul that turns out not to be controlled by the Romans. You can do something similar on a map of New York City: if you zoom in on the Upper East Side, there's a tiny corner that's not rich, or at least wasn't in 1993. It's called Yorkville, and that was my new home. Now I was a New York artist — in the strictly technical sense of making paintings and living in New York.

I was nervous about money, because I could sense that Interleaf was on the way down. Freelance Lisp hacking work was very rare, and I didn't want to have to program in another language, which in those days would have meant C++ if I was lucky. So with my unerring nose for financial opportunity, I decided to write another book on Lisp. This would be a popular book, the sort of book that could be used as a textbook. I imagined myself living frugally off the royalties and spending all my time painting.
(The painting on the cover of this book, ANSI Common Lisp, is one that I painted around this time.)

The best thing about New York for me was the presence of Idelle and Julian Weber. Idelle Weber was a painter, one of the early photorealists, and I'd taken her painting class at Harvard. I've never known a teacher more beloved by her students. Large numbers of former students kept in touch with her, including me. After I moved to New York I became her de facto studio assistant.

She liked to paint on big, square canvases, 4 to 5 feet on a side. One day in late 1994 as I was stretching one of these monsters there was something on the radio about a famous fund manager. He wasn't that much older than me, and was super rich. The thought suddenly occurred to me: why don't I become rich? Then I'll be able to work on whatever I want.

Meanwhile I'd been hearing more and more about this new thing called the World Wide Web. Robert Morris showed it to me when I visited him in Cambridge, where he was now in grad school at Harvard. It seemed to me that the web would be a big deal. I'd seen what graphical user interfaces had done for the popularity of microcomputers. It seemed like the web would do the same for the internet.

If I wanted to get rich, here was the next train leaving the station. I was right about that part. What I got wrong was the idea. I decided we should start a company to put art galleries online. I can't honestly say, after reading so many Y Combinator applications, that this was the worst startup idea ever, but it was up there. Art galleries didn't want to be online, and still don't, not the fancy ones. That's not how they sell. I wrote some software to generate web sites for galleries, and Robert wrote some to resize images and set up an http server to serve the pages. Then we tried to sign up galleries. To call this a difficult sale would be an understatement. It was difficult to give away. A few galleries let us make sites for them for free, but none paid us.

Then some online stores started to appear, and I realized that except for the order buttons they were identical to the sites we'd been generating for galleries. This impressive-sounding thing called an "internet storefront" was something we already knew how to build.

So in the summer of 1995, after I submitted the camera-ready copy of ANSI Common Lisp to the publishers, we started trying to write software to build online stores. At first this was going to be normal desktop software, which in those days meant Windows software. That was an alarming prospect, because neither of us knew how to write Windows software or wanted to learn. We lived in the Unix world. But we decided we'd at least try writing a prototype store builder on Unix. Robert wrote a shopping cart, and I wrote a new site generator for stores — in Lisp, of course.

We were working out of Robert's apartment in Cambridge. His roommate was away for big chunks of time, during which I got to sleep in his room. For some reason there was no bed frame or sheets, just a mattress on the floor. One morning as I was lying on this mattress I had an idea that made me sit up like a capital L. What if we ran the software on the server, and let users control it by clicking on links? Then we'd never have to write anything to run on users' computers. We could generate the sites on the same server we'd serve them from.
Users wouldn't need anything more than a browser.

This kind of software, known as a web app, is common now, but at the time it wasn't clear that it was even possible. To find out, we decided to try making a version of our store builder that you could control through the browser. A couple days later, on August 12, we had one that worked. The UI was horrible, but it proved you could build a whole store through the browser, without any client software or typing anything into the command line on the server.

Now we felt like we were really onto something. I had visions of a whole new generation of software working this way. You wouldn't need versions, or ports, or any of that crap. At Interleaf there had been a whole group called Release Engineering that seemed to be at least as big as the group that actually wrote the software. Now you could just update the software right on the server.

We started a new company we called Viaweb, after the fact that our software worked via the web, and we got $10,000 in seed funding from Idelle's husband Julian. In return for that and doing the initial legal work and giving us business advice, we gave him 10% of the company. Ten years later this deal became the model for Y Combinator's. We knew founders needed something like this, because we'd needed it ourselves.

At this stage I had a negative net worth, because the thousand dollars or so I had in the bank was more than counterbalanced by what I owed the government in taxes. (Had I diligently set aside the proper proportion of the money I'd made consulting for Interleaf? No, I had not.) So although Robert had his graduate student stipend, I needed that seed funding to live on.

We originally hoped to launch in September, but we got more ambitious about the software as we worked on it. Eventually we managed to build a WYSIWYG site builder, in the sense that as you were creating pages, they looked exactly like the static ones that would be generated later, except that instead of leading to static pages, the links all referred to closures stored in a hash table on the server.
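Mechanically, that means something like this: each time the site builder generates a page, it stores a closure for each link under a fresh id, and the link's url carries nothing but that id. What follows is only a sketch of the idea in Common Lisp, not Viaweb's actual code; the names (*links*, link-to, handle-click) are invented for the example.

  ;; A sketch of url-to-closure dispatch, assuming some surrounding
  ;; http server that routes "/x?id=..." requests to handle-click.
  (defvar *links* (make-hash-table :test 'equal))

  (defun link-to (closure)
    ;; Store CLOSURE under a fresh id and return a url that,
    ;; when followed, will cause the server to call it.
    (let ((id (symbol-name (gensym "LINK"))))
      (setf (gethash id *links*) closure)
      (format nil "/x?id=~a" id)))

  (defun handle-click (id)
    ;; Called by the server when a generated link is followed.
    (let ((k (gethash id *links*)))
      (if k (funcall k) (error "Stale link: ~a" id))))

A page generator could then write something like (link-to (lambda () (edit-page user page))), where edit-page stands for whatever the app does next. All the state lives in the closure on the server, so nothing has to run on, or be smuggled through, the user's browser.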
It helped to have studied art, because the main goal of an online store builder is to make users look legit, and the key to looking legit is high production values. If you get page layouts and fonts and colors right, you can make a guy running a store out of his bedroom look more legit than a big company.

(If you're curious why my site looks so old-fashioned, it's because it's still made with this software. It may look clunky today, but in 1996 it was the last word in slick.)

In September, Robert rebelled. "We've been working on this for a month," he said, "and it's still not done." This is funny in retrospect, because he would still be working on it almost 3 years later. But I decided it might be prudent to recruit more programmers, and I asked Robert who else in grad school with him was really good. He recommended Trevor Blackwell, which surprised me at first, because at that point I knew Trevor mainly for his plan to reduce everything in his life to a stack of notecards, which he carried around with him. But Rtm was right, as usual. Trevor turned out to be a frighteningly effective hacker.

It was a lot of fun working with Robert and Trevor. They're the two most independent-minded people I know, and in completely different ways. If you could see inside Rtm's brain it would look like a colonial New England church, and if you could see inside Trevor's it would look like the worst excesses of Austrian Rococo.

We opened for business, with 6 stores, in January 1996. It was just as well we waited a few months, because although we worried we were late, we were actually almost fatally early. There was a lot of talk in the press then about ecommerce, but not many people actually wanted online stores. [8]

There were three main parts to the software: the editor, which people used to build sites and which I wrote, the shopping cart, which Robert wrote, and the manager, which kept track of orders and statistics, and which Trevor wrote. In its time, the editor was one of the best general-purpose site builders. I kept the code tight and didn't have to integrate with any other software except Robert's and Trevor's, so it was quite fun to work on. If all I'd had to do was work on this software, the next 3 years would have been the easiest of my life. Unfortunately I had to do a lot more, all of it stuff I was worse at than programming, and the next 3 years were instead the most stressful.

There were a lot of startups making ecommerce software in the second half of the 90s. We were determined to be the Microsoft Word, not the Interleaf. Which meant being easy to use and inexpensive. It was lucky for us that we were poor, because that caused us to make Viaweb even more inexpensive than we realized. We charged $100 a month for a small store and $300 a month for a big one. This low price was a big attraction, and a constant thorn in the sides of competitors, but it wasn't because of some clever insight that we set the price low. We had no idea what businesses paid for things. $300 a month seemed like a lot of money to us.

We did a lot of things right by accident like that. For example, we did what's now called "doing things that don't scale," although at the time we would have described it as "being so lame that we're driven to the most desperate measures to get users." The most common of which was building stores for them. This seemed particularly humiliating, since the whole raison d'etre of our software was that people could use it to make their own stores. But anything to get users.

We learned a lot more about retail than we wanted to know. For example, that if you could only have a small image of a man's shirt (and all images were small then by present standards), it was better to have a closeup of the collar than a picture of the whole shirt. The reason I remember learning this was that it meant I had to rescan about 30 images of men's shirts. My first set of scans were so beautiful too.

Though this felt wrong, it was exactly the right thing to be doing. Building stores for users taught us about retail, and about how it felt to use our software. I was initially both mystified and repelled by "business" and thought we needed a "business person" to be in charge of it, but once we started to get users, I was converted, in much the same way I was converted to fatherhood once I had kids. Whatever users wanted, I was all theirs. Maybe one day we'd have so many users that I couldn't scan their images for them, but in the meantime there was nothing more important to do.

Another thing I didn't get at the time is that growth rate is the ultimate test of a startup. Our growth rate was fine. We had about 70 stores at the end of 1996 and about 500 at the end of 1997.
I mistakenly thought the thing that mattered was the absolute number of users. And that is the thing that matters in the sense that that's how much money you're making, and if you're not making enough, you might go out of business. But in the long term the growth rate takes care of the absolute number. If we'd been a startup I was advising at Y Combinator, I would have said: Stop being so stressed out, because you're doing fine. You're growing 7x a year. Just don't hire too many more people and you'll soon be profitable, and then you'll control your own destiny.

Alas I hired lots more people, partly because our investors wanted me to, and partly because that's what startups did during the Internet Bubble. A company with just a handful of employees would have seemed amateurish. So we didn't reach breakeven until about when Yahoo bought us in the summer of 1998. Which in turn meant we were at the mercy of investors for the entire life of the company. And since both we and our investors were noobs at startups, the result was a mess even by startup standards.

It was a huge relief when Yahoo bought us. In principle our Viaweb stock was valuable. It was a share in a business that was profitable and growing rapidly. But it didn't feel very valuable to me; I had no idea how to value a business, but I was all too keenly aware of the near-death experiences we seemed to have every few months. Nor had I changed my grad student lifestyle significantly since we started. So when Yahoo bought us it felt like going from rags to riches. Since we were going to California, I bought a car, a yellow 1998 VW GTI. I remember thinking that its leather seats alone were by far the most luxurious thing I owned.

The next year, from the summer of 1998 to the summer of 1999, must have been the least productive of my life. I didn't realize it at the time, but I was worn out from the effort and stress of running Viaweb. For a while after I got to California I tried to continue my usual m.o. of programming till 3 in the morning, but fatigue combined with Yahoo's prematurely aged culture and grim cube farm in Santa Clara gradually dragged me down. After a few months it felt disconcertingly like working at Interleaf.

Yahoo had given us a lot of options when they bought us. At the time I thought Yahoo was so overvalued that they'd never be worth anything, but to my astonishment the stock went up 5x in the next year. I hung on till the first chunk of options vested, then in the summer of 1999 I left. It had been so long since I'd painted anything that I'd half forgotten why I was doing this. My brain had been entirely full of software and men's shirts for 4 years. But I had done this to get rich so I could paint, I reminded myself, and now I was rich, so I should go paint.

When I said I was leaving, my boss at Yahoo had a long conversation with me about my plans. I told him all about the kinds of pictures I wanted to paint. At the time I was touched that he took such an interest in me. Now I realize it was because he thought I was lying. My options at that point were worth about $2 million a month. If I was leaving that kind of money on the table, it could only be to go and start some new startup, and if I did, I might take people with me. This was the height of the Internet Bubble, and Yahoo was ground zero of it. My boss was at that moment a billionaire.
Leaving then to start a new startup must have seemed to him an insanely, and yet also plausibly, ambitious plan.

But I really was quitting to paint, and I started immediately. There was no time to lose. I'd already burned 4 years getting rich. Now when I talk to founders who are leaving after selling their companies, my advice is always the same: take a vacation. That's what I should have done, just gone off somewhere and done nothing for a month or two, but the idea never occurred to me.

So I tried to paint, but I just didn't seem to have any energy or ambition. Part of the problem was that I didn't know many people in California. I'd compounded this problem by buying a house up in the Santa Cruz Mountains, with a beautiful view but miles from anywhere. I stuck it out for a few more months, then in desperation I went back to New York, where unless you understand about rent control you'll be surprised to hear I still had my apartment, sealed up like a tomb of my old life. Idelle was in New York at least, and there were other people trying to paint there, even though I didn't know any of them.

When I got back to New York I resumed my old life, except now I was rich. It was as weird as it sounds. I resumed all my old patterns, except now there were doors where there hadn't been. Now when I was tired of walking, all I had to do was raise my hand, and (unless it was raining) a taxi would stop to pick me up. Now when I walked past charming little restaurants I could go in and order lunch. It was exciting for a while. Painting started to go better. I experimented with a new kind of still life where I'd paint one painting in the old way, then photograph it and print it, blown up, on canvas, and then use that as the underpainting for a second still life, painted from the same objects (which hopefully hadn't rotted yet).

Meanwhile I looked for an apartment to buy. Now I could actually choose what neighborhood to live in. Where, I asked myself and various real estate agents, is the Cambridge of New York? Aided by occasional visits to actual Cambridge, I gradually realized there wasn't one. Huh.

Around this time, in the spring of 2000, I had an idea. It was clear from our experience with Viaweb that web apps were the future. Why not build a web app for making web apps? Why not let people edit code on our server through the browser, and then host the resulting applications for them? [9] You could run all sorts of services on the servers that these applications could use just by making an API call: making and receiving phone calls, manipulating images, taking credit card payments, etc.

I got so excited about this idea that I couldn't think about anything else. It seemed obvious that this was the future. I didn't particularly want to start another company, but it was clear that this idea would have to be embodied as one, so I decided to move to Cambridge and start it. I hoped to lure Robert into working on it with me, but there I ran into a hitch. Robert was now a postdoc at MIT, and though he'd made a lot of money the last time I'd lured him into working on one of my schemes, it had also been a huge time sink. So while he agreed that it sounded like a plausible idea, he firmly refused to work on it.

Hmph. Well, I'd do it myself then. I recruited Dan Giffin, who had worked for Viaweb, and two undergrads who wanted summer jobs, and we got to work trying to build what it's now clear is about twenty companies and several open source projects worth of software.
The language for defining applications would of course be a dialect of Lisp. But I wasn't so naive as to assume I could spring an overt Lisp on a general audience; we'd hide the parentheses, like Dylan did.

By then there was a name for the kind of company Viaweb was, an "application service provider," or ASP. This name didn't last long before it was replaced by "software as a service," but it was current for long enough that I named this new company after it: it was going to be called Aspra.

I started working on the application builder, Dan worked on network infrastructure, and the two undergrads worked on the first two services (images and phone calls). But about halfway through the summer I realized I really didn't want to run a company — especially not a big one, which it was looking like this would have to be. I'd only started Viaweb because I needed the money. Now that I didn't need money anymore, why was I doing this? If this vision had to be realized as a company, then screw the vision. I'd build a subset that could be done as an open source project.

Much to my surprise, the time I spent working on this stuff was not wasted after all. After we started Y Combinator, I would often encounter startups working on parts of this new architecture, and it was very useful to have spent so much time thinking about it and even trying to write some of it.

The subset I would build as an open source project was the new Lisp, whose parentheses I now wouldn't even have to hide. A lot of Lisp hackers dream of building a new Lisp, partly because one of the distinctive features of the language is that it has dialects, and partly, I think, because we have in our minds a Platonic form of Lisp that all existing dialects fall short of. I certainly did. So at the end of the summer Dan and I switched to working on this new dialect of Lisp, which I called Arc, in a house I bought in Cambridge.

The following spring, lightning struck. I was invited to give a talk at a Lisp conference, so I gave one about how we'd used Lisp at Viaweb. Afterward I put a postscript file of this talk online, on paulgraham.com, which I'd created years before using Viaweb but had never used for anything. In one day it got 30,000 page views. What on earth had happened? The referring urls showed that someone had posted it on Slashdot. [10]

Wow, I thought, there's an audience. If I write something and put it on the web, anyone can read it. That may seem obvious now, but it was surprising then. In the print era there was a narrow channel to readers, guarded by fierce monsters known as editors. The only way to get an audience for anything you wrote was to get it published as a book, or in a newspaper or magazine. Now anyone could publish anything.

This had been possible in principle since 1993, but not many people had realized it yet. I had been intimately involved with building the infrastructure of the web for most of that time, and a writer as well, and it had taken me 8 years to realize it. Even then it took me several years to understand the implications. It meant there would be a whole new generation of essays. [11]

In the print era, the channel for publishing essays had been vanishingly small. Except for a few officially anointed thinkers who went to the right parties in New York, the only people allowed to publish essays were specialists writing about their specialties. There were so many essays that had never been written, because there had been no way to publish them.
Now they could be, and I was going to write them. [12]

I've worked on several different things, but to the extent there was a turning point where I figured out what to work on, it was when I started publishing essays online. From then on I knew that whatever else I did, I'd always write essays too.

I knew that online essays would be a marginal medium at first. Socially they'd seem more like rants posted by nutjobs on their GeoCities sites than the genteel and beautifully typeset compositions published in The New Yorker. But by this point I knew enough to find that encouraging instead of discouraging.

One of the most conspicuous patterns I've noticed in my life is how well it has worked, for me at least, to work on things that weren't prestigious. Still life has always been the least prestigious form of painting. Viaweb and Y Combinator both seemed lame when we started them. I still get the glassy eye from strangers when they ask what I'm writing, and I explain that it's an essay I'm going to publish on my web site. Even Lisp, though prestigious intellectually in something like the way Latin is, also seems about as hip.

It's not that unprestigious types of work are good per se. But when you find yourself drawn to some kind of work despite its current lack of prestige, it's a sign both that there's something real to be discovered there, and that you have the right kind of motives. Impure motives are a big danger for the ambitious. If anything is going to lead you astray, it will be the desire to impress people. So while working on things that aren't prestigious doesn't guarantee you're on the right track, it at least guarantees you're not on the most common type of wrong one.

Over the next several years I wrote lots of essays about all kinds of different topics. O'Reilly reprinted a collection of them as a book, called Hackers & Painters after one of the essays in it. I also worked on spam filters, and did some more painting. I used to have dinners for a group of friends every thursday night, which taught me how to cook for groups. And I bought another building in Cambridge, a former candy factory (and later, twas said, porn studio), to use as an office.

One night in October 2003 there was a big party at my house. It was a clever idea of my friend Maria Daniels, who was one of the thursday diners. Three separate hosts would all invite their friends to one party. So for every guest, two thirds of the other guests would be people they didn't know but would probably like. One of the guests was someone I didn't know but would turn out to like a lot: a woman called Jessica Livingston. A couple days later I asked her out.

Jessica was in charge of marketing at a Boston investment bank. This bank thought it understood startups, but over the next year, as she met friends of mine from the startup world, she was surprised how different reality was. And how colorful their stories were. So she decided to compile a book of interviews with startup founders.

When the bank had financial problems and she had to fire half her staff, she started looking for a new job. In early 2005 she interviewed for a marketing job at a Boston VC firm. It took them weeks to make up their minds, and during this time I started telling her about all the things that needed to be fixed about venture capital.
They should make a larger number of smaller investments instead of a handful of giant ones, they should be funding younger, more technical founders instead of MBAs, they should let the founders remain as CEO, and so on.

One of my tricks for writing essays had always been to give talks. The prospect of having to stand up in front of a group of people and tell them something that won't waste their time is a great spur to the imagination. When the Harvard Computer Society, the undergrad computer club, asked me to give a talk, I decided I would tell them how to start a startup. Maybe they'd be able to avoid the worst of the mistakes we'd made.

So I gave this talk, in the course of which I told them that the best sources of seed funding were successful startup founders, because then they'd be sources of advice too. Whereupon it seemed they were all looking expectantly at me. Horrified at the prospect of having my inbox flooded by business plans (if I'd only known), I blurted out "But not me!" and went on with the talk. But afterward it occurred to me that I should really stop procrastinating about angel investing. I'd been meaning to since Yahoo bought us, and now it was 7 years later and I still hadn't done one angel investment.

Meanwhile I had been scheming with Robert and Trevor about projects we could work on together. I missed working with them, and it seemed like there had to be something we could collaborate on.

As Jessica and I were walking home from dinner on March 11, at the corner of Garden and Walker streets, these three threads converged. Screw the VCs who were taking so long to make up their minds. We'd start our own investment firm and actually implement the ideas we'd been talking about. I'd fund it, and Jessica could quit her job and work for it, and we'd get Robert and Trevor as partners too. [13]

Once again, ignorance worked in our favor. We had no idea how to be angel investors, and in Boston in 2005 there were no Ron Conways to learn from. So we just made what seemed like the obvious choices, and some of the things we did turned out to be novel.

There are multiple components to Y Combinator, and we didn't figure them all out at once. The part we got first was to be an angel firm. In those days, those two words didn't go together. There were VC firms, which were organized companies with people whose job it was to make investments, but they only did big, million dollar investments. And there were angels, who did smaller investments, but these were individuals who were usually focused on other things and made investments on the side. And neither of them helped founders enough in the beginning. We knew how helpless founders were in some respects, because we remembered how helpless we'd been. For example, one thing Julian had done for us that seemed to us like magic was to get us set up as a company. We were fine writing fairly difficult software, but actually getting incorporated, with bylaws and stock and all that stuff, how on earth did you do that? Our plan was not only to make seed investments, but to do for startups everything Julian had done for us.

YC was not organized as a fund. It was cheap enough to run that we funded it with our own money. That went right by 99% of readers, but professional investors are thinking "Wow, that means they got all the returns." But once again, this was not due to any particular insight on our part.
We didn't know how VC firms were organized. It never occurred to us to try to raise a fund, and if it had, we wouldn't have known where to start. [14]

The most distinctive thing about YC is the batch model: to fund a bunch of startups all at once, twice a year, and then to spend three months focusing intensively on trying to help them. That part we discovered by accident, not merely implicitly but explicitly due to our ignorance about investing. We needed to get experience as investors. What better way, we thought, than to fund a whole bunch of startups at once? We knew undergrads got temporary jobs at tech companies during the summer. Why not organize a summer program where they'd start startups instead? We wouldn't feel guilty for being in a sense fake investors, because they would in a similar sense be fake founders. So while we probably wouldn't make much money out of it, we'd at least get to practice being investors on them, and they for their part would probably have a more interesting summer than they would working at Microsoft.

We'd use the building I owned in Cambridge as our headquarters. We'd all have dinner there once a week — on tuesdays, since I was already cooking for the thursday diners on thursdays — and after dinner we'd bring in experts on startups to give talks.

We knew undergrads were deciding then about summer jobs, so in a matter of days we cooked up something we called the Summer Founders Program, and I posted an announcement on my site, inviting undergrads to apply. I had never imagined that writing essays would be a way to get "deal flow," as investors call it, but it turned out to be the perfect source. [15] We got 225 applications for the Summer Founders Program, and we were surprised to find that a lot of them were from people who'd already graduated, or were about to that spring. Already this SFP thing was starting to feel more serious than we'd intended.

We invited about 20 of the 225 groups to interview in person, and from those we picked 8 to fund. They were an impressive group. That first batch included reddit, Justin Kan and Emmett Shear, who went on to found Twitch, Aaron Swartz, who had already helped write the RSS spec and would a few years later become a martyr for open access, and Sam Altman, who would later become the second president of YC. I don't think it was entirely luck that the first batch was so good. You had to be pretty bold to sign up for a weird thing like the Summer Founders Program instead of a summer job at a legit place like Microsoft or Goldman Sachs.

The deal for startups was based on a combination of the deal we did with Julian ($10k for 10%) and what Robert said MIT grad students got for the summer ($6k). We invested $6k per founder, which in the typical two-founder case was $12k, in return for 6%. That had to be fair, because it was twice as good as the deal we ourselves had taken. Plus that first summer, which was really hot, Jessica brought the founders free air conditioners. [16]

Fairly quickly I realized that we had stumbled upon the way to scale startup funding. Funding startups in batches was more convenient for us, because it meant we could do things for a lot of startups at once, but being part of a batch was better for the startups too. It solved one of the biggest problems faced by founders: the isolation.
Now you not only had colleagues, but colleagues who understood the problems you were facing and could tell you how they were solving them.

As YC grew, we started to notice other advantages of scale. The alumni became a tight community, dedicated to helping one another, and especially the current batch, whose shoes they remembered being in. We also noticed that the startups were becoming one another's customers. We used to refer jokingly to the "YC GDP," but as YC grows this becomes less and less of a joke. Now lots of startups get their initial set of customers almost entirely from among their batchmates.

I had not originally intended YC to be a full-time job. I was going to do three things: hack, write essays, and work on YC. As YC grew, and I grew more excited about it, it started to take up a lot more than a third of my attention. But for the first few years I was still able to work on other things.

In the summer of 2006, Robert and I started working on a new version of Arc. This one was reasonably fast, because it was compiled into Scheme. To test this new Arc, I wrote Hacker News in it. It was originally meant to be a news aggregator for startup founders and was called Startup News, but after a few months I got tired of reading about nothing but startups. Plus it wasn't startup founders we wanted to reach. It was future startup founders. So I changed the name to Hacker News and the topic to whatever engaged one's intellectual curiosity.

HN was no doubt good for YC, but it was also by far the biggest source of stress for me. If all I'd had to do was select and help founders, life would have been so easy. And that implies that HN was a mistake. Surely the biggest source of stress in one's work should at least be something close to the core of the work. Whereas I was like someone who was in pain while running a marathon not from the exertion of running, but because I had a blister from an ill-fitting shoe. When I was dealing with some urgent problem during YC, there was about a 60% chance it had to do with HN, and a 40% chance it had to do with everything else combined. [17]

As well as HN, I wrote all of YC's internal software in Arc. But while I continued to work a good deal in Arc, I gradually stopped working on Arc, partly because I didn't have time to, and partly because it was a lot less attractive to mess around with the language now that we had all this infrastructure depending on it. So now my three projects were reduced to two: writing essays and working on YC.

YC was different from other kinds of work I've done. Instead of deciding for myself what to work on, the problems came to me. Every 6 months there was a new batch of startups, and their problems, whatever they were, became our problems. It was very engaging work, because their problems were quite varied, and the good founders were very effective. If you were trying to learn the most you could about startups in the shortest possible time, you couldn't have picked a better way to do it.

There were parts of the job I didn't like. Disputes between cofounders, figuring out when people were lying to us, fighting with people who maltreated the startups, and so on. But I worked hard even at the parts I didn't like. I was haunted by something Kevin Hale once said about companies: "No one works harder than the boss." He meant it both descriptively and prescriptively, and it was the second part that scared me.
I wanted YC to be good, so if how hard I worked set the upper bound on how hard everyone else worked, I'd better work very hard.

One day in 2010, when he was visiting California for interviews, Robert Morris did something astonishing: he offered me unsolicited advice. I can only remember him doing that once before. One day at Viaweb, when I was bent over double from a kidney stone, he suggested that it would be a good idea for him to take me to the hospital. That was what it took for Rtm to offer unsolicited advice. So I remember his exact words very clearly. "You know," he said, "you should make sure Y Combinator isn't the last cool thing you do."

At the time I didn't understand what he meant, but gradually it dawned on me that he was saying I should quit. This seemed strange advice, because YC was doing great. But if there was one thing rarer than Rtm offering advice, it was Rtm being wrong. So this set me thinking. It was true that on my current trajectory, YC would be the last thing I did, because it was only taking up more of my attention. It had already eaten Arc, and was in the process of eating essays too. Either YC was my life's work or I'd have to leave eventually. And it wasn't, so I would.

In the summer of 2012 my mother had a stroke, and the cause turned out to be a blood clot caused by colon cancer. The stroke destroyed her balance, and she was put in a nursing home, but she really wanted to get out of it and back to her house, and my sister and I were determined to help her do it. I used to fly up to Oregon to visit her regularly, and I had a lot of time to think on those flights. On one of them I realized I was ready to hand YC over to someone else.

I asked Jessica if she wanted to be president, but she didn't, so we decided we'd try to recruit Sam Altman. We talked to Robert and Trevor and we agreed to make it a complete changing of the guard. Up till that point YC had been controlled by the original LLC we four had started. But we wanted YC to last for a long time, and to do that it couldn't be controlled by the founders. So if Sam said yes, we'd let him reorganize YC. Robert and I would retire, and Jessica and Trevor would become ordinary partners.

When we asked Sam if he wanted to be president of YC, initially he said no. He wanted to start a startup to make nuclear reactors. But I kept at it, and in October 2013 he finally agreed. We decided he'd take over starting with the winter 2014 batch. For the rest of 2013 I left running YC more and more to Sam, partly so he could learn the job, and partly because I was focused on my mother, whose cancer had returned.

She died on January 15, 2014. We knew this was coming, but it was still hard when it did.

I kept working on YC till March, to help get that batch of startups through Demo Day, then I checked out pretty completely. (I still talk to alumni and to new startups working on things I'm interested in, but that only takes a few hours a week.)

What should I do next? Rtm's advice hadn't included anything about that. I wanted to do something completely different, so I decided I'd paint. I wanted to see how good I could get if I really focused on it. So the day after I stopped working on YC, I started painting. I was rusty and it took a while to get back into shape, but it was at least completely engaging. [18]

I spent most of the rest of 2014 painting. I'd never been able to work so uninterruptedly before, and I got to be better than I had been. Not good enough, but better.
Then in November, right in the middle of a painting, I ran out of steam. Up till that point I'd always been curious to see how the painting I was working on would turn out, but suddenly finishing this one seemed like a chore. So I stopped working on it and cleaned my brushes and haven't painted since. So far anyway.

I realize that sounds rather wimpy. But attention is a zero sum game. If you can choose what to work on, and you choose a project that's not the best one (or at least a good one) for you, then it's getting in the way of another project that is. And at 50 there was some opportunity cost to screwing around.

I started writing essays again, and wrote a bunch of new ones over the next few months. I even wrote a couple that weren't about startups. Then in March 2015 I started working on Lisp again.

The distinctive thing about Lisp is that its core is a language defined by writing an interpreter in itself. It wasn't originally intended as a programming language in the ordinary sense. It was meant to be a formal model of computation, an alternative to the Turing machine. If you want to write an interpreter for a language in itself, what's the minimum set of predefined operators you need? The Lisp that John McCarthy invented, or more accurately discovered, is an answer to that question. [19]
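To give a sense of what that answer looks like: with just seven primitive operators (quote, atom, eq, car, cdr, cons, and cond), you can define an eval that interprets the language those operators are written in. Here is a compressed sketch of its core in Common Lisp. It follows the shape of McCarthy's definition but isn't his exact code; among other things it omits the label operator and all error handling, and represents the environment as a simple association list.

  (defun eval. (e env)
    (cond ((atom e) (cdr (assoc e env)))       ; a variable: look it up
          ((atom (car e))
           (cond ((eq (car e) 'quote) (cadr e))
                 ((eq (car e) 'atom) (atom (eval. (cadr e) env)))
                 ((eq (car e) 'eq) (eq (eval. (cadr e) env)
                                       (eval. (caddr e) env)))
                 ((eq (car e) 'car) (car (eval. (cadr e) env)))
                 ((eq (car e) 'cdr) (cdr (eval. (cadr e) env)))
                 ((eq (car e) 'cons) (cons (eval. (cadr e) env)
                                           (eval. (caddr e) env)))
                 ((eq (car e) 'cond) (evcon. (cdr e) env))
                 ;; a named function: replace it with its definition
                 (t (eval. (cons (cdr (assoc (car e) env)) (cdr e)) env))))
          ;; ((lambda (params) body) args)
          ((eq (caar e) 'lambda)
           (eval. (caddar e)
                  (append (mapcar (lambda (v a) (cons v (eval. a env)))
                                  (cadar e) (cdr e))
                          env)))))

  (defun evcon. (clauses env)                  ; the guts of cond
    (if (eval. (caar clauses) env)
        (eval. (cadar clauses) env)
        (evcon. (cdr clauses) env)))

So, for example, (eval. '((lambda (x y) (cons x (cdr y))) 'a '(b c d)) nil) returns (A C D). The surprising thing is that a page or so of code like this is essentially the whole core of the language.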
McCarthy didn't realize this Lisp could even be used to program computers till his grad student Steve Russell suggested it. Russell translated McCarthy's interpreter into IBM 704 machine language, and from that point Lisp started also to be a programming language in the ordinary sense. But its origins as a model of computation gave it a power and elegance that other languages couldn't match. It was this that attracted me in college, though I didn't understand why at the time.

McCarthy's 1960 Lisp did nothing more than interpret Lisp expressions. It was missing a lot of things you'd want in a programming language. So these had to be added, and when they were, they weren't defined using McCarthy's original axiomatic approach. That wouldn't have been feasible at the time. McCarthy tested his interpreter by hand-simulating the execution of programs. But it was already getting close to the limit of interpreters you could test that way — indeed, there was a bug in it that McCarthy had overlooked. To test a more complicated interpreter, you'd have had to run it, and computers then weren't powerful enough.

Now they are, though. Now you could continue using McCarthy's axiomatic approach till you'd defined a complete programming language. And as long as every change you made to McCarthy's Lisp was a discoveredness-preserving transformation, you could, in principle, end up with a complete language that had this quality. Harder to do than to talk about, of course, but if it was possible in principle, why not try? So I decided to take a shot at it. It took 4 years, from March 26, 2015 to October 12, 2019. It was fortunate that I had a precisely defined goal, or it would have been hard to keep at it for so long.

I wrote this new Lisp, called Bel, in itself in Arc. That may sound like a contradiction, but it's an indication of the sort of trickery I had to engage in to make this work. By means of an egregious collection of hacks I managed to make something close enough to an interpreter written in itself that it could actually run. Not fast, but fast enough to test.

I had to ban myself from writing essays during most of this time, or I'd never have finished. In late 2015 I spent 3 months writing essays, and when I went back to working on Bel I could barely understand the code. Not so much because it was badly written as because the problem is so convoluted. When you're working on an interpreter written in itself, it's hard to keep track of what's happening at what level, and errors can be practically encrypted by the time you get them.

So I said no more essays till Bel was done. But I told few people about Bel while I was working on it. So for years it must have seemed that I was doing nothing, when in fact I was working harder than I'd ever worked on anything. Occasionally after wrestling for hours with some gruesome bug I'd check Twitter or HN and see someone asking "Does Paul Graham still code?"

Working on Bel was hard but satisfying. I worked on it so intensively that at any given time I had a decent chunk of the code in my head and could write more there. I remember taking the boys to the coast on a sunny day in 2015 and figuring out how to deal with some problem involving continuations while I watched them play in the tide pools. It felt like I was doing life right. I remember that because I was slightly dismayed at how novel it felt. The good news is that I had more moments like this over the next few years.

In the summer of 2016 we moved to England. We wanted our kids to see what it was like living in another country, and since I was a British citizen by birth, that seemed the obvious choice. We only meant to stay for a year, but we liked it so much that we still live there. So most of Bel was written in England.

In the fall of 2019, Bel was finally finished. Like McCarthy's original Lisp, it's a spec rather than an implementation, although like McCarthy's Lisp it's a spec expressed as code.

Now that I could write essays again, I wrote a bunch about topics I'd had stacked up. I kept writing essays through 2020, but I also started to think about other things I could work on. How should I choose what to do? Well, how had I chosen what to work on in the past? I wrote an essay for myself to answer that question, and I was surprised how long and messy the answer turned out to be. If this surprised me, who'd lived it, then I thought perhaps it would be interesting to other people, and encouraging to those with similarly messy lives. So I wrote a more detailed version for others to read, and this is the last sentence of it.

Notes

[1] My experience skipped a step in the evolution of computers: time-sharing machines with interactive OSes. I went straight from batch processing to microcomputers, which made microcomputers seem all the more exciting.

[2] Italian words for abstract concepts can nearly always be predicted from their English cognates (except for occasional traps like polluzione). It's the everyday words that differ. So if you string together a lot of abstract concepts with a few simple verbs, you can make a little Italian go a long way.

[3] I lived at Piazza San Felice 4, so my walk to the Accademia went straight down the spine of old Florence: past the Pitti, across the bridge, past Orsanmichele, between the Duomo and the Baptistery, and then up Via Ricasoli to Piazza San Marco. I saw Florence at street level in every possible condition, from empty dark winter evenings to sweltering summer days when the streets were packed with tourists.

[4] You can of course paint people like still lives if you want to, and they're willing.
That sort of portrait is arguably the apex of still life painting, though the long sitting does tend to produce pained expressions in the sitters.

[5] Interleaf was one of many companies that had smart people and built impressive technology, and yet got crushed by Moore's Law. In the 1990s the exponential growth in the power of commodity (i.e. Intel) processors rolled up high-end, special-purpose hardware and software companies like a bulldozer.

[6] The signature style seekers at RISD weren't specifically mercenary. In the art world, money and coolness are tightly coupled. Anything expensive comes to be seen as cool, and anything seen as cool will soon become equally expensive.

[7] Technically the apartment wasn't rent-controlled but rent-stabilized, but this is a refinement only New Yorkers would know or care about. The point is that it was really cheap, less than half market price.

[8] Most software you can launch as soon as it's done. But when the software is an online store builder and you're hosting the stores, if you don't have any users yet, that fact will be painfully obvious. So before we could launch publicly we had to launch privately, in the sense of recruiting an initial set of users and making sure they had decent-looking stores.

[9] We'd had a code editor in Viaweb for users to define their own page styles. They didn't know it, but they were editing Lisp expressions underneath. But this wasn't an app editor, because the code ran when the merchants' sites were generated, not when shoppers visited them.

[10] This was the first instance of what is now a familiar experience, and so was what happened next, when I read the comments and found they were full of angry people. How could I claim that Lisp was better than other languages? Weren't they all Turing complete? People who see the responses to essays I write sometimes tell me how sorry they feel for me, but I'm not exaggerating when I reply that it has always been like this, since the very beginning. It comes with the territory. An essay must tell readers things they don't already know, and some people dislike being told such things.

[11] People put plenty of stuff on the internet in the 90s of course, but putting something online is not the same as publishing it online. Publishing online means you treat the online version as the (or at least a) primary version.

[12] There is a general lesson here that our experience with Y Combinator also teaches: Customs continue to constrain you long after the restrictions that caused them have disappeared. Customary VC practice had once, like the customs about publishing essays, been based on real constraints. Startups had once been much more expensive to start, and proportionally rare. Now they could be cheap and common, but the VCs' customs still reflected the old world, just as customs about writing essays still reflected the constraints of the print era.

Which in turn implies that people who are independent-minded (i.e. less influenced by custom) will have an advantage in fields affected by rapid change (where customs are more likely to be obsolete).

Here's an interesting point, though: you can't always predict which fields will be affected by rapid change. Obviously software and venture capital will be, but who would have predicted that essay writing would be?

[13] Y Combinator was not the original name. At first we were called Cambridge Seed.
But we didn't want a regional name, in case someone copied us in Silicon Valley, so we renamed ourselves after one of the coolest tricks in the lambda calculus, the Y combinator.

I picked orange as our color partly because it's the warmest, and partly because no VC used it. In 2005 all the VCs used staid colors like maroon, navy blue, and forest green, because they were trying to appeal to LPs, not founders. The YC logo itself is an inside joke: the Viaweb logo had been a white V on a red circle, so I made the YC logo a white Y on an orange square.

[14] YC did become a fund for a couple years starting in 2009, because it was getting so big I could no longer afford to fund it personally. But after Heroku got bought we had enough money to go back to being self-funded.

[15] I've never liked the term "deal flow," because it implies that the number of new startups at any given time is fixed. This is not only false, but it's the purpose of YC to falsify it, by causing startups to be founded that would not otherwise have existed.

[16] She reports that they were all different shapes and sizes, because there was a run on air conditioners and she had to get whatever she could, but that they were all heavier than she could carry now.

[17] Another problem with HN was a bizarre edge case that occurs when you both write essays and run a forum. When you run a forum, you're assumed to see if not every conversation, at least every conversation involving you. And when you write essays, people post highly imaginative misinterpretations of them on forums. Individually these two phenomena are tedious but bearable, but the combination is disastrous. You actually have to respond to the misinterpretations, because the assumption that you're present in the conversation means that not responding to any sufficiently upvoted misinterpretation reads as a tacit admission that it's correct. But that in turn encourages more; anyone who wants to pick a fight with you senses that now is their chance.

[18] The worst thing about leaving YC was not working with Jessica anymore. We'd been working on YC almost the whole time we'd known each other, and we'd neither tried nor wanted to separate it from our personal lives, so leaving was like pulling up a deeply rooted tree.

[19] One way to get more precise about the concept of invented vs discovered is to talk about space aliens. Any sufficiently advanced alien civilization would certainly know about the Pythagorean theorem, for example. I believe, though with less certainty, that they would also know about the Lisp in McCarthy's 1960 paper.

But if so there's no reason to suppose that this is the limit of the language that might be known to them. Presumably aliens need numbers and errors and I/O too. So it seems likely there exists at least one path out of McCarthy's Lisp along which discoveredness is preserved.

Thanks to Trevor Blackwell, John Collison, Patrick Collison, Daniel Gackle, Ralph Hazell, Jessica Livingston, Robert Morris, and Harj Taggar for reading drafts of this.


popular

May 2001

(This article was written as a kind of business plan for a new language. So it is missing (because it takes for granted) the most important feature of a good programming language: very powerful abstractions.)

A friend of mine once told an eminent operating systems expert that he wanted to design a really good programming language.
The expert told him that it would be a waste of time, that programming languages don't become popular or unpopular based on their merits, and so no matter how good his language was, no one would use it. At least, that was what had happened to the language he had designed.

What does make a language popular? Do popular languages deserve their popularity? Is it worth trying to define a good programming language? How would you do it?

I think the answers to these questions can be found by looking at hackers, and learning what they want. Programming languages are for hackers, and a programming language is good as a programming language (rather than, say, an exercise in denotational semantics or compiler design) if and only if hackers like it.

1 The Mechanics of Popularity

It's true, certainly, that most people don't choose programming languages simply based on their merits. Most programmers are told what language to use by someone else. And yet I think the effect of such external factors on the popularity of programming languages is not as great as it's sometimes thought to be. I think a bigger problem is that a hacker's idea of a good programming language is not the same as most language designers'.

Between the two, the hacker's opinion is the one that matters. Programming languages are not theorems. They're tools, designed for people, and they have to be designed to suit human strengths and weaknesses as much as shoes have to be designed for human feet. If a shoe pinches when you put it on, it's a bad shoe, however elegant it may be as a piece of sculpture.

It may be that the majority of programmers can't tell a good language from a bad one. But that's no different with any other tool. It doesn't mean that it's a waste of time to try designing a good language. Expert hackers can tell a good language when they see one, and they'll use it. Expert hackers are a tiny minority, admittedly, but that tiny minority write all the good software, and their influence is such that the rest of the programmers will tend to use whatever language they use. Often, indeed, it is not merely influence but command: often the expert hackers are the very people who, as their bosses or faculty advisors, tell the other programmers what language to use.

The opinion of expert hackers is not the only force that determines the relative popularity of programming languages — legacy software (Cobol) and hype (Ada, Java) also play a role — but I think it is the most powerful force over the long term. Given an initial critical mass and enough time, a programming language probably becomes about as popular as it deserves to be. And popularity further separates good languages from bad ones, because feedback from real live users always leads to improvements. Look at how much any popular language has changed during its life. Perl and Fortran are extreme cases, but even Lisp has changed a lot. Lisp 1.5 didn't have macros, for example; these evolved later, after hackers at MIT had spent a couple years using Lisp to write real programs. [1]

So whether or not a language has to be good to be popular, I think a language has to be popular to be good. And it has to stay popular to stay good. The state of the art in programming languages doesn't stand still.
And yet the Lisps we have today are still pretty much\nwhat they had at MIT in the mid-1980s, because that's the last time\nLisp had a sufficiently large and demanding user base.Of course, hackers have to know about a language before they can\nuse it. How are they to hear? From other hackers. But there has to\nbe some initial group of hackers using the language for others even\nto hear about it. I wonder how large this group has to be; how many\nusers make a critical mass? Off the top of my head, I'd say twenty.\nIf a language had twenty separate users, meaning twenty users who\ndecided on their own to use it, I'd consider it to be real.Getting there can't be easy. I would not be surprised if it is\nharder to get from zero to twenty than from twenty to a thousand.\nThe best way to get those initial twenty users is probably to use\na trojan horse: to give people an application they want, which\nhappens to be written in the new language.2 External FactorsLet's start by acknowledging one external factor that does affect\nthe popularity of a programming language. To become popular, a\nprogramming language has to be the scripting language of a popular\nsystem. Fortran and Cobol were the scripting languages of early\nIBM mainframes. C was the scripting language of Unix, and so, later,\nwas Perl. Tcl is the scripting language of Tk. Java and Javascript\nare intended to be the scripting languages of web browsers.Lisp is not a massively popular language because it is not the\nscripting language of a massively popular system. What popularity\nit retains dates back to the 1960s and 1970s, when it was the\nscripting language of MIT. A lot of the great programmers of the\nday were associated with MIT at some point. And in the early 1970s,\nbefore C, MIT's dialect of Lisp, called MacLisp, was one of the\nonly programming languages a serious hacker would want to use.Today Lisp is the scripting language of two moderately popular\nsystems, Emacs and Autocad, and for that reason I suspect that most\nof the Lisp programming done today is done in Emacs Lisp or AutoLisp.Programming languages don't exist in isolation. To hack is a\ntransitive verb \u2014 hackers are usually hacking something \u2014 and in\npractice languages are judged relative to whatever they're used to\nhack. So if you want to design a popular language, you either have\nto supply more than a language, or you have to design your language\nto replace the scripting language of some existing system.Common Lisp is unpopular partly because it's an orphan. It did\noriginally come with a system to hack: the Lisp Machine. But Lisp\nMachines (along with parallel computers) were steamrollered by the\nincreasing power of general purpose processors in the 1980s. Common\nLisp might have remained popular if it had been a good scripting\nlanguage for Unix. It is, alas, an atrociously bad one.One way to describe this situation is to say that a language isn't\njudged on its own merits. Another view is that a programming language\nreally isn't a programming language unless it's also the scripting\nlanguage of something. This only seems unfair if it comes as a\nsurprise. I think it's no more unfair than expecting a programming\nlanguage to have, say, an implementation. It's just part of what\na programming language is.A programming language does need a good implementation, of course,\nand this must be free. Companies will pay for software, but individual\nhackers won't, and it's the hackers you need to attract.A language also needs to have a book about it. 
The book should be\nthin, well-written, and full of good examples. K&R is the ideal\nhere. At the moment I'd almost say that a language has to have a\nbook published by O'Reilly. That's becoming the test of mattering\nto hackers.There should be online documentation as well. In fact, the book\ncan start as online documentation. But I don't think that physical\nbooks are outmoded yet. Their format is convenient, and the de\nfacto censorship imposed by publishers is a useful if imperfect\nfilter. Bookstores are one of the most important places for learning\nabout new languages.3 BrevityGiven that you can supply the three things any language needs \u2014 a\nfree implementation, a book, and something to hack \u2014 how do you\nmake a language that hackers will like?One thing hackers like is brevity. Hackers are lazy, in the same\nway that mathematicians and modernist architects are lazy: they\nhate anything extraneous. It would not be far from the truth to\nsay that a hacker about to write a program decides what language\nto use, at least subconsciously, based on the total number of\ncharacters he'll have to type. If this isn't precisely how hackers\nthink, a language designer would do well to act as if it were.It is a mistake to try to baby the user with long-winded expressions\nthat are meant to resemble English. Cobol is notorious for this\nflaw. A hacker would consider being asked to write\n\n  add x to y giving z\n\ninstead of\n\n  z = x+y\n\nas something between an insult to his intelligence and a sin against\nGod.It has sometimes been said that Lisp should use first and rest\ninstead of car and cdr, because it would make programs easier to\nread. Maybe for the first couple hours. But a hacker can learn\nquickly enough that car means the first element of a list and cdr\nmeans the rest. Using first and rest means 50% more typing. And\nthey are also different lengths, meaning that the arguments won't\nline up when they're called, as car and cdr often are, in successive\nlines. I've found that it matters a lot how code lines up on the\npage. I can barely read Lisp code when it is set in a variable-width\nfont, and friends say this is true for other languages too.Brevity is one place where statically typed languages lose. All other\nthings being equal, no one wants to begin a program with a bunch\nof declarations. Anything that can be implicit, should be.The individual tokens should be short as well. Perl and Common Lisp\noccupy opposite poles on this question. Perl programs can be almost\ncryptically dense, while the names of built-in Common Lisp operators\nare comically long. The designers of Common Lisp probably expected\nusers to have text editors that would type these long names for\nthem. But the cost of a long name is not just the cost of typing\nit. There is also the cost of reading it, and the cost of the space\nit takes up on your screen.4 HackabilityThere is one thing more important than brevity to a hacker: being\nable to do what you want. In the history of programming languages\na surprising amount of effort has gone into preventing programmers\nfrom doing things considered to be improper. This is a dangerously\npresumptuous plan. How can the language designer know what the\nprogrammer is going to need to do? I think language designers would\ndo better to consider their target user to be a genius who will\nneed to do things they never anticipated, rather than a bumbler\nwho needs to be protected from himself. The bumbler will shoot\nhimself in the foot anyway. 
You may save him from referring to\nvariables in another package, but you can't save him from writing\na badly designed program to solve the wrong problem, and taking\nforever to do it.Good programmers often want to do dangerous and unsavory things.\nBy unsavory I mean things that go behind whatever semantic facade\nthe language is trying to present: getting hold of the internal\nrepresentation of some high-level abstraction, for example. Hackers\nlike to hack, and hacking means getting inside things and second\nguessing the original designer.Let yourself be second guessed. When you make any tool, people use\nit in ways you didn't intend, and this is especially true of a\nhighly articulated tool like a programming language. Many a hacker\nwill want to tweak your semantic model in a way that you never\nimagined. I say, let them; give the programmer access to as much\ninternal stuff as you can without endangering runtime systems like\nthe garbage collector.In Common Lisp I have often wanted to iterate through the fields\nof a struct \u2014 to comb out references to a deleted object, for example,\nor find fields that are uninitialized. I know the structs are just\nvectors underneath. And yet I can't write a general purpose function\nthat I can call on any struct. I can only access the fields by\nname, because that's what a struct is supposed to mean.A hacker may only want to subvert the intended model of things once\nor twice in a big program. But what a difference it makes to be\nable to. And it may be more than a question of just solving a\nproblem. There is a kind of pleasure here too. Hackers share the\nsurgeon's secret pleasure in poking about in gross innards, the\nteenager's secret pleasure in popping zits. [2] For boys, at least,\ncertain kinds of horrors are fascinating. Maxim magazine publishes\nan annual volume of photographs, containing a mix of pin-ups and\ngrisly accidents. They know their audience.Historically, Lisp has been good at letting hackers have their way.\nThe political correctness of Common Lisp is an aberration. Early\nLisps let you get your hands on everything. A good deal of that\nspirit is, fortunately, preserved in macros. What a wonderful thing,\nto be able to make arbitrary transformations on the source code.Classic macros are a real hacker's tool \u2014 simple, powerful, and\ndangerous. It's so easy to understand what they do: you call a\nfunction on the macro's arguments, and whatever it returns gets\ninserted in place of the macro call. Hygienic macros embody the\nopposite principle. They try to protect you from understanding what\nthey're doing. I have never heard hygienic macros explained in one\nsentence. And they are a classic example of the dangers of deciding\nwhat programmers are allowed to want. Hygienic macros are intended\nto protect me from variable capture, among other things, but variable\ncapture is exactly what I want in some macros.A really good language should be both clean and dirty: cleanly\ndesigned, with a small core of well understood and highly orthogonal\noperators, but dirty in the sense that it lets hackers have their\nway with it. C is like this. So were the early Lisps. A real hacker's\nlanguage will always have a slightly raffish character.A good programming language should have features that make the kind\nof people who use the phrase \"software engineering\" shake their\nheads disapprovingly. 
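To make the variable-capture point concrete, here is the classic\nanaphoric if, a minimal sketch in Common Lisp roughly as it appears\nin the Lisp literature; key, table, and use are hypothetical names\nstanding in for whatever the program actually does:\n\n  (defmacro aif (test then &optional else)\n    ;; Deliberate variable capture: code in THEN and ELSE can refer\n    ;; to the value of TEST by the name IT. Hygienic macro systems\n    ;; exist to prevent exactly this.\n    `(let ((it ,test))\n       (if it ,then ,else)))\n\n  ;; Hypothetical usage: look a key up once and reuse the result.\n  (aif (gethash key table)\n       (use it)\n       (error \"not found\"))\n\nA macro like this sits at the raffish end of the continuum. 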
At the other end of the continuum are languages\nlike Ada and Pascal, models of propriety that are good for teaching\nand not much else.5 Throwaway ProgramsTo be attractive to hackers, a language must be good for writing\nthe kinds of programs they want to write. And that means, perhaps\nsurprisingly, that it has to be good for writing throwaway programs.A throwaway program is a program you write quickly for some limited\ntask: a program to automate some system administration task, or\ngenerate test data for a simulation, or convert data from one format\nto another. The surprising thing about throwaway programs is that,\nlike the \"temporary\" buildings built at so many American universities\nduring World War II, they often don't get thrown away. Many evolve\ninto real programs, with real features and real users.I have a hunch that the best big programs begin life this way,\nrather than being designed big from the start, like the Hoover Dam.\nIt's terrifying to build something big from scratch. When people\ntake on a project that's too big, they become overwhelmed. The\nproject either gets bogged down, or the result is sterile and\nwooden: a shopping mall rather than a real downtown, Brasilia rather\nthan Rome, Ada rather than C.Another way to get a big program is to start with a throwaway\nprogram and keep improving it. This approach is less daunting, and\nthe design of the program benefits from evolution. I think, if one\nlooked, that this would turn out to be the way most big programs\nwere developed. And those that did evolve this way are probably\nstill written in whatever language they were first written in,\nbecause it's rare for a program to be ported, except for political\nreasons. And so, paradoxically, if you want to make a language that\nis used for big systems, you have to make it good for writing\nthrowaway programs, because that's where big systems come from.Perl is a striking example of this idea. It was not only designed\nfor writing throwaway programs, but was pretty much a throwaway\nprogram itself. Perl began life as a collection of utilities for\ngenerating reports, and only evolved into a programming language\nas the throwaway programs people wrote in it grew larger. It was\nnot until Perl 5 (if then) that the language was suitable for\nwriting serious programs, and yet it was already massively popular.What makes a language good for throwaway programs? To start with,\nit must be readily available. A throwaway program is something that\nyou expect to write in an hour. So the language probably must\nalready be installed on the computer you're using. It can't be\nsomething you have to install before you use it. It has to be there.\nC was there because it came with the operating system. Perl was\nthere because it was originally a tool for system administrators,\nand yours had already installed it.Being available means more than being installed, though. An\ninteractive language, with a command-line interface, is more\navailable than one that you have to compile and run separately. A\npopular programming language should be interactive, and start up\nfast.Another thing you want in a throwaway program is brevity. Brevity\nis always attractive to hackers, and never more so than in a program\nthey expect to turn out in an hour.6 LibrariesOf course the ultimate in brevity is to have the program already\nwritten for you, and merely to call it. And this brings us to what\nI think will be an increasingly important feature of programming\nlanguages: library functions. 
Perl wins because it has large\nlibraries for manipulating strings. This class of library functions\nis especially important for throwaway programs, which are often\noriginally written for converting or extracting data. Many Perl\nprograms probably begin as just a couple library calls stuck\ntogether.I think a lot of the advances that happen in programming languages\nin the next fifty years will have to do with library functions. I\nthink future programming languages will have libraries that are as\ncarefully designed as the core language. Programming language design\nwill not be about whether to make your language strongly or weakly\ntyped, or object oriented, or functional, or whatever, but about\nhow to design great libraries. The kind of language designers who\nlike to think about how to design type systems may shudder at this.\nIt's almost like writing applications! Too bad. Languages are for\nprogrammers, and libraries are what programmers need.It's hard to design good libraries. It's not simply a matter of\nwriting a lot of code. Once the libraries get too big, it can\nsometimes take longer to find the function you need than to write\nthe code yourself. Libraries need to be designed using a small set\nof orthogonal operators, just like the core language. It ought to\nbe possible for the programmer to guess what library call will do\nwhat he needs.Libraries are one place Common Lisp falls short. There are only\nrudimentary libraries for manipulating strings, and almost none\nfor talking to the operating system. For historical reasons, Common\nLisp tries to pretend that the OS doesn't exist. And because you\ncan't talk to the OS, you're unlikely to be able to write a serious\nprogram using only the built-in operators in Common Lisp. You have\nto use some implementation-specific hacks as well, and in practice\nthese tend not to give you everything you want. Hackers would think\na lot more highly of Lisp if Common Lisp had powerful string\nlibraries and good OS support.7 SyntaxCould a language with Lisp's syntax, or more precisely, lack of\nsyntax, ever become popular? I don't know the answer to this\nquestion. I do think that syntax is not the main reason Lisp isn't\ncurrently popular. Common Lisp has worse problems than unfamiliar\nsyntax. I know several programmers who are comfortable with prefix\nsyntax and yet use Perl by default, because it has powerful string\nlibraries and can talk to the OS.There are two possible problems with prefix notation: that it is\nunfamiliar to programmers, and that it is not dense enough. The\nconventional wisdom in the Lisp world is that the first problem is\nthe real one. I'm not so sure. Yes, prefix notation makes ordinary\nprogrammers panic. But I don't think ordinary programmers' opinions\nmatter. Languages become popular or unpopular based on what expert\nhackers think of them, and I think expert hackers might be able to\ndeal with prefix notation. Perl syntax can be pretty incomprehensible,\nbut that has not stood in the way of Perl's popularity. If anything\nit may have helped foster a Perl cult.A more serious problem is the diffuseness of prefix notation. For\nexpert hackers, that really is a problem. No one wants to write\n(aref a x y) when they could write a[x,y].In this particular case there is a way to finesse our way out of\nthe problem. If we treat data structures as if they were functions\non indexes, we could write (a x y) instead, which is even shorter\nthan the Perl form. 
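Common Lisp won't let you apply an array directly, but a closure\ngives the flavor of the trick. A minimal sketch, with make-table as\nan invented name; in a Lisp-1, or with a little reader support, the\nfuncalls would disappear and you could write (a 3 4) outright:\n\n  (defun make-table (dims)\n    ;; Close over the storage, so the table is used by calling it\n    ;; on indexes instead of going through aref at each call site.\n    (let ((storage (make-array dims :initial-element 0)))\n      (lambda (&rest indices)\n        (apply #'aref storage indices))))\n\n  (let ((a (make-table '(10 10))))\n    (funcall a 3 4))   ; => 0, the element at row 3, column 4\n\n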
Similar tricks may shorten other types of\nexpressions.We can get rid of (or make optional) a lot of parentheses by making\nindentation significant. That's how programmers read code anyway:\nwhen indentation says one thing and delimiters say another, we go\nby the indentation. Treating indentation as significant would\neliminate this common source of bugs as well as making programs\nshorter.Sometimes infix syntax is easier to read. This is especially true\nfor math expressions. I've used Lisp my whole programming life and\nI still don't find prefix math expressions natural. And yet it is\nconvenient, especially when you're generating code, to have operators\nthat take any number of arguments. So if we do have infix syntax,\nit should probably be implemented as some kind of read-macro.I don't think we should be religiously opposed to introducing syntax\ninto Lisp, as long as it translates in a well-understood way into\nunderlying s-expressions. There is already a good deal of syntax\nin Lisp. It's not necessarily bad to introduce more, as long as no\none is forced to use it. In Common Lisp, some delimiters are reserved\nfor the language, suggesting that at least some of the designers\nintended to have more syntax in the future.One of the most egregiously unlispy pieces of syntax in Common Lisp\noccurs in format strings; format is a language in its own right,\nand that language is not Lisp. If there were a plan for introducing\nmore syntax into Lisp, format specifiers might be able to be included\nin it. It would be a good thing if macros could generate format\nspecifiers the way they generate any other kind of code.An eminent Lisp hacker told me that his copy of CLTL falls open to\nthe section on format. Mine too. This probably indicates room for\nimprovement. It may also mean that programs do a lot of I/O.8 EfficiencyA good language, as everyone knows, should generate fast code. But\nin practice I don't think fast code comes primarily from things\nyou do in the design of the language. As Knuth pointed out long\nago, speed only matters in certain critical bottlenecks. And as\nmany programmers have observed since, one is very often mistaken\nabout where these bottlenecks are.So, in practice, the way to get fast code is to have a very good\nprofiler, rather than by, say, making the language statically typed.\nYou don't need to know the type of every argument in every call in\nthe program. You do need to be able to declare the types of arguments\nin the bottlenecks. And even more, you need to be able to find out\nwhere the bottlenecks are.One complaint people have had with Lisp is that it's hard to tell\nwhat's expensive. This might be true. It might also be inevitable,\nif you want to have a very abstract language. And in any case I\nthink good profiling would go a long way toward fixing the problem:\nyou'd soon learn what was expensive.Part of the problem here is social. Language designers like to\nwrite fast compilers. That's how they measure their skill. They\nthink of the profiler as an add-on, at best. But in practice a good\nprofiler may do more to improve the speed of actual programs written\nin the language than a compiler that generates fast code. Here,\nagain, language designers are somewhat out of touch with their\nusers. They do a really good job of solving slightly the wrong\nproblem.It might be a good idea to have an active profiler \u2014 to push\nperformance data to the programmer instead of waiting for him to\ncome asking for it. 
For example, the editor could display bottlenecks\nin red when the programmer edits the source code. Another approach\nwould be to somehow represent what's happening in running programs.\nThis would be an especially big win in server-based applications,\nwhere you have lots of running programs to look at. An active\nprofiler could show graphically what's happening in memory as a\nprogram's running, or even make sounds that tell what's happening.Sound is a good cue to problems. In one place I worked, we had a\nbig board of dials showing what was happening to our web servers.\nThe hands were moved by little servomotors that made a slight noise\nwhen they turned. I couldn't see the board from my desk, but I\nfound that I could tell immediately, by the sound, when there was\na problem with a server.It might even be possible to write a profiler that would automatically\ndetect inefficient algorithms. I would not be surprised if certain\npatterns of memory access turned out to be sure signs of bad\nalgorithms. If there were a little guy running around inside the\ncomputer executing our programs, he would probably have as long\nand plaintive a tale to tell about his job as a federal government\nemployee. I often have a feeling that I'm sending the processor on\na lot of wild goose chases, but I've never had a good way to look\nat what it's doing.A number of Lisps now compile into byte code, which is then executed\nby an interpreter. This is usually done to make the implementation\neasier to port, but it could be a useful language feature. It might\nbe a good idea to make the byte code an official part of the\nlanguage, and to allow programmers to use inline byte code in\nbottlenecks. Then such optimizations would be portable too.The nature of speed, as perceived by the end-user, may be changing.\nWith the rise of server-based applications, more and more programs\nmay turn out to be I/O-bound. It will be worth making I/O fast.\nThe language can help with straightforward measures like simple,\nfast, formatted output functions, and also with deep structural\nchanges like caching and persistent objects.Users are interested in response time. But another kind of efficiency\nwill be increasingly important: the number of simultaneous users\nyou can support per processor. Many of the interesting applications\nwritten in the near future will be server-based, and the number of\nusers per server is the critical question for anyone hosting such\napplications. In the capital cost of a business offering a server-based\napplication, this is the divisor.For years, efficiency hasn't mattered much in most end-user\napplications. Developers have been able to assume that each user\nwould have an increasingly powerful processor sitting on their\ndesk. And by Parkinson's Law, software has expanded to use the\nresources available. That will change with server-based applications.\nIn that world, the hardware and software will be supplied together.\nFor companies that offer server-based applications, it will make\na very big difference to the bottom line how many users they can\nsupport per server.In some applications, the processor will be the limiting factor,\nand execution speed will be the most important thing to optimize.\nBut often memory will be the limit; the number of simultaneous\nusers will be determined by the amount of memory you need for each\nuser's data. The language can help here too. Good support for\nthreads will enable all the users to share a single heap. 
It may\nalso help to have persistent objects and/or language level support\nfor lazy loading.9 TimeThe last ingredient a popular language needs is time. No one wants\nto write programs in a language that might go away, as so many\nprogramming languages do. So most hackers will tend to wait until\na language has been around for a couple years before even considering\nusing it.Inventors of wonderful new things are often surprised to discover\nthis, but you need time to get any message through to people. A\nfriend of mine rarely does anything the first time someone asks\nhim. He knows that people sometimes ask for things that they turn\nout not to want. To avoid wasting his time, he waits till the third\nor fourth time he's asked to do something; by then, whoever's asking\nhim may be fairly annoyed, but at least they probably really do\nwant whatever they're asking for.Most people have learned to do a similar sort of filtering on new\nthings they hear about. They don't even start paying attention\nuntil they've heard about something ten times. They're perfectly\njustified: the majority of hot new whatevers do turn out to be a\nwaste of time, and eventually go away. By delaying learning VRML,\nI avoided having to learn it at all.So anyone who invents something new has to expect to keep repeating\ntheir message for years before people will start to get it. We\nwrote what was, as far as I know, the first web-server based\napplication, and it took us years to get it through to people that\nit didn't have to be downloaded. It wasn't that they were stupid.\nThey just had us tuned out.The good news is, simple repetition solves the problem. All you\nhave to do is keep telling your story, and eventually people will\nstart to hear. It's not when people notice you're there that they\npay attention; it's when they notice you're still there.It's just as well that it usually takes a while to gain momentum.\nMost technologies evolve a good deal even after they're first\nlaunched \u2014 programming languages especially. Nothing could be better,\nfor a new technology, than a few years of being used only by a small\nnumber of early adopters. Early adopters are sophisticated and\ndemanding, and quickly flush out whatever flaws remain in your\ntechnology. When you only have a few users you can be in close\ncontact with all of them. And early adopters are forgiving when\nyou improve your system, even if this causes some breakage.There are two ways new technology gets introduced: the organic\ngrowth method, and the big bang method. The organic growth method\nis exemplified by the classic seat-of-the-pants underfunded garage\nstartup. A couple guys, working in obscurity, develop some new\ntechnology. They launch it with no marketing and initially have\nonly a few (fanatically devoted) users. They continue to improve\nthe technology, and meanwhile their user base grows by word of\nmouth. Before they know it, they're big.The other approach, the big bang method, is exemplified by the\nVC-backed, heavily marketed startup. They rush to develop a product,\nlaunch it with great publicity, and immediately (they hope) have\na large user base.Generally, the garage guys envy the big bang guys. The big bang\nguys are smooth and confident and respected by the VCs. They can\nafford the best of everything, and the PR campaign surrounding the\nlaunch has the side effect of making them celebrities. The organic\ngrowth guys, sitting in their garage, feel poor and unloved. 
And\nyet I think they are often mistaken to feel sorry for themselves.\nOrganic growth seems to yield better technology and richer founders\nthan the big bang method. If you look at the dominant technologies\ntoday, you'll find that most of them grew organically.This pattern doesn't only apply to companies. You see it in sponsored\nresearch too. Multics and Common Lisp were big-bang projects, and\nUnix and MacLisp were organic growth projects.10 Redesign\"The best writing is rewriting,\" wrote E. B. White. Every good\nwriter knows this, and it's true for software too. The most important\npart of design is redesign. Programming languages, especially,\ndon't get redesigned enough.To write good software you must simultaneously keep two opposing\nideas in your head. You need the young hacker's naive faith in\nhis abilities, and at the same time the veteran's skepticism. You\nhave to be able to think \nhow hard can it be? with one half of\nyour brain while thinking \nit will never work with the other.The trick is to realize that there's no real contradiction here.\nYou want to be optimistic and skeptical about two different things.\nYou have to be optimistic about the possibility of solving the\nproblem, but skeptical about the value of whatever solution you've\ngot so far.People who do good work often think that whatever they're working\non is no good. Others see what they've done and are full of wonder,\nbut the creator is full of worry. This pattern is no coincidence:\nit is the worry that made the work good.If you can keep hope and worry balanced, they will drive a project\nforward the same way your two legs drive a bicycle forward. In the\nfirst phase of the two-cycle innovation engine, you work furiously\non some problem, inspired by your confidence that you'll be able\nto solve it. In the second phase, you look at what you've done in\nthe cold light of morning, and see all its flaws very clearly. But\nas long as your critical spirit doesn't outweigh your hope, you'll\nbe able to look at your admittedly incomplete system, and think,\nhow hard can it be to get the rest of the way?, thereby continuing\nthe cycle.It's tricky to keep the two forces balanced. In young hackers,\noptimism predominates. They produce something, are convinced it's\ngreat, and never improve it. In old hackers, skepticism predominates,\nand they won't even dare to take on ambitious projects.Anything you can do to keep the redesign cycle going is good. Prose\ncan be rewritten over and over until you're happy with it. But\nsoftware, as a rule, doesn't get redesigned enough. Prose has\nreaders, but software has users. If a writer rewrites an essay,\npeople who read the old version are unlikely to complain that their\nthoughts have been broken by some newly introduced incompatibility.Users are a double-edged sword. They can help you improve your\nlanguage, but they can also deter you from improving it. So choose\nyour users carefully, and be slow to grow their number. Having\nusers is like optimization: the wise course is to delay it. Also,\nas a general rule, you can at any given time get away with changing\nmore than you think. Introducing change is like pulling off a\nbandage: the pain is a memory almost as soon as you feel it.Everyone knows that it's not a good idea to have a language designed\nby a committee. Committees yield bad design. But I think the worst\ndanger of committees is that they interfere with redesign. 
It is\nso much work to introduce changes that no one wants to bother.\nWhatever a committee decides tends to stay that way, even if most\nof the members don't like it.Even a committee of two gets in the way of redesign. This happens\nparticularly in the interfaces between pieces of software written\nby two different people. To change the interface both have to agree\nto change it at once. And so interfaces tend not to change at all,\nwhich is a problem because they tend to be one of the most ad hoc\nparts of any system.One solution here might be to design systems so that interfaces\nare horizontal instead of vertical \u2014 so that modules are always\nvertically stacked strata of abstraction. Then the interface will\ntend to be owned by one of them. The lower of two levels will either\nbe a language in which the upper is written, in which case the\nlower level will own the interface, or it will be a slave, in which\ncase the interface can be dictated by the upper level.11 LispWhat all this implies is that there is hope for a new Lisp. There\nis hope for any language that gives hackers what they want, including\nLisp. I think we may have made a mistake in thinking that hackers\nare turned off by Lisp's strangeness. This comforting illusion may\nhave prevented us from seeing the real problem with Lisp, or at\nleast Common Lisp, which is that it sucks for doing what hackers\nwant to do. A hacker's language needs powerful libraries and\nsomething to hack. Common Lisp has neither. A hacker's language is\nterse and hackable. Common Lisp is not.The good news is, it's not Lisp that sucks, but Common Lisp. If we\ncan develop a new Lisp that is a real hacker's language, I think\nhackers will use it. They will use whatever language does the job.\nAll we have to do is make sure this new Lisp does some important\njob better than other languages.History offers some encouragement. Over time, successive new\nprogramming languages have taken more and more features from Lisp.\nThere is no longer much left to copy before the language you've\nmade is Lisp. The latest hot language, Python, is a watered-down\nLisp with infix syntax and no macros. A new Lisp would be a natural\nstep in this progression.I sometimes think that it would be a good marketing trick to call\nit an improved version of Python. That sounds hipper than Lisp. To\nmany people, Lisp is a slow AI language with a lot of parentheses.\nFritz Kunze's official biography carefully avoids mentioning the\nL-word. But my guess is that we shouldn't be afraid to call the\nnew Lisp Lisp. Lisp still has a lot of latent respect among the\nvery best hackers \u2014 the ones who took 6.001 and understood it, for\nexample. 
And those are the users you need to win.In \"How to Become a Hacker,\" Eric Raymond describes Lisp as something\nlike Latin or Greek \u2014 a language you should learn as an intellectual\nexercise, even though you won't actually use it:\n\n Lisp is worth learning for the profound enlightenment experience\n you will have when you finally get it; that experience will make\n you a better programmer for the rest of your days, even if you\n never actually use Lisp itself a lot.\n\nIf I didn't know Lisp, reading this would set me asking questions.\nA language that would make me a better programmer, if it means\nanything at all, means a language that would be better for programming.\nAnd that is in fact the implication of what Eric is saying.As long as that idea is still floating around, I think hackers will\nbe receptive enough to a new Lisp, even if it is called Lisp. But\nthis Lisp must be a hacker's language, like the classic Lisps of\nthe 1970s. It must be terse, simple, and hackable. And it must have\npowerful libraries for doing what hackers want to do now.In the matter of libraries I think there is room to beat languages\nlike Perl and Python at their own game. A lot of the new applications\nthat will need to be written in the coming years will be \nserver-based\napplications. There's no reason a new Lisp shouldn't have string\nlibraries as good as Perl, and if this new Lisp also had powerful\nlibraries for server-based applications, it could be very popular.\nReal hackers won't turn up their noses at a new tool that will let\nthem solve hard problems with a few library calls. Remember, hackers\nare lazy.It could be an even bigger win to have core language support for\nserver-based applications. For example, explicit support for programs\nwith multiple users, or data ownership at the level of type tags.Server-based applications also give us the answer to the question\nof what this new Lisp will be used to hack. It would not hurt to\nmake Lisp better as a scripting language for Unix. (It would be\nhard to make it worse.) But I think there are areas where existing\nlanguages would be easier to beat. I think it might be better to\nfollow the model of Tcl, and supply the Lisp together with a complete\nsystem for supporting server-based applications. Lisp is a natural\nfit for server-based applications. Lexical closures provide a way\nto get the effect of subroutines when the ui is just a series of\nweb pages. S-expressions map nicely onto html, and macros are good\nat generating it. There need to be better tools for writing\nserver-based applications, and there needs to be a new Lisp, and\nthe two would work very well together.12 The Dream LanguageBy way of summary, let's try describing the hacker's dream language.\nThe dream language is \nbeautiful, clean, and terse. It has an\ninteractive toplevel that starts up fast. You can write programs\nto solve common problems with very little code. Nearly all the\ncode in any program you write is code that's specific to your\napplication. Everything else has been done for you.The syntax of the language is brief to a fault. You never have to\ntype an unnecessary character, or even to use the shift key much.Using big abstractions you can write the first version of a program\nvery quickly. Later, when you want to optimize, there's a really\ngood profiler that tells you where to focus your attention. 
You\ncan make inner loops blindingly fast, even writing inline byte code\nif you need to.There are lots of good examples to learn from, and the language is\nintuitive enough that you can learn how to use it from examples in\na couple minutes. You don't need to look in the manual much. The\nmanual is thin, and has few warnings and qualifications.The language has a small core, and powerful, highly orthogonal\nlibraries that are as carefully designed as the core language. The\nlibraries all work well together; everything in the language fits\ntogether like the parts in a fine camera. Nothing is deprecated,\nor retained for compatibility. The source code of all the libraries\nis readily available. It's easy to talk to the operating system\nand to applications written in other languages.The language is built in layers. The higher-level abstractions are\nbuilt in a very transparent way out of lower-level abstractions,\nwhich you can get hold of if you want.Nothing is hidden from you that doesn't absolutely have to be. The\nlanguage offers abstractions only as a way of saving you work,\nrather than as a way of telling you what to do. In fact, the language\nencourages you to be an equal participant in its design. You can\nchange everything about it, including even its syntax, and anything\nyou write has, as much as possible, the same status as what comes\npredefined.Notes[1] Macros very close to the modern idea were proposed by Timothy\nHart in 1964, two years after Lisp 1.5 was released. What was\nmissing, initially, were ways to avoid variable capture and multiple\nevaluation; Hart's examples are subject to both.[2] In When the Air Hits Your Brain, neurosurgeon Frank Vertosick\nrecounts a conversation in which his chief resident, Gary, talks\nabout the difference between surgeons and internists (\"fleas\"):\n\n Gary and I ordered a large pizza and found an open booth. The\n chief lit a cigarette. \"Look at those goddamn fleas, jabbering\n about some disease they'll see once in their lifetimes. That's\n the trouble with fleas, they only like the bizarre stuff. They\n hate their bread and butter cases. That's the difference between\n us and the fucking fleas. See, we love big juicy lumbar disc\n herniations, but they hate hypertension....\"\n\nIt's hard to think of a lumbar disc herniation as juicy (except\nliterally). And yet I think I know what they mean. I've often had\na juicy bug to track down. Someone who's not a programmer would\nfind it hard to imagine that there could be pleasure in a bug.\nSurely it's better if everything just works. In one way, it is.\nAnd yet there is undeniably a grim satisfaction in hunting down\ncertain sorts of bugs."} {"title": "pow", "text": "January 2017People who are powerful but uncharismatic will tend to be disliked.\nTheir power makes them a target for criticism that they don't have\nthe charisma to disarm. That was Hillary Clinton's problem. It also\ntends to be a problem for any CEO who is more of a builder than a\nschmoozer. And yet the builder-type CEO is (like Hillary) probably\nthe best person for the job.I don't think there is any solution to this problem. It's human\nnature. The best we can do is to recognize that it's happening, and\nto understand that being a magnet for criticism is sometimes a sign\nnot that someone is the wrong person for a job, but that they're\nthe right one."} {"title": "submarine", "text": "April 2005\"Suits make a corporate comeback,\" says the New\nYork Times. Why does this sound familiar? 
Maybe because\nthe suit was also back in February,\n\nSeptember\n2004, June\n2004, March\n2004, September\n2003, \n\nNovember\n2002, \nApril 2002,\nand February\n2002.\n\nWhy do the media keep running stories saying suits are back? Because\nPR firms tell \nthem to. One of the most surprising things I discovered\nduring my brief business career was the existence of the PR industry,\nlurking like a huge, quiet submarine beneath the news. Of the\nstories you read in traditional media that aren't about politics,\ncrimes, or disasters, more than half probably come from PR firms.I know because I spent years hunting such \"press hits.\" Our startup spent\nits entire marketing budget on PR: at a time when we were assembling\nour own computers to save money, we were paying a PR firm $16,000\na month. And they were worth it. PR is the news equivalent of\nsearch engine optimization; instead of buying ads, which readers\nignore, you get yourself inserted directly into the stories. [1]Our PR firm\nwas one of the best in the business. In 18 months, they got press\nhits in over 60 different publications. \nAnd we weren't the only ones they did great things for. \nIn 1997 I got a call from another\nstartup founder considering hiring them to promote his company. I\ntold him they were PR gods, worth every penny of their outrageous \nfees. But I remember thinking his company's name was odd.\nWhy call an auction site \"eBay\"?\nSymbiosisPR is not dishonest. Not quite. In fact, the reason the best PR\nfirms are so effective is precisely that they aren't dishonest.\nThey give reporters genuinely valuable information. A good PR firm\nwon't bug reporters just because the client tells them to; they've\nworked hard to build their credibility with reporters, and they\ndon't want to destroy it by feeding them mere propaganda.If anyone is dishonest, it's the reporters. The main reason PR \nfirms exist is that reporters are lazy. Or, to put it more nicely,\noverworked. Really they ought to be out there digging up stories\nfor themselves. But it's so tempting to sit in their offices and\nlet PR firms bring the stories to them. After all, they know good\nPR firms won't lie to them.A good flatterer doesn't lie, but tells his victim selective truths\n(what a nice color your eyes are). Good PR firms use the same\nstrategy: they give reporters stories that are true, but whose truth\nfavors their clients.For example, our PR firm often pitched stories about how the Web \nlet small merchants compete with big ones. This was perfectly true.\nBut the reason reporters ended up writing stories about this\nparticular truth, rather than some other one, was that small merchants\nwere our target market, and we were paying the piper.Different publications vary greatly in their reliance on PR firms.\nAt the bottom of the heap are the trade press, who make most of\ntheir money from advertising and would give the magazines away for\nfree if advertisers would let them. [2] The average\ntrade publication is a bunch of ads, glued together by just enough\narticles to make it look like a magazine. They're so desperate for\n\"content\" that some will print your press releases almost verbatim,\nif you take the trouble to write them to read like articles.At the other extreme are publications like the New York Times\nand the Wall Street Journal. Their reporters do go out and\nfind their own stories, at least some of the time. They'll listen \nto PR firms, but briefly and skeptically. 
We managed to get press \nhits in almost every publication we wanted, but we never managed \nto crack the print edition of the Times. [3]The weak point of the top reporters is not laziness, but vanity.\nYou don't pitch stories to them. You have to approach them as if\nyou were a specimen under their all-seeing microscope, and make it\nseem as if the story you want them to run is something they thought \nof themselves.Our greatest PR coup was a two-part one. We estimated, based on\nsome fairly informal math, that there were about 5000 stores on the\nWeb. We got one paper to print this number, which seemed neutral \nenough. But once this \"fact\" was out there in print, we could quote\nit to other publications, and claim that with 1000 users we had 20%\nof the online store market.This was roughly true. We really did have the biggest share of the\nonline store market, and 5000 was our best guess at its size. But\nthe way the story appeared in the press sounded a lot more definite.Reporters like definitive statements. For example, many of the\nstories about Jeremy Jaynes's conviction say that he was one of the\n10 worst spammers. This \"fact\" originated in Spamhaus's ROKSO list,\nwhich I think even Spamhaus would admit is a rough guess at the top\nspammers. The first stories about Jaynes cited this source, but\nnow it's simply repeated as if it were part of the indictment. \n[4]All you can say with certainty about Jaynes is that he was a fairly\nbig spammer. But reporters don't want to print vague stuff like\n\"fairly big.\" They want statements with punch, like \"top ten.\" And\nPR firms give them what they want.\nWearing suits, we're told, will make us \n3.6\npercent more productive.BuzzWhere the work of PR firms really does get deliberately misleading is in\nthe generation of \"buzz.\" They usually feed the same story to \nseveral different publications at once. And when readers see similar\nstories in multiple places, they think there is some important trend\nafoot. Which is exactly what they're supposed to think.When Windows 95 was launched, people waited outside stores\nat midnight to buy the first copies. None of them would have been\nthere without PR firms, who generated such a buzz in\nthe news media that it became self-reinforcing, like a nuclear chain\nreaction.I doubt PR firms realize it yet, but the Web makes it possible to \ntrack them at work. If you search for the obvious phrases, you\nturn up several efforts over the years to place stories about the \nreturn of the suit. For example, the Reuters article \n\nthat got picked up by USA\nToday in September 2004. \"The suit is back,\" it begins.Trend articles like this are almost always the work of\nPR firms. Once you know how to read them, it's straightforward to\nfigure out who the client is. With trend stories, PR firms usually\nline up one or more \"experts\" to talk about the industry generally. \nIn this case we get three: the NPD Group, the creative director of\nGQ, and a research director at Smith Barney. [5] When\nyou get to the end of the experts, look for the client. And bingo, \nthere it is: The Men's Wearhouse.Not surprising, considering The Men's Wearhouse was at that moment \nrunning ads saying \"The Suit is Back.\" Talk about a successful\npress hit-- a wire service article whose first sentence is your own\nad copy.The secret to finding other press hits from a given pitch\nis to realize that they all started from the same document back at\nthe PR firm. 
Search for a few key phrases and the names of the\nclients and the experts, and you'll turn up other variants of this \nstory.Casual\nfridays are out and dress codes are in writes Diane E. Lewis\nin The Boston Globe. In a remarkable coincidence, Ms. Lewis's\nindustry contacts also include the creative director of GQ.Ripped jeans and T-shirts are out, writes Mary Kathleen Flynn in\nUS News & World Report. And she too knows the \ncreative director of GQ.Men's suits\nare back writes Nicole Ford in Sexbuzz.Com (\"the ultimate men's\nentertainment magazine\").Dressing\ndown loses appeal as men suit up at the office writes Tenisha\nMercer of The Detroit News.\nNow that so many news articles are online, I suspect you could find\na similar pattern for most trend stories placed by PR firms. I\npropose we call this new sport \"PR diving,\" and I'm sure there are\nfar more striking examples out there than this clump of five stories.OnlineAfter spending years chasing them, it's now second nature\nto me to recognize press hits for what they are. But before we\nhired a PR firm I had no idea where articles in the mainstream media\ncame from. I could tell a lot of them were crap, but I didn't\nrealize why.Remember the exercises in critical reading you did in school, where\nyou had to look at a piece of writing and step back and ask whether\nthe author was telling the whole truth? If you really want to be\na critical reader, it turns out you have to step back one step\nfurther, and ask not just whether the author is telling the truth,\nbut why he's writing about this subject at all.Online, the answer tends to be a lot simpler. Most people who\npublish online write what they write for the simple reason that\nthey want to. You\ncan't see the fingerprints of PR firms all over the articles, as\nyou can in so many print publications-- which is one of the reasons,\nthough they may not consciously realize it, that readers trust\nbloggers more than Business Week.I was talking recently to a friend who works for a\nbig newspaper. He thought the print media were in serious trouble,\nand that they were still mostly in denial about it. \"They think\nthe decline is cyclic,\" he said. \"Actually it's structural.\"In other words, the readers are leaving, and they're not coming\nback.\nWhy? I think the main reason is that the writing online is more honest.\nImagine how incongruous the New York Times article about\nsuits would sound if you read it in a blog:\n The urge to look corporate-- sleek, commanding,\n prudent, yet with just a touch of hubris on your well-cut sleeve--\n is an unexpected development in a time of business disgrace.\n \nThe problem\nwith this article is not just that it originated in a PR firm.\nThe whole tone is bogus. This is the tone of someone writing down\nto their audience.Whatever its flaws, the writing you find online\nis authentic. It's not mystery meat cooked up\nout of scraps of pitch letters and press releases, and pressed into \nmolds of zippy\njournalese. It's people writing what they think.I didn't realize, till there was an alternative, just how artificial\nmost of the writing in the mainstream media was. I'm not saying\nI used to believe what I read in Time and Newsweek. Since high\nschool, at least, I've thought of magazines like that more as\nguides to what ordinary people were being\ntold to think than as \nsources of information. But I didn't realize till the last \nfew years that writing for publication didn't have to mean writing\nthat way. 
I didn't realize you could write as candidly and\ninformally as you would if you were writing to a friend.Readers aren't the only ones who've noticed the\nchange. The PR industry has too.\nA hilarious article\non the site of the PR Society of America gets to the heart of the \nmatter:\n Bloggers are sensitive about becoming mouthpieces\n for other organizations and companies, which is the reason they\n began blogging in the first place. \nPR people fear bloggers for the same reason readers\nlike them. And that means there may be a struggle ahead. As\nthis new kind of writing draws readers away from traditional media, we\nshould be prepared for whatever PR mutates into to compensate. \nWhen I think \nhow hard PR firms work to score press hits in the traditional \nmedia, I can't imagine they'll work any less hard to feed stories\nto bloggers, if they can figure out how.\nNotes[1] PR has at least \none beneficial feature: it favors small companies. If PR didn't \nwork, the only alternative would be to advertise, and only big\ncompanies can afford that.[2] Advertisers pay \nless for ads in free publications, because they assume readers \nignore something they get for free. This is why so many trade\npublications nominally have a cover price and yet give away free\nsubscriptions with such abandon.[3] Different sections\nof the Times vary so much in their standards that they're\npractically different papers. Whoever fed the style section reporter\nthis story about suits coming back would have been sent packing by\nthe regular news reporters.[4] The most striking\nexample I know of this type is the \"fact\" that the Internet worm \nof 1988 infected 6000 computers. I was there when it was cooked up,\nand this was the recipe: someone guessed that there were about\n60,000 computers attached to the Internet, and that the worm might\nhave infected ten percent of them.Actually no one knows how many computers the worm infected, because\nthe remedy was to reboot them, and this destroyed all traces. But\npeople like numbers. And so this one is now replicated\nall over the Internet, like a little worm of its own.[5] Not all were\nnecessarily supplied by the PR firm. Reporters sometimes call a few\nadditional sources on their own, like someone adding a few fresh \nvegetables to a can of soup.\nThanks to Ingrid Basset, Trevor Blackwell, Sarah Harlin, Jessica \nLivingston, Jackie McDonough, Robert Morris, and Aaron Swartz (who\nalso found the PRSA article) for reading drafts of this.Correction: Earlier versions used a recent\nBusiness Week article mentioning del.icio.us as an example\nof a press hit, but Joshua Schachter tells me \nit was spontaneous."} {"title": "sun", "text": "September 2017The most valuable insights are both general and surprising. \nF\u00a0=\u00a0ma for example. But general and surprising is a hard\ncombination to achieve. That territory tends to be picked\nclean, precisely because those insights are so valuable.Ordinarily, the best that people can do is one without the\nother: either surprising without being general (e.g.\ngossip), or general without being surprising (e.g.\nplatitudes).Where things get interesting is the moderately valuable\ninsights. You get those from small additions of whichever\nquality was missing. The more common case is a small\naddition of generality: a piece of gossip that's more than\njust gossip, because it teaches something interesting about\nthe world. 
But another less common approach is to focus on\nthe most general ideas and see if you can find something new\nto say about them. Because these start out so general, you\nonly need a small delta of novelty to produce a useful\ninsight.A small delta of novelty is all you'll be able to get most\nof the time. Which means if you take this route, your ideas\nwill seem a lot like ones that already exist. Sometimes\nyou'll find you've merely rediscovered an idea that did\nalready exist. But don't be discouraged. Remember the huge\nmultiplier that kicks in when you do manage to think of\nsomething even a little new.Corollary: the more general the ideas you're talking about,\nthe less you should worry about repeating yourself. If you\nwrite enough, it's inevitable you will. Your brain is much\nthe same from year to year and so are the stimuli that hit\nit. I feel slightly bad when I find I've said something\nclose to what I've said before, as if I were plagiarizing\nmyself. But rationally one shouldn't. You won't say\nsomething exactly the same way the second time, and that\nvariation increases the chance you'll get that tiny but\ncritical delta of novelty.And of course, ideas beget ideas. (That sounds \nfamiliar.)\nAn idea with a small amount of novelty could lead to one\nwith more. But only if you keep going. So it's doubly\nimportant not to let yourself be discouraged by people who\nsay there's not much new about something you've discovered.\n\"Not much new\" is a real achievement when you're talking\nabout the most general ideas. It's not true that there's nothing new under the sun. There\nare some domains where there's almost nothing new. But\nthere's a big difference between nothing and almost nothing,\nwhen it's multiplied by the area under the sun.\nThanks to Sam Altman, Patrick Collison, and Jessica\nLivingston for reading drafts of this."} {"title": "weird", "text": "August 2021When people say that in their experience all programming languages\nare basically equivalent, they're making a statement not about\nlanguages but about the kind of programming they've done.99.5% of programming consists of gluing together calls to library\nfunctions. All popular languages are equally good at this. So one\ncan easily spend one's whole career operating in the intersection\nof popular programming languages.But the other .5% of programming is disproportionately interesting.\nIf you want to learn what it consists of, the weirdness of weird\nlanguages is a good clue to follow.Weird languages aren't weird by accident. Not the good ones, at\nleast. The weirdness of the good ones usually implies the existence\nof some form of programming that's not just the usual gluing together\nof library calls.A concrete example: Lisp macros. Lisp macros seem weird even to\nmany Lisp programmers. They're not only not in the intersection of\npopular languages, but by their nature would be hard to implement\nproperly in a language without turning it into a dialect of\nLisp. And macros are definitely evidence of techniques that go\nbeyond glue programming. For example, solving problems by first\nwriting a language for problems of that type, and then writing\nyour specific application in it. Nor is this all you can do with\nmacros; it's just one region in a space of program-manipulating\ntechniques that even now is far from fully explored.So if you want to expand your concept of what programming can be,\none way to do it is by learning weird languages. 
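The pattern mentioned above, first writing a language for problems\nof that type and then writing your application in it, can be only a\nfew lines of macro. Here is a sketch in Common Lisp; deftest is an\ninvented example, not anything standard:\n\n  (defmacro deftest (name &body checks)\n    ;; Each check is an expression that should return true. Because\n    ;; the macro sees the checks as data, it can report the failing\n    ;; form itself, which a plain function could not do.\n    `(defun ,name ()\n       (list ,@(mapcar (lambda (c) `(list ',c ,c)) checks))))\n\n  (deftest arithmetic\n    (= (+ 2 2) 4)\n    (> 3 2))\n\n  (arithmetic)   ; => (((= (+ 2 2) 4) T) ((> 3 2) T))\n\nA few macros like this amount to a little testing language, the kind\nof thing that lives outside the intersection of popular languages. 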
Pick a language\nthat most programmers consider weird but whose median user is smart,\nand then focus on the differences between this language and the\nintersection of popular languages. What can you say in this language\nthat would be impossibly inconvenient to say in others? In the\nprocess of learning how to say things you couldn't previously say,\nyou'll probably be learning how to think things you couldn't\npreviously think.\nThanks to Trevor Blackwell, Patrick Collison, Daniel Gackle, Amjad\nMasad, and Robert Morris for reading drafts of this.\n"} {"title": "diff", "text": "December 2001 (rev. May 2002)\n\n(This article came about in response to some questions on\nthe LL1 mailing list. It is now\nincorporated in Revenge of the Nerds.)When McCarthy designed Lisp in the late 1950s, it was\na radical departure from existing languages,\nthe most important of which was Fortran.Lisp embodied nine new ideas:\n1. Conditionals. A conditional is an if-then-else\nconstruct. We take these for granted now. They were \ninvented\nby McCarthy in the course of developing Lisp. \n(Fortran at that time only had a conditional\ngoto, closely based on the branch instruction in the \nunderlying hardware.) McCarthy, who was on the Algol committee, got\nconditionals into Algol, whence they spread to most other\nlanguages.2. A function type. In Lisp, functions are first class \nobjects-- they're a data type just like integers, strings,\netc, and have a literal representation, can be stored in variables,\ncan be passed as arguments, and so on.3. Recursion. Recursion existed as a mathematical concept\nbefore Lisp of course, but Lisp was the first programming language to support\nit. (It's arguably implicit in making functions first class\nobjects.)4. A new concept of variables. In Lisp, all variables\nare effectively pointers. Values are what\nhave types, not variables, and assigning or binding\nvariables means copying pointers, not what they point to.5. Garbage-collection.6. Programs composed of expressions. Lisp programs are \ntrees of expressions, each of which returns a value. \n(In some Lisps expressions\ncan return multiple values.) This is in contrast to Fortran\nand most succeeding languages, which distinguish between\nexpressions and statements.It was natural to have this\ndistinction in Fortran because (not surprisingly in a language\nwhere the input format was punched cards) the language was\nline-oriented. You could not nest statements. And\nso while you needed expressions for math to work, there was\nno point in making anything else return a value, because\nthere could not be anything waiting for it.This limitation\nwent away with the arrival of block-structured languages,\nbut by then it was too late. The distinction between\nexpressions and statements was entrenched. It spread from \nFortran into Algol and thence to both their descendants.When a language is made entirely of expressions, you can\ncompose expressions however you want. You can say either\n(using Arc syntax)\n\n(if foo (= x 1) (= x 2))\n\nor\n\n(= x (if foo 1 2))\n\n7. A symbol type. Symbols differ from strings in that\nyou can test equality by comparing a pointer.8. A notation for code using trees of symbols.9. The whole language always available.
\nThere is\nno real distinction between read-time, compile-time, and runtime.\nYou can compile or run code while reading, read or run code\nwhile compiling, and read or compile code at runtime.Running code at read-time lets users reprogram Lisp's syntax;\nrunning code at compile-time is the basis of macros; compiling\nat runtime is the basis of Lisp's use as an extension\nlanguage in programs like Emacs; and reading at runtime\nenables programs to communicate using s-expressions, an\nidea recently reinvented as XML.\nWhen Lisp was first invented, all these ideas were far\nremoved from ordinary programming practice, which was\ndictated largely by the hardware available in the late 1950s.Over time, the default language, embodied\nin a succession of popular languages, has\ngradually evolved toward Lisp. 1-5 are now widespread.\n6 is starting to appear in the mainstream.\nPython has a form of 7, though there doesn't seem to be\nany syntax for it. \n8, which (with 9) is what makes Lisp macros\npossible, is so far still unique to Lisp,\nperhaps because (a) it requires those parens, or something \njust as bad, and (b) if you add that final increment of power, \nyou can no \nlonger claim to have invented a new language, but only\nto have designed a new dialect of Lisp ; -)Though useful to present-day programmers, it's\nstrange to describe Lisp in terms of its\nvariation from the random expedients other languages\nadopted. That was not, probably, how McCarthy\nthought of it. Lisp wasn't designed to fix the mistakes\nin Fortran; it came about more as the byproduct of an\nattempt to axiomatize computation."} {"title": "hubs", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nOctober 2011If you look at a list of US cities sorted by population, the number\nof successful startups per capita varies by orders of magnitude.\nSomehow it's as if most places were sprayed with startupicide.I wondered about this for years. I could see the average town was\nlike a roach motel for startup ambitions: smart, ambitious people\nwent in, but no startups came out. But I was never able to figure\nout exactly what happened inside the motel\u2014exactly what was\nkilling all the potential startups.\n[1]A couple weeks ago I finally figured it out. I was framing the\nquestion wrong. The problem is not that most towns kill startups.\nIt's that death is the default for startups,\nand most towns don't save them. Instead of thinking of most places\nas being sprayed with startupicide, it's more accurate to think of\nstartups as all being poisoned, and a few places being sprayed with\nthe antidote.Startups in other places are just doing what startups naturally do:\nfail. The real question is, what's saving startups in places\nlike Silicon Valley?\n[2]EnvironmentI think there are two components to the antidote: being in a place\nwhere startups are the cool thing to do, and chance meetings with\npeople who can help you. And what drives them both is the number\nof startup people around you.The first component is particularly helpful in the first stage of\na startup's life, when you go from merely having an interest in\nstarting a company to actually doing it. It's quite a leap to start\na startup. It's an unusual thing to do. But in Silicon Valley it\nseems normal.\n[3]In most places, if you start a startup, people treat you as if\nyou're unemployed. People in the Valley aren't automatically\nimpressed with you just because you're starting a company, but they\npay attention. 
Anyone who's been here any amount of time knows not\nto default to skepticism, no matter how inexperienced you seem or\nhow unpromising your idea sounds at first, because they've all seen\ninexperienced founders with unpromising sounding ideas who a few\nyears later were billionaires.Having people around you care about what you're doing is an\nextraordinarily powerful force. Even the\nmost willful people are susceptible to it. About a year after we\nstarted Y Combinator I said something to a partner at a well known\nVC firm that gave him the (mistaken) impression I was considering\nstarting another startup. He responded so eagerly that for about\nhalf a second I found myself considering doing it.In most other cities, the prospect of starting a startup just doesn't\nseem real. In the Valley it's not only real but fashionable. That\nno doubt causes a lot of people to start startups who shouldn't.\nBut I think that's ok. Few people are suited to running a startup,\nand it's very hard to predict beforehand which are (as I know all\ntoo well from being in the business of trying to predict beforehand),\nso lots of people starting startups who shouldn't is probably the\noptimal state of affairs. As long as you're at a point in your\nlife when you can bear the risk of failure, the best way to find\nout if you're suited to running a startup is to try\nit.ChanceThe second component of the antidote is chance meetings with people\nwho can help you. This force works in both phases: both in the\ntransition from the desire to start a startup to starting one, and\nthe transition from starting a company to succeeding. The power\nof chance meetings is more variable than people around you caring\nabout startups, which is like a sort of background radiation that\naffects everyone equally, but at its strongest it is far stronger.Chance meetings produce miracles to compensate for the disasters\nthat characteristically befall startups. In the Valley, terrible\nthings happen to startups all the time, just like they do to startups\neverywhere. The reason startups are more likely to make it here\nis that great things happen to them too. In the Valley, lightning\nhas a sign bit.For example, you start a site for college students and you decide\nto move to the Valley for the summer to work on it. And then on a\nrandom suburban street in Palo Alto you happen to run into Sean\nParker, who understands the domain really well because he started\na similar startup himself, and also knows all the investors. And\nmoreover has advanced views, for 2004, on founders retaining control of their companies.You can't say precisely what the miracle will be, or even for sure\nthat one will happen. The best one can say is: if you're in a\nstartup hub, unexpected good things will probably happen to you,\nespecially if you deserve them.I bet this is true even for startups we fund. Even with us working\nto make things happen for them on purpose rather than by accident,\nthe frequency of helpful chance meetings in the Valley is so high\nthat it's still a significant increment on what we can deliver.Chance meetings play a role like the role relaxation plays in having\nideas. Most people have had the experience of working hard on some\nproblem, not being able to solve it, giving up and going to bed,\nand then thinking of the answer in the shower in the morning. 
What\nmakes the answer appear is letting your thoughts drift a bit\u2014and thus drift off the wrong\npath you'd been pursuing last night and onto the right one adjacent\nto it.Chance meetings let your acquaintance drift in the same way taking\na shower lets your thoughts drift. The critical thing in both cases\nis that they drift just the right amount. The meeting between Larry\nPage and Sergey Brin was a good example. They let their acquaintance\ndrift, but only a little; they were both meeting someone they had\na lot in common with.For Larry Page the most important component of the antidote was\nSergey Brin, and vice versa. The antidote is \npeople. It's not the\nphysical infrastructure of Silicon Valley that makes it work, or\nthe weather, or anything like that. Those helped get it started,\nbut now that the reaction is self-sustaining what drives it is the\npeople.Many observers have noticed that one of the most distinctive things\nabout startup hubs is the degree to which people help one another\nout, with no expectation of getting anything in return. I'm not\nsure why this is so. Perhaps it's because startups are less of a\nzero sum game than most types of business; they are rarely killed\nby competitors. Or perhaps it's because so many startup founders\nhave backgrounds in the sciences, where collaboration is encouraged.A large part of YC's function is to accelerate that process. We're\na sort of Valley within the Valley, where the density of people\nworking on startups and their willingness to help one another are\nboth artificially amplified.NumbersBoth components of the antidote\u2014an environment that encourages\nstartups, and chance meetings with people who help you\u2014are\ndriven by the same underlying cause: the number of startup people\naround you. To make a startup hub, you need a lot of people\ninterested in startups.There are three reasons. The first, obviously, is that if you don't\nhave enough density, the chance meetings don't happen.\n[4]\nThe second is that different startups need such different things, so\nyou need a lot of people to supply each startup with what they need\nmost. Sean Parker was exactly what Facebook needed in 2004. Another\nstartup might have needed a database guy, or someone with connections\nin the movie business.This is one of the reasons we fund such a large number of companies,\nincidentally. The bigger the community, the greater the chance it\nwill contain the person who has that one thing you need most.The third reason you need a lot of people to make a startup hub is\nthat once you have enough people interested in the same problem,\nthey start to set the social norms. And it is a particularly\nvaluable thing when the atmosphere around you encourages you to do\nsomething that would otherwise seem too ambitious. In most places\nthe atmosphere pulls you back toward the mean.I flew into the Bay Area a few days ago. I notice this every time\nI fly over the Valley: somehow you can sense something is going on. \nObviously you can sense prosperity in how well kept a\nplace looks. But there are different kinds of prosperity. Silicon\nValley doesn't look like Boston, or New York, or LA, or DC. I tried\nasking myself what word I'd use to describe the feeling the Valley\nradiated, and the word that came to mind was optimism.Notes[1]\nI'm not saying it's impossible to succeed in a city with few\nother startups, just harder. If you're sufficiently good at\ngenerating your own morale, you can survive without external\nencouragement. 
Wufoo was based in Tampa and they succeeded. But\nthe Wufoos are exceptionally disciplined.[2]\nIncidentally, this phenomenon is not limited to startups. Most\nunusual ambitions fail, unless the person who has them manages to\nfind the right sort of community.[3]\nStarting a company is common, but starting a startup is rare.\nI've talked about the distinction between the two elsewhere, but\nessentially a startup is a new business designed for scale. Most\nnew businesses are service businesses and except in rare cases those\ndon't scale.[4]\nAs I was writing this, I had a demonstration of the density of\nstartup people in the Valley. Jessica and I bicycled to University\nAve in Palo Alto to have lunch at the fabulous Oren's Hummus. As\nwe walked in, we met Charlie Cheever sitting near the door. Selina\nTobaccowala stopped to say hello on her way out. Then Josh Wilson\ncame in to pick up a take out order. After lunch we went to get\nfrozen yogurt. On the way we met Rajat Suri. When we got to the\nyogurt place, we found Dave Shen there, and as we walked out we ran\ninto Yuri Sagalov. We walked with him for a block or so and we ran\ninto Muzzammil Zaveri, and then a block later we met Aydin Senkut.\nThis is everyday life in Palo Alto. I wasn't trying to meet people;\nI was just having lunch. And I'm sure for every startup founder\nor investor I saw that I knew, there were 5 more I didn't. If Ron\nConway had been with us he would have met 30 people he knew.Thanks to Sam Altman, Paul Buchheit, Jessica Livingston, and\nHarj Taggar for reading drafts of this."} {"title": "iflisp", "text": "May 2003If Lisp is so great, why don't more people use it? I was \nasked this question by a student in the audience at a \ntalk I gave recently. Not for the first time, either.In languages, as in so many things, there's not much \ncorrelation between popularity and quality. Why does \nJohn Grisham (King of Torts sales rank, 44) outsell\nJane Austen (Pride and Prejudice sales rank, 6191)?\nWould even Grisham claim that it's because he's a better\nwriter?Here's the first sentence of Pride and Prejudice:\n\nIt is a truth universally acknowledged, that a single man \nin possession of a good fortune must be in want of a\nwife.\n\n\"It is a truth universally acknowledged?\" Long words for\nthe first sentence of a love story.Like Jane Austen, Lisp looks hard. Its syntax, or lack\nof syntax, makes it look completely unlike \nthe languages\nmost people are used to. Before I learned Lisp, I was afraid\nof it too. I recently came across a notebook from 1983\nin which I'd written:\n\nI suppose I should learn Lisp, but it seems so foreign.\n\nFortunately, I was 19 at the time and not too resistant to learning\nnew things. I was so ignorant that learning\nalmost anything meant learning new things.People frightened by Lisp make up other reasons for not\nusing it. The standard\nexcuse, back when C was the default language, was that Lisp\nwas too slow. Now that Lisp dialects are among\nthe faster\nlanguages available, that excuse has gone away.\nNow the standard excuse is openly circular: that other languages\nare more popular.(Beware of such reasoning. It gets you Windows.)Popularity is always self-perpetuating, but it's especially\nso in programming languages. More libraries\nget written for popular languages, which makes them still\nmore popular. 
Programs often have to work with existing programs,\nand this is easier if they're written in the same language,\nso languages spread from program to program like a virus.\nAnd managers prefer popular languages, because they give them \nmore leverage over developers, who can more easily be replaced.Indeed, if programming languages were all more or less equivalent,\nthere would be little justification for using any but the most\npopular. But they aren't all equivalent, not by a long\nshot. And that's why less popular languages, like Jane Austen's \nnovels, continue to survive at all. When everyone else is reading \nthe latest John Grisham novel, there will always be a few people \nreading Jane Austen instead."} {"title": "know", "text": "December 2014I've read Villehardouin's chronicle of the Fourth Crusade at least\ntwo times, maybe three. And yet if I had to write down everything\nI remember from it, I doubt it would amount to much more than a\npage. Multiply this times several hundred, and I get an uneasy\nfeeling when I look at my bookshelves. What use is it to read all\nthese books if I remember so little from them?A few months ago, as I was reading Constance Reid's excellent\nbiography of Hilbert, I figured out if not the answer to this\nquestion, at least something that made me feel better about it.\nShe writes:\n\n Hilbert had no patience with mathematical lectures which filled\n the students with facts but did not teach them how to frame a\n problem and solve it. He often used to tell them that \"a perfect\n formulation of a problem is already half its solution.\"\n\nThat has always seemed to me an important point, and I was even\nmore convinced of it after hearing it confirmed by Hilbert.But how had I come to believe in this idea in the first place? A\ncombination of my own experience and other things I'd read. None\nof which I could at that moment remember! And eventually I'd forget\nthat Hilbert had confirmed it too. But my increased belief in the\nimportance of this idea would remain something I'd learned from\nthis book, even after I'd forgotten I'd learned it.Reading and experience train your model of the world. And even if\nyou forget the experience or what you read, its effect on your model\nof the world persists. Your mind is like a compiled program you've\nlost the source of. It works, but you don't know why.The place to look for what I learned from Villehardouin's chronicle\nis not what I remember from it, but my mental models of the crusades,\nVenice, medieval culture, siege warfare, and so on. Which doesn't\nmean I couldn't have read more attentively, but at least the harvest\nof reading is not so miserably small as it might seem.This is one of those things that seem obvious in retrospect. But\nit was a surprise to me and presumably would be to anyone else who\nfelt uneasy about (apparently) forgetting so much they'd read.Realizing it does more than make you feel a little better about\nforgetting, though. There are specific implications.For example, reading and experience are usually \"compiled\" at the\ntime they happen, using the state of your brain at that time. The\nsame book would get compiled differently at different points in\nyour life. Which means it is very much worth reading important\nbooks multiple times. I always used to feel some misgivings about\nrereading books. I unconsciously lumped reading together with work\nlike carpentry, where having to do something again is a sign you\ndid it wrong the first time. 
Whereas now the phrase \"already read\"\nseems almost ill-formed.Intriguingly, this implication isn't limited to books. Technology\nwill increasingly make it possible to relive our experiences. When\npeople do that today it's usually to enjoy them again (e.g. when\nlooking at pictures of a trip) or to find the origin of some bug in\ntheir compiled code (e.g. when Stephen Fry succeeded in remembering\nthe childhood trauma that prevented him from singing). But as\ntechnologies for recording and playing back your life improve, it\nmay become common for people to relive experiences without any goal\nin mind, simply to learn from them again as one might when rereading\na book.Eventually we may be able not just to play back experiences but\nalso to index and even edit them. So although not knowing how you\nknow things may seem part of being human, it may not be.\nThanks to Sam Altman, Jessica Livingston, and Robert Morris for reading \ndrafts of this."} {"title": "rss", "text": "Aaron Swartz created a scraped\nfeed\nof the essays page."} {"title": "todo", "text": "April 2012A palliative care nurse called Bronnie Ware made a list of the\nbiggest regrets\nof the dying. Her list seems plausible. I could see\nmyself \u2014 can see myself \u2014 making at least 4 of these\n5 mistakes.If you had to compress them into a single piece of advice, it might\nbe: don't be a cog. The 5 regrets paint a portrait of post-industrial\nman, who shrinks himself into a shape that fits his circumstances,\nthen turns dutifully till he stops.The alarming thing is, the mistakes that produce these regrets are\nall errors of omission. You forget your dreams, ignore your family,\nsuppress your feelings, neglect your friends, and forget to be\nhappy. Errors of omission are a particularly dangerous type of\nmistake, because you make them by default.I would like to avoid making these mistakes. But how do you avoid\nmistakes you make by default? Ideally you transform your life so\nit has other defaults. But it may not be possible to do that\ncompletely. As long as these mistakes happen by default, you probably\nhave to be reminded not to make them. So I inverted the 5 regrets,\nyielding a list of 5 commands\n\n Don't ignore your dreams; don't work too much; say what you\n think; cultivate friendships; be happy.\n\nwhich I then put at the top of the file I use as a todo list."} {"title": "vb", "text": "January 2016Life is short, as everyone knows. When I was a kid I used to wonder\nabout this. Is life actually short, or are we really complaining\nabout its finiteness? Would we be just as likely to feel life was\nshort if we lived 10 times as long?Since there didn't seem any way to answer this question, I stopped\nwondering about it. Then I had kids. That gave me a way to answer\nthe question, and the answer is that life actually is short.Having kids showed me how to convert a continuous quantity, time,\ninto discrete quantities. You only get 52 weekends with your 2 year\nold. If Christmas-as-magic lasts from say ages 3 to 10, you only\nget to watch your child experience it 8 times. And while it's\nimpossible to say what is a lot or a little of a continuous quantity\nlike time, 8 is not a lot of something. If you had a handful of 8\npeanuts, or a shelf of 8 books to choose from, the quantity would\ndefinitely seem limited, no matter what your lifespan was.Ok, so life actually is short. Does it make any difference to know\nthat?It has for me. It means arguments of the form \"Life is too short\nfor x\" have great force. 
It's not just a figure of speech to say\nthat life is too short for something. It's not just a synonym for\nannoying. If you find yourself thinking that life is too short for\nsomething, you should try to eliminate it if you can.When I ask myself what I've found life is too short for, the word\nthat pops into my head is \"bullshit.\" I realize that answer is\nsomewhat tautological. It's almost the definition of bullshit that\nit's the stuff that life is too short for. And yet bullshit does\nhave a distinctive character. There's something fake about it.\nIt's the junk food of experience.\n[1]If you ask yourself what you spend your time on that's bullshit,\nyou probably already know the answer. Unnecessary meetings, pointless\ndisputes, bureaucracy, posturing, dealing with other people's\nmistakes, traffic jams, addictive but unrewarding pastimes.There are two ways this kind of thing gets into your life: it's\neither forced on you, or it tricks you. To some extent you have to\nput up with the bullshit forced on you by circumstances. You need\nto make money, and making money consists mostly of errands. Indeed,\nthe law of supply and demand insures that: the more rewarding some\nkind of work is, the cheaper people will do it. It may be that\nless bullshit is forced on you than you think, though. There has\nalways been a stream of people who opt out of the default grind and\ngo live somewhere where opportunities are fewer in the conventional\nsense, but life feels more authentic. This could become more common.You can do it on a smaller scale without moving. The amount of\ntime you have to spend on bullshit varies between employers. Most\nlarge organizations (and many small ones) are steeped in it. But\nif you consciously prioritize bullshit avoidance over other factors\nlike money and prestige, you can probably find employers that will\nwaste less of your time.If you're a freelancer or a small company, you can do this at the\nlevel of individual customers. If you fire or avoid toxic customers,\nyou can decrease the amount of bullshit in your life by more than\nyou decrease your income.But while some amount of bullshit is inevitably forced on you, the\nbullshit that sneaks into your life by tricking you is no one's\nfault but your own. And yet the bullshit you choose may be harder\nto eliminate than the bullshit that's forced on you. Things that\nlure you into wasting your time have to be really good at\ntricking you. An example that will be familiar to a lot of people\nis arguing online. When someone\ncontradicts you, they're in a sense attacking you. Sometimes pretty\novertly. Your instinct when attacked is to defend yourself. But\nlike a lot of instincts, this one wasn't designed for the world we\nnow live in. Counterintuitive as it feels, it's better most of\nthe time not to defend yourself. Otherwise these people are literally\ntaking your life.\n[2]Arguing online is only incidentally addictive. There are more\ndangerous things than that. As I've written before, one byproduct\nof technical progress is that things we like tend to become more\naddictive. Which means we will increasingly have to make a conscious\neffort to avoid addictions \u2014 to stand outside ourselves and ask \"is\nthis how I want to be spending my time?\"As well as avoiding bullshit, one should actively seek out things\nthat matter. But different things matter to different people, and\nmost have to learn what matters to them.
A few are lucky and realize\nearly on that they love math or taking care of animals or writing,\nand then figure out a way to spend a lot of time doing it. But\nmost people start out with a life that's a mix of things that\nmatter and things that don't, and only gradually learn to distinguish\nbetween them.For the young especially, much of this confusion is induced by the\nartificial situations they find themselves in. In middle school and\nhigh school, what the other kids think of you seems the most important\nthing in the world. But when you ask adults what they got wrong\nat that age, nearly all say they cared too much what other kids\nthought of them.One heuristic for distinguishing stuff that matters is to ask\nyourself whether you'll care about it in the future. Fake stuff\nthat matters usually has a sharp peak of seeming to matter. That's\nhow it tricks you. The area under the curve is small, but its shape\njabs into your consciousness like a pin.The things that matter aren't necessarily the ones people would\ncall \"important.\" Having coffee with a friend matters. You won't\nfeel later like that was a waste of time.One great thing about having small children is that they make you\nspend time on things that matter: them. They grab your sleeve as\nyou're staring at your phone and say \"will you play with me?\" And\nodds are that is in fact the bullshit-minimizing option.If life is short, we should expect its shortness to take us by\nsurprise. And that is just what tends to happen. You take things\nfor granted, and then they're gone. You think you can always write\nthat book, or climb that mountain, or whatever, and then you realize\nthe window has closed. The saddest windows close when other people\ndie. Their lives are short too. After my mother died, I wished I'd\nspent more time with her. I lived as if she'd always be there.\nAnd in her typical quiet way she encouraged that illusion. But an\nillusion it was. I think a lot of people make the same mistake I\ndid.The usual way to avoid being taken by surprise by something is to\nbe consciously aware of it. Back when life was more precarious,\npeople used to be aware of death to a degree that would now seem a\nbit morbid. I'm not sure why, but it doesn't seem the right answer\nto be constantly reminding oneself of the grim reaper hovering at\neveryone's shoulder. Perhaps a better solution is to look at the\nproblem from the other end. Cultivate a habit of impatience about\nthe things you most want to do. Don't wait before climbing that\nmountain or writing that book or visiting your mother. You don't\nneed to be constantly reminding yourself why you shouldn't wait.\nJust don't wait.I can think of two more things one does when one doesn't have much\nof something: try to get more of it, and savor what one has. Both\nmake sense here.How you live affects how long you live. Most people could do better.\nMe among them.But you can probably get even more effect by paying closer attention\nto the time you have. It's easy to let the days rush by. The\n\"flow\" that imaginative people love so much has a darker cousin\nthat prevents you from pausing to savor life amid the daily slurry\nof errands and alarms. One of the most striking things I've read\nwas not in a book, but the title of one: James Salter's Burning\nthe Days.It is possible to slow time somewhat. I've gotten better at it.\nKids help. 
When you have small children, there are a lot of moments\nso perfect that you can't help noticing.It does help too to feel that you've squeezed everything out of\nsome experience. The reason I'm sad about my mother is not just\nthat I miss her but that I think of all the things we could have\ndone that we didn't. My oldest son will be 7 soon. And while I\nmiss the 3 year old version of him, I at least don't have any regrets\nover what might have been. We had the best time a daddy and a 3\nyear old ever had.Relentlessly prune bullshit, don't wait to do things that matter,\nand savor the time you have. That's what you do when life is short.Notes[1]\nAt first I didn't like it that the word that came to mind was\none that had other meanings. But then I realized the other meanings\nare fairly closely related. Bullshit in the sense of things you\nwaste your time on is a lot like intellectual bullshit.[2]\nI chose this example deliberately as a note to self. I get\nattacked a lot online. People tell the craziest lies about me.\nAnd I have so far done a pretty mediocre job of suppressing the\nnatural human inclination to say \"Hey, that's not true!\"Thanks to Jessica Livingston and Geoff Ralston for reading drafts\nof this."} {"title": "web20", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nNovember 2005Does \"Web 2.0\" mean anything? Till recently I thought it didn't,\nbut the truth turns out to be more complicated. Originally, yes,\nit was meaningless. Now it seems to have acquired a meaning. And\nyet those who dislike the term are probably right, because if it\nmeans what I think it does, we don't need it.I first heard the phrase \"Web 2.0\" in the name of the Web 2.0\nconference in 2004. At the time it was supposed to mean using \"the\nweb as a platform,\" which I took to refer to web-based applications.\n[1]So I was surprised at a conference this summer when Tim O'Reilly\nled a session intended to figure out a definition of \"Web 2.0.\"\nDidn't it already mean using the web as a platform? And if it\ndidn't already mean something, why did we need the phrase at all?OriginsTim says the phrase \"Web 2.0\" first\narose in \"a brainstorming session between\nO'Reilly and Medialive International.\" What is Medialive International?\n\"Producers of technology tradeshows and conferences,\" according to\ntheir site. So presumably that's what this brainstorming session\nwas about. O'Reilly wanted to organize a conference about the web,\nand they were wondering what to call it.I don't think there was any deliberate plan to suggest there was a\nnew version of the web. They just wanted to make the point\nthat the web mattered again. It was a kind of semantic deficit\nspending: they knew new things were coming, and the \"2.0\" referred\nto whatever those might turn out to be.And they were right. New things were coming. But the new version\nnumber led to some awkwardness in the short term. In the process\nof developing the pitch for the first conference, someone must have\ndecided they'd better take a stab at explaining what that \"2.0\"\nreferred to. Whatever it meant, \"the web as a platform\" was at\nleast not too constricting.The story about \"Web 2.0\" meaning the web as a platform didn't live\nmuch past the first conference. By the second conference, what\n\"Web 2.0\" seemed to mean was something about democracy. At least,\nit did when people wrote about it online. The conference itself\ndidn't seem very grassroots. 
It cost $2800, so the only people who\ncould afford to go were VCs and people from big companies.And yet, oddly enough, Ryan Singel's article\nabout the conference in Wired News spoke of \"throngs of\ngeeks.\" When a friend of mine asked Ryan about this, it was news\nto him. He said he'd originally written something like \"throngs\nof VCs and biz dev guys\" but had later shortened it just to \"throngs,\"\nand that this must have in turn been expanded by the editors into\n\"throngs of geeks.\" After all, a Web 2.0 conference would presumably\nbe full of geeks, right?Well, no. There were about 7. Even Tim O'Reilly was wearing a \nsuit, a sight so alien I couldn't parse it at first. I saw\nhim walk by and said to one of the O'Reilly people \"that guy looks\njust like Tim.\"\"Oh, that's Tim. He bought a suit.\"\nI ran after him, and sure enough, it was. He explained that he'd\njust bought it in Thailand.The 2005 Web 2.0 conference reminded me of Internet trade shows\nduring the Bubble, full of prowling VCs looking for the next hot\nstartup. There was that same odd atmosphere created by a large \nnumber of people determined not to miss out. Miss out on what?\nThey didn't know. Whatever was going to happen\u2014whatever Web 2.0\nturned out to be.I wouldn't quite call it \"Bubble 2.0\" just because VCs are eager\nto invest again. The Internet is a genuinely big deal. The bust\nwas as much an overreaction as\nthe boom. It's to be expected that once we started to pull out of\nthe bust, there would be a lot of growth in this area, just as there\nwas in the industries that spiked the sharpest before the Depression.The reason this won't turn into a second Bubble is that the IPO\nmarket is gone. Venture investors\nare driven by exit strategies. The reason they were funding all \nthose laughable startups during the late 90s was that they hoped\nto sell them to gullible retail investors; they hoped to be laughing\nall the way to the bank. Now that route is closed. Now the default\nexit strategy is to get bought, and acquirers are less prone to\nirrational exuberance than IPO investors. The closest you'll get \nto Bubble valuations is Rupert Murdoch paying $580 million for \nMyspace. That's only off by a factor of 10 or so.1. AjaxDoes \"Web 2.0\" mean anything more than the name of a conference\nyet? I don't like to admit it, but it's starting to. When people\nsay \"Web 2.0\" now, I have some idea what they mean. And the fact\nthat I both despise the phrase and understand it is the surest proof\nthat it has started to mean something.One ingredient of its meaning is certainly Ajax, which I can still\nonly just bear to use without scare quotes. Basically, what \"Ajax\"\nmeans is \"Javascript now works.\" And that in turn means that\nweb-based applications can now be made to work much more like desktop\nones.As you read this, a whole new generation\nof software is being written to take advantage of Ajax. There\nhasn't been such a wave of new applications since microcomputers\nfirst appeared. Even Microsoft sees it, but it's too late for them\nto do anything more than leak \"internal\" \ndocuments designed to give the impression they're on top of this\nnew trend.In fact the new generation of software is being written way too\nfast for Microsoft even to channel it, let alone write their own\nin house. Their only hope now is to buy all the best Ajax startups\nbefore Google does. 
And even that's going to be hard, because\nGoogle has as big a head start in buying microstartups as it did\nin search a few years ago. After all, Google Maps, the canonical\nAjax application, was the result of a startup they bought.So ironically the original description of the Web 2.0 conference\nturned out to be partially right: web-based applications are a big\ncomponent of Web 2.0. But I'm convinced they got this right by \naccident. The Ajax boom didn't start till early 2005, when Google\nMaps appeared and the term \"Ajax\" was coined.2. DemocracyThe second big element of Web 2.0 is democracy. We now have several\nexamples to prove that amateurs can \nsurpass professionals, when they have the right kind of system to \nchannel their efforts. Wikipedia\nmay be the most famous. Experts have given Wikipedia middling\nreviews, but they miss the critical point: it's good enough. And \nit's free, which means people actually read it. On the web, articles\nyou have to pay for might as well not exist. Even if you were \nwilling to pay to read them yourself, you can't link to them. \nThey're not part of the conversation.Another place democracy seems to win is in deciding what counts as\nnews. I never look at any news site now except Reddit.\n[2]\n I know if something major\nhappens, or someone writes a particularly interesting article, it \nwill show up there. Why bother checking the front page of any\nspecific paper or magazine? Reddit's like an RSS feed for the whole\nweb, with a filter for quality. Similar sites include Digg, a technology news site that's\nrapidly approaching Slashdot in popularity, and del.icio.us, the collaborative\nbookmarking network that set off the \"tagging\" movement. And whereas\nWikipedia's main appeal is that it's good enough and free, these\nsites suggest that voters do a significantly better job than human\neditors.The most dramatic example of Web 2.0 democracy is not in the selection\nof ideas, but their production. \nI've noticed for a while that the stuff I read on individual people's\nsites is as good as or better than the stuff I read in newspapers\nand magazines. And now I have independent evidence: the top links\non Reddit are generally links to individual people's sites rather \nthan to magazine articles or news stories.My experience of writing\nfor magazines suggests an explanation. Editors. They control the\ntopics you can write about, and they can generally rewrite whatever\nyou produce. The result is to damp extremes. Editing yields 95th\npercentile writing\u201495% of articles are improved by it, but 5% are\ndragged down. 5% of the time you get \"throngs of geeks.\"On the web, people can publish whatever they want. Nearly all of\nit falls short of the editor-damped writing in print publications.\nBut the pool of writers is very, very large. If it's large enough,\nthe lack of damping means the best writing online should surpass \nthe best in print.\n[3] \nAnd now that the web has evolved mechanisms\nfor selecting good stuff, the web wins net. Selection beats damping,\nfor the same reason market economies beat centrally planned ones.Even the startups are different this time around. They are to the \nstartups of the Bubble what bloggers are to the print media. During\nthe Bubble, a startup meant a company headed by an MBA that was \nblowing through several million dollars of VC money to \"get big\nfast\" in the most literal sense. Now it means a smaller, younger, more technical group that just \ndecided to make something great. 
They'll decide later if they want \nto raise VC-scale funding, and if they take it, they'll take it on\ntheir terms.3. Don't Maltreat UsersI think everyone would agree that democracy and Ajax are elements\nof \"Web 2.0.\" I also see a third: not to maltreat users. During\nthe Bubble a lot of popular sites were quite high-handed with users.\nAnd not just in obvious ways, like making them register, or subjecting\nthem to annoying ads. The very design of the average site in the \nlate 90s was an abuse. Many of the most popular sites were loaded\nwith obtrusive branding that made them slow to load and sent the\nuser the message: this is our site, not yours. (There's a physical\nanalog in the Intel and Microsoft stickers that come on some\nlaptops.)I think the root of the problem was that sites felt they were giving\nsomething away for free, and till recently a company giving anything\naway for free could be pretty high-handed about it. Sometimes it\nreached the point of economic sadism: site owners assumed that the\nmore pain they caused the user, the more benefit it must be to them. \nThe most dramatic remnant of this model may be at salon.com, where \nyou can read the beginning of a story, but to get the rest you have\nto sit through a movie.At Y Combinator we advise all the startups we fund never to lord\nit over users. Never make users register, unless you need to in\norder to store something for them. If you do make users register, \nnever make them wait for a confirmation link in an email; in fact,\ndon't even ask for their email address unless you need it for some\nreason. Don't ask them any unnecessary questions. Never send them\nemail unless they explicitly ask for it. Never frame pages you\nlink to, or open them in new windows. If you have a free version \nand a pay version, don't make the free version too restricted. And\nif you find yourself asking \"should we allow users to do x?\" just \nanswer \"yes\" whenever you're unsure. Err on the side of generosity.In How to Start a Startup I advised startups\nnever to let anyone fly under them, meaning never to let any other\ncompany offer a cheaper, easier solution. Another way to fly low \nis to give users more power. Let users do what they want. If you \ndon't and a competitor does, you're in trouble.iTunes is Web 2.0ish in this sense. Finally you can buy individual\nsongs instead of having to buy whole albums. The recording industry\nhated the idea and resisted it as long as possible. But it was\nobvious what users wanted, so Apple flew under the labels.\n[4]\nThough really it might be better to describe iTunes as Web 1.5. \nWeb 2.0 applied to music would probably mean individual bands giving\naway DRMless songs for free.The ultimate way to be nice to users is to give them something for\nfree that competitors charge for. During the 90s a lot of people \nprobably thought we'd have some working system for micropayments \nby now. In fact things have gone in the other direction. The most \nsuccessful sites are the ones that figure out new ways to give stuff\naway for free. Craigslist has largely destroyed the classified ad\nsites of the 90s, and OkCupid looks likely to do the same to the\nprevious generation of dating sites.Serving web pages is very, very cheap. If you can make even a \nfraction of a cent per page view, you can make a profit. And\ntechnology for targeting ads continues to improve.
I wouldn't be\nsurprised if ten years from now eBay had been supplanted by an \nad-supported freeBay (or, more likely, gBay).Odd as it might sound, we tell startups that they should try to\nmake as little money as possible. If you can figure out a way to\nturn a billion dollar industry into a fifty million dollar industry,\nso much the better, if all fifty million go to you. Though indeed,\nmaking things cheaper often turns out to generate more money in the\nend, just as automating things often turns out to generate more\njobs.The ultimate target is Microsoft. What a bang that balloon is going\nto make when someone pops it by offering a free web-based alternative \nto MS Office.\n[5]\nWho will? Google? They seem to be taking their\ntime. I suspect the pin will be wielded by a couple of 20 year old\nhackers who are too naive to be intimidated by the idea. (How hard\ncan it be?)The Common ThreadAjax, democracy, and not dissing users. What do they all have in \ncommon? I didn't realize they had anything in common till recently,\nwhich is one of the reasons I disliked the term \"Web 2.0\" so much.\nIt seemed that it was being used as a label for whatever happened\nto be new\u2014that it didn't predict anything.But there is a common thread. Web 2.0 means using the web the way\nit's meant to be used. The \"trends\" we're seeing now are simply\nthe inherent nature of the web emerging from under the broken models\nthat got imposed on it during the Bubble.I realized this when I read an interview with\nJoe Kraus, the co-founder of Excite.\n[6]\n\n Excite really never got the business model right at all. We fell \n into the classic problem of how when a new medium comes out it\n adopts the practices, the content, the business models of the old\n medium\u2014which fails, and then the more appropriate models get\n figured out.\n\nIt may have seemed as if not much was happening during the years\nafter the Bubble burst. But in retrospect, something was happening:\nthe web was finding its natural angle of repose. The democracy \ncomponent, for example\u2014that's not an innovation, in the sense of\nsomething someone made happen. That's what the web naturally tends\nto produce.Ditto for the idea of delivering desktop-like applications over the\nweb. That idea is almost as old as the web. But the first time \naround it was co-opted by Sun, and we got Java applets. Java has\nsince been remade into a generic replacement for C++, but in 1996\nthe story about Java was that it represented a new model of software.\nInstead of desktop applications, you'd run Java \"applets\" delivered\nfrom a server.This plan collapsed under its own weight. Microsoft helped kill it,\nbut it would have died anyway. There was no uptake among hackers.\nWhen you find PR firms promoting\nsomething as the next development platform, you can be sure it's\nnot. If it were, you wouldn't need PR firms to tell you, because \nhackers would already be writing stuff on top of it, the way sites \nlike Busmonster used Google Maps as a\nplatform before Google even meant it to be one.The proof that Ajax is the next hot platform is that thousands of \nhackers have spontaneously started building things on top\nof it. Mikey likes it.There's another thing all three components of Web 2.0 have in common.\nHere's a clue. Suppose you approached investors with the following\nidea for a Web 2.0 startup:\n\n Sites like del.icio.us and flickr allow users to \"tag\" content\n with descriptive tokens. 
But there is also a huge source of\n implicit tags that they ignore: the text within web links.\n Moreover, these links represent a social network connecting the \n individuals and organizations who created the pages, and by using\n graph theory we can compute from this network an estimate of the\n reputation of each member. We plan to mine the web for these \n implicit tags, and use them together with the reputation hierarchy\n they embody to enhance web searches.\n\nHow long do you think it would take them on average to realize that\nit was a description of Google?Google was a pioneer in all three components of Web 2.0: their core\nbusiness sounds crushingly hip when described in Web 2.0 terms, \n\"Don't maltreat users\" is a subset of \"Don't be evil,\" and of course\nGoogle set off the whole Ajax boom with Google Maps.Web 2.0 means using the web as it was meant to be used, and Google\ndoes. That's their secret. They're sailing with the wind, instead of sitting \nbecalmed praying for a business model, like the print media, or \ntrying to tack upwind by suing their customers, like Microsoft and \nthe record labels.\n[7]Google doesn't try to force things to happen their way. They try \nto figure out what's going to happen, and arrange to be standing \nthere when it does. That's the way to approach technology\u2014and \nas business includes an ever larger technological component, the\nright way to do business.The fact that Google is a \"Web 2.0\" company shows that, while\nmeaningful, the term is also rather bogus. It's like the word\n\"allopathic.\" It just means doing things right, and it's a bad \nsign when you have a special word for that.\nNotes[1]\nFrom the conference\nsite, June 2004: \"While the first wave of the Web was closely \ntied to the browser, the second wave extends applications across \nthe web and enables a new generation of services and business\nopportunities.\" To the extent this means anything, it seems to be\nabout \nweb-based applications.[2]\nDisclosure: Reddit was funded by \nY Combinator. But although\nI started using it out of loyalty to the home team, I've become a\ngenuine addict. While we're at it, I'm also an investor in\n!MSFT, having sold all my shares earlier this year.[3]\nI'm not against editing. I spend more time editing than\nwriting, and I have a group of picky friends who proofread almost\neverything I write. What I dislike is editing done after the fact \nby someone else.[4]\nObvious is an understatement. Users had been climbing in through \nthe window for years before Apple finally moved the door.[5]\nHint: the way to create a web-based alternative to Office may\nnot be to write every component yourself, but to establish a protocol\nfor web-based apps to share a virtual home directory spread across\nmultiple servers. Or it may be to write it all yourself.[6]\nIn Jessica Livingston's\nFounders at\nWork.[7]\nMicrosoft didn't sue their customers directly, but they seem \nto have done all they could to help SCO sue them.Thanks to Trevor Blackwell, Sarah Harlin, Jessica Livingston, Peter\nNorvig, Aaron Swartz, and Jeff Weiner for reading drafts of this, and to the\nguys at O'Reilly and Adaptive Path for answering my questions."} {"title": "addiction", "text": "July 2010What hard liquor, cigarettes, heroin, and crack have in common is\nthat they're all more concentrated forms of less addictive predecessors.\nMost if not all the things we describe as addictive are.
And the\nscary thing is, the process that created them is accelerating.We wouldn't want to stop it. It's the same process that cures\ndiseases: technological progress. Technological progress means\nmaking things do more of what we want. When the thing we want is\nsomething we want to want, we consider technological progress good.\nIf some new technique makes solar cells x% more efficient, that\nseems strictly better. When progress concentrates something we\ndon't want to want\u2014when it transforms opium into heroin\u2014it seems\nbad. But it's the same process at work.\n[1]No one doubts this process is accelerating, which means increasing\nnumbers of things we like will be transformed into things we like\ntoo much.\n[2]As far as I know there's no word for something we like too much.\nThe closest is the colloquial sense of \"addictive.\" That usage has\nbecome increasingly common during my lifetime. And it's clear why:\nthere are an increasing number of things we need it for. At the\nextreme end of the spectrum are crack and meth. Food has been\ntransformed by a combination of factory farming and innovations in\nfood processing into something with way more immediate bang for the\nbuck, and you can see the results in any town in America. Checkers\nand solitaire have been replaced by World of Warcraft and FarmVille.\nTV has become much more engaging, and even so it can't compete with Facebook.The world is more addictive than it was 40 years ago. And unless\nthe forms of technological progress that produced these things are\nsubject to different laws than technological progress in general,\nthe world will get more addictive in the next 40 years than it did\nin the last 40.The next 40 years will bring us some wonderful things. I don't\nmean to imply they're all to be avoided. Alcohol is a dangerous\ndrug, but I'd rather live in a world with wine than one without.\nMost people can coexist with alcohol; but you have to be careful.\nMore things we like will mean more things we have to be careful\nabout.Most people won't, unfortunately. Which means that as the world\nbecomes more addictive, the two senses in which one can live a\nnormal life will be driven ever further apart. One sense of \"normal\"\nis statistically normal: what everyone else does. The other is the\nsense we mean when we talk about the normal operating range of a\npiece of machinery: what works best.These two senses are already quite far apart. Already someone\ntrying to live well would seem eccentrically abstemious in most of\nthe US. That phenomenon is only going to become more pronounced.\nYou can probably take it as a rule of thumb from now on that if\npeople don't think you're weird, you're living badly.Societies eventually develop antibodies to addictive new things.\nI've seen that happen with cigarettes. When cigarettes first\nappeared, they spread the way an infectious disease spreads through\na previously isolated population. Smoking rapidly became a\n(statistically) normal thing. There were ashtrays everywhere. We\nhad ashtrays in our house when I was a kid, even though neither of\nmy parents smoked. You had to for guests.As knowledge spread about the dangers of smoking, customs changed.\nIn the last 20 years, smoking has been transformed from something\nthat seemed totally normal into a rather seedy habit: from something\nmovie stars did in publicity shots to something small huddles of\naddicts do outside the doors of office buildings. 
A lot of the\nchange was due to legislation, of course, but the legislation\ncouldn't have happened if customs hadn't already changed.It took a while though\u2014on the order of 100 years. And unless the\nrate at which social antibodies evolve can increase to match the\naccelerating rate at which technological progress throws off new\naddictions, we'll be increasingly unable to rely on customs to\nprotect us.\n[3]\nUnless we want to be canaries in the coal mine\nof each new addiction\u2014the people whose sad example becomes a\nlesson to future generations\u2014we'll have to figure out for ourselves\nwhat to avoid and how. It will actually become a reasonable strategy\n(or a more reasonable strategy) to suspect \neverything new.In fact, even that won't be enough. We'll have to worry not just\nabout new things, but also about existing things becoming more\naddictive. That's what bit me. I've avoided most addictions, but\nthe Internet got me because it became addictive while I was using\nit.\n[4]Most people I know have problems with Internet addiction. We're\nall trying to figure out our own customs for getting free of it.\nThat's why I don't have an iPhone, for example; the last thing I\nwant is for the Internet to follow me out into the world.\n[5]\nMy latest trick is taking long hikes. I used to think running was a\nbetter form of exercise than hiking because it took less time. Now\nthe slowness of hiking seems an advantage, because the longer I\nspend on the trail, the longer I have to think without interruption.Sounds pretty eccentric, doesn't it? It always will when you're\ntrying to solve problems where there are no customs yet to guide\nyou. Maybe I can't plead Occam's razor; maybe I'm simply eccentric.\nBut if I'm right about the acceleration of addictiveness, then this\nkind of lonely squirming to avoid it will increasingly be the fate\nof anyone who wants to get things done. We'll increasingly be\ndefined by what we say no to.\nNotes[1]\nCould you restrict technological progress to areas where you\nwanted it? Only in a limited way, without becoming a police state.\nAnd even then your restrictions would have undesirable side effects.\n\"Good\" and \"bad\" technological progress aren't sharply differentiated,\nso you'd find you couldn't slow the latter without also slowing the\nformer. And in any case, as Prohibition and the \"war on drugs\"\nshow, bans often do more harm than good.[2]\nTechnology has always been accelerating. By Paleolithic\nstandards, technology evolved at a blistering pace in the Neolithic\nperiod.[3]\nUnless we mass produce social customs. I suspect the recent\nresurgence of evangelical Christianity in the US is partly a reaction\nto drugs. In desperation people reach for the sledgehammer; if\ntheir kids won't listen to them, maybe they'll listen to God. But\nthat solution has broader consequences than just getting kids to\nsay no to drugs. You end up saying no to \nscience as well.\nI worry we may be heading for a future in which only a few people\nplot their own itinerary through no-land, while everyone else books\na package tour. Or worse still, has one booked for them by the\ngovernment.[4]\nPeople commonly use the word \"procrastination\" to describe\nwhat they do on the Internet. It seems to me too mild to describe\nwhat's happening as merely not-doing-work. 
We don't call it\nprocrastination when someone gets drunk instead of working.[5]\nSeveral people have told me they like the iPad because it\nlets them bring the Internet into situations where a laptop would\nbe too conspicuous. In other words, it's a hip flask. (This is\ntrue of the iPhone too, of course, but this advantage isn't as\nobvious because it reads as a phone, and everyone's used to those.)Thanks to Sam Altman, Patrick Collison, Jessica Livingston, and\nRobert Morris for reading drafts of this."} {"title": "philosophy", "text": "September 2007In high school I decided I was going to study philosophy in college.\nI had several motives, some more honorable than others. One of the\nless honorable was to shock people. College was regarded as job\ntraining where I grew up, so studying philosophy seemed an impressively\nimpractical thing to do. Sort of like slashing holes in your clothes\nor putting a safety pin through your ear, which were other forms\nof impressive impracticality then just coming into fashion.But I had some more honest motives as well. I thought studying\nphilosophy would be a shortcut straight to wisdom. All the people\nmajoring in other things would just end up with a bunch of domain\nknowledge. I would be learning what was really what.I'd tried to read a few philosophy books. Not recent ones; you\nwouldn't find those in our high school library. But I tried to\nread Plato and Aristotle. I doubt I believed I understood them,\nbut they sounded like they were talking about something important.\nI assumed I'd learn what in college.The summer before senior year I took some college classes. I learned\na lot in the calculus class, but I didn't learn much in Philosophy\n101. And yet my plan to study philosophy remained intact. It was\nmy fault I hadn't learned anything. I hadn't read the books we\nwere assigned carefully enough. I'd give Berkeley's Principles\nof Human Knowledge another shot in college. Anything so admired\nand so difficult to read must have something in it, if one could\nonly figure out what.Twenty-six years later, I still don't understand Berkeley. I have\na nice edition of his collected works. Will I ever read it? Seems\nunlikely.The difference between then and now is that now I understand why\nBerkeley is probably not worth trying to understand. I think I see\nnow what went wrong with philosophy, and how we might fix it.WordsI did end up being a philosophy major for most of college. It\ndidn't work out as I'd hoped. I didn't learn any magical truths\ncompared to which everything else was mere domain knowledge. But\nI do at least know now why I didn't. Philosophy doesn't really\nhave a subject matter in the way math or history or most other\nuniversity subjects do. There is no core of knowledge one must\nmaster. The closest you come to that is a knowledge of what various\nindividual philosophers have said about different topics over the\nyears. Few were sufficiently correct that people have forgotten\nwho discovered what they discovered.Formal logic has some subject matter. I took several classes in\nlogic. I don't know if I learned anything from them.\n[1]\nIt does seem to me very important to be able to flip ideas around in\none's head: to see when two ideas don't fully cover the space of\npossibilities, or when one idea is the same as another but with a\ncouple things changed. But did studying logic teach me the importance\nof thinking this way, or make me any better at it? I don't know.There are things I know I learned from studying philosophy. 
The\nmost dramatic I learned immediately, in the first semester of\nfreshman year, in a class taught by Sydney Shoemaker. I learned\nthat I don't exist. I am (and you are) a collection of cells that\nlurches around driven by various forces, and calls itself I. But\nthere's no central, indivisible thing that your identity goes with.\nYou could conceivably lose half your brain and live. Which means\nyour brain could conceivably be split into two halves and each\ntransplanted into different bodies. Imagine waking up after such\nan operation. You have to imagine being two people.The real lesson here is that the concepts we use in everyday life\nare fuzzy, and break down if pushed too hard. Even a concept as\ndear to us as I. It took me a while to grasp this, but when I\ndid it was fairly sudden, like someone in the nineteenth century\ngrasping evolution and realizing the story of creation they'd been\ntold as a child was all wrong. \n[2]\nOutside of math there's a limit\nto how far you can push words; in fact, it would not be a bad\ndefinition of math to call it the study of terms that have precise\nmeanings. Everyday words are inherently imprecise. They work well\nenough in everyday life that you don't notice. Words seem to work,\njust as Newtonian physics seems to. But you can always make them\nbreak if you push them far enough.I would say that this has been, unfortunately for philosophy, the\ncentral fact of philosophy. Most philosophical debates are not\nmerely afflicted by but driven by confusions over words. Do we\nhave free will? Depends what you mean by \"free.\" Do abstract ideas\nexist? Depends what you mean by \"exist.\"Wittgenstein is popularly credited with the idea that most philosophical\ncontroversies are due to confusions over language. I'm not sure\nhow much credit to give him. I suspect a lot of people realized\nthis, but reacted simply by not studying philosophy, rather than\nbecoming philosophy professors.How did things get this way? Can something people have spent\nthousands of years studying really be a waste of time? Those are\ninteresting questions. In fact, some of the most interesting\nquestions you can ask about philosophy. The most valuable way to\napproach the current philosophical tradition may be neither to get\nlost in pointless speculations like Berkeley, nor to shut them down\nlike Wittgenstein, but to study it as an example of reason gone\nwrong.HistoryWestern philosophy really begins with Socrates, Plato, and Aristotle.\nWhat we know of their predecessors comes from fragments and references\nin later works; their doctrines could be described as speculative\ncosmology that occasionally strays into analysis. Presumably they\nwere driven by whatever makes people in every other society invent\ncosmologies.\n[3]With Socrates, Plato, and particularly Aristotle, this tradition\nturned a corner. There started to be a lot more analysis. I suspect\nPlato and Aristotle were encouraged in this by progress in math.\nMathematicians had by then shown that you could figure things out\nin a much more conclusive way than by making up fine sounding stories\nabout them. \n[4]People talk so much about abstractions now that we don't realize\nwhat a leap it must have been when they first started to. It was\npresumably many thousands of years between when people first started\ndescribing things as hot or cold and when someone asked \"what is\nheat?\" No doubt it was a very gradual process. We don't know if\nPlato or Aristotle were the first to ask any of the questions they\ndid. 
But their works are the oldest we have that do this on a large\nscale, and there is a freshness (not to say naivete) about them\nthat suggests some of the questions they asked were new to them,\nat least.Aristotle in particular reminds me of the phenomenon that happens\nwhen people discover something new, and are so excited by it that\nthey race through a huge percentage of the newly discovered territory\nin one lifetime. If so, that's evidence of how new this kind of\nthinking was. \n[5]This is all to explain how Plato and Aristotle can be very impressive\nand yet naive and mistaken. It was impressive even to ask the\nquestions they did. That doesn't mean they always came up with\ngood answers. It's not considered insulting to say that ancient\nGreek mathematicians were naive in some respects, or at least lacked\nsome concepts that would have made their lives easier. So I hope\npeople will not be too offended if I propose that ancient philosophers\nwere similarly naive. In particular, they don't seem to have fully\ngrasped what I earlier called the central fact of philosophy: that\nwords break if you push them too far.\"Much to the surprise of the builders of the first digital computers,\"\nRod Brooks wrote, \"programs written for them usually did not work.\"\n[6]\nSomething similar happened when people first started trying\nto talk about abstractions. Much to their surprise, they didn't\narrive at answers they agreed upon. In fact, they rarely seemed\nto arrive at answers at all.They were in effect arguing about artifacts induced by sampling at\ntoo low a resolution.The proof of how useless some of their answers turned out to be is\nhow little effect they have. No one after reading Aristotle's\nMetaphysics does anything differently as a result.\n[7]Surely I'm not claiming that ideas have to have practical applications\nto be interesting? No, they may not have to. Hardy's boast that\nnumber theory had no use whatsoever wouldn't disqualify it. But\nhe turned out to be mistaken. In fact, it's suspiciously hard to\nfind a field of math that truly has no practical use. And Aristotle's\nexplanation of the ultimate goal of philosophy in Book A of the\nMetaphysics implies that philosophy should be useful too.Theoretical KnowledgeAristotle's goal was to find the most general of general principles.\nThe examples he gives are convincing: an ordinary worker builds\nthings a certain way out of habit; a master craftsman can do more\nbecause he grasps the underlying principles. The trend is clear:\nthe more general the knowledge, the more admirable it is. But then\nhe makes a mistake\u2014possibly the most important mistake in the\nhistory of philosophy. He has noticed that theoretical knowledge\nis often acquired for its own sake, out of curiosity, rather than\nfor any practical need. So he proposes there are two kinds of\ntheoretical knowledge: some that's useful in practical matters and\nsome that isn't. Since people interested in the latter are interested\nin it for its own sake, it must be more noble. So he sets as his\ngoal in the Metaphysics the exploration of knowledge that has no\npractical use. Which means no alarms go off when he takes on grand\nbut vaguely understood questions and ends up getting lost in a sea\nof words.His mistake was to confuse motive and result. Certainly, people\nwho want a deep understanding of something are often driven by\ncuriosity rather than any practical need. But that doesn't mean\nwhat they end up learning is useless. 
It's very valuable in practice\nto have a deep understanding of what you're doing; even if you're\nnever called on to solve advanced problems, you can see shortcuts\nin the solution of simple ones, and your knowledge won't break down\nin edge cases, as it would if you were relying on formulas you\ndidn't understand. Knowledge is power. That's what makes theoretical\nknowledge prestigious. It's also what causes smart people to be\ncurious about certain things and not others; our DNA is not so\ndisinterested as we might think.So while ideas don't have to have immediate practical applications\nto be interesting, the kinds of things we find interesting will\nsurprisingly often turn out to have practical applications.The reason Aristotle didn't get anywhere in the Metaphysics was\npartly that he set off with contradictory aims: to explore the most\nabstract ideas, guided by the assumption that they were useless.\nHe was like an explorer looking for a territory to the north of\nhim, starting with the assumption that it was located to the south.And since his work became the map used by generations of future\nexplorers, he sent them off in the wrong direction as well. \n[8]\nPerhaps worst of all, he protected them from both the criticism of\noutsiders and the promptings of their own inner compass by establishing\nthe principle that the most noble sort of theoretical knowledge had\nto be useless.The Metaphysics is mostly a failed experiment. A few ideas from\nit turned out to be worth keeping; the bulk of it has had no effect\nat all. The Metaphysics is among the least read of all famous\nbooks. It's not hard to understand the way Newton's Principia\nis, but the way a garbled message is.Arguably it's an interesting failed experiment. But unfortunately\nthat was not the conclusion Aristotle's successors derived from\nworks like the Metaphysics. \n[9]\nSoon after, the western world\nfell on intellectual hard times. Instead of version 1s to be\nsuperseded, the works of Plato and Aristotle became revered texts\nto be mastered and discussed. And so things remained for a shockingly\nlong time. It was not till around 1600 (in Europe, where the center\nof gravity had shifted by then) that one found people confident\nenough to treat Aristotle's work as a catalog of mistakes. And\neven then they rarely said so outright.If it seems surprising that the gap was so long, consider how little\nprogress there was in math between Hellenistic times and the\nRenaissance.In the intervening years an unfortunate idea took hold: that it\nwas not only acceptable to produce works like the Metaphysics,\nbut that it was a particularly prestigious line of work, done by a\nclass of people called philosophers. No one thought to go back and\ndebug Aristotle's motivating argument. And so instead of correcting\nthe problem Aristotle discovered by falling into it\u2014that you can\neasily get lost if you talk too loosely about very abstract ideas\u2014they \ncontinued to fall into it.The SingularityCuriously, however, the works they produced continued to attract\nnew readers. Traditional philosophy occupies a kind of singularity\nin this respect. If you write in an unclear way about big ideas,\nyou produce something that seems tantalizingly attractive to\ninexperienced but intellectually ambitious students. 
Till one knows\nbetter, it's hard to distinguish something that's hard to understand\nbecause the writer was unclear in his own mind from something like\na mathematical proof that's hard to understand because the ideas\nit represents are hard to understand. To someone who hasn't learned\nthe difference, traditional philosophy seems extremely attractive:\nas hard (and therefore impressive) as math, yet broader in scope.\nThat was what lured me in as a high school student.This singularity is even more singular in having its own defense\nbuilt in. When things are hard to understand, people who suspect\nthey're nonsense generally keep quiet. There's no way to prove a\ntext is meaningless. The closest you can get is to show that the\nofficial judges of some class of texts can't distinguish them from\nplacebos. \n[10]And so instead of denouncing philosophy, most people who suspected\nit was a waste of time just studied other things. That alone is\nfairly damning evidence, considering philosophy's claims. It's\nsupposed to be about the ultimate truths. Surely all smart people\nwould be interested in it, if it delivered on that promise.Because philosophy's flaws turned away the sort of people who might\nhave corrected them, they tended to be self-perpetuating. Bertrand\nRussell wrote in a letter in 1912:\n\n Hitherto the people attracted to philosophy have been mostly those\n who loved the big generalizations, which were all wrong, so that\n few people with exact minds have taken up the subject.\n[11]\n\nHis response was to launch Wittgenstein at it, with dramatic results.I think Wittgenstein deserves to be famous not for the discovery\nthat most previous philosophy was a waste of time, which judging\nfrom the circumstantial evidence must have been made by every smart\nperson who studied a little philosophy and declined to pursue it\nfurther, but for how he acted in response.\n[12]\nInstead of quietly\nswitching to another field, he made a fuss, from inside. He was\nGorbachev.The field of philosophy is still shaken from the fright Wittgenstein\ngave it. \n[13]\nLater in life he spent a lot of time talking about\nhow words worked. Since that seems to be allowed, that's what a\nlot of philosophers do now. Meanwhile, sensing a vacuum in the\nmetaphysical speculation department, the people who used to do\nliterary criticism have been edging Kantward, under new names like\n\"literary theory,\" \"critical theory,\" and when they're feeling\nambitious, plain \"theory.\" The writing is the familiar word salad:\n\n Gender is not like some of the other grammatical modes which\n express precisely a mode of conception without any reality that\n corresponds to the conceptual mode, and consequently do not express\n precisely something in reality by which the intellect could be\n moved to conceive a thing the way it does, even where that motive\n is not something in the thing as such.\n [14]\n\nThe singularity I've described is not going away. There's a market\nfor writing that sounds impressive and can't be disproven. There\nwill always be both supply and demand. So if one group abandons\nthis territory, there will always be others ready to occupy it.A ProposalWe may be able to do better. Here's an intriguing possibility.\nPerhaps we should do what Aristotle meant to do, instead of what\nhe did. The goal he announces in the Metaphysics seems one worth\npursuing: to discover the most general truths. 
That sounds good.\nBut instead of trying to discover them because they're useless,\nlet's try to discover them because they're useful.I propose we try again, but that we use that heretofore despised\ncriterion, applicability, as a guide to keep us from wandering\noff into a swamp of abstractions. Instead of trying to answer the\nquestion:\n\n What are the most general truths?\n\nlet's try to answer the question\n\n Of all the useful things we can say, which are the most general?\n\nThe test of utility I propose is whether we cause people who read\nwhat we've written to do anything differently afterward. Knowing\nwe have to give definite (if implicit) advice will keep us from\nstraying beyond the resolution of the words we're using.The goal is the same as Aristotle's; we just approach it from a\ndifferent direction.As an example of a useful, general idea, consider that of the\ncontrolled experiment. There's an idea that has turned out to be\nwidely applicable. Some might say it's part of science, but it's\nnot part of any specific science; it's literally meta-physics (in\nour sense of \"meta\"). The idea of evolution is another. It turns\nout to have quite broad applications\u2014for example, in genetic\nalgorithms and even product design. Frankfurt's distinction between\nlying and bullshitting seems a promising recent example.\n[15]These seem to me what philosophy should look like: quite general\nobservations that would cause someone who understood them to do\nsomething differently.Such observations will necessarily be about things that are imprecisely\ndefined. Once you start using words with precise meanings, you're\ndoing math. So starting from utility won't entirely solve the\nproblem I described above\u2014it won't flush out the metaphysical\nsingularity. But it should help. It gives people with good\nintentions a new roadmap into abstraction. And they may thereby\nproduce things that make the writing of the people with bad intentions\nlook bad by comparison.One drawback of this approach is that it won't produce the sort of\nwriting that gets you tenure. And not just because it's not currently\nthe fashion. In order to get tenure in any field you must not\narrive at conclusions that members of tenure committees can disagree\nwith. In practice there are two kinds of solutions to this problem.\nIn math and the sciences, you can prove what you're saying, or at\nany rate adjust your conclusions so you're not claiming anything\nfalse (\"6 of 8 subjects had lower blood pressure after the treatment\").\nIn the humanities you can either avoid drawing any definite conclusions\n(e.g. conclude that an issue is a complex one), or draw conclusions\nso narrow that no one cares enough to disagree with you.The kind of philosophy I'm advocating won't be able to take either\nof these routes. At best you'll be able to achieve the essayist's\nstandard of proof, not the mathematician's or the experimentalist's.\nAnd yet you won't be able to meet the usefulness test without\nimplying definite and fairly broadly applicable conclusions. Worse\nstill, the usefulness test will tend to produce results that annoy\npeople: there's no use in telling people things they already believe,\nand people are often upset to be told things they don't.Here's the exciting thing, though. Anyone can do this. 
Getting\nto general plus useful by starting with useful and cranking up the\ngenerality may be unsuitable for junior professors trying to get\ntenure, but it's better for everyone else, including professors who\nalready have it. This side of the mountain is a nice gradual slope.\nYou can start by writing things that are useful but very specific,\nand then gradually make them more general. Joe's has good burritos.\nWhat makes a good burrito? What makes good food? What makes\nanything good? You can take as long as you want. You don't have\nto get all the way to the top of the mountain. You don't have to\ntell anyone you're doing philosophy.If it seems like a daunting task to do philosophy, here's an\nencouraging thought. The field is a lot younger than it seems.\nThough the first philosophers in the western tradition lived about\n2500 years ago, it would be misleading to say the field is 2500\nyears old, because for most of that time the leading practitioners\nweren't doing much more than writing commentaries on Plato or\nAristotle while watching over their shoulders for the next invading\narmy. In the times when they weren't, philosophy was hopelessly\nintermingled with religion. It didn't shake itself free till a\ncouple hundred years ago, and even then was afflicted by the\nstructural problems I've described above. If I say this, some will\nsay it's a ridiculously overbroad and uncharitable generalization,\nand others will say it's old news, but here goes: judging from their\nworks, most philosophers up to the present have been wasting their\ntime. So in a sense the field is still at the first step. \n[16]That sounds a preposterous claim to make. It won't seem so\npreposterous in 10,000 years. Civilization always seems old, because\nit's always the oldest it's ever been. The only way to say whether\nsomething is really old or not is by looking at structural evidence,\nand structurally philosophy is young; it's still reeling from the\nunexpected breakdown of words.Philosophy is as young now as math was in 1500. There is a lot\nmore to discover.Notes\n[1]\nIn practice formal logic is not much use, because despite\nsome progress in the last 150 years we're still only able to formalize\na small percentage of statements. We may never do that much better,\nfor the same reason 1980s-style \"knowledge representation\" could\nnever have worked; many statements may have no representation more\nconcise than a huge, analog brain state.[2]\nIt was harder for Darwin's contemporaries to grasp this than\nwe can easily imagine. The story of creation in the Bible is not\njust a Judeo-Christian concept; it's roughly what everyone must\nhave believed since before people were people. The hard part of\ngrasping evolution was to realize that species weren't, as they\nseem to be, unchanging, but had instead evolved from different,\nsimpler organisms over unimaginably long periods of time.Now we don't have to make that leap. No one in an industrialized\ncountry encounters the idea of evolution for the first time as an\nadult. Everyone's taught about it as a child, either as truth or\nheresy.[3]\nGreek philosophers before Plato wrote in verse. This must\nhave affected what they said. If you try to write about the nature\nof the world in verse, it inevitably turns into incantation. Prose\nlets you be more precise, and more tentative.[4]\nPhilosophy is like math's\nne'er-do-well brother. 
It was born when Plato and Aristotle looked\nat the works of their predecessors and said in effect \"why can't\nyou be more like your brother?\" Russell was still saying the same\nthing 2300 years later.Math is the precise half of the most abstract ideas, and philosophy\nthe imprecise half. It's probably inevitable that philosophy will\nsuffer by comparison, because there's no lower bound to its precision.\nBad math is merely boring, whereas bad philosophy is nonsense. And\nyet there are some good ideas in the imprecise half.[5]\nAristotle's best work was in logic and zoology, both of which\nhe can be said to have invented. But the most dramatic departure\nfrom his predecessors was a new, much more analytical style of\nthinking. He was arguably the first scientist.[6]\nBrooks, Rodney, Programming in Common Lisp, Wiley, 1985, p.\n94.[7]\nSome would say we depend on Aristotle more than we realize,\nbecause his ideas were one of the ingredients in our common culture.\nCertainly a lot of the words we use have a connection with Aristotle,\nbut it seems a bit much to suggest that we wouldn't have the concept\nof the essence of something or the distinction between matter and\nform if Aristotle hadn't written about them.One way to see how much we really depend on Aristotle would be to\ndiff European culture with Chinese: what ideas did European culture\nhave in 1800 that Chinese culture didn't, in virtue of Aristotle's\ncontribution?[8]\nThe meaning of the word \"philosophy\" has changed over time.\nIn ancient times it covered a broad range of topics, comparable in\nscope to our \"scholarship\" (though without the methodological\nimplications). Even as late as Newton's time it included what we\nnow call \"science.\" But the core of the subject today is still what\nseemed to Aristotle the core: the attempt to discover the most\ngeneral truths.Aristotle didn't call this \"metaphysics.\" That name got assigned\nto it because the books we now call the Metaphysics came after\n(meta = after) the Physics in the standard edition of Aristotle's\nworks compiled by Andronicus of Rhodes three centuries later. What\nwe call \"metaphysics\" Aristotle called \"first philosophy.\"[9]\nSome of Aristotle's immediate successors may have realized\nthis, but it's hard to say because most of their works are lost.[10]\nSokal, Alan, \"Transgressing the Boundaries: Toward a Transformative\nHermeneutics of Quantum Gravity,\" Social Text 46/47, pp. 217-252.Abstract-sounding nonsense seems to be most attractive when it's\naligned with some axe the audience already has to grind. If this\nis so we should find it's most popular with groups that are (or\nfeel) weak. The powerful don't need its reassurance.[11]\nLetter to Ottoline Morrell, December 1912. Quoted in:Monk, Ray, Ludwig Wittgenstein: The Duty of Genius, Penguin, 1991,\np. 75.[12]\nA preliminary result, that all metaphysics between Aristotle\nand 1783 had been a waste of time, is due to I. 
Kant.[13]\nWittgenstein asserted a sort of mastery to which the inhabitants\nof early 20th century Cambridge seem to have been peculiarly\nvulnerable\u2014perhaps partly because so many had been raised religious\nand then stopped believing, so had a vacant space in their heads\nfor someone to tell them what to do (others chose Marx or Cardinal\nNewman), and partly because a quiet, earnest place like Cambridge\nin that era had no natural immunity to messianic figures, just as\nEuropean politics then had no natural immunity to dictators.[14]\nThis is actually from the Ordinatio of Duns Scotus (ca.\n1300), with \"number\" replaced by \"gender.\" Plus ça change.Wolter, Allan (trans), Duns Scotus: Philosophical Writings, Nelson,\n1963, p. 92.[15]\nFrankfurt, Harry, On Bullshit, Princeton University Press,\n2005.[16]\nSome introductions to philosophy now take the line that\nphilosophy is worth studying as a process rather than for any\nparticular truths you'll learn. The philosophers whose works they\ncover would be rolling in their graves at that. They hoped they\nwere doing more than serving as examples of how to argue: they hoped\nthey were getting results. Most were wrong, but it doesn't seem\nan impossible hope.This argument seems to me like someone in 1500 looking at the lack\nof results achieved by alchemy and saying its value was as a process.\nNo, they were going about it wrong. It turns out it is possible\nto transmute lead into gold (though not economically at current\nenergy prices), but the route to that knowledge was to\nbacktrack and try another approach.Thanks to Trevor Blackwell, Paul Buchheit, Jessica Livingston, \nRobert Morris, Mark Nitzberg, and Peter Norvig for reading drafts of this."} {"title": "unions", "text": "May 2007People who worry about the increasing gap between rich and poor\ngenerally look back on the mid twentieth century as a golden age.\nIn those days we had a large number of high-paying union manufacturing\njobs that boosted the median income. I wouldn't quite call the\nhigh-paying union job a myth, but I think people who dwell on it\nare reading too much into it.Oddly enough, it was working with startups that made me realize\nwhere the high-paying union job came from. In a rapidly growing\nmarket, you don't worry too much about efficiency. It's more\nimportant to grow fast. If there's some mundane problem getting\nin your way, and there's a simple solution that's somewhat expensive,\njust take it and get on with more important things. EBay didn't\nwin by paying less for servers than their competitors.Difficult though it may be to imagine now, manufacturing was a\ngrowth industry in the mid twentieth century. This was an era when\nsmall firms making everything from cars to candy were getting\nconsolidated into a new kind of corporation with national reach and\nhuge economies of scale. You had to grow fast or die. Workers\nwere for these companies what servers are for an Internet startup.\nA reliable supply was more important than low cost.If you looked in the head of a 1950s auto executive, the attitude\nmust have been: sure, give 'em whatever they ask for, so long as\nthe new model isn't delayed.In other words, those workers were not paid what their work was\nworth. Circumstances being what they were, companies would have\nbeen stupid to insist on paying them so little.If you want a less controversial example of this phenomenon, ask\nanyone who worked as a consultant building web sites during the\nInternet Bubble. 
In the late nineties you could get paid huge sums\nof money for building the most trivial things. And yet does anyone\nwho was there have any expectation those days will ever return? I\ndoubt it. Surely everyone realizes that was just a temporary\naberration.The era of labor unions seems to have been the same kind of aberration, \njust spread\nover a longer period, and mixed together with a lot of ideology\nthat prevents people from viewing it with as cold an eye as they\nwould something like consulting during the Bubble.Basically, unions were just Razorfish.People who think the labor movement was the creation of heroic union\norganizers have a problem to explain: why are unions shrinking now?\nThe best they can do is fall back on the default explanation of\npeople living in fallen civilizations. Our ancestors were giants.\nThe workers of the early twentieth century must have had a moral\ncourage that's lacking today.In fact there's a simpler explanation. The early twentieth century\nwas just a fast-growing startup overpaying for infrastructure. And\nwe in the present are not a fallen people, who have abandoned\nwhatever mysterious high-minded principles produced the high-paying\nunion job. We simply live in a time when the fast-growing companies\noverspend on different things."} {"title": "apple", "text": "November 2009I don't think Apple realizes how badly the App Store approval process\nis broken. Or rather, I don't think they realize how much it matters\nthat it's broken.The way Apple runs the App Store has harmed their reputation with\nprogrammers more than anything else they've ever done. \nTheir reputation with programmers used to be great.\nIt used to be the most common complaint you heard\nabout Apple was that their fans admired them too uncritically.\nThe App Store has changed that. Now a lot of programmers\nhave started to see Apple as evil.How much of the goodwill Apple once had with programmers have they\nlost over the App Store? A third? Half? And that's just so far.\nThe App Store is an ongoing karma leak.* * *How did Apple get into this mess? Their fundamental problem is\nthat they don't understand software.They treat iPhone apps the way they treat the music they sell through\niTunes. Apple is the channel; they own the user; if you want to\nreach users, you do it on their terms. The record labels agreed,\nreluctantly. But this model doesn't work for software. It doesn't\nwork for an intermediary to own the user. The software business\nlearned that in the early 1980s, when companies like VisiCorp showed\nthat although the words \"software\" and \"publisher\" fit together,\nthe underlying concepts don't. Software isn't like music or books.\nIt's too complicated for a third party to act as an intermediary\nbetween developer and user. And yet that's what Apple is trying\nto be with the App Store: a software publisher. And a particularly\noverreaching one at that, with fussy tastes and a rigidly enforced\nhouse style.If software publishing didn't work in 1980, it works even less now\nthat software development has evolved from a small number of big\nreleases to a constant stream of small ones. But Apple doesn't\nunderstand that either. Their model of product development derives\nfrom hardware. They work on something till they think it's finished,\nthen they release it. 
You have to do that with hardware, but because\nsoftware is so easy to change, its design can benefit from evolution.\nThe standard way to develop applications now is to launch fast and\niterate. Which means it's a disaster to have long, random delays\neach time you release a new version.Apparently Apple's attitude is that developers should be more careful\nwhen they submit a new version to the App Store. They would say\nthat. But powerful as they are, they're not powerful enough to\nturn back the evolution of technology. Programmers don't use\nlaunch-fast-and-iterate out of laziness. They use it because it\nyields the best results. By obstructing that process, Apple is\nmaking them do bad work, and programmers hate that as much as Apple\nwould.How would Apple like it if when they discovered a serious bug in\nOS\u00a0X, instead of releasing a software update immediately, they had\nto submit their code to an intermediary who sat on it for a month\nand then rejected it because it contained an icon they didn't like?By breaking software development, Apple gets the opposite of what\nthey intended: the version of an app currently available in the App\nStore tends to be an old and buggy one. One developer told me:\n\n As a result of their process, the App Store is full of half-baked\n applications. I make a new version almost every day that I release\n to beta users. The version on the App Store feels old and crappy.\n I'm sure that a lot of developers feel this way: One emotion is\n \"I'm not really proud about what's in the App Store\", and it's\n combined with the emotion \"Really, it's Apple's fault.\"\n\nAnother wrote:\n\n I believe that they think their approval process helps users by\n ensuring quality. In reality, bugs like ours get through all the\n time and then it can take 4-8 weeks to get that bug fix approved,\n leaving users to think that iPhone apps sometimes just don't work.\n Worse for Apple, these apps work just fine on other platforms\n that have immediate approval processes.\n\nActually I suppose Apple has a third misconception: that all the\ncomplaints about App Store approvals are not a serious problem.\nThey must hear developers complaining. But partners and suppliers\nare always complaining. It would be a bad sign if they weren't;\nit would mean you were being too easy on them. Meanwhile the iPhone\nis selling better than ever. So why do they need to fix anything?They get away with maltreating developers, in the short term, because\nthey make such great hardware. I just bought a new 27\" iMac a\ncouple days ago. It's fabulous. The screen's too shiny, and the\ndisk is surprisingly loud, but it's so beautiful that you can't\nmake yourself care.So I bought it, but I bought it, for the first time, with misgivings.\nI felt the way I'd feel buying something made in a country with a\nbad human rights record. That was new. In the past when I bought\nthings from Apple it was an unalloyed pleasure. Oh boy! They make\nsuch great stuff. This time it felt like a Faustian bargain. They\nmake such great stuff, but they're such assholes. Do I really want\nto support this company?* * *Should Apple care what people like me think? What difference does\nit make if they alienate a small minority of their users?There are a couple reasons they should care. One is that these\nusers are the people they want as employees. If your company seems\nevil, the best programmers won't work for you. That hurt Microsoft\na lot starting in the 90s. Programmers started to feel sheepish\nabout working there. 
It seemed like selling out. When people from\nMicrosoft were talking to other programmers and they mentioned where\nthey worked, there were a lot of self-deprecating jokes about having\ngone over to the dark side. But the real problem for Microsoft\nwasn't the embarrassment of the people they hired. It was the\npeople they never got. And you know who got them? Google and\nApple. If Microsoft was the Empire, they were the Rebel Alliance.\nAnd it's largely because they got more of the best people that\nGoogle and Apple are doing so much better than Microsoft today.Why are programmers so fussy about their employers' morals? Partly\nbecause they can afford to be. The best programmers can work\nwherever they want. They don't have to work for a company they\nhave qualms about.But the other reason programmers are fussy, I think, is that evil\nbegets stupidity. An organization that wins by exercising power\nstarts to lose the ability to win by doing better work. And it's\nnot fun for a smart person to work in a place where the best ideas\naren't the ones that win. I think the reason Google embraced \"Don't\nbe evil\" so eagerly was not so much to impress the outside world\nas to inoculate themselves against arrogance.\n[1]That has worked for Google so far. They've become more\nbureaucratic, but otherwise they seem to have held true to their\noriginal principles. With Apple that seems less the case. When you\nlook at the famous \n1984 ad \nnow, it's easier to imagine Apple as the\ndictator on the screen than the woman with the hammer.\n[2]\nIn fact, if you read the dictator's speech it sounds uncannily like a\nprophecy of the App Store.\n\n We have triumphed over the unprincipled dissemination of facts.We have created, for the first time in all history, a garden of\n pure ideology, where each worker may bloom secure from the pests\n of contradictory and confusing truths.\n\nThe other reason Apple should care what programmers think of them\nis that when you sell a platform, developers make or break you. If\nanyone should know this, Apple should. VisiCalc made the Apple II.And programmers build applications for the platforms they use. Most\napplications\u2014most startups, probably\u2014grow out of personal projects.\nApple itself did. Apple made microcomputers because that's what\nSteve Wozniak wanted for himself. He couldn't have afforded a\nminicomputer. \n[3]\n Microsoft likewise started out making interpreters\nfor little microcomputers because\nBill Gates and Paul Allen were interested in using them. It's a\nrare startup that doesn't build something the founders use.The main reason there are so many iPhone apps is that so many programmers\nhave iPhones. They may know, because they read it in an article,\nthat Blackberry has such and such market share. But in practice\nit's as if RIM didn't exist. If they're going to build something,\nthey want to be able to use it themselves, and that means building\nan iPhone app.So programmers continue to develop iPhone apps, even though Apple\ncontinues to maltreat them. They're like someone stuck in an abusive\nrelationship. They're so attracted to the iPhone that they can't\nleave. But they're looking for a way out. One wrote:\n\n While I did enjoy developing for the iPhone, the control they\n place on the App Store does not give me the drive to develop\n applications as I would like. In fact I don't intend to make any\n more iPhone applications unless absolutely necessary.\n[4]\n\nCan anything break this cycle? 
No device I've seen so far could.\nPalm and RIM haven't a hope. The only credible contender is Android.\nBut Android is an orphan; Google doesn't really care about it, not\nthe way Apple cares about the iPhone. Apple cares about the iPhone\nthe way Google cares about search.* * *Is the future of handheld devices one locked down by Apple? It's\na worrying prospect. It would be a bummer to have another grim\nmonoculture like we had in the 1990s. In 1995, writing software\nfor end users was effectively identical with writing Windows\napplications. Our horror at that prospect was the single biggest\nthing that drove us to start building web apps.At least we know now what it would take to break Apple's lock.\nYou'd have to get iPhones out of programmers' hands. If programmers\nused some other device for mobile web access, they'd start to develop\napps for that instead.How could you make a device programmers liked better than the iPhone?\nIt's unlikely you could make something better designed. Apple\nleaves no room there. So this alternative device probably couldn't\nwin on general appeal. It would have to win by virtue of some\nappeal it had to programmers specifically.One way to appeal to programmers is with software. If you\ncould think of an application programmers had to have, but that\nwould be impossible in the circumscribed world of the iPhone, \nyou could presumably get them to switch.That would definitely happen if programmers started to use handhelds\nas development machines\u2014if handhelds displaced laptops the\nway laptops displaced desktops. You need more control of a development\nmachine than Apple will let you have over an iPhone.Could anyone make a device that you'd carry around in your pocket\nlike a phone, and yet would also work as a development machine?\nIt's hard to imagine what it would look like. But I've learned\nnever to say never about technology. A phone-sized device that\nwould work as a development machine is no more miraculous by present\nstandards than the iPhone itself would have seemed by the standards\nof 1995.My current development machine is a MacBook Air, which I use with\nan external monitor and keyboard in my office, and by itself when\ntraveling. If there was a version half the size I'd prefer it.\nThat still wouldn't be small enough to carry around everywhere like\na phone, but we're within a factor of 4 or so. Surely that gap is\nbridgeable. In fact, let's make it an\nRFS. Wanted: \nWoman with hammer.Notes[1]\nWhen Google adopted \"Don't be evil,\" they were still so small\nthat no one would have expected them to be, yet.\n[2]\nThe dictator in the 1984 ad isn't Microsoft, incidentally;\nit's IBM. IBM seemed a lot more frightening in those days, but\nthey were friendlier to developers than Apple is now.[3]\nHe couldn't even afford a monitor. That's why the Apple\nI used a TV as a monitor.[4]\nSeveral people I talked to mentioned how much they liked the\niPhone SDK. The problem is not Apple's products but their policies.\nFortunately policies are software; Apple can change them instantly\nif they want to. Handy that, isn't it?Thanks to Sam Altman, Trevor Blackwell, Ross Boucher, \nJames Bracy, Gabor Cselle,\nPatrick Collison, Jason Freedman, John Gruber, Joe Hewitt, Jessica Livingston,\nRobert Morris, Teng Siong Ong, Nikhil Pandit, Savraj Singh, and Jared Tame for reading drafts of this."} {"title": "boss", "text": "March 2008, rev. 
June 2008Technology tends to separate normal from natural. Our bodies\nweren't designed to eat the foods that people in rich countries eat, or\nto get so little exercise. \nThere may be a similar problem with the way we work: \na normal job may be as bad for us intellectually as white flour\nor sugar is for us physically.I began to suspect this after spending several years working \nwith startup founders. I've now worked with over 200 of them, and I've\nnoticed a definite difference between programmers working on their\nown startups and those working for large organizations.\nI wouldn't say founders seem happier, necessarily;\nstarting a startup can be very stressful. Maybe the best way to put\nit is to say that they're happier in the sense that your body is\nhappier during a long run than sitting on a sofa eating\ndoughnuts.Though they're statistically abnormal, startup founders seem to be\nworking in a way that's more natural for humans.I was in Africa last year and saw a lot of animals in the wild that\nI'd only seen in zoos before. It was remarkable how different they\nseemed. Particularly lions. Lions in the wild seem about ten times\nmore alive. They're like different animals. I suspect that working\nfor oneself feels better to humans in much the same way that living\nin the wild must feel better to a wide-ranging predator like a lion.\nLife in a zoo is easier, but it isn't the life they were designed\nfor.\nTreesWhat's so unnatural about working for a big company? The root of\nthe problem is that humans weren't meant to work in such large\ngroups.Another thing you notice when you see animals in the wild is that\neach species thrives in groups of a certain size. A herd of impalas\nmight have 100 adults; baboons maybe 20; lions rarely 10. Humans\nalso seem designed to work in groups, and what I've read about\nhunter-gatherers accords with research on organizations and my own\nexperience to suggest roughly what the ideal size is: groups of 8\nwork well; by 20 they're getting hard to manage; and a group of 50\nis really unwieldy.\n[1]\nWhatever the upper limit is, we are clearly not meant to work in\ngroups of several hundred. And yet\u2014for reasons having more\nto do with technology than human nature\u2014a great many people\nwork for companies with hundreds or thousands of employees.Companies know groups that large wouldn't work, so they divide\nthemselves into units small enough to work together. But to\ncoordinate these they have to introduce something new: bosses.These smaller groups are always arranged in a tree structure. Your\nboss is the point where your group attaches to the tree. But when\nyou use this trick for dividing a large group into smaller ones,\nsomething strange happens that I've never heard anyone mention\nexplicitly. In the group one level up from yours, your boss\nrepresents your entire group. A group of 10 managers is not merely\na group of 10 people working together in the usual way. It's really\na group of groups. Which means for a group of 10 managers to work\ntogether as if they were simply a group of 10 individuals, the group\nworking for each manager would have to work as if they were a single\nperson\u2014the workers and manager would each share only one\nperson's worth of freedom between them.In practice a group of people are never able to act as if they were\none person. But in a large organization divided into groups in\nthis way, the pressure is always in that direction. 
Each group\ntries its best to work as if it were the small group of individuals\nthat humans were designed to work in. That was the point of creating\nit. And when you propagate that constraint, the result is that\neach person gets freedom of action in inverse proportion to the\nsize of the entire tree.\n[2]Anyone who's worked for a large organization has felt this. You\ncan feel the difference between working for a company with 100\nemployees and one with 10,000, even if your group has only 10 people.\nCorn SyrupA group of 10 people within a large organization is a kind of fake\ntribe. The number of people you interact with is about right. But\nsomething is missing: individual initiative. Tribes of hunter-gatherers\nhave much more freedom. The leaders have a little more power than other\nmembers of the tribe, but they don't generally tell them what to\ndo and when the way a boss can.It's not your boss's fault. The real problem is that in the group\nabove you in the hierarchy, your entire group is one virtual person.\nYour boss is just the way that constraint is imparted to you.So working in a group of 10 people within a large organization feels\nboth right and wrong at the same time. On the surface it feels\nlike the kind of group you're meant to work in, but something major\nis missing. A job at a big company is like high fructose corn\nsyrup: it has some of the qualities of things you're meant to like,\nbut is disastrously lacking in others.Indeed, food is an excellent metaphor to explain what's wrong with\nthe usual sort of job.For example, working for a big company is the default thing to do,\nat least for programmers. How bad could it be? Well, food shows\nthat pretty clearly. If you were dropped at a random point in\nAmerica today, nearly all the food around you would be bad for you.\nHumans were not designed to eat white flour, refined sugar, high\nfructose corn syrup, and hydrogenated vegetable oil. And yet if\nyou analyzed the contents of the average grocery store you'd probably\nfind these four ingredients accounted for most of the calories.\n\"Normal\" food is terribly bad for you. The only people who eat\nwhat humans were actually designed to eat are a few Birkenstock-wearing\nweirdos in Berkeley.If \"normal\" food is so bad for us, why is it so common? There are\ntwo main reasons. One is that it has more immediate appeal. You\nmay feel lousy an hour after eating that pizza, but eating the first\ncouple bites feels great. The other is economies of scale.\nProducing junk food scales; producing fresh vegetables doesn't.\nWhich means (a) junk food can be very cheap, and (b) it's worth\nspending a lot to market it.If people have to choose between something that's cheap, heavily\nmarketed, and appealing in the short term, and something that's\nexpensive, obscure, and appealing in the long term, which do you\nthink most will choose?It's the same with work. The average MIT graduate wants to work\nat Google or Microsoft, because it's a recognized brand, it's safe,\nand they'll get paid a good salary right away. It's the job\nequivalent of the pizza they had for lunch. 
The drawbacks will\nonly become apparent later, and then only in a vague sense of\nmalaise.And founders and early employees of startups, meanwhile, are like\nthe Birkenstock-wearing weirdos of Berkeley: though a tiny minority\nof the population, they're the ones living as humans are meant to.\nIn an artificial world, only extremists live naturally.\nProgrammersThe restrictiveness of big company jobs is particularly hard on\nprogrammers, because the essence of programming is to build new\nthings. Sales people make much the same pitches every day; support\npeople answer much the same questions; but once you've written a\npiece of code you don't need to write it again. So a programmer\nworking as programmers are meant to is always making new things.\nAnd when you're part of an organization whose structure gives each\nperson freedom in inverse proportion to the size of the tree, you're\ngoing to face resistance when you do something new.This seems an inevitable consequence of bigness. It's true even\nin the smartest companies. I was talking recently to a founder who\nconsidered starting a startup right out of college, but went to\nwork for Google instead because he thought he'd learn more there.\nHe didn't learn as much as he expected. Programmers learn by doing,\nand most of the things he wanted to do, he couldn't\u2014sometimes\nbecause the company wouldn't let him, but often because the company's\ncode wouldn't let him. Between the drag of legacy code, the overhead\nof doing development in such a large organization, and the restrictions\nimposed by interfaces owned by other groups, he could only try a\nfraction of the things he would have liked to. He said he has\nlearned much more in his own startup, despite the fact that he has\nto do all the company's errands as well as programming, because at\nleast when he's programming he can do whatever he wants.An obstacle downstream propagates upstream. If you're not allowed\nto implement new ideas, you stop having them. And vice versa: when\nyou can do whatever you want, you have more ideas about what to do.\nSo working for yourself makes your brain more powerful in the same\nway a low-restriction exhaust system makes an engine more powerful.Working for yourself doesn't have to mean starting a startup, of\ncourse. But a programmer deciding between a regular job at a big\ncompany and their own startup is probably going to learn more doing\nthe startup.You can adjust the amount of freedom you get by scaling the size\nof company you work for. If you start the company, you'll have the\nmost freedom. If you become one of the first 10 employees you'll\nhave almost as much freedom as the founders. Even a company with\n100 people will feel different from one with 1000.Working for a small company doesn't ensure freedom. The tree\nstructure of large organizations sets an upper bound on freedom,\nnot a lower bound. The head of a small company may still choose\nto be a tyrant. The point is that a large organization is compelled\nby its structure to be one.\nConsequencesThat has real consequences for both organizations and individuals.\nOne is that companies will inevitably slow down as they grow larger,\nno matter how hard they try to keep their startup mojo. It's a\nconsequence of the tree structure that every large organization is\nforced to adopt.Or rather, a large organization could only avoid slowing down if\nthey avoided tree structure. 
And since human nature limits the\nsize of group that can work together, the only way I can imagine\nfor larger groups to avoid tree structure would be to have no\nstructure: to have each group actually be independent, and to work\ntogether the way components of a market economy do.That might be worth exploring. I suspect there are already some\nhighly partitionable businesses that lean this way. But I don't\nknow any technology companies that have done it.There is one thing companies can do short of structuring themselves\nas sponges: they can stay small. If I'm right, then it really\npays to keep a company as small as it can be at every stage.\nParticularly a technology company. Which means it's doubly important\nto hire the best people. Mediocre hires hurt you twice: they get\nless done, but they also make you big, because you need more of\nthem to solve a given problem.For individuals the upshot is the same: aim small. It will always\nsuck to work for large organizations, and the larger the organization,\nthe more it will suck.In an essay I wrote a couple years ago \nI advised graduating seniors\nto work for a couple years for another company before starting their\nown. I'd modify that now. Work for another company if you want\nto, but only for a small one, and if you want to start your own\nstartup, go ahead.The reason I suggested college graduates not start startups immediately\nwas that I felt most would fail. And they will. But ambitious\nprogrammers are better off doing their own thing and failing than\ngoing to work at a big company. Certainly they'll learn more. They\nmight even be better off financially. A lot of people in their\nearly twenties get into debt, because their expenses grow even\nfaster than the salary that seemed so high when they left school.\nAt least if you start a startup and fail your net worth will be\nzero rather than negative. \n[3]We've now funded so many different types of founders that we have\nenough data to see patterns, and there seems to be no benefit from\nworking for a big company. The people who've worked for a few years\ndo seem better than the ones straight out of college, but only\nbecause they're that much older.The people who come to us from big companies often seem kind of\nconservative. It's hard to say how much is because big companies\nmade them that way, and how much is the natural conservatism that\nmade them work for the big companies in the first place. But\ncertainly a large part of it is learned. I know because I've seen\nit burn off.Having seen that happen so many times is one of the things that\nconvinces me that working for oneself, or at least for a small\ngroup, is the natural way for programmers to live. Founders arriving\nat Y Combinator often have the downtrodden air of refugees. Three\nmonths later they're transformed: they have so much more \nconfidence\nthat they seem as if they've grown several inches taller. \n[4]\nStrange as this sounds, they seem both more worried and happier at the same\ntime. Which is exactly how I'd describe the way lions seem in the\nwild.Watching employees get transformed into founders makes it clear\nthat the difference between the two is due mostly to environment\u2014and\nin particular that the environment in big companies is toxic to\nprogrammers. 
In the first couple weeks of working on their own\nstartup they seem to come to life, because finally they're working\nthe way people are meant to.Notes[1]\nWhen I talk about humans being meant or designed to live a\ncertain way, I mean by evolution.[2]\nIt's not only the leaves who suffer. The constraint propagates\nup as well as down. So managers are constrained too; instead of\njust doing things, they have to act through subordinates.[3]\nDo not finance your startup with credit cards. Financing a\nstartup with debt is usually a stupid move, and credit card debt\nstupidest of all. Credit card debt is a bad idea, period. It is\na trap set by evil companies for the desperate and the foolish.[4]\nThe founders we fund used to be younger (initially we encouraged\nundergrads to apply), and the first couple times I saw this I used\nto wonder if they were actually getting physically taller.Thanks to Trevor Blackwell, Ross Boucher, Aaron Iba, Abby\nKirigin, Ivan Kirigin, Jessica Livingston, and Robert Morris for\nreading drafts of this."} {"title": "desres", "text": "January 2003(This article is derived from a keynote talk at the fall 2002 meeting\nof NEPLS.)Visitors to this country are often surprised to find that\nAmericans like to begin a conversation by asking \"what do you do?\"\nI've never liked this question. I've rarely had a\nneat answer to it. But I think I have finally solved the problem.\nNow, when someone asks me what I do, I look them straight\nin the eye and say \"I'm designing a \nnew dialect of Lisp.\" \nI recommend this answer to anyone who doesn't like being asked what\nthey do. The conversation will turn immediately to other topics.I don't consider myself to be doing research on programming languages.\nI'm just designing one, in the same way that someone might design\na building or a chair or a new typeface.\nI'm not trying to discover anything new. I just want\nto make a language that will be good to program in. In some ways,\nthis assumption makes life a lot easier.The difference between design and research seems to be a question\nof new versus good. Design doesn't have to be new, but it has to \nbe good. Research doesn't have to be good, but it has to be new.\nI think these two paths converge at the top: the best design\nsurpasses its predecessors by using new ideas, and the best research\nsolves problems that are not only new, but actually worth solving.\nSo ultimately we're aiming for the same destination, just approaching\nit from different directions.What I'm going to talk about today is what your target looks like\nfrom the back. What do you do differently when you treat\nprogramming languages as a design problem instead of a research topic?The biggest difference is that you focus more on the user.\nDesign begins by asking, who is this\nfor and what do they need from it? A good architect,\nfor example, does not begin by creating a design that he then\nimposes on the users, but by studying the intended users and figuring\nout what they need.Notice I said \"what they need,\" not \"what they want.\" I don't mean\nto give the impression that working as a designer means working as \na sort of short-order cook, making whatever the client tells you\nto. This varies from field to field in the arts, but\nI don't think there is any field in which the best work is done by\nthe people who just make exactly what the customers tell them to.The customer is always right in\nthe sense that the measure of good design is how well it works\nfor the user. 
If you make a novel that bores everyone, or a chair\nthat's horribly uncomfortable to sit in, then you've done a bad\njob, period. It's no defense to say that the novel or the chair \nis designed according to the most advanced theoretical principles.And yet, making what works for the user doesn't mean simply making\nwhat the user tells you to. Users don't know what all the choices\nare, and are often mistaken about what they really want.The answer to the paradox, I think, is that you have to design\nfor the user, but you have to design what the user needs, not simply \nwhat he says he wants.\nIt's much like being a doctor. You can't just treat a patient's\nsymptoms. When a patient tells you his symptoms, you have to figure\nout what's actually wrong with him, and treat that.This focus on the user is a kind of axiom from which most of the\npractice of good design can be derived, and around which most design\nissues center.If good design must do what the user needs, who is the user? When\nI say that design must be for users, I don't mean to imply that good \ndesign aims at some kind of \nlowest common denominator. You can pick any group of users you\nwant. If you're designing a tool, for example, you can design it\nfor anyone from beginners to experts, and what's good design\nfor one group might be bad for another. The point\nis, you have to pick some group of users. I don't think you can\neven talk about good or bad design except with\nreference to some intended user.You're most likely to get good design if the intended users include\nthe designer himself. When you design something\nfor a group that doesn't include you, it tends to be for people\nyou consider to be less sophisticated than you, not more sophisticated.That's a problem, because looking down on the user, however benevolently,\nseems inevitably to corrupt the designer.\nI suspect that very few housing\nprojects in the US were designed by architects who expected to live\nin them. You can see the same thing\nin programming languages. C, Lisp, and Smalltalk were created for\ntheir own designers to use. Cobol, Ada, and Java were created \nfor other people to use.If you think you're designing something for idiots, the odds are\nthat you're not designing something good, even for idiots.\nEven if you're designing something for the most sophisticated\nusers, though, you're still designing for humans. It's different \nin research. In math you\ndon't choose abstractions because they're\neasy for humans to understand; you choose whichever make the\nproof shorter. I think this is true for the sciences generally.\nScientific ideas are not meant to be ergonomic.Over in the arts, things are very different. Design is\nall about people. The human body is a strange\nthing, but when you're designing a chair,\nthat's what you're designing for, and there's no way around it.\nAll the arts have to pander to the interests and limitations\nof humans. In painting, for example, all other things being\nequal a painting with people in it will be more interesting than\none without. It is not merely an accident of history that\nthe great paintings of the Renaissance are all full of people.\nIf they hadn't been, painting as a medium wouldn't have the prestige\nthat it does.Like it or not, programming languages are also for people,\nand I suspect the human brain is just as lumpy and idiosyncratic\nas the human body. Some ideas are easy for people to grasp\nand some aren't. For example, we seem to have a very limited\ncapacity for dealing with detail.
It's this fact that makes\nprogramming languages a good idea in the first place; if we\ncould handle the detail, we could just program in machine\nlanguage.Remember, too, that languages are not\nprimarily a form for finished programs, but something that\nprograms have to be developed in. Anyone in the arts could\ntell you that you might want different mediums for the\ntwo situations. Marble, for example, is a nice, durable\nmedium for finished ideas, but a hopelessly inflexible one\nfor developing new ideas.A program, like a proof,\nis a pruned version of a tree that in the past has had\nfalse starts branching off all over it. So the test of\na language is not simply how clean the finished program looks\nin it, but how clean the path to the finished program was.\nA design choice that gives you elegant finished programs\nmay not give you an elegant design process. For example, \nI've written a few macro-defining macros full of nested\nbackquotes that look now like little gems, but writing them\ntook hours of the ugliest trial and error, and frankly, I'm still\nnot entirely sure they're correct.We often act as if the test of a language were how good\nfinished programs look in it.\nIt seems so convincing when you see the same program\nwritten in two languages, and one version is much shorter.\nWhen you approach the problem from the direction of the\narts, you're less likely to depend on this sort of\ntest. You don't want to end up with a programming\nlanguage like marble.For example, it is a huge win in developing software to\nhave an interactive toplevel, what in Lisp is called a\nread-eval-print loop. And when you have one this has\nreal effects on the design of the language. It would not\nwork well for a language where you have to declare\nvariables before using them, for example. When you're\njust typing expressions into the toplevel, you want to be \nable to set x to some value and then start doing things\nto x. You don't want to have to declare the type of x\nfirst. You may dispute either of the premises, but if\na language has to have a toplevel to be convenient, and\nmandatory type declarations are incompatible with a\ntoplevel, then no language that makes type declarations \nmandatory could be convenient to program in. (A minimal\nsketch of such a toplevel appears below.)In practice, to get good design you have to get close, and stay\nclose, to your users. You have to calibrate your ideas on actual\nusers constantly, especially in the beginning. One of the reasons\nJane Austen's novels are so good is that she read them out loud to\nher family. That's why she never sinks into self-indulgently arty\ndescriptions of landscapes,\nor pretentious philosophizing. (The philosophy's there, but it's\nwoven into the story instead of being pasted onto it like a label.)\nIf you open an average \"literary\" novel and imagine reading it out loud\nto your friends as something you'd written, you'll feel all too\nkeenly what an imposition that kind of thing is upon the reader.In the software world, this idea is known as Worse is Better.\nActually, there are several ideas mixed together in the concept of\nWorse is Better, which is why people are still arguing about\nwhether worse\nis actually better or not.
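(To make the toplevel argument above concrete, here is a minimal sketch of a read-eval-print loop in Common Lisp. The function name toplevel is purely illustrative, not anything from an actual Lisp system.)

    ;; A minimal sketch of an interactive toplevel: read an expression,
    ;; evaluate it, print the result, and loop. Illustrative only.
    (defun toplevel ()
      (loop
        (format t "~&> ")          ; print a prompt on a fresh line
        (print (eval (read)))))    ; read one expression, eval it, print it

At such a prompt you can type (setq x 10) and then (* x x) and get 100; nothing in the loop requires, or could even check, a prior type declaration for x. That is the incompatibility the argument turns on.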
But one of the main ideas in that\nmix is that if you're building something new, you should get a\nprototype in front of users as soon as possible.The alternative approach might be called the Hail Mary strategy.\nInstead of getting a prototype out quickly and gradually refining\nit, you try to create the complete, finished product in one long\ntouchdown pass. As far as I know, this is a\nrecipe for disaster. Countless startups destroyed themselves this\nway during the Internet bubble. I've never heard of a case\nwhere it worked.What people outside the software world may not realize is that\nWorse is Better is found throughout the arts.\nIn drawing, for example, the idea was discovered during the\nRenaissance. Now almost every drawing teacher will tell you that\nthe right way to get an accurate drawing is not to\nwork your way slowly around the contour of an object, because errors will\naccumulate and you'll find at the end that the lines don't meet.\nInstead you should draw a few quick lines in roughly the right place,\nand then gradually refine this initial sketch.In most fields, prototypes\nhave traditionally been made out of different materials.\nTypefaces to be cut in metal were initially designed \nwith a brush on paper. Statues to be cast in bronze \nwere modelled in wax. Patterns to be embroidered on tapestries\nwere drawn on paper with ink wash. Buildings to be\nconstructed from stone were tested on a smaller scale in wood.What made oil paint so exciting, when it\nfirst became popular in the fifteenth century, was that you\ncould actually make the finished work from the prototype.\nYou could make a preliminary drawing if you wanted to, but you\nweren't held to it; you could work out all the details, and\neven make major changes, as you finished the painting.You can do this in software too. A prototype doesn't have to\nbe just a model; you can refine it into the finished product.\nI think you should always do this when you can. It lets you\ntake advantage of new insights you have along the way. But\nperhaps even more important, it's good for morale.Morale is key in design. I'm surprised people\ndon't talk more about it. One of my first\ndrawing teachers told me: if you're bored when you're\ndrawing something, the drawing will look boring.\nFor example, suppose you have to draw a building, and you\ndecide to draw each brick individually. You can do this\nif you want, but if you get bored halfway through and start\nmaking the bricks mechanically instead of observing each one, \nthe drawing will look worse than if you had merely suggested\nthe bricks.Building something by gradually refining a prototype is good\nfor morale because it keeps you engaged. In software, my \nrule is: always have working code. If you're writing\nsomething that you'll be able to test in an hour, then you\nhave the prospect of an immediate reward to motivate you.\nThe same is true in the arts, and particularly in oil painting.\nMost painters start with a blurry sketch and gradually\nrefine it.\nIf you work this way, then in principle\nyou never have to end the day with something that actually\nlooks unfinished. Indeed, there is even a saying among\npainters: \"A painting is never finished, you just stop\nworking on it.\" This idea will be familiar to anyone who\nhas worked on software.Morale is another reason that it's hard to design something\nfor an unsophisticated user. It's hard to stay interested in\nsomething you don't like yourself.
To make something \ngood, you have to be thinking, \"wow, this is really great,\"\nnot \"what a piece of shit; those fools will love it.\"Design means making things for humans. But it's not just the\nuser who's human. The designer is human too.Notice all this time I've been talking about \"the designer.\"\nDesign usually has to be under the control of a single person to\nbe any good. And yet it seems to be possible for several people\nto collaborate on a research project. This seems to\nme one of the most interesting differences between research and\ndesign.There have been famous instances of collaboration in the arts,\nbut most of them seem to have been cases of molecular bonding rather\nthan nuclear fusion. In an opera it's common for one person to\nwrite the libretto and another to write the music. And during the Renaissance, \njourneymen from northern\nEurope were often employed to do the landscapes in the\nbackgrounds of Italian paintings. But these aren't true collaborations.\nThey're more like examples of Robert Frost's\n\"good fences make good neighbors.\" You can stick instances\nof good design together, but within each individual project,\none person has to be in control.I'm not saying that good design requires that one person think\nof everything. There's nothing more valuable than the advice\nof someone whose judgement you trust. But after the talking is\ndone, the decision about what to do has to rest with one person.Why is it that research can be done by collaborators and \ndesign can't? This is an interesting question. I don't \nknow the answer. Perhaps,\nif design and research converge, the best research is also\ngood design, and in fact can't be done by collaborators.\nA lot of the most famous scientists seem to have worked alone.\nBut I don't know enough to say whether there\nis a pattern here. It could be simply that many famous scientists\nworked when collaboration was less common.Whatever the story is in the sciences, true collaboration\nseems to be vanishingly rare in the arts. Design by committee is a\nsynonym for bad design. Why is that so? Is there some way to\nbeat this limitation?I'm inclined to think there isn't-- that good design requires\na dictator. One reason is that good design has to \nbe all of a piece. Design is not just for humans, but\nfor individual humans. If a design represents an idea that \nfits in one person's head, then the idea will fit in the user's\nhead too."} {"title": "founders", "text": "\n\nWant to start a startup? Get funded by\nY Combinator.\n\n\n\n\nOctober 2010\n\n(I wrote this for Forbes, who asked me to write something\nabout the qualities we look for in founders. In print they had to cut\nthe last item because they didn't have room.)1. DeterminationThis has turned out to be the most important quality in startup\nfounders. We thought when we started Y Combinator that the most\nimportant quality would be intelligence. That's the myth in the\nValley. And certainly you don't want founders to be stupid. But\nas long as you're over a certain threshold of intelligence, what\nmatters most is determination. You're going to hit a lot of\nobstacles. You can't be the sort of person who gets demoralized\neasily.Bill Clerico and Rich Aberman of WePay \nare a good example. They're\ndoing a finance startup, which means endless negotiations with big,\nbureaucratic companies. When you're starting a startup that depends\non deals with big companies to exist, it often feels like they're\ntrying to ignore you out of existence.
But when Bill Clerico starts\ncalling you, you may as well do what he asks, because he is not\ngoing away.\n2. FlexibilityYou do not however want the sort of determination implied by phrases\nlike \"don't give up on your dreams.\" The world of startups is so\nunpredictable that you need to be able to modify your dreams on the\nfly. The best metaphor I've found for the combination of determination\nand flexibility you need is a running back. \nHe's determined to get\ndownfield, but at any given moment he may need to go sideways or\neven backwards to get there.The current record holder for flexibility may be Daniel Gross of\nGreplin. He applied to YC with \nsome bad ecommerce idea. We told\nhim we'd fund him if he did something else. He thought for a second,\nand said ok. He then went through two more ideas before settling\non Greplin. He'd only been working on it for a couple days when\nhe presented to investors at Demo Day, but he got a lot of interest.\nHe always seems to land on his feet.\n3. ImaginationIntelligence does matter a lot of course. It seems like the type\nthat matters most is imagination. It's not so important to be able\nto solve predefined problems quickly as to be able to come up with\nsurprising new ideas. In the startup world, most good ideas \nseem\nbad initially. If they were obviously good, someone would already\nbe doing them. So you need the kind of intelligence that produces\nideas with just the right level of craziness.Airbnb is that kind of idea. \nIn fact, when we funded Airbnb, we\nthought it was too crazy. We couldn't believe large numbers of\npeople would want to stay in other people's places. We funded them\nbecause we liked the founders so much. As soon as we heard they'd\nbeen supporting themselves by selling Obama and McCain branded\nbreakfast cereal, they were in. And it turned out the idea was on\nthe right side of crazy after all.\n4. NaughtinessThough the most successful founders are usually good people, they\ntend to have a piratical gleam in their eye. They're not Goody\nTwo-Shoes type good. Morally, they care about getting the big\nquestions right, but not about observing proprieties. That's why\nI'd use the word naughty rather than evil. They delight in \nbreaking\nrules, but not rules that matter. This quality may be redundant\nthough; it may be implied by imagination.Sam Altman of Loopt \nis one of the most successful alumni, so we\nasked him what question we could put on the Y Combinator application\nthat would help us discover more people like him. He said to ask\nabout a time when they'd hacked something to their advantage\u2014hacked in the sense of beating the system, not breaking into\ncomputers. It has become one of the questions we pay most attention\nto when judging applications.\n5. FriendshipEmpirically it seems to be hard to start a startup with just \none\nfounder. Most of the big successes have two or three. And the\nrelationship between the founders has to be strong. They must\ngenuinely like one another, and work well together. Startups do\nto the relationship between the founders what a dog does to a sock:\nif it can be pulled apart, it will be.Emmett Shear and Justin Kan of Justin.tv \nare a good example of close\nfriends who work well together. They've known each other since\nsecond grade. They can practically read one another's minds. 
I'm\nsure they argue, like all founders, but I have never once sensed\nany unresolved tension between them.Thanks to Jessica Livingston and Chris Steiner for reading drafts of this."} {"title": "vw", "text": "January 2012A few hours before the Yahoo acquisition was announced in June 1998\nI took a snapshot of Viaweb's\nsite. I thought it might be interesting to look at one day.The first thing one notices is how tiny the pages are. Screens\nwere a lot smaller in 1998. If I remember correctly, our frontpage\nused to just fit in the size window people typically used then.Browsers then (IE 6 was still 3 years in the future) had few fonts\nand they weren't antialiased. If you wanted to make pages that\nlooked good, you had to render display text as images.You may notice a certain similarity between the Viaweb and Y Combinator logos. We did that\nas an inside joke when we started YC. Considering how basic a red\ncircle is, it seemed surprising to me when we started Viaweb how\nfew other companies used one as their logo. A bit later I realized\nwhy.On the Company\npage you'll notice a mysterious individual called John McArtyem.\nRobert Morris (aka Rtm) was so publicity averse after the \nWorm that he\ndidn't want his name on the site. I managed to get him to agree\nto a compromise: we could use his bio but not his name. He has\nsince relaxed a bit\non that point.Trevor graduated at about the same time the acquisition closed, so in the\ncourse of 4 days he went from impecunious grad student to millionaire\nPhD. The culmination of my career as a writer of press releases\nwas one celebrating\nhis graduation, illustrated with a drawing I did of him during\na meeting.(Trevor also appears as Trevino\nBagwell in our directory of web designers merchants could hire\nto build stores for them. We inserted him as a ringer in case some\ncompetitor tried to spam our web designers. We assumed his logo\nwould deter any actual customers, but it did not.)Back in the 90s, to get users you had to get mentioned in magazines\nand newspapers. There were not the same ways to get found online\nthat there are today. So we used to pay a PR\nfirm $16,000 a month to get us mentioned in the press. Fortunately\nreporters liked\nus.In our advice about\ngetting traffic from search engines (I don't think the term SEO\nhad been coined yet), we say there are only 7 that matter: Yahoo,\nAltaVista, Excite, WebCrawler, InfoSeek, Lycos, and HotBot. Notice\nanything missing? Google was incorporated that September.We supported online transactions via a company called \nCybercash,\nsince if we lacked that feature we'd have gotten beaten up in product\ncomparisons. But Cybercash was so bad and most stores' order volumes\nwere so low that it was better if merchants processed orders like phone orders. We had a page in our site trying to talk merchants\nout of doing real time authorizations.The whole site was organized like a funnel, directing people to the\ntest drive.\nIt was a novel thing to be able to try out software online. We put\ncgi-bin in our dynamic urls to fool competitors about how our\nsoftware worked.We had some well\nknown users. Needless to say, Frederick's of Hollywood got the\nmost traffic.
We charged a flat fee of $300/month for big stores,\nso it was a little alarming to have users who got lots of traffic.\nI once calculated how much Frederick's was costing us in bandwidth,\nand it was about $300/month.Since we hosted all the stores, which together were getting just\nover 10 million page views per month in June 1998, we consumed what\nat the time seemed a lot of bandwidth. We had 2 T1s (3 Mb/sec)\ncoming into our offices. In those days there was no AWS. Even\ncolocating servers seemed too risky, considering how often things\nwent wrong with them. So we had our servers in our offices. Or\nmore precisely, in Trevor's office. In return for the unique\nprivilege of sharing his office with no other humans, he had to\nshare it with 6 shrieking tower servers. His office was nicknamed\nthe Hot Tub on account of the heat they generated. Most days his\nstack of window air conditioners could keep up.For describing pages, we had a template language called RTML, which\nsupposedly stood for something, but which in fact I named after\nRtm. RTML was Common Lisp augmented by some macros and libraries,\nand concealed under a structure editor that made it look like it\nhad syntax.Since we did continuous releases, our software didn't actually have\nversions. But in those days the trade press expected versions, so\nwe made them up. If we wanted to get lots of attention, we made\nthe version number an\ninteger. That \"version 4.0\" icon was generated by our own\nbutton generator, incidentally. The whole Viaweb site was made\nwith our software, even though it wasn't an online store, because\nwe wanted to experience what our users did.At the end of 1997, we released a general purpose shopping search\nengine called Shopfind. It\nwas pretty advanced for the time. It had a programmable crawler\nthat could crawl most of the different stores online and pick out\nthe products."} {"title": "want", "text": "November 2022Since I was about 9 I've been puzzled by the apparent contradiction\nbetween being made of matter that behaves in a predictable way, and\nthe feeling that I could choose to do whatever I wanted. At the\ntime I had a self-interested motive for exploring the question. At\nthat age (like most succeeding ages) I was always in trouble with\nthe authorities, and it seemed to me that there might possibly be\nsome way to get out of trouble by arguing that I wasn't responsible\nfor my actions. I gradually lost hope of that, but the puzzle\nremained: How do you reconcile being a machine made of matter with\nthe feeling that you're free to choose what you do?\n[1]The best way to explain the answer may be to start with a slightly\nwrong version, and then fix it. The wrong version is: You can do\nwhat you want, but you can't want what you want. Yes, you can control\nwhat you do, but you'll do what you want, and you can't control\nthat.The reason this is mistaken is that people do sometimes change what\nthey want. People who don't want to want something \u2014 drug addicts,\nfor example \u2014 can sometimes make themselves stop wanting it. And\npeople who want to want something \u2014 who want to like classical\nmusic, or broccoli \u2014 sometimes succeed.So we modify our initial statement: You can do what you want, but\nyou can't want to want what you want.That's still not quite true. It's possible to change what you want\nto want. 
I can imagine someone saying \"I decided to stop wanting\nto like classical music.\" But we're getting closer to the truth.\nIt's rare for people to change what they want to want, and the more\n\"want to\"s we add, the rarer it gets.We can get arbitrarily close to a true statement by adding more \"want\nto\"s in much the same way we can get arbitrarily close to 1 by adding\nmore 9s to a string of 9s following a decimal point. In practice\nthree or four \"want to\"s must surely be enough. It's hard even to\nenvision what it would mean to change what you want to want to want\nto want, let alone actually do it.So one way to express the correct answer is to use a regular\nexpression. You can do what you want, but there's some statement\nof the form \"you can't (want to)* want what you want\" that's true.\nUltimately you get back to a want that you don't control.\n[2]\nNotes[1]\nI didn't know when I was 9 that matter might behave randomly,\nbut I don't think it affects the problem much. Randomness destroys\nthe ghost in the machine as effectively as determinism.[2]\nIf you don't like using an expression, you can make the same\npoint using higher-order desires: There is some n such that you\ndon't control your nth-order desires.\nThanks to Trevor Blackwell,\nJessica Livingston, Robert Morris, and\nMichael Nielsen for reading drafts of this."}
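(A footnote to the regular expression above: the family of statements it describes is easy to generate mechanically. Here is an illustrative sketch in Common Lisp; the function name statement is mine, not from the essay.)

    ;; Build the statement "you can't (want to)^n want what you want"
    ;; for a given number of repetitions n. Illustrative only.
    (defun statement (n)
      (format nil "you can't ~{~a~}want what you want"
              (make-list n :initial-element "want to ")))

For example, (statement 3) returns "you can't want to want to want to want what you want", the n = 3 member of the family, while (statement 0) gives back the base claim "you can't want what you want".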