March 2021
I try to write using ordinary words and simple sentences.
That kind of writing is easier to read, and the easier something is to read,
the more deeply readers will engage with it. The less energy they expend on
your prose, the more they'll have left for your ideas.
And the further they'll read. Most readers' energy tends to flag part way
through an article or essay. If the friction of reading is low enough, more
keep going till the end.
There's an Italian dish called _saltimbocca_ , which means "leap into the
mouth." My goal when writing might be called _saltintesta_ : the ideas leap
into your head and you barely notice the words that got them there.
It's too much to hope that writing could ever be pure ideas. You might not
even want it to be. But for most writers, most of the time, that's the goal to
aim for. The gap between most writing and pure ideas is not filled with
poetry.
Plus it's more considerate to write simply. When you write in a fancy way to
impress people, you're making them do extra work just so you can seem cool.
It's like trailing a long train behind you that readers have to carry.
And remember, if you're writing in English, that a lot of your readers won't
be native English speakers. Their understanding of ideas may be way ahead of
their understanding of English. So you can't assume that writing about a
difficult topic means you can use difficult words.
Of course, fancy writing doesn't just conceal ideas. It can also conceal the
lack of them. That's why some people write that way, to conceal the fact that
they have [nothing](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=hermeneutic+dialectics+hegemonic+modalities) to say. Whereas writing simply keeps you honest. If you say nothing simply, it
will be obvious to everyone, including you.
Simple writing also lasts better. People reading your stuff in the future will
be in much the same position as people from other countries reading it today.
The culture and the language will have changed. It's not vain to care about
that, any more than it's vain for a woodworker to build a chair to last.
Indeed, lasting is not merely an accidental quality of chairs, or writing.
It's a sign you did a good job.
But although these are all real advantages of writing simply, none of them are
why I do it. The main reason I write simply is that it offends me not to. When
I write a sentence that seems too complicated, or that uses unnecessarily
intellectual words, it doesn't seem fancy to me. It seems clumsy.
There are of course times when you want to use a complicated sentence or fancy
word for effect. But you should never do it by accident.
The other reason my writing ends up being simple is the way I do it. I write
the first draft fast, then spend days editing it, trying to get everything
just right. Much of this editing is cutting, and that makes simple writing
even simpler.
_____
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
December 2010
Someone we funded is talking to VCs now, and asked me how common it was for a
startup's founders to retain control of the board after a series A round. He
said VCs told him this almost never happened.
Ten years ago that was true. In the past, founders rarely kept control of the
board through a series A. The traditional series A board consisted of two
founders, two VCs, and one independent member. More recently the recipe is
often one founder, one VC, and one independent. In either case the founders
lose their majority.
But not always. Mark Zuckerberg kept control of Facebook's board through the
series A and still has it today. Mark Pincus has kept control of Zynga's too.
But are these just outliers? How common is it for founders to keep control
after an A round? I'd heard of several cases among the companies we've funded,
but I wasn't sure how many there were, so I emailed the ycfounders list.
The replies surprised me. In a dozen companies we've funded, the founders
still had a majority of the board seats after the series A round.
I feel like we're at a tipping point here. A lot of VCs still act as if
founders retaining board control after a series A is unheard-of. A lot of them
try to make you feel bad if you even ask — as if you're a noob or a control
freak for wanting such a thing. But the founders I heard from aren't noobs or
control freaks. Or if they are, they are, like Mark Zuckerberg, the kind of
noobs and control freaks VCs should be trying to fund more of.
Founders retaining control after a series A is clearly heard-of. And barring
financial catastrophe, I think in the coming year it will become the norm.
Control of a company is a more complicated matter than simply outvoting other
parties in board meetings. Investors usually get vetoes over certain big
decisions, like selling the company, regardless of how many board seats they
have. And board votes are rarely split. Matters are decided in the discussion
preceding the vote, not in the vote itself, which is usually unanimous. But if
opinion is divided in such discussions, the side that knows it would lose in a
vote will tend to be less insistent. That's what board control means in
practice. You don't simply get to do whatever you want; the board still has to
act in the interest of the shareholders; but if you have a majority of board
seats, then your opinion about what's in the interest of the shareholders will
tend to prevail.
So while board control is not total control, it's not imaginary either.
There's inevitably a difference in how things feel within the company. Which
means if it becomes the norm for founders to retain board control after a
series A, that will change the way things feel in the whole startup world.
The switch to the new norm may be surprisingly fast, because the startups that
can retain control tend to be the best ones. They're the ones that set the
trends, both for other startups and for VCs.
A lot of the reason VCs are harsh when negotiating with startups is that
they're embarrassed to go back to their partners looking like they got beaten.
When they sign a termsheet, they want to be able to brag about the good terms
they got. A lot of them don't care that much personally about whether founders
keep board control. They just don't want to seem like they had to make
concessions. Which means if letting the founders keep control stops being
perceived as a concession, it will rapidly become much more common.
Like a lot of changes that have been forced on VCs, this change won't turn out
to be as big a problem as they might think. VCs will still be able to
convince; they just won't be able to compel. And the startups where they have
to resort to compulsion are not the ones that matter anyway. VCs make most of
their money from a few big hits, and those aren't them.
Knowing that founders will keep control of the board may even help VCs pick
better. If they know they can't fire the founders, they'll have to choose
founders they can trust. And that's who they should have been choosing all
along.
**Thanks** to Sam Altman, John Bautista, Trevor Blackwell, Paul Buchheit,
Brian Chesky, Bill Clerico, Patrick Collison, Adam Goldstein, James
Lindenbaum, Jessica Livingston, and Fred Wilson for reading drafts of this.
_____
December 2008
A few months ago I read a _New York Times_ article on South Korean cram
schools that said
> Admission to the right university can make or break an ambitious young South
> Korean.
A parent added:
> "In our country, college entrance exams determine 70 to 80 percent of a
> person's future."
It was striking how old-fashioned this sounded. And yet when I was in high
school it wouldn't have seemed too far off as a description of the US. Which
means things must have been changing here.
The course of people's lives in the US now seems to be determined less by
credentials and more by performance than it was 25 years ago. Where you go to
college still matters, but not like it used to.
What happened?
_____
Judging people by their academic credentials was in its time an advance. The
practice seems to have begun in China, where starting in 587 candidates for
the imperial civil service had to take an exam on classical literature. [1] It
was also a test of wealth, because the knowledge it tested was so specialized
that passing required years of expensive training. But though wealth was a
necessary condition for passing, it was not a sufficient one. By the standards
of the rest of the world in 587, the Chinese system was very enlightened.
Europeans didn't introduce formal civil service exams till the nineteenth
century, and even then they seem to have been influenced by the Chinese
example.
Before credentials, government positions were obtained mainly by family
influence, if not outright bribery. It was a great step forward to judge
people by their performance on a test. But by no means a perfect solution.
When you judge people that way, you tend to get cram schools—which they did in Ming China and nineteenth-century England just as much as in present-day South Korea.
What cram schools are, in effect, is leaks in a seal. The use of credentials
was an attempt to seal off the direct transmission of power between
generations, and cram schools represent that power finding holes in the seal.
Cram schools turn wealth in one generation into credentials in the next.
It's hard to beat this phenomenon, because the schools adjust to suit whatever
the tests measure. When the tests are narrow and predictable, you get cram
schools on the classic model, like those that prepared candidates for
Sandhurst (the British West Point) or the classes American students take now
to improve their SAT scores. But as the tests get broader, the schools do too.
Preparing a candidate for the Chinese imperial civil service exams took years,
as prep school does today. But the raison d'être of all these institutions has
been the same: to beat the system. [2]
_____
History suggests that, all other things being equal, a society prospers in
proportion to its ability to prevent parents from influencing their children's
success directly. It's a fine thing for parents to help their children
indirectly—for example, by helping them to become smarter or more disciplined,
which then makes them more successful. The problem comes when parents use
direct methods: when they are able to use their own wealth or power as a
substitute for their children's qualities.
Parents will tend to do this when they can. Parents will die for their kids,
so it's not surprising to find they'll also push their scruples to the limits
for them. Especially if other parents are doing it.
Sealing off this force has a double advantage. Not only does a society get
"the best man for the job," but parents' ambitions are diverted from direct
methods to indirect ones—to actually trying to raise their kids well.
But we should expect it to be very hard to contain parents' efforts to obtain
an unfair advantage for their kids. We're dealing with one of the most
powerful forces in human nature. We shouldn't expect naive solutions to work,
any more than we'd expect naive solutions for keeping heroin out of a prison
to work.
_____
The obvious way to solve the problem is to make credentials better. If the
tests a society uses are currently hackable, we can study the way people beat
them and try to plug the holes. You can use the cram schools to show you where
most of the holes are. They also tell you when you're succeeding in fixing
them: when cram schools become less popular.
A more general solution would be to push for increased transparency,
especially at critical social bottlenecks like college admissions. In the US
this process still shows many outward signs of corruption. For example, legacy
admissions. The official story is that legacy status doesn't carry much
weight, because all it does is break ties: applicants are bucketed by ability,
and legacy status is only used to decide between the applicants in the bucket
that straddles the cutoff. But what this means is that a university can make
legacy status have as much or as little weight as they want, by adjusting the
size of the bucket that straddles the cutoff.
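Here's a rough sketch of that lever in Python. Everything in it is invented for illustration (the pool, the legacy rate, the bucket widths); it is not a model of any real admissions office:

```python
# Hypothetical sketch of tie-breaking by bucket; all numbers are invented.
# Scores are coarsened into ability buckets, and within a bucket legacy
# status breaks the tie. Widening bucket_size is the hidden lever.
import random

def admit(applicants, seats, bucket_size):
    # Sort best-first by (ability bucket, legacy); with reverse=True,
    # legacies (True > False) win ties inside a bucket.
    key = lambda a: (a["score"] // bucket_size, a["legacy"])
    return sorted(applicants, key=key, reverse=True)[:seats]

random.seed(0)
pool = [{"score": random.randint(0, 100), "legacy": random.random() < 0.1}
        for _ in range(1000)]

for width in (1, 5, 20):
    legacies = sum(a["legacy"] for a in admit(pool, 100, width))
    print(f"bucket size {width:2}: {legacies} legacies among 100 admits")
```

The stated policy never changes; only the bucket width does.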
By gradually chipping away at the abuse of credentials, you could probably
make them more airtight. But what a long fight it would be. Especially when
the institutions administering the tests don't really want them to be
airtight.
_____
Fortunately there's a better way to prevent the direct transmission of power
between generations. Instead of trying to make credentials harder to hack, we
can also make them matter less.
Let's think about what credentials are for. What they are, functionally, is a
way of predicting performance. If you could measure actual performance, you
wouldn't need them.
So why did they even evolve? Why haven't we just been measuring actual
performance? Think about where credentialism first appeared: in selecting
candidates for large organizations. Individual performance is hard to measure
in large organizations, and the harder performance is to measure, the more
important it is to predict it. If an organization could immediately and
cheaply measure the performance of recruits, they wouldn't need to examine
their credentials. They could take everyone and keep just the good ones.
Large organizations can't do this. But a bunch of small organizations in a
market can come close. A market takes every organization and keeps just the
good ones. As organizations get smaller, this approaches taking every person
and keeping just the good ones. So all other things being equal, a society
consisting of more, smaller organizations will care less about credentials.
_____
That's what's been happening in the US. That's why those quotes from Korea
sound so old-fashioned. They're talking about an economy like America's a few
decades ago, dominated by a few big companies. The route for the ambitious in
that sort of environment is to join one and climb to the top. Credentials
matter a lot then. In the culture of a large organization, an elite pedigree
becomes a self-fulfilling prophecy.
This doesn't work in small companies. Even if your colleagues were impressed
by your credentials, they'd soon be parted from you if your performance didn't
match, because the company would go out of business and the people would be
dispersed.
In a world of small companies, performance is all anyone cares about. People
hiring for a startup don't care whether you've even graduated from college,
let alone which one. All they care about is what you can do. Which is in fact
all that should matter, even in a large organization. The reason credentials
have such prestige is that for so long the large organizations in a society
tended to be the most powerful. But in the US at least they don't have the
monopoly on power they once did, precisely because they can't measure (and
thus reward) individual performance. Why spend twenty years climbing the
corporate ladder when you can get rewarded directly by the market?
I realize I see a more exaggerated version of the change than most other
people. As a partner at an early-stage venture funding firm, I'm like a
jumpmaster shoving people out of the old world of credentials and into the new
one of performance. I'm an agent of the change I'm seeing. But I don't think
I'm imagining it. It was not so easy 25 years ago for an ambitious person to
choose to be judged directly by the market. You had to go through bosses, and
they were influenced by where you'd been to college.
_____
What made it possible for small organizations to succeed in America? I'm still
not entirely sure. Startups are certainly a large part of it. Small
organizations can develop new ideas faster than large ones, and new ideas are
increasingly valuable.
But I don't think startups account for all the shift from credentials to
measurement. My friend Julian Weber told me that when he went to work for a
New York law firm in the 1950s they paid associates far less than firms do
today. Law firms then made no pretense of paying people according to the value
of the work they'd done. Pay was based on seniority. The younger employees
were paying their dues. They'd be rewarded later.
The same principle prevailed at industrial companies. When my father was
working at Westinghouse in the 1970s, he had people working for him who made
more than he did, because they'd been there longer.
Now companies increasingly have to pay employees market price for the work
they do. One reason is that employees no longer trust companies to deliver
[deferred rewards](ladder.html): why work to accumulate deferred rewards at a
company that might go bankrupt, or be taken over and have all its implicit
obligations wiped out? The other is that some companies broke ranks and
started to pay young employees large amounts. This was particularly true in
consulting, law, and finance, where it led to the phenomenon of yuppies. The
word is rarely used today because it's no longer surprising to see a 25-year-old with money, but in 1985 the sight of a 25-year-old _professional_ able to afford a new BMW was so novel that it called forth a new word.
The classic yuppie worked for a small organization. He didn't work for General
Widget, but for the law firm that handled General Widget's acquisitions or the
investment bank that floated their bond issues.
Startups and yuppies entered the American conceptual vocabulary roughly
simultaneously in the late 1970s and early 1980s. I don't think there was a
causal connection. Startups happened because technology started to change so
fast that big companies could no longer keep a lid on the smaller ones. I
don't think the rise of yuppies was inspired by it; it seems more as if there
was a change in the social conventions (and perhaps the laws) governing the
way big companies worked. But the two phenomena rapidly fused to produce a
principle that now seems obvious: paying energetic young people market rates,
and getting correspondingly high performance from them.
At about the same time the US economy rocketed out of the doldrums that had
afflicted it for most of the 1970s. Was there a connection? I don't know
enough to say, but it felt like it at the time. There was a lot of energy
released.
_____
Countries worried about their competitiveness are right to be concerned about
the number of startups started within them. But they would do even better to
examine the underlying principle. Do they let energetic young people get paid
market rate for the work they do? The young are the test, because when people
aren't rewarded according to performance, they're invariably rewarded
according to seniority instead.
All it takes is a few beachheads in your economy that pay for performance.
Measurement spreads like heat. If one part of a society is better at
measurement than others, it tends to push the others to do better. If people
who are young but smart and driven can make more by starting their own
companies than by working for existing ones, the existing companies are forced
to pay more to keep them. So market rates gradually permeate every
organization, even the government. [3]
The measurement of performance will tend to push even the organizations
issuing credentials into line. When we were kids I used to annoy my sister by
ordering her to do things I knew she was about to do anyway. As credentials
are superseded by performance, a similar role is the best former gatekeepers
can hope for. Once credential-granting institutions are no longer in the self-fulfilling prophecy business, they'll have to work harder to predict the future.
_____
Credentials are a step beyond bribery and influence. But they're not the final
step. There's an even better way to block the transmission of power between
generations: to encourage the trend toward an economy made of more, smaller
units. Then you can measure what credentials merely predict.
No one likes the transmission of power between generations—not the left or the
right. But the market forces favored by the right turn out to be a better way
of preventing it than the credentials the left are forced to fall back on.
The era of credentials began to end when the power of large organizations
[peaked](highres.html) in the late twentieth century. Now we seem to be
entering a new era based on measurement. The reason the new model has advanced
so rapidly is that it works so much better. It shows no sign of slowing.
**Notes**
[1] Miyazaki, Ichisada (Conrad Schirokauer trans.), _China's Examination Hell:
The Civil Service Examinations of Imperial China,_ Yale University Press,
1981.
Scribes in ancient Egypt took exams, but they were more the type of
proficiency test any apprentice might have to pass.
[2] When I say the raison d'être of prep schools is to get kids into better
colleges, I mean this in the narrowest sense. I'm not saying that's all prep
schools do, just that if they had zero effect on college admissions there
would be far less demand for them.
[3] Progressive tax rates will tend to damp this effect, however, by
decreasing the difference between good and bad measurers.
**Thanks** to Trevor Blackwell, Sarah Harlin, Jessica Livingston, and David
Sloo for reading drafts of this.
_____
October 2005
The first Summer Founders Program has just finished. We were surprised how
well it went. Overall only about 10% of startups succeed, but if I had to
guess now, I'd predict three or four of the eight startups we funded will make
it.
Of the startups that needed further funding, I believe all have either closed
a round or are likely to soon. Two have already turned down (lowball)
acquisition offers.
We would have been happy if just one of the eight seemed promising by the end
of the summer. What's going on? Did some kind of anomaly make this summer's
applicants especially good? We worry about that, but we can't think of one.
We'll find out this winter.
The whole summer was full of surprises. The best was that the
[hypothesis](hiring.html) we were testing seems to be correct. Young hackers
can start viable companies. This is good news for two reasons: (a) it's an
encouraging thought, and (b) it means that Y Combinator, which is predicated
on the idea, is not hosed.
**Age**
More precisely, the hypothesis was that success in a startup depends mainly on
how smart and energetic you are, and much less on how old you are or how much
business experience you have. The results so far bear this out. The 2005
summer founders ranged in age from 18 to 28 (average 23), and there is no
correlation between their ages and how well they're doing.
This should not really be surprising. Bill Gates and Michael Dell were both 19
when they started the companies that made them famous. Young founders are not
a new phenomenon: the trend began as soon as computers got cheap enough for
college kids to afford them.
Another of our hypotheses was that you can start a startup on less money than
most people think. Other investors were surprised to hear the most we gave any
group was $20,000. But we knew it was possible to start on that little because
we started Viaweb on $10,000.
And so it proved this summer. Three months' funding is enough to get into
second gear. We had a demo day for potential investors ten weeks in, and seven
of the eight groups had a prototype ready by that time. One,
[Reddit](http://reddit.com), had already launched, and were able to give a
demo of their live site.
A researcher who studied the SFP startups said the one thing they had in
common was that they all worked ridiculously hard. People this age are
commonly seen as lazy. I think in some cases it's not so much that they lack
the appetite for work, but that the work they're offered is unappetizing.
The experience of the SFP suggests that if you let motivated people do real
work, they work hard, whatever their age. As one of the founders said, "I'd
read that starting a startup consumed your life, but I had no idea what that
meant until I did it."
I'd feel guilty if I were a boss making people work this hard. But we're not
these people's bosses. They're working on their own projects. And what makes
them work is not us but their competitors. Like good athletes, they don't work
hard because the coach yells at them, but because they want to win.
We have less power than bosses, and yet the founders work harder than
employees. It seems like a win for everyone. The only catch is that we get on
average only about 5-7% of the upside, while an employer gets nearly all of
it. (We're counting on it being 5-7% of a much larger number.)
As well as working hard, the groups all turned out to be extraordinarily
responsible. I can't think of a time when one failed to do something they'd
promised to, even by being late for an appointment. This is another lesson the
world has yet to learn. One of the founders discovered that the hardest part
of arranging a meeting with executives at a big cell phone carrier was getting
a rental company to rent him a car, because he was too young.
I think the problem here is much the same as with the apparent laziness of
people this age. They seem lazy because the work they're given is pointless,
and they act irresponsible because they're not given any power. Some of them,
anyway. We only have a sample size of about twenty, but it seems so far that
if you let people in their early twenties be their own bosses, they rise to
the occasion.
**Morale**
The summer founders were as a rule very idealistic. They also wanted very much
to get rich. These qualities might seem incompatible, but they're not. These
guys want to get rich, but they want to do it by changing the world. They
wouldn't (well, seven of the eight groups wouldn't) be interested in making
money by speculating in stocks. They want to make something people use.
I think this makes them more effective as founders. As hard as people will
work for money, they'll work harder for a cause. And since success in a
startup depends so much on motivation, the paradoxical result is that the
people likely to make the most money are those who aren't in it just for the
money.
The founders of [Kiko](http://kiko.com), for example, are working on an Ajax
calendar. They want to get rich, but they pay more attention to design than
they would if that were their only motivation. You can tell just by looking at
it.
I never considered it till this summer, but this might be another reason
startups run by hackers tend to do better than those run by MBAs. Perhaps it's
not just that hackers understand technology better, but that they're driven by
more powerful motivations. Microsoft, as I've said before, is a dangerously
misleading example. Their mean corporate culture only works for monopolies.
Google is a better model.
Considering that the summer founders are the sharks in this ocean, we were
surprised how frightened most of them were of competitors. But now that I
think of it, we were just as frightened when we started Viaweb. For the first
year, our initial reaction to news of a competitor was always: we're doomed.
Just as a hypochondriac magnifies his symptoms till he's convinced he has some
terrible disease, when you're not used to competitors you magnify them into
monsters.
Here's a handy rule for startups: competitors are rarely as dangerous as they
seem. Most will self-destruct before you can destroy them. And it certainly
doesn't matter how many of them there are, any more than it matters to the
winner of a marathon how many runners are behind him.
"It's a crowded market," I remember one founder saying worriedly.
"Are you the current leader?" I asked.
"Yes."
"Is anyone able to develop software faster than you?"
"Probably not."
"Well, if you're ahead now, and you're the fastest, then you'll stay ahead.
What difference does it make how many others there are?"
Another group was worried when they realized they had to rewrite their
software from scratch. I told them it would be a bad sign if they didn't. The
main function of your initial version is to be rewritten.
That's why we advise groups to ignore issues like scalability,
internationalization, and heavy-duty security at first. [1] I can imagine an
advocate of "best practices" saying these ought to be considered from the
start. And he'd be right, except that they interfere with the primary function
of software in a startup: to be a vehicle for experimenting with its own
design. Having to retrofit internationalization or scalability is a pain,
certainly. The only bigger pain is not needing to, because your initial
version was too big and rigid to evolve into something users wanted.
I suspect this is another reason startups beat big companies. Startups can be
irresponsible and release version 1s that are light enough to evolve. In big
companies, all the pressure is in the direction of over-engineering.
**What Got Learned**
One thing we were curious about this summer was where these groups would need
help. That turned out to vary a lot. Some we helped with technical advice--
for example, about how to set up an application to run on multiple servers.
Most we helped with strategy questions, like what to patent, and what to
charge for and what to give away. Nearly all wanted advice about dealing with
future investors: how much money should they take and what kind of terms
should they expect?
However, all the groups quickly learned how to deal with stuff like patents
and investors. These problems aren't intrinsically difficult, just unfamiliar.
It was surprising-- slightly frightening even-- how fast they learned. The
weekend before the demo day for investors, we had a practice session where all
the groups gave their presentations. They were all terrible. We tried to
explain how to make them better, but we didn't have much hope. So on demo day
I told the assembled angels and VCs that these guys were hackers, not MBAs,
and so while their software was good, we should not expect slick presentations
from them.
The groups then proceeded to give fabulously slick presentations. Gone were
the mumbling recitations of lists of features. It was as if they'd spent the
past week at acting school. I still don't know how they did it.
Perhaps watching each other's presentations helped them see what they'd been
doing wrong. Just as happens in college, the summer founders learned a lot
from one another-- maybe more than they learned from us. A lot of the problems
they face are the same, from dealing with investors to hacking JavaScript.
I don't want to give the impression there were no problems this summer. A lot
went wrong, as usually happens with startups. One group got an "[exploding
term-sheet](http://www.ventureblog.com/articles/indiv/2003/000024.html)" from
some VCs. Pretty much all the groups who had dealings with big companies found
that big companies do everything infinitely slowly. (This is to be expected.
If big companies weren't incapable, there would be no room for startups to
exist.) And of course there were the usual nightmares associated with servers.
In short, the disasters this summer were just the usual childhood diseases.
Some of this summer's eight startups will probably die eventually; it would be
extraordinary if all eight succeeded. But what kills them will not be
dramatic, external threats, but a mundane, internal one: not getting enough
done.
So far, though, the news is all good. In fact, we were surprised how much fun
the summer was for us. The main reason was how much we liked the founders.
They're so earnest and hard-working. They seem to like us too. And this
illustrates another advantage of investing over hiring: our relationship with
them is way better than it would be between a boss and an employee. Y
Combinator ends up being more like an older brother than a parent.
I was surprised how much time I spent making introductions. Fortunately I
discovered that when a startup needed to talk to someone, I could usually get
to the right person by at most one hop. I remember wondering, how did my
friends get to be so eminent? and a second later realizing: shit, I'm forty.
Another surprise was that the three-month batch format, which we were forced
into by the constraints of the summer, turned out to be an advantage. When we
started Y Combinator, we planned to invest the way other venture firms do: as
proposals came in, we'd evaluate them and decide yes or no. The SFP was just
an experiment to get things started. But it worked so well that we plan to do
[all](http://ycombinator.com/funding.html) our investing this way, one cycle
in the summer and one in winter. It's more efficient for us, and better for
the startups too.
Several groups said our weekly dinners saved them from a common problem
afflicting startups: working so hard that one has no social life. (I remember
that part all too well.) This way, they were guaranteed a social event at
least once a week.
**Independence**
I've heard Y Combinator described as an "incubator." Actually we're the
opposite: incubators exert more control than ordinary VCs, and we make a point
of exerting less. Among other things, incubators usually make you work in
their office-- that's where the word "incubator" comes from. That seems the
wrong model. If investors get too involved, they smother one of the most
powerful forces in a startup: the feeling that it's your own company.
Incubators were conspicuous failures during the Bubble. There's still debate
about whether this was because of the Bubble, or because they're a bad idea.
My vote is they're a bad idea. I think they fail because they select for the
wrong people. When we were starting a startup, we would never have taken
funding from an "incubator." We can find office space, thanks; just give us
the money. And people with that attitude are the ones likely to succeed in
startups.
Indeed, one quality all the founders shared this summer was a spirit of
independence. I've been wondering about that. Are some people just a lot more
independent than others, or would everyone be this way if they were allowed
to?
As with most nature/nurture questions, the answer is probably: some of each.
But my main conclusion from the summer is that there's more environment in the
mix than most people realize. I could see that from how the founders'
attitudes _changed_ during the summer. Most were emerging from twenty or so
years of being told what to do. They seemed a little surprised at having total
freedom. But they grew into it really quickly; some of these guys now seem
about four inches taller (metaphorically) than they did at the beginning of
the summer.
When we asked the summer founders what surprised them most about starting a
company, one said "the most shocking thing is that it worked."
It will take more experience to know for sure, but my guess is that a lot of
hackers could do this-- that if you put people in a position of independence,
they develop the qualities they need. Throw them off a cliff, and most will
find on the way down that they have wings.
The reason this is news to anyone is that the same forces work in the other
direction too. Most hackers are employees, and this
[molds](http://software.ericsink.com/entries/No_Great_Hackers.html) you into
someone to whom starting a startup seems impossible as surely as starting a
startup molds you into someone who can handle it.
If I'm right, "hacker" will mean something different in twenty years than it
does now. Increasingly it will mean the people who run the company. Y
Combinator is just accelerating a process that would have happened anyway.
Power is shifting from the people who deal with money to the people who create
technology, and if our experience this summer is any guide, this will be a
good thing.
**Notes**
[1] By heavy-duty security I mean efforts to protect against truly determined
attackers.
The
[image](https://sep.turbifycdn.com/ty/cdn/paulgraham/sfptable.jpg?t=1688221954&)
shows us, the 2005 summer founders, and Smartleaf co-founders Mark Nitzberg
and Olin Shivers at the 30-foot table Kate Courteau designed for us. Photo by
Alex Lewin.
**Thanks** to Sarah Harlin, Steve Huffman, Jessica Livingston, Zak Stone, and
Aaron Swartz for reading drafts of this.
_____
May 2002
"The quantity of meaning compressed into a small space by algebraic signs, is
another circumstance that facilitates the reasonings we are accustomed to
carry on by their aid."
- Charles Babbage, quoted in Iverson's Turing Award Lecture
In the discussion about issues raised by [Revenge of the Nerds](icad.html) on
the LL1 mailing list, Paul Prescod wrote something that stuck in my mind.
> Python's goal is regularity and readability, not succinctness.
On the face of it, this seems a rather damning thing to claim about a
programming language. As far as I can tell, succinctness = power. If so, then
substituting, we get
> Python's goal is regularity and readability, not power.
and this doesn't seem a tradeoff (if it _is_ a tradeoff) that you'd want to
make. It's not far from saying that Python's goal is not to be effective as a
programming language.
Does succinctness = power? This seems to me an important question, maybe the
most important question for anyone interested in language design, and one that
it would be useful to confront directly. I don't feel sure yet that the answer
is a simple yes, but it seems a good hypothesis to begin with.
**Hypothesis**
My hypothesis is that succinctness is power, or is close enough that except in
pathological examples you can treat them as identical.
It seems to me that succinctness is what programming languages are _for._
Computers would be just as happy to be told what to do directly in machine
language. I think that the main reason we take the trouble to develop high-
level languages is to get leverage, so that we can say (and more importantly,
think) in 10 lines of a high-level language what would require 1000 lines of
machine language. In other words, the main point of high-level languages is to
make source code smaller.
If smaller source code is the purpose of high-level languages, and the power
of something is how well it achieves its purpose, then the measure of the
power of a programming language is how small it makes your programs.
Conversely, a language that doesn't make your programs small is doing a bad
job of what programming languages are supposed to do, like a knife that
doesn't cut well, or printing that's illegible.
**Metrics**
Small in what sense though? The most common measure of code size is lines of
code. But I think that this metric is the most common because it is the
easiest to measure. I don't think anyone really believes it is the true test
of the length of a program. Different languages have different conventions for
how much you should put on a line; in C a lot of lines have nothing on them
but a delimiter or two.
Another easy test is the number of characters in a program, but this is not
very good either; some languages (Perl, for example) just use shorter
identifiers than others.
I think a better measure of the size of a program would be the number of
elements, where an element is anything that would be a distinct node if you
drew a tree representing the source code. The name of a variable or function
is an element; an integer or a floating-point number is an element; a segment
of literal text is an element; an element of a pattern, or a format directive,
is an element; a new block is an element. There are borderline cases (is -5
two elements or one?) but I think most of them are the same for every
language, so they don't affect comparisons much.
This metric needs fleshing out, and it could require interpretation in the
case of specific languages, but I think it tries to measure the right thing,
which is the number of parts a program has. I think the tree you'd draw in
this exercise is what you have to make in your head in order to conceive of
the program, and so its size is proportionate to the amount of work you have
to do to write or read it.
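Here's a rough sketch of the metric in Python, using the language's own parser. Counting every node in the parse tree as one element is the sketch's assumption; as noted above, a real version would need judgment calls about borderline cases:

```python
# A minimal sketch of the "elements" metric for Python source, assuming
# (an assumption of this sketch) that every parse-tree node is one element.
import ast

def count_elements(source: str) -> int:
    """Count the nodes in the tree you'd draw for this program."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

# Short programs can still have many elements, and vice versa:
print(count_elements("x = 1"))
print(count_elements("total = sum(f(i) for i in range(10) if i % 2)"))
```

By this count, two programs of equal length in lines or characters can differ a lot in how many parts you have to hold in your head.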
**Design**
This kind of metric would allow us to compare different languages, but that is
not, at least for me, its main value. The main value of the succinctness test
is as a guide in _designing_ languages. The most useful comparison between
languages is between two potential variants of the same language. What can I
do in the language to make programs shorter?
If the conceptual load of a program is proportionate to its complexity, and a
given programmer can tolerate a fixed conceptual load, then this is the same
as asking, what can I do to enable programmers to get the most done? And that
seems to me identical to asking, how can I design a good language?
(Incidentally, nothing makes it more patently obvious that the old chestnut
"all languages are equivalent" is false than designing languages. When you are
designing a new language, you're _constantly_ comparing two languages-- the
language if I did x, and if I didn't-- to decide which is better. If this were
really a meaningless question, you might as well flip a coin.)
Aiming for succinctness seems a good way to find new ideas. If you can do
something that makes many different programs shorter, it is probably not a
coincidence: you have probably discovered a useful new abstraction. You might
even be able to write a program to help by searching source code for repeated
patterns. Among other languages, those with a reputation for succinctness
would be the ones to look to for new ideas: Forth, Joy, Icon.
**Comparison**
The first person to write about these issues, as far as I know, was Fred Brooks in _The Mythical Man-Month_. He wrote that programmers seemed to
generate about the same amount of code per day regardless of the language.
When I first read this in my early twenties, it was a big surprise to me and
seemed to have huge implications. It meant that (a) the only way to get
software written faster was to use a more succinct language, and (b) someone
who took the trouble to do this could leave competitors who didn't in the
dust.
Brooks' hypothesis, if it's true, seems to be at the very heart of hacking. In
the years since, I've paid close attention to any evidence I could get on the
question, from formal studies to anecdotes about individual projects. I have
seen nothing to contradict him.
I have not yet seen evidence that seemed to me conclusive, and I don't expect
to. Studies like Lutz Prechelt's comparison of programming languages, while
generating the kind of results I expected, tend to use problems that are too
short to be meaningful tests. A better test of a language is what happens in
programs that take a month to write. And the only real test, if you believe as
I do that the main purpose of a language is to be good to think in (rather
than just to tell a computer what to do once you've thought of it) is what new
things you can write in it. So any language comparison where you have to meet
a predefined spec is testing slightly the wrong thing.
The true test of a language is how well you can discover and solve new
problems, not how well you can use it to solve a problem someone else has
already formulated. These two are quite different criteria. In art, mediums
like embroidery and mosaic work well if you know beforehand what you want to
make, but are absolutely lousy if you don't. When you want to discover the
image as you make it-- as you have to do with anything as complex as an image
of a person, for example-- you need to use a more fluid medium like pencil or
ink wash or oil paint. And indeed, the way tapestries and mosaics are made in
practice is to make a painting first, then copy it. (The word "cartoon" was originally used to describe a painting intended for this purpose.)
What this means is that we are never likely to have accurate comparisons of
the relative power of programming languages. We'll have precise comparisons,
but not accurate ones. In particular, explicit studies for the purpose of
comparing languages, because they will probably use small problems, and will
necessarily use predefined problems, will tend to underestimate the power of
the more powerful languages.
Reports from the field, though they will necessarily be less precise than
"scientific" studies, are likely to be more meaningful. For example, Ulf Wiger
of Ericsson did a [study](http://www.erlang.se/publications/Ulf_Wiger.pdf)
that concluded that Erlang was 4-10x more succinct than C++, and
proportionately faster to develop software in:
> Comparisons between Ericsson-internal development projects indicate similar
> line/hour productivity, including all phases of software development, rather
> independently of which language (Erlang, PLEX, C, C++, or Java) was used.
> What differentiates the different languages then becomes source code volume.
The study also deals explicitly with a point that was only implicit in Brooks'
book (since he measured lines of debugged code): programs written in more
powerful languages tend to have fewer bugs. That becomes an end in itself,
possibly more important than programmer productivity, in applications like
network switches.
**The Taste Test**
Ultimately, I think you have to go with your gut. What does it feel like to
program in the language? I think the way to find (or design) the best language
is to become hypersensitive to how well a language lets you think, then
choose/design the language that feels best. If some language feature is
awkward or restricting, don't worry, you'll know about it.
Such hypersensitivity will come at a cost. You'll find that you can't _stand_
programming in clumsy languages. I find it unbearably restrictive to program
in languages without macros, just as someone used to dynamic typing finds it
unbearably restrictive to have to go back to programming in a language where
you have to declare the type of every variable, and can't make a list of
objects of different types.
I'm not the only one. I know many Lisp hackers that this has happened to. In
fact, the most accurate measure of the relative power of programming languages
might be the percentage of people who know the language who will take any job
where they get to use that language, regardless of the application domain.
**Restrictiveness**
I think most hackers know what it means for a language to feel restrictive.
What's happening when you feel that? I think it's the same feeling you get
when the street you want to take is blocked off, and you have to take a long
detour to get where you wanted to go. There is something you want to say, and
the language won't let you.
What's really going on here, I think, is that a restrictive language is one
that isn't succinct enough. The problem is not simply that you can't say what
you planned to. It's that the detour the language makes you take is _longer._
Try this thought experiment. Suppose there were some program you wanted to
write, and the language wouldn't let you express it the way you planned to,
but instead forced you to write the program in some other way that was
_shorter._ For me at least, that wouldn't feel very restrictive. It would be
like the street you wanted to take being blocked off, and the policeman at the
intersection directing you to a shortcut instead of a detour. Great!
I think most (ninety percent?) of the feeling of restrictiveness comes from
being forced to make the program you write in the language longer than one you
have in your head. Restrictiveness is mostly lack of succinctness. So when a
language feels restrictive, what that (mostly) means is that it isn't succinct
enough, and when a language isn't succinct, it will feel restrictive.
**Readability**
The quote I began with mentions two other qualities, regularity and
readability. I'm not sure what regularity is, or what advantage, if any, code
that is regular and readable has over code that is merely readable. But I
think I know what is meant by readability, and I think it is also related to
succinctness.
We have to be careful here to distinguish between the readability of an
individual line of code and the readability of the whole program. It's the
second that matters. I agree that a line of Basic is likely to be more
readable than a line of Lisp. But a program written in Basic is going to
have more lines than the same program written in Lisp (especially once you
cross over into Greenspunland). The total effort of reading the Basic program
will surely be greater.
> total effort = effort per line × number of lines
I'm not as sure that readability is directly proportionate to succinctness as
I am that power is, but certainly succinctness is a factor (in the
mathematical sense; see equation above) in readability. So it may not even be
meaningful to say that the goal of a language is readability, not
succinctness; it could be like saying the goal was readability, not
readability.
What readability-per-line does mean, to the user encountering the language for
the first time, is that source code will _look unthreatening_. So readability-
per-line could be a good marketing decision, even if it is a bad design
decision. It's isomorphic to the very successful technique of letting people
pay in installments: instead of frightening them with a high upfront price,
you tell them the low monthly payment. Installment plans are a net lose for
the buyer, though, as mere readability-per-line probably is for the
programmer. The buyer is going to make a _lot_ of those low, low payments; and
the programmer is going to read a _lot_ of those individually readable lines.
This tradeoff predates programming languages. If you're used to reading novels
and newspaper articles, your first experience of reading a math paper can be
dismaying. It could take half an hour to read a single page. And yet, I am
pretty sure that the notation is not the problem, even though it may feel like
it is. The math paper is hard to read because the ideas are hard. If you
expressed the same ideas in prose (as mathematicians had to do before they
evolved succinct notations), they wouldn't be any easier to read, because the
paper would grow to the size of a book.
**To What Extent?**
A number of people have rejected the idea that succinctness = power. I think
it would be more useful, instead of simply arguing that they are the same or
aren't, to ask: to what _extent_ does succinctness = power? Because clearly
succinctness is a large part of what higher-level languages are for. If it is
not all they're for, then what else are they for, and how important,
relatively, are these other functions?
I'm not proposing this just to make the debate more civilized. I really want
to know the answer. When, if ever, is a language too succinct for its own
good?
The hypothesis I began with was that, except in pathological examples, I
thought succinctness could be considered identical with power. What I meant
was that in any language anyone would design, they would be identical, but
that if someone wanted to design a language explicitly to disprove this
hypothesis, they could probably do it. I'm not even sure of that, actually.
**Languages, not Programs**
We should be clear that we are talking about the succinctness of languages,
not of individual programs. It certainly is possible for individual programs
to be written too densely.
I wrote about this in [On Lisp](onlisp.html). A complex macro may have to save
many times its own length to be justified. If writing some hairy macro could
save you ten lines of code every time you use it, and the macro is itself ten
lines of code, then you get a net saving in lines if you use it more than
once. But that could still be a bad move, because macro definitions are harder
to read than ordinary code. You might have to use the macro ten or twenty
times before it yielded a net improvement in readability.
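To spell out the arithmetic with those numbers:

> net lines saved = 10 × uses - 10

so the saving in lines turns positive on the second use, but the saving in readability may need those ten or twenty.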
I'm sure every language has such tradeoffs (though I suspect the stakes get
higher as the language gets more powerful). Every programmer must have seen
code that some clever person has made marginally shorter by using dubious
programming tricks.
So there is no argument about that-- at least, not from me. Individual
programs can certainly be too succinct for their own good. The question is,
can a language be? Can a language compel programmers to write code that's
short (in elements) at the expense of overall readability?
One reason it's hard to imagine a language being too succinct is that if there
were some excessively compact way to phrase something, there would probably
also be a longer way. For example, if you felt Lisp programs using a lot of
macros or higher-order functions were too dense, you could, if you preferred,
write code that was isomorphic to Pascal. If you don't want to express
factorial in Arc as a call to a higher-order function, (rec zero 1 * 1-), you can also write out a recursive definition: (rfn fact (x) (if (zero x) 1 (* x (fact (1- x))))). Though I can't off the top of my head think of any examples,
I am interested in the question of whether a language could be too succinct.
Are there languages that force you to write code in a way that is crabbed and
incomprehensible? If anyone has examples, I would be very interested to see
them.
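For readers who don't read Arc, here is the same pair sketched in Python. The `reduce` version is my stand-in for the dense higher-order call, not a literal translation of the Arc:

```python
# The same escape hatch sketched in Python (an analogy, not a literal
# translation of the Arc): a dense higher-order formulation next to the
# longer recursive one the language equally allows.
from functools import reduce

# Dense: fold multiplication over 1..n, like the (rec zero 1 * 1-) call.
fact_dense = lambda n: reduce(lambda a, b: a * b, range(1, n + 1), 1)

# Long form: the explicit recursion, like the rfn definition.
def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

assert fact_dense(5) == fact(5) == 120
```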
(Reminder: What I'm looking for are programs that are very dense according to
the metric of "elements" sketched above, not merely programs that are short
because delimiters can be omitted and everything has a one-character name.)
_____
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
August 2007
_(This is a talk I gave at the last Y Combinator dinner of the summer.
Usually we don't have a speaker at the last dinner; it's more of a party. But
it seemed worth spoiling the atmosphere if I could save some of the startups
from preventable deaths. So at the last minute I cooked up this rather grim
talk. I didn't mean this as an essay; I wrote it down because I only had two
hours before dinner and think fastest while writing.)_
A couple days ago I told a reporter that we expected about a third of the
companies we funded to succeed. Actually I was being conservative. I'm hoping
it might be as much as a half. Wouldn't it be amazing if we could achieve a
50% success rate?
Another way of saying that is that half of you are going to die. Phrased that
way, it doesn't sound good at all. In fact, it's kind of weird when you think
about it, because our definition of success is that the founders get rich. If
half the startups we fund succeed, then half of you are going to get rich and
the other half are going to get nothing.
If you can just avoid dying, you get rich. That sounds like a joke, but it's
actually a pretty good description of what happens in a typical startup. It
certainly describes what happened in Viaweb. We avoided dying till we got
rich.
It was really close, too. When we were visiting Yahoo to talk about being
acquired, we had to interrupt everything and borrow one of their conference
rooms to talk down an investor who was about to back out of a new funding
round we needed to stay alive. So even in the middle of getting rich we were
fighting off the grim reaper.
You may have heard that quote about luck consisting of opportunity meeting
preparation. You've now done the preparation. The work you've done so far has,
in effect, put you in a position to get lucky: you can now get rich by not
letting your company die. That's more than most people have. So let's talk
about how not to die.
We've done this five times now, and we've seen a bunch of startups die. About
10 of them so far. We don't know exactly what happens when they die, because
they generally don't die loudly and heroically. Mostly they crawl off
somewhere and die.
For us the main indication of impending doom is when we don't hear from you.
When we haven't heard from, or about, a startup for a couple months, that's a
bad sign. If we send them an email asking what's up, and they don't reply,
that's a really bad sign. So far that is a 100% accurate predictor of death.
Whereas if a startup regularly does new deals and releases and either sends us
mail or shows up at YC events, they're probably going to live.
I realize this will sound naive, but maybe the linkage works in both
directions. Maybe if you can arrange that we keep hearing from you, you won't
die.
That may not be so naive as it sounds. You've probably noticed that having
dinners every Tuesday with us and the other founders causes you to get more
done than you would otherwise, because every dinner is a mini Demo Day. Every
dinner is a kind of a deadline. So the mere constraint of staying in regular
contact with us will push you to make things happen, because otherwise you'll
be embarrassed to tell us that you haven't done anything new since the last
time we talked.
If this works, it would be an amazing hack. It would be pretty cool if merely
by staying in regular contact with us you could get rich. It sounds crazy, but
there's a good chance that would work.
A variant is to stay in touch with other YC-funded startups. There is now a
whole neighborhood of them in San Francisco. If you move there, the peer
pressure that made you work harder all summer will continue to operate.
When startups die, the official cause of death is always either running out of
money or a critical founder bailing. Often the two occur simultaneously. But I
think the underlying cause is usually that they've become demoralized. You
rarely hear of a startup that's working around the clock doing deals and
pumping out new features, and dies because they can't pay their bills and
their ISP unplugs their server.
Startups rarely die in mid keystroke. So keep typing!
If so many startups get demoralized and fail when merely by hanging on they
could get rich, you have to assume that running a startup can be demoralizing.
That is certainly true. I've been there, and that's why I've never done
another startup. The low points in a startup are just unbelievably low. I bet
even Google had moments where things seemed hopeless.
Knowing that should help. If you know it's going to feel terrible sometimes,
then when it feels terrible you won't think "ouch, this feels terrible, I give
up." It feels that way for everyone. And if you just hang on, things will
probably get better. At least the metaphor people use to describe the way a
startup feels is a roller coaster, not drowning. You don't just sink and
sink; there are ups after the downs.
Another feeling that seems alarming but is in fact normal in a startup is the
feeling that what you're doing isn't working. The reason you can expect to
feel this is that what you do probably won't work. Startups almost never get
it right the first time. Much more commonly you launch something, and no one
cares. Don't assume when this happens that you've failed. That's normal for
startups. But don't sit around doing nothing. Iterate.
I like Paul Buchheit's suggestion of trying to make something that at least
someone really loves. As long as you've made something that a few users are
ecstatic about, you're on the right track. It will be good for your morale to
have even a handful of users who really love you, and startups run on morale.
But also it will tell you what to focus on. What is it about you that they
love? Can you do more of that? Where can you find more people who love that
sort of thing? As long as you have some core of users who love you, all you
have to do is expand it. It may take a while, but as long as you keep plugging
away, you'll win in the end. Both Blogger and Delicious did that. Both took
years to succeed. But both began with a core of fanatically devoted users, and
all Evan and Joshua had to do was grow that core incrementally.
[Wufoo](http://wufoo.com) is on the same trajectory now.
So when you release something and it seems like no one cares, look more
closely. Are there zero users who really love you, or is there at least some
little group that does? It's quite possible there will be zero. In that case,
tweak your product and try again. Every one of you is working on a space that
contains at least one winning permutation somewhere in it. If you just keep
trying, you'll find it.
Let me mention some things not to do. The number one thing not to do is other
things. If you find yourself saying a sentence that ends with "but we're going
to keep working on the startup," you are in big trouble. Bob's going to grad
school, but we're going to keep working on the startup. We're moving back to
Minnesota, but we're going to keep working on the startup. We're taking on
some consulting projects, but we're going to keep working on the startup. You
may as well just translate these to "we're giving up on the startup, but we're
not willing to admit that to ourselves," because that's what it means most of
the time. A startup is so hard that working on it can't be preceded by "but."
In particular, don't go to graduate school, and don't start other projects.
Distraction is fatal to startups. Going to (or back to) school is a huge
predictor of death because in addition to the distraction it gives you
something to say you're doing. If you're only doing a startup, then if the
startup fails, you fail. If you're in grad school and your startup fails, you
can say later "Oh yeah, we had this startup on the side when I was in grad
school, but it didn't go anywhere."
You can't use euphemisms like "didn't go anywhere" for something that's your
only occupation. People won't let you.
One of the most interesting things we've discovered from working on Y
Combinator is that founders are more motivated by the fear of looking bad than
by the hope of getting millions of dollars. So if you want to get millions of
dollars, put yourself in a position where failure will be public and
humiliating.
When we first met the founders of [Octopart](http://octopart.com), they seemed
very smart, but not a great bet to succeed, because they didn't seem
especially committed. One of the two founders was still in grad school. It was
the usual story: he'd drop out if it looked like the startup was taking off.
Since then he has not only dropped out of grad school, but appeared full
length in [Newsweek](http://docs.octopart.com/newsweek_octopart_small.jpg)
with the word "Billionaire" printed across his chest. He just cannot fail now.
Everyone he knows has seen that picture. Girls who dissed him in high school
have seen it. His mom probably has it on the fridge. It would be unthinkably
humiliating to fail now. At this point he is committed to fight to the death.
I wish every startup we funded could appear in a Newsweek article describing
them as the next generation of billionaires, because then none of them would
be able to give up. The success rate would be 90%. I'm not kidding.
When we first knew the Octoparts they were lighthearted, cheery guys. Now when
we talk to them they seem grimly determined. The electronic parts distributors
are trying to squash them to keep their monopoly pricing. (If it strikes you
as odd that people still order electronic parts out of thick paper catalogs in
2007, there's a reason for that. The distributors want to prevent the
transparency that comes from having prices online.) I feel kind of bad that
we've transformed these guys from lighthearted to grimly determined. But that
comes with the territory. If a startup succeeds, you get millions of dollars,
and you don't get that kind of money just by asking for it. You have to assume
it takes some amount of pain.
And however tough things get for the Octoparts, I predict they'll succeed.
They may have to morph themselves into something totally different, but they
won't just crawl off and die. They're smart; they're working in a promising
field; and they just cannot give up.
All of you guys already have the first two. You're all smart and working on
promising ideas. Whether you end up among the living or the dead comes down to
the third ingredient, not giving up.
So I'll tell you now: bad shit is coming. It always is in a startup. The odds
of getting from launch to liquidity without some kind of disaster happening
are one in a thousand. So don't get demoralized. When the disaster strikes,
just say to yourself, ok, this was what Paul was talking about. What did he
say to do? Oh, yeah. Don't give up.
|
January 2016
One advantage of being old is that you can see change happen in your lifetime.
A lot of the change I've seen is fragmentation. US politics is much more
polarized than it used to be. Culturally we have ever less common ground. The
creative class flocks to a handful of happy cities, abandoning the rest. And
increasing economic inequality means the spread between rich and poor is
growing too. I'd like to propose a hypothesis: that all these trends are
instances of the same phenomenon. And moreover, that the cause is not some
force that's pulling us apart, but rather the erosion of forces that had been
pushing us together.
Worse still, for those who worry about these trends, the forces that were
pushing us together were an anomaly, a one-time combination of circumstances
that's unlikely to be repeated — and indeed, that we would not want to repeat.
The two forces were war (above all World War II), and the rise of large
corporations.
The effects of World War II were both economic and social. Economically, it
decreased variation in income. Like all modern armed forces, America's were
socialist economically. From each according to his ability, to each according
to his need. More or less. Higher ranking members of the military got more (as
higher ranking members of socialist societies always do), but what they got
was fixed according to their rank. And the flattening effect wasn't limited to
those under arms, because the US economy was conscripted too. Between 1942 and
1945 all wages were set by the National War Labor Board. Like the military,
they defaulted to flatness. And this national standardization of wages was so
pervasive that its effects could still be seen years after the war ended. [1]
Business owners weren't supposed to be making money either. FDR said "not a
single war millionaire" would be permitted. To ensure that, any increase in a
company's profits over prewar levels was taxed at 85%. And when what was left
after corporate taxes reached individuals, it was taxed again at a marginal
rate of 93%. [2]
Socially too the war tended to decrease variation. Over 16 million men and
women from all sorts of different backgrounds were brought together in a way
of life that was literally uniform. Service rates for men born in the early
1920s approached 80%. And working toward a common goal, often under stress,
brought them still closer together.
Though strictly speaking World War II lasted less than 4 years for the US, its
effects lasted longer. Wars make central governments more powerful, and World
War II was an extreme case of this. In the US, as in all the other Allied
countries, the federal government was slow to give up the new powers it had
acquired. Indeed, in some respects the war didn't end in 1945; the enemy just
switched to the Soviet Union. In tax rates, federal power, defense spending,
conscription, and nationalism, the decades after the war looked more like
wartime than prewar peacetime. [3] And the social effects lasted too. The kid
pulled into the army from behind a mule team in West Virginia didn't simply go
back to the farm afterward. Something else was waiting for him, something that
looked a lot like the army.
If total war was the big political story of the 20th century, the big economic
story was the rise of a new kind of company. And this too tended to produce
both social and economic cohesion. [4]
The 20th century was the century of the big, national corporation. General
Electric, General Foods, General Motors. Developments in finance,
communications, transportation, and manufacturing enabled a new type of
company whose goal was above all scale. Version 1 of this world was low-res: a
Duplo world of a few giant companies dominating each big market. [5]
The late 19th and early 20th centuries had been a time of consolidation, led
especially by J. P. Morgan. Thousands of companies run by their founders were
merged into a couple hundred giant ones run by professional managers.
Economies of scale ruled the day. It seemed to people at the time that this
was the final state of things. John D. Rockefeller said in 1880
> The day of combination is here to stay. Individualism has gone, never to
> return.
He turned out to be mistaken, but he seemed right for the next hundred years.
The consolidation that began in the late 19th century continued for most of
the 20th. By the end of World War II, as Michael Lind writes, "the major
sectors of the economy were either organized as government-backed cartels or
dominated by a few oligopolistic corporations."
For consumers this new world meant the same choices everywhere, but only a few
of them. When I grew up there were only 2 or 3 of most things, and since they
were all aiming at the middle of the market there wasn't much to differentiate
them.
One of the most important instances of this phenomenon was in TV. Here there
were 3 choices: NBC, CBS, and ABC. Plus public TV for eggheads and communists.
The programs that the 3 networks offered were indistinguishable. In fact, here
there was a triple pressure toward the center. If one show did try something
daring, local affiliates in conservative markets would make them stop. Plus
since TVs were expensive, whole families watched the same shows together, so
they had to be suitable for everyone.
And not only did everyone get the same thing, they got it at the same time.
It's difficult to imagine now, but every night tens of millions of families
would sit down together in front of their TV set watching the same show, at
the same time, as their next door neighbors. What happens now with the Super
Bowl used to happen every night. We were literally in sync. [6]
In a way mid-century TV culture was good. The view it gave of the world was
like you'd find in a children's book, and it probably had something of the
effect that (parents hope) children's books have in making people behave
better. But, like children's books, TV was also misleading. Dangerously
misleading, for adults. In his autobiography, Robert MacNeil talks of seeing
gruesome images that had just come in from Vietnam and thinking, we can't show
these to families while they're having dinner.
I know how pervasive the common culture was, because I tried to opt out of it,
and it was practically impossible to find alternatives. When I was 13 I
realized, more from internal evidence than any outside source, that the ideas
we were being fed on TV were crap, and I stopped watching it. [7] But it
wasn't just TV. It seemed like everything around me was crap. The politicians
all saying the same things, the consumer brands making almost identical
products with different labels stuck on to indicate how prestigious they were
meant to be, the balloon-frame houses with fake "colonial" skins, the cars
with several feet of gratuitous metal on each end that started to fall apart
after a couple years, the "red delicious" apples that were red but only
nominally apples. And in retrospect, it _was_ crap. [8]
But when I went looking for alternatives to fill this void, I found
practically nothing. There was no Internet then. The only place to look was in
the chain bookstore in our local shopping mall. [9] There I found a copy of
_The Atlantic_. I wish I could say it became a gateway into a wider world, but
in fact I found it boring and incomprehensible. Like a kid tasting whisky for
the first time and pretending to like it, I preserved that magazine as
carefully as if it had been a book. I'm sure I still have it somewhere. But
though it was evidence that there was, somewhere, a world that wasn't red
delicious, I didn't find it till college.
It wasn't just as consumers that the big companies made us similar. They did
as employers too. Within companies there were powerful forces pushing people
toward a single model of how to look and act. IBM was particularly notorious
for this, but they were only a little more extreme than other big companies.
And the models of how to look and act varied little between companies. Meaning
everyone within this world was expected to seem more or less the same. And not
just those in the corporate world, but also everyone who aspired to it — which
in the middle of the 20th century meant most people who weren't already in it.
For most of the 20th century, working-class people tried hard to look middle
class. You can see it in old photos. Few adults aspired to look dangerous in
1950.
But the rise of national corporations didn't just compress us culturally. It
compressed us economically too, and on both ends.
Along with giant national corporations, we got giant national labor unions.
And in the mid 20th century the corporations cut deals with the unions where
they paid over market price for labor. Partly because the unions were
monopolies. [10] Partly because, as components of oligopolies themselves, the
corporations knew they could safely pass the cost on to their customers,
because their competitors would have to as well. And partly because in mid-
century most of the giant companies were still focused on finding new ways to
milk economies of scale. Just as startups rightly pay AWS a premium over the
cost of running their own servers so they can focus on growth, many of the big
national corporations were willing to pay a premium for labor. [11]
As well as pushing incomes up from the bottom, by overpaying unions, the big
companies of the 20th century also pushed incomes down at the top, by
underpaying their top management. Economist J. K. Galbraith wrote in 1967 that
"There are few corporations in which it would be suggested that executive
salaries are at a maximum." [12]
To some extent this was an illusion. Much of the de facto pay of executives
never showed up on their income tax returns, because it took the form of
perks. The higher the rate of income tax, the more pressure there was to pay
employees upstream of it. (In the UK, where taxes were even higher than in the
US, companies would even pay their employees' kids' private school
tuitions.) One of the
most valuable things the big companies of the mid 20th century gave their
employees was job security, and this too didn't show up in tax returns or
income statistics. So the nature of employment in these organizations tended
to yield falsely low numbers about economic inequality. But even accounting
for that, the big companies paid their best people less than market price.
There was no market; the expectation was that you'd work for the same company
for decades if not your whole career. [13]
Your work was so illiquid there was little chance of getting market price. But
that same illiquidity also encouraged you not to seek it. If the company
promised to employ you till you retired and give you a pension afterward, you
didn't want to extract as much from it this year as you could. You needed to
take care of the company so it could take care of you. Especially when you'd
been working with the same group of people for decades. If you tried to
squeeze the company for more money, you were squeezing the organization that
was going to take care of _them_. Plus if you didn't put the company first you
wouldn't be promoted, and if you couldn't switch ladders, promotion on this
one was the only way up. [14]
To someone who'd spent several formative years in the armed forces, this
situation didn't seem as strange as it does to us now. From their point of
view, as big company executives, they were high-ranking officers. They got
paid a lot more than privates. They got to have expense account lunches at the
best restaurants and fly around on the company's Gulfstreams. It probably
didn't occur to most of them to ask if they were being paid market price.
The ultimate way to get market price is to work for yourself, by starting your
own company. That seems obvious to any ambitious person now. But in the mid
20th century it was an alien concept. Not because starting one's own company
seemed too ambitious, but because it didn't seem ambitious enough. Even as
late as the 1970s, when I grew up, the ambitious plan was to get lots of
education at prestigious institutions, and then join some other prestigious
institution and work one's way up the hierarchy. Your prestige was the
prestige of the institution you belonged to. People did start their own
businesses of course, but educated people rarely did, because in those days
there was practically zero concept of starting what we now call a
[_startup_](growth.html): a business that starts small and grows big. That was
much harder to do in the mid 20th century. Starting one's own business meant
starting a business that would start small and stay small. Which in those days
of big companies often meant scurrying around trying to avoid being trampled
by elephants. It was more prestigious to be one of the executive class riding
the elephant.
By the 1970s, no one stopped to wonder where the big prestigious companies had
come from in the first place. It seemed like they'd always been there, like
the chemical elements. And indeed, there was a double wall between ambitious
kids in the 20th century and the origins of the big companies. Many of the big
companies were roll-ups that didn't have clear founders. And when they did,
the founders didn't seem like us. Nearly all of them had been uneducated, in
the sense of not having been to college. They were what Shakespeare called
rude mechanicals. College trained one to be a member of the professional
classes. Its graduates didn't expect to do the sort of grubby menial work that
Andrew Carnegie or Henry Ford started out doing. [15]
And in the 20th century there were more and more college graduates. They
increased from about 2% of the population in 1900 to about 25% in 2000. In the
middle of the century our two big forces intersect, in the form of the GI
Bill, which sent 2.2 million World War II veterans to college. Few thought of
it in these terms, but the result of making college the canonical path for the
ambitious was a world in which it was socially acceptable to work for Henry
Ford, but not to be Henry Ford. [16]
I remember this world well. I came of age just as it was starting to break up.
In my childhood it was still dominant. Not quite so dominant as it had been.
We could see from old TV shows and yearbooks and the way adults acted that
people in the 1950s and 60s had been even more conformist than us. The mid-
century model was already starting to get old. But that was not how we saw it
at the time. We would at most have said that one could be a bit more daring in
1975 than 1965. And indeed, things hadn't changed much yet.
But change was coming soon. And when the Duplo economy started to
disintegrate, it disintegrated in several different ways at once. Vertically
integrated companies literally dis-integrated because it was more efficient
to. Incumbents faced new competitors as (a) markets went global and (b)
technical innovation started to trump economies of scale, turning size from an
asset into a liability. Smaller companies were increasingly able to survive as
formerly narrow channels to consumers broadened. Markets themselves started to
change faster, as whole new categories of products appeared. And last but not
least, the federal government, which had previously smiled upon J. P. Morgan's
world as the natural state of things, began to realize it wasn't the last word
after all.
What J. P. Morgan was to the horizontal axis, Henry Ford was to the vertical.
He wanted to do everything himself. The giant plant he built at River Rouge
between 1917 and 1928 literally took in iron ore at one end and sent cars out
the other. 100,000 people worked there. At the time it seemed the future. But
that is not how car companies operate today. Now much of the design and
manufacturing happens in a long supply chain, whose products the car companies
ultimately assemble and sell. The reason car companies operate this way is
that it works better. Each company in the supply chain focuses on what they
know best. And they each have to do it well or they can be swapped out for
another supplier.
Why didn't Henry Ford realize that networks of cooperating companies work
better than a single big company? One reason is that supplier networks take a
while to evolve. In 1917, doing everything himself seemed to Ford the only way
to get the scale he needed. And the second reason is that if you want to solve
a problem using a network of cooperating companies, you have to be able to
coordinate their efforts, and you can do that much better with computers.
Computers reduce the transaction costs that Coase argued are the raison d'etre
of corporations. That is a fundamental change.
In the early 20th century, big companies were synonymous with efficiency. In
the late 20th century they were synonymous with inefficiency. To some extent
this was because the companies themselves had become sclerotic. But it was
also because our standards were higher.
It wasn't just within existing industries that change occurred. The industries
themselves changed. It became possible to make lots of new things, and
sometimes the existing companies weren't the ones who did it best.
Microcomputers are a classic example. The market was pioneered by upstarts
like Apple. When it got big enough, IBM decided it was worth paying attention
to. At the time IBM completely dominated the computer industry. They assumed
that all they had to do, now that this market was ripe, was to reach out and
pick it. Most people at the time would have agreed with them. But what
happened next illustrated how much more complicated the world had become. IBM
did launch a microcomputer. Though quite successful, it did not crush Apple.
But even more importantly, IBM itself ended up being supplanted by a supplier
coming in from the side — from software, which didn't even seem to be the same
business. IBM's big mistake was to accept a non-exclusive license for DOS. It
must have seemed a safe move at the time. No other computer manufacturer had
ever been able to outsell them. What difference did it make if other
manufacturers could offer DOS too? The result of that miscalculation was an
explosion of inexpensive PC clones. Microsoft now owned the PC standard, and
the customer. And the microcomputer business ended up being Apple vs
Microsoft.
Basically, Apple bumped IBM and then Microsoft stole its wallet. That sort of
thing did not happen to big companies in mid-century. But it was going to
happen increasingly often in the future.
Change happened mostly by itself in the computer business. In other
industries, legal obstacles had to be removed first. Many of the mid-century
oligopolies had been anointed by the federal government with policies (and in
wartime, large orders) that kept out competitors. This didn't seem as dubious
to government officials at the time as it sounds to us. They felt a two-party
system ensured sufficient competition in politics. It ought to work for
business too.
Gradually the government realized that anti-competitive policies were doing
more harm than good, and during the Carter administration it started to remove
them. The word used for this process was misleadingly narrow: deregulation.
What was really happening was de-oligopolization. It happened to one industry
after another. Two of the most visible to consumers were air travel and long-
distance phone service, which both became dramatically cheaper after
deregulation.
Deregulation also contributed to the wave of hostile takeovers in the 1980s.
In the old days the only limit on the inefficiency of companies, short of
actual bankruptcy, was the inefficiency of their competitors. Now companies
had to face absolute rather than relative standards. Any public company that
didn't generate sufficient returns on its assets risked having its management
replaced with one that would. Often the new managers did this by breaking
companies up into components that were more valuable separately. [17]
Version 1 of the national economy consisted of a few big blocks whose
relationships were negotiated in back rooms by a handful of executives,
politicians, regulators, and labor leaders. Version 2 was higher resolution:
there were more companies, of more different sizes, making more different
things, and their relationships changed faster. In this world there were still
plenty of back room negotiations, but more was left to market forces. Which
further accelerated the fragmentation.
It's a little misleading to talk of versions when describing a gradual
process, but not as misleading as it might seem. There was a lot of change in
a few decades, and what we ended up with was qualitatively different. The
companies in the S&P 500 in 1958 had been there an average of 61 years. By
2012 that number was 18 years. [18]
The breakup of the Duplo economy happened simultaneously with the spread of
computing power. To what extent were computers a precondition? It would take a
book to answer that. Obviously the spread of computing power was a
precondition for the rise of startups. I suspect it was for most of what
happened in finance too. But was it a precondition for globalization or the
LBO wave? I don't know, but I wouldn't discount the possibility. It may be
that the refragmentation was driven by computers in the way the industrial
revolution was driven by steam engines. Whether or not computers were a
precondition, they have certainly accelerated it.
The new fluidity of companies changed people's relationships with their
employers. Why climb a corporate ladder that might be yanked out from under
you? Ambitious people started to think of a career less as climbing a single
ladder than as a series of jobs that might be at different companies. More
movement (or even potential movement) between companies introduced more
competition in salaries. Plus as companies became smaller it became easier to
estimate how much an employee contributed to the company's revenue. Both
changes drove salaries toward market price. And since people vary dramatically
in productivity, paying market price meant salaries started to diverge.
By no coincidence it was in the early 1980s that the term "yuppie" was coined.
That word is not much used now, because the phenomenon it describes is so
taken for granted, but at the time it was a label for something novel. Yuppies
were young professionals who made lots of money. To someone in their twenties
today, this wouldn't seem worth naming. Why wouldn't young professionals make
lots of money? But until the 1980s, being underpaid early in your career was
part of what it meant to be a professional. Young professionals were paying
their dues, working their way up the ladder. The rewards would come later.
What was novel about yuppies was that they wanted market price for the work
they were doing now.
The first yuppies did not work for startups. That was still in the future. Nor
did they work for big companies. They were professionals working in fields
like law, finance, and consulting. But their example rapidly inspired their
peers. Once they saw that new BMW 325i, they wanted one too.
Underpaying people at the beginning of their career only works if everyone
does it. Once some employer breaks ranks, everyone else has to, or they can't
get good people. And once started this process spreads through the whole
economy, because at the beginnings of people's careers they can easily switch
not merely employers but industries.
But not all young professionals benefitted. You had to produce to get paid a
lot. It was no coincidence that the first yuppies worked in fields where it
was easy to measure that.
More generally, an idea was returning whose name sounds old-fashioned
precisely because it was so rare for so long: that you could make your
fortune. As in the past there were multiple ways to do it. Some made their
fortunes by creating wealth, and others by playing zero-sum games. But once it
became possible to make one's fortune, the ambitious had to decide whether or
not to. A physicist who chose physics over Wall Street in 1990 was making a
sacrifice that a physicist in 1960 didn't have to think about.
The idea even flowed back into big companies. CEOs of big companies make more
now than they used to, and I think much of the reason is prestige. In 1960,
corporate CEOs had immense prestige. They were the winners of the only
economic game in town. But if they made as little now as they did then, in
real dollar terms, they'd seem like small fry compared to professional
athletes and whiz kids making millions from startups and hedge funds. They
don't like that idea, so now they try to get as much as they can, which is
more than they had been getting. [19]
Meanwhile a similar fragmentation was happening at the other end of the
economic scale. As big companies' oligopolies became less secure, they were
less able to pass costs on to customers and thus less willing to overpay for
labor. And as the Duplo world of a few big blocks fragmented into many
companies of different sizes — some of them overseas — it became harder for
unions to enforce their monopolies. As a result workers' wages also tended
toward market price. Which (inevitably, if unions had been doing their job)
tended to be lower. Perhaps dramatically so, if automation had decreased the
need for some kind of work.
And just as the mid-century model induced social as well as economic cohesion,
its breakup brought social as well as economic fragmentation. People started
to dress and act differently. Those who would later be called the "creative
class" became more mobile. People who didn't care much for religion felt less
pressure to go to church for appearances' sake, while those who liked it a lot
opted for increasingly colorful forms. Some switched from meat loaf to tofu,
and others to Hot Pockets. Some switched from driving Ford sedans to driving
small imported cars, and others to driving SUVs. Kids who went to private
schools or wished they did started to dress "preppy," and kids who wanted to
seem rebellious made a conscious effort to look disreputable. In a hundred
ways people spread apart. [20]
Almost four decades later, fragmentation is still increasing. Has it been net
good or bad? I don't know; the question may be unanswerable. Not entirely bad
though. We take for granted the forms of fragmentation we like, and worry only
about the ones we don't. But as someone who caught the tail end of mid-century
[_conformism_](https://books.google.com/ngrams/graph?content=well-
adjusted&year_start=1800&year_end=2000&corpus=15&smoothing=3), I can tell you
it was no utopia. [21]
My goal here is not to say whether fragmentation has been good or bad, just to
explain why it's happening. With the centripetal forces of total war and 20th
century oligopoly mostly gone, what will happen next? And more specifically,
is it possible to reverse some of the fragmentation we've seen?
If it is, it will have to happen piecemeal. You can't reproduce mid-century
cohesion the way it was originally produced. It would be insane to go to war
just to induce more national unity. And once you understand the degree to
which the economic history of the 20th century was a low-res version 1, it's
clear you can't reproduce that either.
20th century cohesion was something that happened at least in a sense
naturally. The war was due mostly to external forces, and the Duplo economy
was an evolutionary phase. If you want cohesion now, you'd have to induce it
deliberately. And it's not obvious how. I suspect the best we'll be able to do
is address the symptoms of fragmentation. But that may be enough.
The form of fragmentation people worry most about lately is [_economic
inequality_](ineq.html), and if you want to eliminate that you're up against a
truly formidable headwind that has been in operation since the stone age.
Technology.
Technology is a lever. It magnifies work. And the lever not only grows
increasingly long, but the rate at which it grows is itself increasing.
Which in turn means the variation in the amount of wealth people can create
has not only been increasing, but accelerating. The unusual conditions that
prevailed in the mid 20th century masked this underlying trend. The ambitious
had little choice but to join large organizations that made them march in step
with lots of other people — literally in the case of the armed forces,
figuratively in the case of big corporations. Even if the big corporations had
wanted to pay people proportionate to their value, they couldn't have figured
out how. But that constraint has gone now. Ever since it started to erode in
the 1970s, we've seen the underlying forces at work again. [22]
Not everyone who gets rich now does it by creating wealth, certainly. But a
significant number do, and the Baumol Effect means all their peers get dragged
along too. [23] And as long as it's possible to get rich by creating wealth,
the default tendency will be for economic inequality to increase. Even if you
eliminate all the other ways to get rich. You can mitigate this with subsidies
at the bottom and taxes at the top, but unless taxes are high enough to
discourage people from creating wealth, you're always going to be fighting a
losing battle against increasing variation in productivity. [24]
That form of fragmentation, like the others, is here to stay. Or rather, back
to stay. Nothing is forever, but the tendency toward fragmentation should be
more forever than most things, precisely because it's not due to any
particular cause. It's simply a reversion to the mean. When Rockefeller said
individualism was gone, he was right for a hundred years. It's back now, and
that's likely to be true for longer.
I worry that if we don't acknowledge this, we're headed for trouble. If we
think 20th century cohesion disappeared because of a few policy tweaks, we'll
be
deluded into thinking we can get it back (minus the bad parts, somehow) with a
few countertweaks. And then we'll waste our time trying to eliminate
fragmentation, when we'd be better off thinking about how to mitigate its
consequences.
**Notes**
[1] Lester Thurow, writing in 1975, said the wage differentials prevailing at
the end of World War II had become so embedded that they "were regarded as
'just' even after the egalitarian pressures of World War II had disappeared.
Basically, the same differentials exist to this day, thirty years later." But
Goldin and Margo think market forces in the postwar period also helped
preserve the wartime compression of wages — specifically increased demand for
unskilled workers, and oversupply of educated ones.
(Oddly enough, the American custom of having employers pay for health
insurance derives from efforts by businesses to circumvent NWLB wage controls
in order to attract workers.)
[2] As always, tax rates don't tell the whole story. There were lots of
exemptions, especially for individuals. And in World War II the tax codes were
so new that the government had little acquired immunity to tax avoidance. If
the rich paid high taxes during the war it was more because they wanted to
than because they had to.
After the war, federal tax receipts as a percentage of GDP were about the same
as they are now. In fact, for the entire period since the war, tax receipts
have stayed close to 18% of GDP, despite dramatic changes in tax rates. The
lowest point occurred when marginal income tax rates were highest: 14.1% in
1950. Looking at the data, it's hard to avoid the conclusion that tax rates
have had little effect on what people actually paid.
[3] Though in fact the decade preceding the war had been a time of
unprecedented federal power, in response to the Depression. Which is not
entirely a coincidence, because the Depression was one of the causes of the
war. In many ways the New Deal was a sort of dress rehearsal for the measures
the federal government took during wartime. The wartime versions were much
more drastic and more pervasive though. As Anthony Badger wrote, "for many
Americans the decisive change in their experiences came not with the New Deal
but with World War II."
[4] I don't know enough about the origins of the world wars to say, but it's
not inconceivable they were connected to the rise of big corporations. If that
were the case, 20th century cohesion would have a single cause.
[5] More precisely, there was a bimodal economy consisting, in Galbraith's
words, of "the world of the technically dynamic, massively capitalized and
highly organized corporations on the one hand and the hundreds of thousands of
small and traditional proprietors on the other." Money, prestige, and power
were concentrated in the former, and there was near zero crossover.
[6] I wonder how much of the decline in families eating together was due to
the decline in families watching TV together afterward.
[7] I know when this happened because it was the season Dallas premiered.
Everyone else was talking about what was happening on Dallas, and I had no
idea what they meant.
[8] I didn't realize it till I started doing research for this essay, but the
meretriciousness of the products I grew up with is a well-known byproduct of
oligopoly. When companies can't compete on price, they compete on tailfins.
[9] Monroeville Mall was at the time of its completion in 1969 the largest in
the country. In the late 1970s the movie _Dawn of the Dead_ was shot there.
Apparently the mall was not just the location of the movie, but its
inspiration; the crowds of shoppers drifting through this huge mall reminded
George Romero of zombies. My first job was scooping ice cream in the Baskin-
Robbins.
[10] Labor unions were exempted from antitrust laws by the Clayton Antitrust
Act in 1914 on the grounds that a person's work is not "a commodity or article
of commerce." I wonder if that means service companies are also exempt.
[11] The relationships between unions and unionized companies can even be
symbiotic, because unions will exert political pressure to protect their
hosts. According to Michael Lind, when politicians tried to attack the A&P
supermarket chain because it was putting local grocery stores out of business,
"A&P successfully defended itself by allowing the unionization of its
workforce in 1938, thereby gaining organized labor as a constituency." I've
seen this phenomenon myself: hotel unions are responsible for more of the
political pressure against Airbnb than hotel companies.
[12] Galbraith was clearly puzzled that corporate executives would work so
hard to make money for other people (the shareholders) instead of themselves.
He devoted much of _The New Industrial State_ to trying to figure this out.
His theory was that professionalism had replaced money as a motive, and that
modern corporate executives were, like (good) scientists, motivated less by
financial rewards than by the desire to do good work and thereby earn the
respect of their peers. There is something in this, though I think lack of
movement between companies combined with self-interest explains much of
observed behavior.
[13] Galbraith (p. 94) says a 1952 study of the 800 highest paid executives at
300 big corporations found that three quarters of them had been with their
company for more than 20 years.
[14] It seems likely that in the first third of the 20th century executive
salaries were low partly because companies then were more dependent on banks,
who would have disapproved if executives got too much. This was certainly true
in the beginning. The first big company CEOs were J. P. Morgan's hired hands.
Companies didn't start to finance themselves with retained earnings till the
1920s. Till then they had to pay out their earnings in dividends, and so
depended on banks for capital for expansion. Bankers continued to sit on
corporate boards till the Glass-Steagall act in 1933.
By mid-century big companies funded 3/4 of their growth from earnings. But the
early years of bank dependence, reinforced by the financial controls of World
War II, must have had a big effect on social conventions about executive
salaries. So it may be that the lack of movement between companies was as much
the effect of low salaries as the cause.
Incidentally, the switch in the 1920s to financing growth with retained
earnings was one cause of the 1929 crash. The banks now had to find someone
else to lend to, so they made more margin loans.
[15] Even now it's hard to get them to. One of the things I find hardest to
get into the heads of would-be startup founders is how important it is to do
certain kinds of menial work early in the life of a company. Doing [_things
that don't scale_](ds.html) is to how Henry Ford got started as a high-fiber
diet is to the traditional peasant's diet: they had no choice but to do the
right thing, while we have to make a conscious effort.
[16] Founders weren't celebrated in the press when I was a kid. "Our founder"
meant a photograph of a severe-looking man with a walrus mustache and a wing
collar who had died decades ago. The thing to be when I was a kid was an
_executive_. If you weren't around then it's hard to grasp the cachet that
term had. The fancy version of everything was called the "executive" model.
[17] The wave of hostile takeovers in the 1980s was enabled by a combination
of circumstances: court decisions striking down state anti-takeover laws,
starting with the Supreme Court's 1982 decision in Edgar v. MITE Corp.; the
Reagan administration's comparatively sympathetic attitude toward takeovers;
the Depository Institutions Act of 1982, which allowed banks and savings and
loans to buy corporate bonds; a new SEC rule issued in 1982 (rule 415) that
made it possible to bring corporate bonds to market faster; the creation of
the junk bond business by Michael Milken; a vogue for conglomerates in the
preceding period that caused many companies to be combined that never should
have been; a decade of inflation that left many public companies trading below
the value of their assets; and not least, the increasing complacency of
managements.
[18] Foster, Richard. "Creative Destruction Whips through Corporate America."
Innosight, February 2012.
[19] CEOs of big companies may be overpaid. I don't know enough about big
companies to say. But it is certainly not impossible for a CEO to make 200x as
much difference to a company's revenues as the average employee. Look at what
Steve Jobs did for Apple when he came back as CEO. It would have been a good
deal for the board to give him 95% of the company. Apple's market cap the day
Steve came back in July 1997 was 1.73 billion. 5% of Apple now (January 2016)
would be worth about 30 billion. And it would not be if Steve hadn't come
back; Apple probably wouldn't even exist anymore.
Merely including Steve in the sample might be enough to answer the question of
whether public company CEOs in the aggregate are overpaid. And that is not as
facile a trick as it might seem, because the broader your holdings, the more
the aggregate is what you care about.
[20] The late 1960s were famous for social upheaval. But that was more
rebellion (which can happen in any era if people are provoked sufficiently)
than fragmentation. You're not seeing fragmentation unless you see people
breaking off to both left and right.
[21] Globally the trend has been in the other direction. While the US is
becoming more fragmented, the world as a whole is becoming less fragmented,
and mostly in good ways.
[22] There were a handful of ways to make a fortune in the mid 20th century.
The main one was drilling for oil, which was open to newcomers because it was
not something big companies could dominate through economies of scale. How did
individuals accumulate large fortunes in an era of such high taxes? Giant tax
loopholes defended by two of the most powerful men in Congress, Sam Rayburn
and Lyndon Johnson.
But becoming a Texas oilman was not in 1950 something one could aspire to the
way starting a startup or going to work on Wall Street were in 2000, because
(a) there was a strong local component and (b) success depended so much on
luck.
[23] The Baumol Effect induced by startups is very visible in Silicon Valley.
Google will pay people millions of dollars a year to keep them from leaving to
start or join startups.
[24] I'm not claiming variation in productivity is the only cause of economic
inequality in the US. But it's a significant cause, and it will become as big
a cause as it needs to, in the sense that if you ban other ways to get rich,
people who want to get rich will use this route instead.
**Thanks** to Sam Altman, Trevor Blackwell, Paul Buchheit, Patrick Collison,
Ron Conway, Chris Dixon, Benedict Evans, Richard Florida, Ben Horowitz,
Jessica Livingston, Robert Morris, Tim O'Reilly, Geoff Ralston, Max Roser,
Alexia Tsotsis, and Qasar Younis for reading drafts of this. Max also told me
about several valuable sources.
**Bibliography**
Allen, Frederick Lewis. _The Big Change_. Harper, 1952.
Averitt, Robert. _The Dual Economy_. Norton, 1968.
Badger, Anthony. _The New Deal_. Hill and Wang, 1989.
Bainbridge, John. _The Super-Americans_. Doubleday, 1961.
Beatty, Jack. _Colossus_. Broadway, 2001.
Brinkley, Douglas. _Wheels for the World_. Viking, 2003.
Brownlee, W. Elliot. _Federal Taxation in America_. Cambridge, 1996.
Chandler, Alfred. _The Visible Hand_. Harvard, 1977.
Chernow, Ron. _The House of Morgan_. Simon & Schuster, 1990.
Chernow, Ron. _Titan: The Life of John D. Rockefeller_. Random House, 1998.
Galbraith, John. _The New Industrial State_. Houghton Mifflin, 1967.
Goldin, Claudia and Robert A. Margo. "The Great Compression: The Wage
Structure in the United States at Mid-Century." NBER Working Paper 3817, 1991.
Gordon, John. _An Empire of Wealth_. HarperCollins, 2004.
Klein, Maury. _The Genesis of Industrial America, 1870-1920_. Cambridge, 2007.
Lind, Michael. _Land of Promise_. HarperCollins, 2012.
Micklethwait, John, and Adrian Wooldridge. _The Company_. Modern Library,
2003.
Nasaw, David. _Andrew Carnegie_. Penguin, 2006.
Sobel, Robert. _The Age of Giant Corporations_. Praeger, 1993.
Thurow, Lester. _Generating Inequality: Mechanisms of Distribution_. Basic
Books, 1975.
Witte, John. _The Politics and Development of the Federal Income Tax_.
Wisconsin, 1985.
|
September 2004
_(This essay is derived from an invited talk at ICFP 2004.)_
I had a front row seat for the Internet Bubble, because I worked at Yahoo
during 1998 and 1999. One day, when the stock was trading around $200, I sat
down and calculated what I thought the price should be. The answer I got was
$12. I went to the next cubicle and told my friend Trevor. "Twelve!" he said.
He tried to sound indignant, but he didn't quite manage it. He knew as well as
I did that our valuation was crazy.
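One standard sanity check of that kind multiplies earnings per share by a
sane price-to-earnings multiple and compares the result to the market price.
Here's a minimal sketch; the text doesn't show the actual calculation, and
both inputs below are invented purely to make the shape of the check clear:

```python
# Back-of-the-envelope valuation: fair price = EPS x P/E.
# Hypothetical inputs, not Yahoo's real 1999 figures.

eps = 0.30        # assumed earnings per share, in dollars
generous_pe = 40  # a high but defensible price-to-earnings multiple

fair_price = eps * generous_pe  # -> $12 under these assumptions
market_price = 200.0            # roughly where the stock was trading
print(f"fair price ~${fair_price:.0f}, market ${market_price:.0f}: "
      f"about {market_price / fair_price:.0f}x overvalued")
```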
Yahoo was a special case. It was not just our price to earnings ratio that was
bogus. Half our earnings were too. Not in the Enron way, of course. The
finance guys seemed scrupulous about reporting earnings. What made our
earnings bogus was that Yahoo was, in effect, the center of a Ponzi scheme.
Investors looked at Yahoo's earnings and said to themselves, here is proof
that Internet companies can make money. So they invested in new startups that
promised to be the next Yahoo. And as soon as these startups got the money,
what did they do with it? Buy millions of dollars worth of advertising on
Yahoo to promote their brand. Result: a capital investment in a startup this
quarter shows up as Yahoo earnings next quarter—stimulating another round of
investments in startups.
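To see how recycled capital can masquerade as growth, here's a toy
simulation of that loop. All three parameters are invented for illustration;
none are Yahoo's real figures:

```python
# Toy model of the feedback loop described above. Each quarter, investors
# fund new startups in proportion to Yahoo's last reported earnings, and
# those startups spend part of the money on Yahoo ads, which shows up as
# the next quarter's "earnings."

REAL_EARNINGS = 10.0    # genuine quarterly earnings, $M (invented)
FUNDING_MULTIPLE = 3.0  # VC dollars raised per dollar of reported earnings (invented)
AD_FRACTION = 0.4       # share of startup funding spent on Yahoo ads (invented)

recycled = 0.0
for quarter in range(1, 9):
    reported = REAL_EARNINGS + recycled
    print(f"Q{quarter}: reported ${reported:6.1f}M, "
          f"of which ${recycled:6.1f}M is recycled capital")
    # Because FUNDING_MULTIPLE * AD_FRACTION > 1, the recycled component
    # compounds even though the real business is flat.
    recycled = AD_FRACTION * FUNDING_MULTIPLE * reported
```

Under these assumptions reported earnings grow every quarter while the real
business stays flat; set the funding multiple to zero and they fall straight
back to the real $10M, which is the crash.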
As in a Ponzi scheme, what seemed to be the returns of this system were simply
the latest round of investments in it. What made it not a Ponzi scheme was
that it was unintentional. At least, I think it was. The venture capital
business is pretty incestuous, and there were presumably people in a position,
if not to create this situation, to realize what was happening and to milk it.
A year later the game was up. Starting in January 2000, Yahoo's stock price
began to crash, ultimately losing 95% of its value.
Notice, though, that even with all the fat trimmed off its market cap, Yahoo
was still worth a lot. Even at the morning-after valuations of March and April
2001, the people at Yahoo had managed to create a company worth about $8
billion in just six years.
The fact is, despite all the nonsense we heard during the Bubble about the
"new economy," there was a core of truth. You need that to get a really big
bubble: you need to have something solid at the center, so that even smart
people are sucked in. (Isaac Newton and Jonathan Swift both lost money in the
South Sea Bubble of 1720.)
Now the pendulum has swung the other way. Now anything that became fashionable
during the Bubble is ipso facto unfashionable. But that's a mistake—an even
bigger mistake than believing what everyone was saying in 1999. Over the long
term, what the Bubble got right will be more important than what it got wrong.
**1\. Retail VC**
After the excesses of the Bubble, it's now considered dubious to take
companies public before they have earnings. But there is nothing intrinsically
wrong with that idea. Taking a company public at an early stage is simply
retail VC: instead of going to venture capital firms for the last round of
funding, you go to the public markets.
By the end of the Bubble, companies going public with no earnings were being
derided as "concept stocks," as if it were inherently stupid to invest in
them. But investing in concepts isn't stupid; it's what VCs do, and the best
of them are far from stupid.
The stock of a company that doesn't yet have earnings is worth _something._ It
may take a while for the market to learn how to value such companies, just as
it had to learn to value common stocks in the early 20th century. But markets
are good at solving that kind of problem. I wouldn't be surprised if the
market ultimately did a better job than VCs do now.
Going public early will not be the right plan for every company. And it can of
course be disruptive—by distracting the management, or by making the early
employees suddenly rich. But just as the market will learn how to value
startups, startups will learn how to minimize the damage of going public.
**2\. The Internet**
The Internet genuinely is a big deal. That was one reason even smart people
were fooled by the Bubble. Obviously it was going to have a huge effect.
Enough of an effect to triple the value of Nasdaq companies in two years? No,
as it turned out. But it was hard to say for certain at the time. [1]
The same thing happened during the Mississippi and South Sea Bubbles. What
drove them was the invention of organized public finance (the South Sea
Company, despite its name, was really a competitor of the Bank of England).
And that did turn out to be a big deal, in the long run.
Recognizing an important trend turns out to be easier than figuring out how to
profit from it. The mistake investors always seem to make is to take the trend
too literally. Since the Internet was the big new thing, investors supposed
that the more Internettish the company, the better. Hence such parodies as
Pets.Com.
In fact most of the money to be made from big trends is made indirectly. It
was not the railroads themselves that made the most money during the railroad
boom, but the companies on either side, like Carnegie's steelworks, which made
the rails, and Standard Oil, which used railroads to get oil to the East
Coast, where it could be shipped to Europe.
I think the Internet will have great effects, and that what we've seen so far
is nothing compared to what's coming. But most of the winners will only
indirectly be Internet companies; for every Google there will be ten JetBlues.
**3\. Choices**
Why will the Internet have great effects? The general argument is that new
forms of communication always do. They happen rarely (till industrial times
there were just speech, writing, and printing), but when they do, they always
cause a big splash.
The specific argument, or one of them, is that the Internet gives us more
choices.
In the "old" economy, the high cost of presenting information to people meant
they had only a narrow range of options to choose from. The tiny, expensive
pipeline to consumers was tellingly named "the channel." Control the channel
and you could feed them what you wanted, on your terms. And it was not just
big corporations that depended on this principle. So, in their way, did labor
unions, the traditional news media, and the art and literary establishments.
Winning depended not on doing good work, but on gaining control of some
bottleneck.
There are signs that this is changing. Google has over 82 million unique users
a month and annual revenues of about three billion dollars. [2] And yet have
you ever seen a Google ad? Something is going on here.
Admittedly, Google is an extreme case. It's very easy for people to switch to
a new search engine. It costs little effort and no money to try a new one, and
it's easy to see if the results are better. And so Google doesn't _have_ to
advertise. In a business like theirs, being the best is enough.
The exciting thing about the Internet is that it's shifting everything in that
direction. The hard part, if you want to win by making the best stuff, is the
beginning. Eventually everyone will learn by word of mouth that you're the
best, but how do you survive to that point? And it is in this crucial stage
that the Internet has the most effect. First, the Internet lets anyone find
you at almost zero cost. Second, it dramatically speeds up the rate at which
reputation spreads by word of mouth. Together these mean that in many fields
the rule will be: Build it, and they will come. Make something great and put
it online. That is a big change from the recipe for winning in the past
century.
**4\. Youth**
The aspect of the Internet Bubble that the press seemed most taken with was
the youth of some of the startup founders. This too is a trend that will last.
There is a huge standard deviation among 26 year olds. Some are fit only for
entry level jobs, but others are ready to rule the world if they can find
someone to handle the paperwork for them.
A 26 year old may not be very good at managing people or dealing with the SEC.
Those require experience. But those are also commodities, which can be handed
off to some lieutenant. The most important quality in a CEO is his vision for
the company's future. What will they build next? And in that department, there
are 26 year olds who can compete with anyone.
In 1970 a company president meant someone in his fifties, at least. If he had
technologists working for him, they were treated like a racing stable: prized,
but not powerful. But as technology has grown more important, the power of
nerds has grown to reflect it. Now it's not enough for a CEO to have someone
smart he can ask about technical matters. Increasingly, he has to be that
person himself.
As always, business has clung to old forms. VCs still seem to want to install
a legitimate-looking talking head as the CEO. But increasingly the founders of
the company are the real powers, and the grey-headed man installed by the VCs
is more like a music group's manager than a general.
**5\. Informality**
In New York, the Bubble had dramatic consequences: suits went out of fashion.
They made one seem old. So in 1998 powerful New York types were suddenly
wearing open-necked shirts and khakis and oval wire-rimmed glasses, just like
guys in Santa Clara.
The pendulum has swung back a bit, driven in part by a panicked reaction by
the clothing industry. But I'm betting on the open-necked shirts. And this is
not as frivolous a question as it might seem. Clothes are important, as all
nerds can sense, though they may not realize it consciously.
If you're a nerd, you can understand how important clothes are by asking
yourself how you'd feel about a company that made you wear a suit and tie to
work. The idea sounds horrible, doesn't it? In fact, horrible far out of
proportion to the mere discomfort of wearing such clothes. A company that made
programmers wear suits would have something deeply wrong with it.
And what would be wrong would be that how one presented oneself counted more
than the quality of one's ideas. _That's_ the problem with formality. Dressing
up is not so much bad in itself. The problem is the receptor it binds to:
dressing up is inevitably a substitute for good ideas. It is no coincidence
that technically inept business types are known as "suits."
Nerds don't just happen to dress informally. They do it too consistently.
Consciously or not, they dress informally as a prophylactic measure against
stupidity.
**6\. Nerds**
Clothing is only the most visible battleground in the war against formality.
Nerds tend to eschew formality of any sort. They're not impressed by one's job
title, for example, or any of the other appurtenances of authority.
Indeed, that's practically the definition of a nerd. I found myself talking
recently to someone from Hollywood who was planning a show about nerds. I
thought it would be useful if I explained what a nerd was. What I came up with
was: someone who doesn't expend any effort on marketing himself.
A nerd, in other words, is someone who concentrates on substance. So what's
the connection between nerds and technology? Roughly that you can't fool
mother nature. In technical matters, you have to get the right answers. If
your software miscalculates the path of a space probe, you can't finesse your
way out of trouble by saying that your code is patriotic, or avant-garde, or
any of the other dodges people use in nontechnical fields.
And as technology becomes increasingly important in the economy, nerd culture
is [rising](nerdad.html) with it. Nerds are already a lot cooler than they
were when I was a kid. When I was in college in the mid-1980s, "nerd" was
still an insult. People who majored in computer science generally tried to
conceal it. Now women ask me where they can meet nerds. (The answer that
springs to mind is "Usenix," but that would be like drinking from a firehose.)
I have no illusions about why nerd culture is becoming more accepted. It's not
because people are realizing that substance is more important than marketing.
It's because the nerds are getting rich. But that is not going to change.
**7\. Options**
What makes the nerds rich, usually, is stock options. Now there are moves
afoot to make it harder for companies to grant options. To the extent there's
some genuine accounting abuse going on, by all means correct it. But don't
kill the golden goose. Equity is the fuel that drives technical innovation.
Options are a good idea because (a) they're fair, and (b) they work. Someone
who goes to work for a company is (one hopes) adding to its value, and it's
only fair to give them a share of it. And as a purely practical measure,
people work a _lot_ harder when they have options. I've seen that first hand.
The fact that a few crooks during the Bubble robbed their companies by
granting themselves options doesn't mean options are a bad idea. During the
railroad boom, some executives enriched themselves by selling watered stock—by
issuing more shares than they said were outstanding. But that doesn't make
common stock a bad idea. Crooks just use whatever means are available.
If there is a problem with options, it's that they reward slightly the wrong
thing. Not surprisingly, people do what you pay them to. If you pay them by
the hour, they'll work a lot of hours. If you pay them by the volume of work
done, they'll get a lot of work done (but only as you defined work). And if
you pay them to raise the stock price, which is what options amount to,
they'll raise the stock price.
But that's not quite what you want. What you want is to increase the actual
value of the company, not its market cap. Over time the two inevitably meet,
but not always as quickly as options vest. Which means options tempt
employees, if only unconsciously, to "pump and dump"—to do things that will
make the company _seem_ valuable. I found that when I was at Yahoo, I couldn't
help thinking, "how will this sound to investors?" when I should have been
thinking "is this a good idea?"
So maybe the standard option deal needs to be tweaked slightly. Maybe options
should be replaced with something tied more directly to earnings. It's still
early days.
**8\. Startups**
What made the options valuable, for the most part, is that they were options
on the stock of [startups](start.html). Startups were not of course a creation
of the Bubble, but they were more visible during the Bubble than ever before.
One thing most people did learn about for the first time during the Bubble was
the startup created with the intention of selling it. Originally a startup
meant a small company that hoped to grow into a big one. But increasingly
startups are evolving into a vehicle for developing technology on spec.
As I wrote in [Hackers & Painters](hackpaint.html), employees seem to be most
productive when they're paid in proportion to the wealth they generate. And
the advantage of a startup—indeed, almost its raison d'etre—is that it offers
something otherwise impossible to obtain: a way of _measuring_ that.
In many businesses, it just makes more sense for companies to get technology
by buying startups rather than developing it in house. You pay more, but there
is less risk, and risk is what big companies don't want. It makes the guys
developing the technology more accountable, because they only get paid if they
build the winner. And you end up with better technology, created faster,
because things are made in the innovative atmosphere of startups instead of
the bureaucratic atmosphere of big companies.
Our startup, Viaweb, was built to be sold. We were open with investors about
that from the start. And we were careful to create something that could slot
easily into a larger company. That is the pattern for the future.
**9\. California**
The Bubble was a California phenomenon. When I showed up in Silicon Valley in
1998, I felt like an immigrant from Eastern Europe arriving in America in
1900. Everyone was so cheerful and healthy and rich. It seemed a new and
improved world.
The press, ever eager to exaggerate small trends, now gives one the impression
that Silicon Valley is a ghost town. Not at all. When I drive down 101 from
the airport, I still feel a buzz of energy, as if there were a giant
transformer nearby. Real estate is still more expensive than just about
anywhere else in the country. The people still look healthy, and the weather
is still fabulous. The future is there. (I say "there" because I moved back to
the East Coast after Yahoo. I still wonder if this was a smart idea.)
What makes the Bay Area superior is the attitude of the people. I notice that
when I come home to Boston. The first thing I see when I walk out of the
airline terminal is the fat, grumpy guy in charge of the taxi line. I brace
myself for rudeness: _remember, you're back on the East Coast now._
The atmosphere varies from city to city, and fragile organisms like startups
are exceedingly sensitive to such variation. If it hadn't already been
hijacked as a new euphemism for liberal, the word to describe the atmosphere
in the Bay Area would be "progressive." People there are trying to build the
future. Boston has MIT and Harvard, but it also has a lot of truculent,
unionized employees like the police who recently held the Democratic National
Convention for
[ransom](http://www.usatoday.com/news/politicselections/nation/president/2004-04-30-boston-police-convention_x.htm), and a lot of people trying to be Thurston Howell.
Two sides of an obsolete coin.
Silicon Valley may not be the next Paris or London, but it is at least the
next Chicago. For the next fifty years, that's where new wealth will come
from.
**10\. Productivity**
During the Bubble, optimistic analysts used to justify high price to earnings
ratios by saying that technology was going to increase productivity
dramatically. They were wrong about the specific companies, but not so wrong
about the underlying principle. I think one of the big trends we'll see in the
coming century is a huge increase in productivity.
Or more precisely, a huge increase in [variation](gh.html) in productivity.
Technology is a lever. It doesn't add; it multiplies. If the present range of
productivity is 0 to 100, introducing a multiple of 10 increases the range
from 0 to 1000.
One upshot of which is that the companies of the future may be surprisingly
small. I sometimes daydream about how big you could grow a company (in
revenues) without ever having more than ten people. What would happen if you
outsourced everything except product development? If you tried this
experiment, I think you'd be surprised at how far you could get. As Fred
Brooks pointed out, small groups are intrinsically more productive, because
the internal friction in a group grows as the square of its size.
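To put a number on that, here is a minimal sketch in Python. It assumes, purely for illustration, that the friction Brooks had in mind tracks the number of pairwise communication channels in a team, n(n-1)/2; the team sizes are arbitrary examples, not figures from the essay.

```python
# Minimal sketch: if "friction" is the number of pairwise communication
# channels, it grows as n(n-1)/2, i.e. roughly the square of team size.
# The team sizes below are arbitrary examples.

def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in [2, 10, 50, 250]:
    print(f"{n:4d} people -> {channels(n):6d} channels "
          f"({channels(n) / n:6.1f} per person)")
```

Going from 10 people to 250 multiplies headcount by 25 but channels by almost 700, which is the quadratic penalty in miniature.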
Till quite recently, running a major company meant managing an army of
workers. Our standards about how many employees a company should have are
still influenced by old patterns. Startups are perforce small, because they
can't afford to hire a lot of people. But I think it's a big mistake for
companies to loosen their belts as revenues increase. The question is not
whether you can afford the extra salaries. Can you afford the loss in
productivity that comes from making the company bigger?
The prospect of technological leverage will of course raise the specter of
unemployment. I'm surprised people still worry about this. After centuries of
supposedly job-killing innovations, the number of jobs is within ten percent
of the number of people who want them. This can't be a coincidence. There must
be some kind of balancing mechanism.
**What's New**
When one looks over these trends, is there any overall theme? There does seem
to be: that in the coming century, good ideas will count for more. That 26
year olds with good ideas will increasingly have an edge over 50 year olds
with powerful connections. That doing good work will matter more than dressing
up—or advertising, which is the same thing for companies. That people will be
rewarded a bit more in proportion to the value of what they create.
If so, this is good news indeed. Good ideas always tend to win eventually. The
problem is, it can take a very long time. It took decades for relativity to be
accepted, and the greater part of a century to establish that central planning
didn't work. So even a small increase in the rate at which good ideas win
would be a momentous change—big enough, probably, to justify a name like the
"new economy."
**Notes**
[1] Actually it's hard to say now. As Jeremy Siegel points out, if the value
of a stock is its future earnings, you can't tell if it was overvalued till
you see what the earnings turn out to be. While certain famous Internet stocks
were almost certainly overvalued in 1999, it is still hard to say for sure
whether, e.g., the Nasdaq index was.
Siegel, Jeremy J. "What Is an Asset Price Bubble? An Operational Definition."
_European Financial Management,_ 9:1, 2003.
[2] The number of users comes from a 6/03 Nielsen study quoted on Google's
site. (You'd think they'd have something more recent.) The revenue estimate is
based on revenues of $1.35 billion for the first half of 2004, as reported in
their IPO filing.
**Thanks** to Chris Anderson, Trevor Blackwell, Sarah Harlin, Jessica
Livingston, and Robert Morris for reading drafts of this.
March 2009
A couple days ago I finally got being a good startup founder down to two
words: relentlessly resourceful.
Till then the best I'd managed was to get the opposite quality down to one:
hapless. Most dictionaries say hapless means unlucky. But the dictionaries are
not doing a very good job. A team that outplays its opponents but loses
because of a bad decision by the referee could be called unlucky, but not
hapless. Hapless implies passivity. To be hapless is to be battered by
circumstances — to let the world have its way with you, instead of having your
way with the world. [1]
Unfortunately there's no antonym of hapless, which makes it difficult to tell
founders what to aim for. "Don't be hapless" is not much of a rallying cry.
It's not hard to express the quality we're looking for in metaphors. The best
is probably a running back. A good running back is not merely determined, but
flexible as well. They want to get downfield, but they adapt their plans on
the fly.
Unfortunately this is just a metaphor, and not a useful one to most people
outside the US. "Be like a running back" is no better than "Don't be hapless."
But finally I've figured out how to express this quality directly. I was
writing a talk for [investors](angelinvesting.html), and I had to explain what
to look for in founders. What would someone who was the opposite of hapless be
like? They'd be relentlessly resourceful. Not merely relentless. That's not
enough to make things go your way except in a few mostly uninteresting
domains. In any interesting domain, the difficulties will be novel. Which
means you can't simply plow through them, because you don't know initially how
hard they are; you don't know whether you're about to plow through a block of
foam or granite. So you have to be resourceful. You have to keep trying new
things.
Be relentlessly resourceful.
That sounds right, but is it simply a description of how to be successful in
general? I don't think so. This isn't the recipe for success in writing or
painting, for example. In that kind of work the recipe is more to be actively
curious. Resourceful implies the obstacles are external, which they generally
are in startups. But in writing and painting they're mostly internal; the
obstacle is your own obtuseness. [2]
There probably are other fields where "relentlessly resourceful" is the recipe
for success. But though other fields may share it, I think this is the best
short description we'll find of what makes a good startup founder. I doubt it
could be made more precise.
Now that we know what we're looking for, that leads to other questions. For
example, can this quality be taught? After four years of trying to teach it to
people, I'd say that yes, surprisingly often it can. Not to everyone, but to
many people. [3] Some people are just constitutionally passive, but others
have a latent ability to be relentlessly resourceful that only needs to be
brought out.
This is particularly true of young people who have till now always been under
the thumb of some kind of authority. Being relentlessly resourceful is
definitely not the recipe for success in big companies, or in most schools. I
don't even want to think what the recipe is in big companies, but it is
certainly longer and messier, involving some combination of resourcefulness,
obedience, and building alliances.
Identifying this quality also brings us closer to answering a question people
often wonder about: how many startups there could be. There is not, as some
people seem to think, any economic upper bound on this number. There's no
reason to believe there is any limit on the amount of newly created wealth
consumers can absorb, any more than there is a limit on the number of theorems
that can be proven. So probably the limiting factor on the number of startups
is the pool of potential founders. Some people would make good founders, and
others wouldn't. And now that we can say what makes a good founder, we know
how to put an upper bound on the size of the pool.
This test is also useful to individuals. If you want to know whether you're
the right sort of person to start a startup, ask yourself whether you're
relentlessly resourceful. And if you want to know whether to recruit someone
as a cofounder, ask if they are.
You can even use it tactically. If I were running a startup, this would be the
phrase I'd tape to the mirror. "Make something people want" is the
destination, but "Be relentlessly resourceful" is how you get there.
**Notes**
[1] I think the reason the dictionaries are wrong is that the meaning of the
word has shifted. No one writing a dictionary from scratch today would say
that hapless meant unlucky. But a couple hundred years ago they might have.
People were more at the mercy of circumstances in the past, and as a result a
lot of the words we use for good and bad outcomes have origins in words about
luck.
When I was living in Italy, I was once trying to tell someone that I hadn't
had much success in doing something, but I couldn't think of the Italian word
for success. I spent some time trying to describe the word I meant. Finally
she said "Ah! Fortuna!"
[2] There are aspects of startups where the recipe is to be actively curious.
There can be times when what you're doing is almost pure discovery.
Unfortunately these times are a small proportion of the whole. On the other
hand, they are in research too.
[3] I'd almost say to most people, but I realize (a) I have no idea what most
people are like, and (b) I'm pathologically optimistic about people's ability
to change.
**Thanks** to Trevor Blackwell and Jessica Livingston for reading drafts of
this.
July 2023
If you collected lists of techniques for doing great work in a lot of
different fields, what would the intersection look like? I decided to find out
by making it.
Partly my goal was to create a guide that could be used by someone working in
any field. But I was also curious about the shape of the intersection. And one
thing this exercise shows is that it does have a definite shape; it's not just
a point labelled "work hard."
The following recipe assumes you're very ambitious.
The first step is to decide what to work on. The work you choose needs to have
three qualities: it has to be something you have a natural aptitude for, that
you have a deep interest in, and that offers scope to do great work.
In practice you don't have to worry much about the third criterion. Ambitious
people are if anything already too conservative about it. So all you need to
do is find something you have an aptitude for and great interest in. [1]
That sounds straightforward, but it's often quite difficult. When you're young
you don't know what you're good at or what different kinds of work are like.
Some kinds of work you end up doing may not even exist yet. So while some
people know what they want to do at 14, most have to figure it out.
The way to figure out what to work on is by working. If you're not sure what
to work on, guess. But pick something and get going. You'll probably guess
wrong some of the time, but that's fine. It's good to know about multiple
things; some of the biggest discoveries come from noticing connections between
different fields.
Develop a habit of working on your own projects. Don't let "work" mean
something other people tell you to do. If you do manage to do great work one
day, it will probably be on a project of your own. It may be within some
bigger project, but you'll be driving your part of it.
What should your projects be? Whatever seems to you excitingly ambitious. As
you grow older and your taste in projects evolves, exciting and important will
converge. At 7 it may seem excitingly ambitious to build huge things out of
Lego, then at 14 to teach yourself calculus, till at 21 you're starting to
explore unanswered questions in physics. But always preserve excitingness.
There's a kind of excited curiosity that's both the engine and the rudder of
great work. It will not only drive you, but if you let it have its way, will
also show you what to work on.
What are you excessively curious about — curious to a degree that would bore
most other people? That's what you're looking for.
Once you've found something you're excessively interested in, the next step is
to learn enough about it to get you to one of the frontiers of knowledge.
Knowledge expands fractally, and from a distance its edges look smooth, but
once you learn enough to get close to one, they turn out to be full of gaps.
The next step is to notice them. This takes some skill, because your brain
wants to ignore such gaps in order to make a simpler model of the world. Many
discoveries have come from asking questions about things that everyone else
took for granted. [2]
If the answers seem strange, so much the better. Great work often has a
tincture of strangeness. You see this from painting to math. It would be
affected to try to manufacture it, but if it appears, embrace it.
Boldly chase outlier ideas, even if other people aren't interested in them —
in fact, especially if they aren't. If you're excited about some possibility
that everyone else ignores, and you have enough expertise to say precisely
what they're all overlooking, that's as good a bet as you'll find. [3]
Four steps: choose a field, learn enough to get to the frontier, notice gaps,
explore promising ones. This is how practically everyone who's done great work
has done it, from painters to physicists.
Steps two and four will require hard work. It may not be possible to prove
that you have to work hard to do great things, but the empirical evidence is
on the scale of the evidence for mortality. That's why it's essential to work
on something you're deeply interested in. Interest will drive you to work
harder than mere diligence ever could.
The three most powerful motives are curiosity, delight, and the desire to do
something impressive. Sometimes they converge, and that combination is the
most powerful of all.
The big prize is to discover a new fractal bud. You notice a crack in the
surface of knowledge, pry it open, and there's a whole world inside.
Let's talk a little more about the complicated business of figuring out what
to work on. The main reason it's hard is that you can't tell what most kinds
of work are like except by doing them. Which means the four steps overlap: you
may have to work at something for years before you know how much you like it
or how good you are at it. And in the meantime you're not doing, and thus not
learning about, most other kinds of work. So in the worst case you choose late
based on very incomplete information. [4]
The nature of ambition exacerbates this problem. Ambition comes in two forms,
one that precedes interest in the subject and one that grows out of it. Most
people who do great work have a mix, and the more you have of the former, the
harder it will be to decide what to do.
The educational systems in most countries pretend it's easy. They expect you
to commit to a field long before you could know what it's really like. And as
a result an ambitious person on an optimal trajectory will often read to the
system as an instance of breakage.
It would be better if they at least admitted it — if they admitted that the
system not only can't do much to help you figure out what to work on, but is
designed on the assumption that you'll somehow magically guess as a teenager.
They don't tell you, but I will: when it comes to figuring out what to work
on, you're on your own. Some people get lucky and do guess correctly, but the
rest will find themselves scrambling diagonally across tracks laid down on the
assumption that everyone does.
What should you do if you're young and ambitious but don't know what to work
on? What you should _not_ do is drift along passively, assuming the problem
will solve itself. You need to take action. But there is no systematic
procedure you can follow. When you read biographies of people who've done
great work, it's remarkable how much luck is involved. They discover what to
work on as a result of a chance meeting, or by reading a book they happen to
pick up. So you need to make yourself a big target for luck, and the way to do
that is to be curious. Try lots of things, meet lots of people, read lots of
books, ask lots of questions. [5]
When in doubt, optimize for interestingness. Fields change as you learn more
about them. What mathematicians do, for example, is very different from what
you do in high school math classes. So you need to give different types of
work a chance to show you what they're like. But a field should become
_increasingly_ interesting as you learn more about it. If it doesn't, it's
probably not for you.
Don't worry if you find you're interested in different things than other
people. The stranger your tastes in interestingness, the better. Strange
tastes are often strong ones, and a strong taste for work means you'll be
productive. And you're more likely to find new things if you're looking where
few have looked before.
One sign that you're suited for some kind of work is when you like even the
parts that other people find tedious or frightening.
But fields aren't people; you don't owe them any loyalty. If in the course of
working on one thing you discover another that's more exciting, don't be
afraid to switch.
If you're making something for people, make sure it's something they actually
want. The best way to do this is to make something you yourself want. Write
the story you want to read; build the tool you want to use. Since your friends
probably have similar interests, this will also get you your initial audience.
This _should_ follow from the excitingness rule. Obviously the most exciting
story to write will be the one you want to read. The reason I mention this
case explicitly is that so many people get it wrong. Instead of making what
they want, they try to make what some imaginary, more sophisticated audience
wants. And once you go down that route, you're lost. [6]
There are a lot of forces that will lead you astray when you're trying to
figure out what to work on. Pretentiousness, fashion, fear, money, politics,
other people's wishes, eminent frauds. But if you stick to what you find
genuinely interesting, you'll be proof against all of them. If you're
interested, you're not astray.
Following your interests may sound like a rather passive strategy, but in
practice it usually means following them past all sorts of obstacles. You
usually have to risk rejection and failure. So it does take a good deal of
boldness.
But while you need boldness, you don't usually need much planning. In most
cases the recipe for doing great work is simply: work hard on excitingly
ambitious projects, and something good will come of it. Instead of making a
plan and then executing it, you just try to preserve certain invariants.
The trouble with planning is that it only works for achievements you can
describe in advance. You can win a gold medal or get rich by deciding to as a
child and then tenaciously pursuing that goal, but you can't discover natural
selection that way.
I think for most people who want to do great work, the right strategy is not
to plan too much. At each stage do whatever seems most interesting and gives
you the best options for the future. I call this approach "staying upwind."
This is how most people who've done great work seem to have done it.
Even when you've found something exciting to work on, working on it is not
always straightforward. There will be times when some new idea makes you leap
out of bed in the morning and get straight to work. But there will also be
plenty of times when things aren't like that.
You don't just put out your sail and get blown forward by inspiration. There
are headwinds and currents and hidden shoals. So there's a technique to
working, just as there is to sailing.
For example, while you must work hard, it's possible to work too hard, and if
you do that you'll find you get diminishing returns: fatigue will make you
stupid, and eventually even damage your health. The point at which work yields
diminishing returns depends on the type. Some of the hardest types you might
only be able to do for four or five hours a day.
Ideally those hours will be contiguous. To the extent you can, try to arrange
your life so you have big blocks of time to work in. You'll shy away from hard
tasks if you know you might be interrupted.
It will probably be harder to start working than to keep working. You'll often
have to trick yourself to get over that initial threshold. Don't worry about
this; it's the nature of work, not a flaw in your character. Work has a sort
of activation energy, both per day and per project. And since this threshold
is fake in the sense that it's higher than the energy required to keep going,
it's ok to tell yourself a lie of corresponding magnitude to get over it.
It's usually a mistake to lie to yourself if you want to do great work, but
this is one of the rare cases where it isn't. When I'm reluctant to start work
in the morning, I often trick myself by saying "I'll just read over what I've
got so far." Five minutes later I've found something that seems mistaken or
incomplete, and I'm off.
Similar techniques work for starting new projects. It's ok to lie to yourself
about how much work a project will entail, for example. Lots of great things
began with someone saying "How hard could it be?"
This is one case where the young have an advantage. They're more optimistic,
and even though one of the sources of their optimism is ignorance, in this
case ignorance can sometimes beat knowledge.
Try to finish what you start, though, even if it turns out to be more work
than you expected. Finishing things is not just an exercise in tidiness or
self-discipline. In many projects a lot of the best work happens in what was
meant to be the final stage.
Another permissible lie is to exaggerate the importance of what you're working
on, at least in your own mind. If that helps you discover something new, it
may turn out not to have been a lie after all. [7]
Since there are two senses of starting work — per day and per project — there
are also two forms of procrastination. Per-project procrastination is far the
more dangerous. You put off starting that ambitious project from year to year
because the time isn't quite right. When you're procrastinating in units of
years, you can get a lot not done. [8]
One reason per-project procrastination is so dangerous is that it usually
camouflages itself as work. You're not just sitting around doing nothing;
you're working industriously on something else. So per-project procrastination
doesn't set off the alarms that per-day procrastination does. You're too busy
to notice it.
The way to beat it is to stop occasionally and ask yourself: Am I working on
what I most want to work on? When you're young it's ok if the answer is
sometimes no, but this gets increasingly dangerous as you get older. [9]
Great work usually entails spending what would seem to most people an
unreasonable amount of time on a problem. You can't think of this time as a
cost, or it will seem too high. You have to find the work sufficiently
engaging as it's happening.
There may be some jobs where you have to work diligently for years at things
you hate before you get to the good part, but this is not how great work
happens. Great work happens by focusing consistently on something you're
genuinely interested in. When you pause to take stock, you're surprised how
far you've come.
The reason we're surprised is that we underestimate the cumulative effect of
work. Writing a page a day doesn't sound like much, but if you do it every day
you'll write a book a year. That's the key: consistency. People who do great
things don't get a lot done every day. They get something done, rather than
nothing.
If you do work that compounds, you'll get exponential growth. Most people who
do this do it unconsciously, but it's worth stopping to think about. Learning,
for example, is an instance of this phenomenon: the more you learn about
something, the easier it is to learn more. Growing an audience is another: the
more fans you have, the more new fans they'll bring you.
The trouble with exponential growth is that the curve feels flat in the
beginning. It isn't; it's still a wonderful exponential curve. But we can't
grasp that intuitively, so we underrate exponential growth in its early
stages.
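A toy calculation makes the point. The numbers are invented for illustration: a page a day is linear, while an audience assumed to grow 2% a day compounds.

```python
# Toy comparison of linear vs. compounding growth. The 2% daily growth
# rate for fans is an assumption made up for this example.

pages = 0      # linear: one page per day
fans = 10.0    # compounding: each day's fans bring in 2% more

for day in range(1, 366):
    pages += 1
    fans *= 1.02
    if day in (30, 180, 365):
        print(f"day {day:3d}: {pages:3d} pages, {fans:8.0f} fans")
```

After a month the compounding curve looks negligible (about 18 fans); after a year it has left the linear one far behind (roughly 13,800). The flat-feeling beginning is exactly where people give up.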
Something that grows exponentially can become so valuable that it's worth
making an extraordinary effort to get it started. But since we underrate
exponential growth early on, this too is mostly done unconsciously: people
push through the initial, unrewarding phase of learning something new because
they know from experience that learning new things always takes an initial
push, or they grow their audience one fan at a time because they have nothing
better to do. If people consciously realized they could invest in exponential
growth, many more would do it.
Work doesn't just happen when you're trying to. There's a kind of undirected
thinking you do when walking or taking a shower or lying in bed that can be
very powerful. By letting your mind wander a little, you'll often solve
problems you were unable to solve by frontal attack.
You have to be working hard in the normal way to benefit from this phenomenon,
though. You can't just walk around daydreaming. The daydreaming has to be
interleaved with deliberate work that feeds it questions. [10]
Everyone knows to avoid distractions at work, but it's also important to avoid
them in the other half of the cycle. When you let your mind wander, it wanders
to whatever you care about most at that moment. So avoid the kind of
distraction that pushes your work out of the top spot, or you'll waste this
valuable type of thinking on the distraction instead. (Exception: Don't avoid
love.)
Consciously cultivate your taste in the work done in your field. Until you
know which is the best and what makes it so, you don't know what you're aiming
for.
And that _is_ what you're aiming for, because if you don't try to be the best,
you won't even be good. This observation has been made by so many people in so
many different fields that it might be worth thinking about why it's true. It
could be because ambition is a phenomenon where almost all the error is in one
direction — where almost all the shells that miss the target miss by falling
short. Or it could be because ambition to be the best is a qualitatively
different thing from ambition to be good. Or maybe being good is simply too
vague a standard. Probably all three are true. [11]
Fortunately there's a kind of economy of scale here. Though it might seem like
you'd be taking on a heavy burden by trying to be the best, in practice you
often end up net ahead. It's exciting, and also strangely liberating. It
simplifies things. In some ways it's easier to try to be the best than to try
merely to be good.
One way to aim high is to try to make something that people will care about in
a hundred years. Not because their opinions matter more than your
contemporaries', but because something that still seems good in a hundred
years is more likely to be genuinely good.
Don't try to work in a distinctive style. Just try to do the best job you can;
you won't be able to help doing it in a distinctive way.
Style is doing things in a distinctive way without trying to. Trying to is
affectation.
Affectation is in effect to pretend that someone other than you is doing the
work. You adopt an impressive but fake persona, and while you're pleased with
the impressiveness, the fakeness is what shows in the work. [12]
The temptation to be someone else is greatest for the young. They often feel
like nobodies. But you never need to worry about that problem, because it's
self-solving if you work on sufficiently ambitious projects. If you succeed at
an ambitious project, you're not a nobody; you're the person who did it. So
just do the work and your identity will take care of itself.
"Avoid affectation" is a useful rule so far as it goes, but how would you
express this idea positively? How would you say what to be, instead of what
not to be? The best answer is earnest. If you're earnest you avoid not just
affectation but a whole set of similar vices.
The core of being earnest is being intellectually honest. We're taught as
children to be honest as an unselfish virtue — as a kind of sacrifice. But in
fact it's a source of power too. To see new ideas, you need an exceptionally
sharp eye for the truth. You're trying to see more truth than others have seen
so far. And how can you have a sharp eye for the truth if you're
intellectually dishonest?
One way to avoid intellectual dishonesty is to maintain a slight positive
pressure in the opposite direction. Be aggressively willing to admit that
you're mistaken. Once you've admitted you were mistaken about something,
you're free. Till then you have to carry it. [13]
Another more subtle component of earnestness is informality. Informality is
much more important than its grammatically negative name implies. It's not
merely the absence of something. It means focusing on what matters instead of
what doesn't.
What formality and affectation have in common is that as well as doing the
work, you're trying to seem a certain way as you're doing it. But any energy
that goes into how you seem comes out of being good. That's one reason nerds
have an advantage in doing great work: they expend little effort on seeming
anything. In fact that's basically the definition of a nerd.
Nerds have a kind of innocent boldness that's exactly what you need in doing
great work. It's not learned; it's preserved from childhood. So hold onto it.
Be the one who puts things out there rather than the one who sits back and
offers sophisticated-sounding criticisms of them. "It's easy to criticize" is
true in the most literal sense, and the route to great work is never easy.
There may be some jobs where it's an advantage to be cynical and pessimistic,
but if you want to do great work it's an advantage to be optimistic, even
though that means you'll risk looking like a fool sometimes. There's an old
tradition of doing the opposite. The Old Testament says it's better to keep
quiet lest you look like a fool. But that's advice for _seeming_ smart. If you
actually want to discover new things, it's better to take the risk of telling
people your ideas.
Some people are naturally earnest, and with others it takes a conscious
effort. Either kind of earnestness will suffice. But I doubt it would be
possible to do great work without being earnest. It's so hard to do even if
you are. You don't have enough margin for error to accommodate the distortions
introduced by being affected, intellectually dishonest, orthodox, fashionable,
or cool. [14]
Great work is consistent not only with who did it, but with itself. It's
usually all of a piece. So if you face a decision in the middle of working on
something, ask which choice is more consistent.
You may have to throw things away and redo them. You won't necessarily have
to, but you have to be willing to. And that can take some effort; when there's
something you need to redo, status quo bias and laziness will combine to keep
you in denial about it. To beat this ask: If I'd already made the change,
would I want to revert to what I have now?
Have the confidence to cut. Don't keep something that doesn't fit just because
you're proud of it, or because it cost you a lot of effort.
Indeed, in some kinds of work it's good to strip whatever you're doing to its
essence. The result will be more concentrated; you'll understand it better;
and you won't be able to lie to yourself about whether there's anything real
there.
Mathematical elegance may sound like a mere metaphor, drawn from the arts.
That's what I thought when I first heard the term "elegant" applied to a
proof. But now I suspect it's conceptually prior — that the main ingredient in
artistic elegance is mathematical elegance. At any rate it's a useful standard
well beyond math.
Elegance can be a long-term bet, though. Laborious solutions will often have
more prestige in the short term. They cost a lot of effort and they're hard to
understand, both of which impress people, at least temporarily.
Whereas some of the very best work will seem like it took comparatively little
effort, because it was in a sense already there. It didn't have to be built,
just seen. It's a very good sign when it's hard to say whether you're creating
something or discovering it.
When you're doing work that could be seen as either creation or discovery, err
on the side of discovery. Try thinking of yourself as a mere conduit through
which the ideas take their natural shape.
(Strangely enough, one exception is the problem of choosing a problem to work
on. This is usually seen as search, but in the best case it's more like
creating something. In the best case you create the field in the process of
exploring it.)
Similarly, if you're trying to build a powerful tool, make it gratuitously
unrestrictive. A powerful tool almost by definition will be used in ways you
didn't expect, so err on the side of eliminating restrictions, even if you
don't know what the benefit will be.
Great work will often be tool-like in the sense of being something others
build on. So it's a good sign if you're creating ideas that others could use,
or exposing questions that others could answer. The best ideas have
implications in many different areas.
If you express your ideas in the most general form, they'll be truer than you
intended.
True by itself is not enough, of course. Great ideas have to be true and new.
And it takes a certain amount of ability to see new ideas even once you've
learned enough to get to one of the frontiers of knowledge.
In English we give this ability names like originality, creativity, and
imagination. And it seems reasonable to give it a separate name, because it
does seem to some extent a separate skill. It's possible to have a great deal
of ability in other respects — to have a great deal of what's often called
_technical_ ability — and yet not have much of this.
I've never liked the term "creative process." It seems misleading. Originality
isn't a process, but a habit of mind. Original thinkers throw off new ideas
about whatever they focus on, like an angle grinder throwing off sparks. They
can't help it.
If the thing they're focused on is something they don't understand very well,
these new ideas might not be good. One of the most original thinkers I know
decided to focus on dating after he got divorced. He knew roughly as much
about dating as the average 15 year old, and the results were spectacularly
colorful. But to see originality separated from expertise like that made its
nature all the more clear.
I don't know if it's possible to cultivate originality, but there are
definitely ways to make the most of however much you have. For example, you're
much more likely to have original ideas when you're working on something.
Original ideas don't come from trying to have original ideas. They come from
trying to build or understand something slightly too difficult. [15]
Talking or writing about the things you're interested in is a good way to
generate new ideas. When you try to put ideas into words, a missing idea
creates a sort of vacuum that draws it out of you. Indeed, there's a kind of
thinking that can only be done by writing.
Changing your context can help. If you visit a new place, you'll often find
you have new ideas there. The journey itself often dislodges them. But you may
not have to go far to get this benefit. Sometimes it's enough just to go for a
walk. [16]
It also helps to travel in topic space. You'll have more new ideas if you
explore lots of different topics, partly because it gives the angle grinder
more surface area to work on, and partly because analogies are an especially
fruitful source of new ideas.
Don't divide your attention _evenly_ between many topics though, or you'll
spread yourself too thin. You want to distribute it according to something
more like a power law. [17] Be professionally curious about a few topics and
idly curious about many more.
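As a rough sketch of what that allocation might look like (the exponent, the topic labels, and the resulting percentages are all arbitrary assumptions, not prescriptions):

```python
# Toy power-law allocation of attention: weight proportional to
# 1 / rank**alpha. alpha = 1.5 is an arbitrary choice.

alpha = 1.5
topics = ["main field", "second interest", "hobby",
          "idle curiosity", "passing whim"]
weights = [1 / r**alpha for r in range(1, len(topics) + 1)]
total = sum(weights)
for topic, w in zip(topics, weights):
    print(f"{topic:16s} {w / total:5.1%}")
```

With these made-up numbers the top topic gets over half your attention and the tail topics a few percent each: professional curiosity about a few things, idle curiosity about the rest.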
Curiosity and originality are closely related. Curiosity feeds originality by
giving it new things to work on. But the relationship is closer than that.
Curiosity is itself a kind of originality; it's roughly to questions what
originality is to answers. And since questions at their best are a big
component of answers, curiosity at its best is a creative force.
Having new ideas is a strange game, because it usually consists of seeing
things that were right under your nose. Once you've seen a new idea, it tends
to seem obvious. Why did no one think of this before?
When an idea seems simultaneously novel and obvious, it's probably a good one.
Seeing something obvious sounds easy. And yet empirically having new ideas is
hard. What's the source of this apparent contradiction? It's that seeing the
new idea usually requires you to change the way you look at the world. We see
the world through models that both help and constrain us. When you fix a
broken model, new ideas become obvious. But noticing and fixing a broken model
is hard. That's how new ideas can be both obvious and yet hard to discover:
they're easy to see after you do something hard.
One way to discover broken models is to be stricter than other people. Broken
models of the world leave a trail of clues where they bash against reality.
Most people don't want to see these clues. It would be an understatement to
say that they're attached to their current model; it's what they think in; so
they'll tend to ignore the trail of clues left by its breakage, however
conspicuous it may seem in retrospect.
To find new ideas you have to seize on signs of breakage instead of looking
away. That's what Einstein did. He was able to see the wild implications of
Maxwell's equations not so much because he was looking for new ideas as
because he was stricter.
The other thing you need is a willingness to break rules. Paradoxical as it
sounds, if you want to fix your model of the world, it helps to be the sort of
person who's comfortable breaking rules. From the point of view of the old
model, which everyone including you initially shares, the new model usually
breaks at least implicit rules.
Few understand the degree of rule-breaking required, because new ideas seem
much more conservative once they succeed. They seem perfectly reasonable once
you're using the new model of the world they brought with them. But they
didn't at the time; it took the greater part of a century for the heliocentric
model to be generally accepted, even among astronomers, because it felt so
wrong.
Indeed, if you think about it, a good new idea has to seem bad to most people,
or someone would have already explored it. So what you're looking for is ideas
that seem crazy, but the right kind of crazy. How do you recognize these? You
can't with certainty. Often ideas that seem bad are bad. But ideas that are
the right kind of crazy tend to be exciting; they're rich in implications;
whereas ideas that are merely bad tend to be depressing.
There are two ways to be comfortable breaking rules: to enjoy breaking them,
and to be indifferent to them. I call these two cases being aggressively and
passively independent-minded.
The aggressively independent-minded are the naughty ones. Rules don't merely
fail to stop them; breaking rules gives them additional energy. For this sort
of person, delight at the sheer audacity of a project sometimes supplies
enough activation energy to get it started.
The other way to break rules is not to care about them, or perhaps even to
know they exist. This is why novices and outsiders often make new discoveries;
their ignorance of a field's assumptions acts as a source of temporary passive
independent-mindedness. Aspies also seem to have a kind of immunity to
conventional beliefs. Several I know say that this helps them to have new
ideas.
Strictness plus rule-breaking sounds like a strange combination. In popular
culture they're opposed. But popular culture has a broken model in this
respect. It implicitly assumes that issues are trivial ones, and in trivial
matters strictness and rule-breaking _are_ opposed. But in questions that
really matter, only rule-breakers can be truly strict.
An overlooked idea often doesn't lose till the semifinals. You do see it,
subconsciously, but then another part of your subconscious shoots it down
because it would be too weird, too risky, too much work, too controversial.
This suggests an exciting possibility: if you could turn off such filters, you
could see more new ideas.
One way to do that is to ask what would be good ideas for _someone else_ to
explore. Then your subconscious won't shoot them down to protect you.
You could also discover overlooked ideas by working in the other direction: by
starting from what's obscuring them. Every cherished but mistaken principle is
surrounded by a dead zone of valuable ideas that are unexplored because they
contradict it.
Religions are collections of cherished but mistaken principles. So anything
that can be described either literally or metaphorically as a religion will
have valuable unexplored ideas in its shadow. Copernicus and Darwin both made
discoveries of this type. [18]
What are people in your field religious about, in the sense of being too
attached to some principle that might not be as self-evident as they think?
What becomes possible if you discard it?
People show much more originality in solving problems than in deciding which
problems to solve. Even the smartest can be surprisingly conservative when
deciding what to work on. People who'd never dream of being fashionable in any
other way get sucked into working on fashionable problems.
One reason people are more conservative when choosing problems than solutions
is that problems are bigger bets. A problem could occupy you for years, while
exploring a solution might only take days. But even so I think most people are
too conservative. They're not merely responding to risk, but to fashion as
well. Unfashionable problems are undervalued.
One of the most interesting kinds of unfashionable problem is the problem that
people think has been fully explored, but hasn't. Great work often takes
something that already exists and shows its latent potential. Durer and Watt
both did this. So if you're interested in a field that others think is tapped
out, don't let their skepticism deter you. People are often wrong about this.
Working on an unfashionable problem can be very pleasing. There's no hype or
hurry. Opportunists and critics are both occupied elsewhere. The existing work
often has an old-school solidity. And there's a satisfying sense of economy in
cultivating ideas that would otherwise be wasted.
But the most common type of overlooked problem is not explicitly unfashionable
in the sense of being out of fashion. It just doesn't seem to matter as much
as it actually does. How do you find these? By being self-indulgent — by
letting your curiosity have its way, and tuning out, at least temporarily, the
little voice in your head that says you should only be working on "important"
problems.
You do need to work on important problems, but almost everyone is too
conservative about what counts as one. And if there's an important but
overlooked problem in your neighborhood, it's probably already on your
subconscious radar screen. So try asking yourself: if you were going to take a
break from "serious" work to work on something just because it would be really
interesting, what would you do? The answer is probably more important than it
seems.
Originality in choosing problems seems to matter even more than originality in
solving them. That's what distinguishes the people who discover whole new
fields. So what might seem to be merely the initial step — deciding what to
work on — is in a sense the key to the whole game.
Few grasp this. One of the biggest misconceptions about new ideas is about the
ratio of question to answer in their composition. People think big ideas are
answers, but often the real insight was in the question.
Part of the reason we underrate questions is the way they're used in schools.
In schools they tend to exist only briefly before being answered, like
unstable particles. But a really good question can be much more than that. A
really good question is a partial discovery. How do new species arise? Is the
force that makes objects fall to earth the same as the one that keeps planets
in their orbits? By even asking such questions you were already in excitingly
novel territory.
Unanswered questions can be uncomfortable things to carry around with you. But
the more you're carrying, the greater the chance of noticing a solution — or
perhaps even more excitingly, noticing that two unanswered questions are the
same.
Sometimes you carry a question for a long time. Great work often comes from
returning to a question you first noticed years before — in your childhood,
even — and couldn't stop thinking about. People talk a lot about the
importance of keeping your youthful dreams alive, but it's just as important
to keep your youthful questions alive. [19]
This is one of the places where actual expertise differs most from the popular
picture of it. In the popular picture, experts are certain. But actually the
more puzzled you are, the better, so long as (a) the things you're puzzled
about matter, and (b) no one else understands them either.
Think about what's happening at the moment just before a new idea is
discovered. Often someone with sufficient expertise is puzzled about
something. Which means that originality consists partly of puzzlement — of
confusion! You have to be comfortable enough with the world being full of
puzzles that you're willing to see them, but not so comfortable that you don't
want to solve them. [20]
It's a great thing to be rich in unanswered questions. And this is one of
those situations where the rich get richer, because the best way to acquire
new questions is to try answering existing ones. Questions don't just lead to
answers, but also to more questions.
The best questions grow in the answering. You notice a thread protruding from
the current paradigm and try pulling on it, and it just gets longer and
longer. So don't require a question to be obviously big before you try
answering it. You can rarely predict that. It's hard enough even to notice the
thread, let alone to predict how much will unravel if you pull on it.
It's better to be promiscuously curious — to pull a little bit on a lot of
threads, and see what happens. Big things start small. The initial versions of
big things were often just experiments, or side projects, or talks, which then
grew into something bigger. So start lots of small things.
Being prolific is underrated. The more different things you try, the greater
the chance of discovering something new. Understand, though, that trying lots
of things will mean trying lots of things that don't work. You can't have a
lot of good ideas without also having a lot of bad ones. [21]
Though it sounds more responsible to begin by studying everything that's been
done before, you'll learn faster and have more fun by trying stuff. And you'll
understand previous work better when you do look at it. So err on the side of
starting. Which is easier when starting means starting small; those two ideas
fit together like two puzzle pieces.
How do you get from starting small to doing something great? By making
successive versions. Great things are almost always made in successive
versions. You start with something small and evolve it, and the final version
is both cleverer and more ambitious than anything you could have planned.
It's particularly useful to make successive versions when you're making
something for people — to get an initial version in front of them quickly, and
then evolve it based on their response.
Begin by trying the simplest thing that could possibly work. Surprisingly
often, it does. If it doesn't, this will at least get you started.
Don't try to cram too much new stuff into any one version. There are names for
doing this with the first version (taking too long to ship) and the second
(the second system effect), but these are both merely instances of a more
general principle.
An early version of a new project will sometimes be dismissed as a toy. It's a
good sign when people do this. That means it has everything a new idea needs
except scale, and that tends to follow. [22]
The alternative to starting with something small and evolving it is to plan in
advance what you're going to do. And planning does usually seem the more
responsible choice. It sounds more organized to say "we're going to do x and
then y and then z" than "we're going to try x and see what happens." And it is
more _organized_; it just doesn't work as well.
Planning per se isn't good. It's sometimes necessary, but it's a necessary
evil — a response to unforgiving conditions. It's something you have to do
because you're working with inflexible media, or because you need to
coordinate the efforts of a lot of people. If you keep projects small and use
flexible media, you don't have to plan as much, and your designs can evolve
instead.
Take as much risk as you can afford. In an efficient market, risk is
proportionate to reward, so don't look for certainty, but for a bet with high
expected value. If you're not failing occasionally, you're probably being too
conservative.
Though conservatism is usually associated with the old, it's the young who
tend to make this mistake. Inexperience makes them fear risk, but it's when
you're young that you can afford the most.
Even a project that fails can be valuable. In the process of working on it,
you'll have crossed territory few others have seen, and encountered questions
few others have asked. And there's probably no better source of questions than
the ones you encounter in trying to do something slightly too hard.
Use the advantages of youth when you have them, and the advantages of age once
you have those. The advantages of youth are energy, time, optimism, and
freedom. The advantages of age are knowledge, efficiency, money, and power.
With effort you can acquire some of the latter when young and keep some of the
former when old.
The old also have the advantage of knowing which advantages they have. The
young often have them without realizing it. The biggest is probably time. The
young have no idea how rich they are in time. The best way to turn this time
to advantage is to use it in slightly frivolous ways: to learn about something
you don't need to know about, just out of curiosity, or to try building
something just because it would be cool, or to become freakishly good at
something.
That "slightly" is an important qualification. Spend time lavishly when you're
young, but don't simply waste it. There's a big difference between doing
something you worry might be a waste of time and doing something you know for
sure will be. The former is at least a bet, and possibly a better one than you
think. [23]
The most subtle advantage of youth, or more precisely of inexperience, is that
you're seeing everything with fresh eyes. When your brain embraces an idea for
the first time, sometimes the two don't fit together perfectly. Usually the
problem is with your brain, but occasionally it's with the idea. A piece of it
sticks out awkwardly and jabs you when you think about it. People who are used
to the idea have learned to ignore it, but you have the opportunity not to.
[24]
So when you're learning about something for the first time, pay attention to
things that seem wrong or missing. You'll be tempted to ignore them, since
there's a 99% chance the problem is with you. And you may have to set aside
your misgivings temporarily to keep progressing. But don't forget about them.
When you've gotten further into the subject, come back and check if they're
still there. If they're still viable in the light of your present knowledge,
they probably represent an undiscovered idea.
One of the most valuable kinds of knowledge you get from experience is to know
what you _don't_ have to worry about. The young know all the things that could
matter, but not their relative importance. So they worry equally about
everything, when they should worry much more about a few things and hardly at
all about the rest.
But what you don't know is only half the problem with inexperience. The other
half is what you do know that ain't so. You arrive at adulthood with your head
full of nonsense — bad habits you've acquired and false things you've been
taught — and you won't be able to do great work till you clear away at least
the nonsense in the way of whatever type of work you want to do.
Much of the nonsense left in your head is left there by schools. We're so used
to schools that we unconsciously treat going to school as identical with
learning, but in fact schools have all sorts of strange qualities that warp
our ideas about learning and thinking.
For example, schools induce passivity. Since you were a small child, there was
an authority at the front of the class telling all of you what you had to
learn and then measuring whether you did. But neither classes nor tests are
intrinsic to learning; they're just artifacts of the way schools are usually
designed.
The sooner you overcome this passivity, the better. If you're still in school,
try thinking of your education as your project, and your teachers as working
for you rather than vice versa. That may seem a stretch, but it's not merely
some weird thought experiment. It's the truth economically, and in the best
case it's the truth intellectually as well. The best teachers don't want to be
your bosses. They'd prefer it if you pushed ahead, using them as a source of
advice, rather than being pulled by them through the material.
Schools also give you a misleading impression of what work is like. In school
they tell you what the problems are, and they're almost always soluble using
no more than you've been taught so far. In real life you have to figure out
what the problems are, and you often don't know if they're soluble at all.
But perhaps the worst thing schools do to you is train you to win by hacking
the test. You can't do great work by doing that. You can't trick God. So stop
looking for that kind of shortcut. The way to beat the system is to focus on
problems and solutions that others have overlooked, not to skimp on the work
itself.
Don't think of yourself as dependent on some gatekeeper giving you a "big
break." Even if this were true, the best way to get it would be to focus on
doing good work rather than chasing influential people.
And don't take rejection by committees to heart. The qualities that impress
admissions officers and prize committees are quite different from those
required to do great work. The decisions of selection committees are only
meaningful to the extent that they're part of a feedback loop, and very few
are.
People new to a field will often copy existing work. There's nothing
inherently bad about that. There's no better way to learn how something works
than by trying to reproduce it. Nor does copying necessarily make your work
unoriginal. Originality is the presence of new ideas, not the absence of old
ones.
There's a good way to copy and a bad way. If you're going to copy something,
do it openly instead of furtively, or worse still, unconsciously. This is
what's meant by the famously misattributed phrase "Great artists steal." The
really dangerous kind of copying, the kind that gives copying a bad name, is
the kind that's done without realizing it, because you're nothing more than a
train running on tracks laid down by someone else. But at the other extreme,
copying can be a sign of superiority rather than subordination. [25]
In many fields it's almost inevitable that your early work will be in some
sense based on other people's. Projects rarely arise in a vacuum. They're
usually a reaction to previous work. When you're first starting out, you don't
have any previous work; if you're going to react to something, it has to be
someone else's. Once you're established, you can react to your own. But while
the former gets called derivative and the latter doesn't, structurally the two
cases are more similar than they seem.
Oddly enough, the very novelty of the most novel ideas sometimes makes them
seem at first to be more derivative than they are. New discoveries often have
to be conceived initially as variations of existing things, _even by their
discoverers_, because there isn't yet the conceptual vocabulary to express
them.
There are definitely some dangers to copying, though. One is that you'll tend
to copy old things — things that were in their day at the frontier of
knowledge, but no longer are.
And when you do copy something, don't copy every feature of it. Some will make
you ridiculous if you do. Don't copy the manner of an eminent 50 year old
professor if you're 18, for example, or the idiom of a Renaissance poem
hundreds of years later.
Some of the features of things you admire are flaws they succeeded despite.
Indeed, the features that are easiest to imitate are the most likely to be the
flaws.
This is particularly true for behavior. Some talented people are jerks, and
this sometimes makes it seem to the inexperienced that being a jerk is part of
being talented. It isn't; being talented is merely how they get away with it.
One of the most powerful kinds of copying is to copy something from one field
into another. History is so full of chance discoveries of this type that it's
probably worth giving chance a hand by deliberately learning about other kinds
of work. You can take ideas from quite distant fields if you let them be
metaphors.
Negative examples can be as inspiring as positive ones. In fact you can
sometimes learn more from things done badly than from things done well;
sometimes it only becomes clear what's needed when it's missing.
If a lot of the best people in your field are collected in one place, it's
usually a good idea to visit for a while. It will increase your ambition, and
also, by showing you that these people are human, increase your self-
confidence. [26]
If you're earnest you'll probably get a warmer welcome than you might expect.
Most people who are very good at something are happy to talk about it with
anyone who's genuinely interested. If they're really good at their work, then
they probably have a hobbyist's interest in it, and hobbyists always want to
talk about their hobbies.
It may take some effort to find the people who are really good, though. Doing
great work has such prestige that in some places, particularly universities,
there's a polite fiction that everyone is engaged in it. And that is far from
true. People within universities can't say so openly, but the quality of the
work being done in different departments varies immensely. Some departments
have people doing great work; others have in the past; others never have.
Seek out the best colleagues. There are a lot of projects that can't be done
alone, and even if you're working on one that can be, it's good to have other
people to encourage you and to bounce ideas off.
Colleagues don't just affect your work, though; they also affect you. So work
with people you want to become like, because you will.
Quality is more important than quantity in colleagues. It's better to have one
or two great ones than a building full of pretty good ones. In fact it's not
merely better, but necessary, judging from history: the degree to which great
work happens in clusters suggests that one's colleagues often make the
difference between doing great work and not.
How do you know when you have sufficiently good colleagues? In my experience,
when you do, you know. Which means if you're unsure, you probably don't. But
it may be possible to give a more concrete answer than that. Here's an
attempt: sufficiently good colleagues offer _surprising_ insights. They can
see and do things that you can't. So if you have a handful of colleagues good
enough to keep you on your toes in this sense, you're probably over the
threshold.
Most of us can benefit from collaborating with colleagues, but some projects
require people on a larger scale, and starting one of those is not for
everyone. If you want to run a project like that, you'll have to become a
manager, and managing well takes aptitude and interest like any other kind of
work. If you don't have them, there is no middle path: you must either force
yourself to learn management as a second language, or avoid such projects.
[27]
Husband your morale. It's the basis of everything when you're working on
ambitious projects. You have to nurture and protect it like a living organism.
Morale starts with your view of life. You're more likely to do great work if
you're an optimist, and more likely to if you think of yourself as lucky than
if you think of yourself as a victim.
Indeed, work can to some extent protect you from your problems. If you choose
work that's pure, its very difficulties will serve as a refuge from the
difficulties of everyday life. If this is escapism, it's a very productive
form of it, and one that has been used by some of the greatest minds in
history.
Morale compounds via work: high morale helps you do good work, which increases
your morale and helps you do even better work. But this cycle also operates in
the other direction: if you're not doing good work, that can demoralize you
and make it even harder to. Since it matters so much for this cycle to be
running in the right direction, it can be a good idea to switch to easier work
when you're stuck, just so you start to get something done.
One of the biggest mistakes ambitious people make is to allow setbacks to
destroy their morale all at once, like a balloon bursting. You can inoculate
yourself against this by explicitly considering setbacks a part of your
process. Solving hard problems always involves some backtracking.
Doing great work is a depth-first search whose root node is the desire to. So
"If at first you don't succeed, try, try again" isn't quite right. It should
be: If at first you don't succeed, either try again, or backtrack and then try
again.
"Never give up" is also not quite right. Obviously there are times when it's
the right choice to eject. A more precise version would be: Never let setbacks
panic you into backtracking more than you need to. Corollary: Never abandon
the root node.
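For readers who haven't met the algorithm, here is a minimal sketch of depth-first search with backtracking; the project tree and goal test are hypothetical, but the shape is what the metaphor borrows: when a branch fails you back up one level and try a sibling, and only the failure of every branch abandons the root.

```python
# A minimal sketch of depth-first search with backtracking (illustration
# only; the "project tree" and goal test below are invented, not from
# the essay).

def dfs(node, is_goal, children):
    """Return a path from node to a goal, or None if this subtree fails."""
    if is_goal(node):
        return [node]
    for child in children(node):
        path = dfs(child, is_goal, children)
        if path is not None:
            return [node] + path      # success somewhere below: keep going
    return None                       # backtrack: try a sibling, not a new root

# Hypothetical tree of approaches, rooted in the desire to do great work.
tree = {
    "great work": ["approach A", "approach B"],
    "approach A": ["dead end"],
    "approach B": ["variant B1", "variant B2"],
}

print(dfs("great work",
          is_goal=lambda n: n == "variant B2",
          children=lambda n: tree.get(n, [])))
# ['great work', 'approach B', 'variant B2']
```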
It's not necessarily a bad sign if work is a struggle, any more than it's a
bad sign to be out of breath while running. It depends how fast you're
running. So learn to distinguish good pain from bad. Good pain is a sign of
effort; bad pain is a sign of damage.
An audience is a critical component of morale. If you're a scholar, your
audience may be your peers; in the arts, it may be an audience in the
traditional sense. Either way it doesn't need to be big. The value of an
audience doesn't grow anything like linearly with its size. Which is bad news
if you're famous, but good news if you're just starting out, because it means
a small but dedicated audience can be enough to sustain you. If a handful of
people genuinely love what you're doing, that's enough.
To the extent you can, avoid letting intermediaries come between you and your
audience. In some types of work this is inevitable, but it's so liberating to
escape it that you might be better off switching to an adjacent type if that
will let you go direct. [28]
The people you spend time with will also have a big effect on your morale.
You'll find there are some who increase your energy and others who decrease
it, and the effect someone has is not always what you'd expect. Seek out the
people who increase your energy and avoid those who decrease it. Though of
course if there's someone you need to take care of, that takes precedence.
Don't marry someone who doesn't understand that you need to work, or sees your
work as competition for your attention. If you're ambitious, you need to work;
it's almost like a medical condition; so someone who won't let you work either
doesn't understand you, or does and doesn't care.
Ultimately morale is physical. You think with your body, so it's important to
take care of it. That means exercising regularly, eating and sleeping well,
and avoiding the more dangerous kinds of drugs. Running and walking are
particularly good forms of exercise because they're good for thinking. [29]
People who do great work are not necessarily happier than everyone else, but
they're happier than they'd be if they didn't. In fact, if you're smart and
ambitious, it's dangerous _not_ to be productive. People who are smart and
ambitious but don't achieve much tend to become bitter.
It's ok to want to impress other people, but choose the right people. The
opinion of people you respect is signal. Fame, which is the opinion of a much
larger group you might or might not respect, just adds noise.
The prestige of a type of work is at best a trailing indicator and sometimes
completely mistaken. If you do anything well enough, you'll make it
prestigious. So the question to ask about a type of work is not how much
prestige it has, but how well it could be done.
Competition can be an effective motivator, but don't let it choose the problem
for you; don't let yourself get drawn into chasing something just because
others are. In fact, don't let competitors make you do anything much more
specific than work harder.
Curiosity is the best guide. Your curiosity never lies, and it knows more than
you do about what's worth paying attention to.
Notice how often that word has come up. If you asked an oracle the secret to
doing great work and the oracle replied with a single word, my bet would be on
"curiosity."
That doesn't translate directly to advice. It's not enough just to be curious,
and you can't command curiosity anyway. But you can nurture it and let it
drive you.
Curiosity is the key to all four steps in doing great work: it will choose the
field for you, get you to the frontier, cause you to notice the gaps in it,
and drive you to explore them. The whole process is a kind of dance with
curiosity.
Believe it or not, I tried to make this essay as short as I could. But its
length at least means it acts as a filter. If you made it this far, you must
be interested in doing great work. And if so you're already further along than
you might realize, because the set of people willing to want to is small.
The factors in doing great work are factors in the literal, mathematical
sense, and they are: ability, interest, effort, and luck. Luck by definition
you can't do anything about, so we can ignore that. And we can assume effort,
if you do in fact want to do great work. So the problem boils down to ability
and interest. Can you find a kind of work where your ability and interest will
combine to yield an explosion of new ideas?
Here there are grounds for optimism. There are so many different ways to do
great work, and even more that are still undiscovered. Out of all those
different types of work, the one you're most suited for is probably a pretty
close match. Probably a comically close match. It's just a question of finding
it, and how far into it your ability and interest can take you. And you can
only answer that by trying.
Many more people could try to do great work than do. What holds them back is a
combination of modesty and fear. It seems presumptuous to try to be Newton or
Shakespeare. It also seems hard; surely if you tried something like that,
you'd fail. Presumably the calculation is rarely explicit. Few people
consciously decide not to try to do great work. But that's what's going on
subconsciously; they shy away from the question.
So I'm going to pull a sneaky trick on you. Do you want to do great work, or
not? Now you have to decide consciously. Sorry about that. I wouldn't have
done it to a general audience. But we already know you're interested.
Don't worry about being presumptuous. You don't have to tell anyone. And if
it's too hard and you fail, so what? Lots of people have worse problems than
that. In fact you'll be lucky if it's the worst problem you have.
Yes, you'll have to work hard. But again, lots of people have to work hard.
And if you're working on something you find very interesting, which you
necessarily will if you're on the right path, the work will probably feel less
burdensome than a lot of your peers'.
The discoveries are out there, waiting to be made. Why not by you?
**Notes**
[1] I don't think you could give a precise definition of what counts as great
work. Doing great work means doing something important so well that you expand
people's ideas of what's possible. But there's no threshold for importance.
It's a matter of degree, and often hard to judge at the time anyway. So I'd
rather people focused on developing their interests than on worrying about
whether they're important. Just try to do something amazing, and leave
it to future generations to say if you succeeded.
[2] A lot of standup comedy is based on noticing anomalies in everyday life.
"Did you ever notice...?" New ideas come from doing this about nontrivial
things. Which may help explain why people's reaction to a new idea is often
the first half of laughing: Ha!
[3] That second qualifier is critical. If you're excited about something most
authorities discount, but you can't give a more precise explanation than "they
don't get it," then you're starting to drift into the territory of cranks.
[4] Finding something to work on is not simply a matter of finding a match
between the current version of you and a list of known problems. You'll often
have to coevolve with the problem. That's why it can sometimes be so hard to
figure out what to work on. The search space is huge. It's the cartesian
product of all possible types of work, both known and yet to be discovered,
and all possible future versions of you.
There's no way you could search this whole space, so you have to rely on
heuristics to generate promising paths through it and hope the best matches
will be clustered. Which they will not always be; different types of work have
been collected together as much by accidents of history as by the intrinsic
similarities between them.
[5] There are many reasons curious people are more likely to do great work,
but one of the more subtle is that, by casting a wide net, they're more likely
to find the right thing to work on in the first place.
[6] It can also be dangerous to make things for an audience you feel is less
sophisticated than you, if that causes you to talk down to them. You can make
a lot of money doing that, if you do it in a sufficiently cynical way, but
it's not the route to great work. Not that anyone using this m.o. would care.
[7] This idea I learned from Hardy's _A Mathematician's Apology_, which I
recommend to anyone ambitious to do great work, in any field.
[8] Just as we overestimate what we can do in a day and underestimate what we
can do over several years, we overestimate the damage done by procrastinating
for a day and underestimate the damage done by procrastinating for several
years.
[9] You can't usually get paid for doing exactly what you want, especially
early on. There are two options: get paid for doing work close to what you
want and hope to push it closer, or get paid for doing something else entirely
and do your own projects on the side. Both can work, but both have drawbacks:
in the first approach your work is compromised by default, and in the second
you have to fight to get time to do it.
[10] If you set your life up right, it will deliver the focus-relax cycle
automatically. The perfect setup is an office you work in and that you walk to
and from.
[11] There may be some very unworldly people who do great work without
consciously trying to. If you want to expand this rule to cover that case, it
becomes: Don't try to be anything except the best.
[12] This gets more complicated in work like acting, where the goal is to
adopt a fake persona. But even here it's possible to be affected. Perhaps the
rule in such fields should be to avoid _unintentional_ affectation.
[13] It's safe to have beliefs that you treat as unquestionable if and only if
they're also unfalsifiable. For example, it's safe to have the principle that
everyone should be treated equally under the law, because a sentence with a
"should" in it isn't really a statement about the world and is therefore hard
to disprove. And if there's no evidence that could disprove one of your
principles, there can't be any facts you'd need to ignore in order to preserve
it.
[14] Affectation is easier to cure than intellectual dishonesty. Affectation
is often a shortcoming of the young that burns off in time, while intellectual
dishonesty is more of a character flaw.
[15] Obviously you don't have to be working at the exact moment you have the
idea, but you'll probably have been working fairly recently.
[16] Some say psychoactive drugs have a similar effect. I'm skeptical, but
also almost totally ignorant of their effects.
[17] For example you might give the nth most important topic (m-1)/m^n of your
attention, for some m > 1. You couldn't allocate your attention so precisely,
of course, but this at least gives an idea of a reasonable distribution.
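A quick check of that formula (mine, not the note's): the weights form a geometric series summing to 1, so the scheme really does allocate all of your attention. The choice m = 2 below is an arbitrary hypothetical.

```python
# Attention allocated to the nth most important topic: (m - 1) / m**n.
# m = 2 is a hypothetical choice; any m > 1 gives weights that sum to 1,
# since they form a geometric series.
m = 2
weights = [(m - 1) / m**n for n in range(1, 11)]
print([round(w, 4) for w in weights])  # [0.5, 0.25, 0.125, ...]
print(round(sum(weights), 4))          # 0.999: approaching 1
```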
[18] The principles defining a religion have to be mistaken. Otherwise anyone
might adopt them, and there would be nothing to distinguish the adherents of
the religion from everyone else.
[19] It might be a good exercise to try writing down a list of questions you
wondered about in your youth. You might find you're now in a position to do
something about some of them.
[20] The connection between originality and uncertainty causes a strange
phenomenon: because the conventional-minded are more certain than the
independent-minded, this tends to give them the upper hand in disputes, even
though they're generally stupider.
> The best lack all conviction, while the worst
> Are full of passionate intensity.
[21] Derived from Linus Pauling's "If you want to have good ideas, you must
have many ideas."
[22] Attacking a project as a "toy" is similar to attacking a statement as
"inappropriate." It means that no more substantial criticism can be made to
stick.
[23] One way to tell whether you're wasting time is to ask if you're producing
or consuming. Writing computer games is less likely to be a waste of time than
playing them, and playing games where you create something is less likely to
be a waste of time than playing games where you don't.
[24] Another related advantage is that if you haven't said anything publicly
yet, you won't be biased toward evidence that supports your earlier
conclusions. With sufficient integrity you could achieve eternal youth in this
respect, but few manage to. For most people, having previously published
opinions has an effect similar to ideology, just in quantity 1.
[25] In the early 1630s Daniel Mytens made a painting of Henrietta Maria
handing a laurel wreath to Charles I. Van Dyck then painted his own version to
show how much better he was.
[26] I'm being deliberately vague about what a place is. As of this writing,
being in the same physical place has advantages that are hard to duplicate,
but that could change.
[27] This is false when the work the other people have to do is very
constrained, as with SETI@home or Bitcoin. It may be possible to expand the
area in which it's false by defining similarly restricted protocols with more
freedom of action in the nodes.
[28] Corollary: Building something that enables people to go around
intermediaries and engage directly with their audience is probably a good
idea.
[29] It may be helpful always to walk or run the same route, because that
frees attention for thinking. It feels that way to me, and there is some
historical evidence for it.
**Thanks** to Trevor Blackwell, Daniel Gackle, Pam Graham, Tom Howard, Patrick
Hsu, Steve Huffman, Jessica Livingston, Henry Lloyd-Baker, Bob Metcalfe, Ben
Miller, Robert Morris, Michael Nielsen, Courtenay Pipkin, Joris Poort, Mieke
Roos, Rajat Suri, Harj Taggar, Garry Tan, and my younger son for suggestions
and for reading drafts.
March 2008, rev May 2013
_(This essay grew out of something I wrote for myself to figure out what we
do. Even though Y Combinator is now 3 years old, we're still trying to
understand its implications.)_
I was annoyed recently to read a description of Y Combinator that said "Y
Combinator does seed funding for startups." What was especially annoying about
it was that I wrote it. This doesn't really convey what we do. And the reason
it's inaccurate is that, paradoxically, funding very early stage startups is
not mainly about funding.
Saying YC does seed funding for startups is a description in terms of earlier
models. It's like calling a car a horseless carriage.
When you scale animals you can't just keep everything in proportion. For
example, volume grows as the cube of linear dimension, but surface area only
as the square. So as animals get bigger they have trouble radiating heat.
That's why mice and rabbits are furry and elephants and hippos aren't. You
can't make a mouse by scaling down an elephant.
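To make the square-cube point concrete, here is a small numeric illustration (mine, using spheres as stand-ins for animals): surface area per unit volume falls off as 1/r, so a doubled animal has only half the relative surface through which to shed heat.

```python
from math import pi

# For a sphere, surface/volume = 3/r: as an animal gets bigger, it has
# proportionally less surface through which to radiate heat.
for r in [1, 2, 4, 8]:
    surface = 4 * pi * r**2
    volume = (4 / 3) * pi * r**3
    print(f"r = {r}: surface/volume = {surface / volume:.3f}")
# r = 1: 3.000, r = 2: 1.500, r = 4: 0.750, r = 8: 0.375
```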
YC represents a new, smaller kind of animal—so much smaller that all the rules
are different.
Before us, most companies in the startup funding business were venture capital
funds. VCs generally fund later stage companies than we do. And they supply so
much money that, even though the other things they do may be very valuable,
it's not that inaccurate to regard VCs as sources of money. Good VCs are
"smart money," but they're still money.
All good investors supply a combination of money and help. But these scale
differently, just as volume and surface area do. Late stage investors supply
huge amounts of money and comparatively little help: when a company about to
go public gets a mezzanine round of $50 million, the deal tends to be almost
entirely about money. As you move earlier in the venture funding process, the
ratio of help to money increases, because earlier stage companies have
different needs. Early stage companies need less money because they're smaller
and cheaper to run, but they need more help because life is so precarious for
them. So when VCs do a series A round for, say, $2 million, they generally
expect to offer a significant amount of help along with the money.
Y Combinator occupies the earliest end of the spectrum. We're at least one and
generally two steps before VC funding. (Though some startups go straight from
YC to VC, the most common trajectory is to do an angel round first.) And what
happens at Y Combinator is as different from what happens in a series A round
as a series A round is from a mezzanine financing.
At our end, money is almost a negligible factor. The startup usually consists
of just the founders. Their living expenses are the company's main expense,
and since most founders are under 30, their living expenses are low. But at
this early stage companies need a lot of help. Practically every question is
still unanswered. Some companies we've funded have been working on their
software for a year or more, but others haven't decided what to work on, or
even who the founders should be.
When PR people and journalists recount the histories of startups after they've
become big, they always underestimate how uncertain things were at first.
They're not being deliberately misleading. When you look at a company like
Google, it's hard to imagine they could once have been small and helpless.
Sure, at one point they were just a couple of guys in a garage—but even then
their greatness was assured, and all they had to do was roll forward along the
railroad tracks of destiny.
Far from it. A lot of startups with just as promising beginnings end up
failing. Google has such momentum now that it would be hard for anyone to stop
them. But all it would have taken in the beginning would have been for two
Google employees to focus on the wrong things for six months, and the company
could have died.
We know, because we've been there, just how vulnerable startups are in the
earliest phases. Curiously enough, that's why founders tend to get so rich
from them. Reward is always proportionate to risk, and very early stage
startups are insanely risky.
What we really do at Y Combinator is get startups launched straight. One of
many metaphors you could use for YC is a steam catapult on an aircraft
carrier. We get startups airborne. Barely airborne, but enough that they can
accelerate fast.
When you're launching planes they have to be set up properly or you're just
launching projectiles. They have to be pointed straight down the deck; the
wings have to be trimmed properly; the engines have to be at full power; the
pilot has to be ready. These are the kind of problems we deal with. After we
fund startups we work closely with them for three months—so closely in fact
that we insist they move to where we are. And what we do in those three months
is make sure everything is set up for launch. If there are tensions between
cofounders we help sort them out. We get all the paperwork set up properly so
there are no nasty surprises later. If the founders aren't sure what to focus
on first, we try to figure that out. If there is some obstacle right in front
of them, we either try to remove it, or shift the startup sideways. The goal
is to get every distraction out of the way so the founders can use that time
to build (or finish building) something impressive. And then near the end of
the three months we push the button on the steam catapult in the form of Demo
Day, where the current group of startups present to pretty much every investor
in Silicon Valley.
Launching companies isn't identical with launching products. Though we do
spend a lot of time on launch strategies for products, there are some things
that take too long to build for a startup to launch them before raising their
next round of funding. Several of the most promising startups we've funded
haven't launched their products yet, but are definitely launched as companies.
In the earliest stage, startups not only have more questions to answer, but
they tend to be different kinds of questions. In later stage startups the
questions are about deals, or hiring, or organization. In the earliest phase
they tend to be about technology and design. What do you make? That's the
first problem to solve. That's why our motto is "Make something people want."
This is always a good thing for companies to do, but it's even more important
early on, because it sets the bounds for every other question. Who you hire,
how much money you raise, how you market yourself—they all depend on what
you're making.
Because the early problems are so much about technology and design, you
probably need to be hackers to do what we do. While some VCs have technical
backgrounds, I don't know any who still write code. Their expertise is mostly
in business—as it should be, because that's the kind of expertise you need in
the phase between series A and (if you're lucky) IPO.
We're so different from VCs that we're really a different kind of animal. Can
we claim founders are better off as a result of this new type of venture firm?
I'm pretty sure the answer is yes, because YC is an improved version of what
happened to our startup, and our case was not atypical. We started Viaweb with
$10,000 in seed money from our friend Julian. He was a lawyer and arranged all
our paperwork, so we could just code. We spent three months building a version
1, which we then presented to investors to raise more money. Sounds familiar,
doesn't it? But YC improves on that significantly. Julian knew a lot about law
and business, but his advice ended there; he was not a startup guy. So we made
some basic mistakes early on. And when we presented to investors, we presented
to only 2, because that was all we knew. If we'd had our later selves to
encourage and advise us, and Demo Day to present at, we would have been in
much better shape. We probably could have raised money at 3 to 5 times the
valuation we did.
If we take 7% of a company we fund, the founders only have to do
[7.5%](equity.html) better in their next round of funding to end up net ahead.
We certainly manage that.
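The arithmetic behind that figure, spelled out as a sketch (this is the standard dilution calculation, consistent with the linked essay): giving up a fraction x means the founders keep 1 - x, so they break even when the next round's valuation improves by a factor of 1/(1 - x).

```python
# If investors take a fraction x, founders keep (1 - x), so they break
# even when the next round's valuation grows by a factor of 1 / (1 - x).
x = 0.07
print(f"{1 / (1 - x) - 1:.3%}")  # 7.527%: the "7.5% better" in the text
```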
So who is our 7% coming out of? If the founders end up net ahead it's not
coming out of them. So is it coming out of later stage investors? Well, they
do end up paying more. But I think they pay more because the company is
actually more valuable. And later stage investors have no problem with that.
The returns of a VC fund depend on the quality of the companies they invest
in, not how cheaply they can buy stock in them.
If what we do is useful, why wasn't anyone doing it before? There are two
answers to that. One is that people were doing it before, just haphazardly on
a smaller scale. Before us, seed funding came primarily from individual angel
investors. Larry and Sergey, for example, got their seed funding from Andy
Bechtolsheim, one of the founders of Sun. And because he was a startup guy he
probably gave them useful advice. But raising money from angel investors is a
hit or miss thing. It's a sideline for most of them, so they only do a handful
of deals a year and they don't spend a lot of time on the startups they invest
in. And they're hard to reach, because they don't want random startups
pestering them with business plans. The Google guys were lucky because they
knew someone who knew Bechtolsheim. It generally takes a personal introduction
with angels.
The other reason no one was doing quite what we do is that till recently it
was a lot more expensive to start a startup. You'll notice we haven't funded
any biotech startups. That's still expensive. But advancing technology has
made web startups so cheap that you really can get a company airborne for
$15,000. If you understand how to operate a steam catapult, at least.
So in effect what's happened is that a new ecological niche has opened up, and
Y Combinator is the new kind of animal that has moved into it. We're not a
replacement for venture capital funds. We occupy a new, adjacent niche. And
conditions in our niche are really quite different. It's not just that the
problems we face are different; the whole structure of the business is
different. VCs are playing a zero-sum game. They're all competing for a slice
of a fixed amount of "deal flow," and that explains a lot of their behavior.
Whereas our m.o. is to create new deal flow, by encouraging hackers who would
have gotten jobs to start their own startups instead. We compete more with
employers than VCs.
It's not surprising something like this would happen. Most fields become more
specialized—more articulated—as they develop, and startups are certainly an
area in which there has been a lot of development over the past couple
decades. The venture business in its present form is only about forty years
old. It stands to reason it would evolve.
And it's natural that the new niche would at first be described, even by its
inhabitants, in terms of the old one. But really Y Combinator is not in the
startup funding business. Really we're more of a small, furry steam catapult.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this.
April 2005
This summer, as an experiment, some friends and I are giving [seed
funding](http://ycombinator.com) to a bunch of new startups. It's an
experiment because we're prepared to fund younger founders than most investors
would. That's why we're doing it during the summer—so even college students
can participate.
We know from Google and Yahoo that grad students can start successful
startups. And we know from experience that some undergrads are as capable as
most grad students. The accepted age for startup founders has been creeping
downward. We're trying to find the lower bound.
The deadline has now passed, and we're sifting through 227 applications. We
expected to divide them into two categories, promising and unpromising. But we
soon saw we needed a third: promising people with unpromising ideas. [1]
**The Artix Phase**
We should have expected this. It's very common for a group of founders to go
through one lame idea before realizing that a startup has to make something
people will pay for. In fact, we ourselves did.
Viaweb wasn't the first startup Robert Morris and I started. In January 1995,
we and a couple friends started a company called Artix. The plan was to put
art galleries on the Web. In retrospect, I wonder how we could have wasted our
time on anything so stupid. Galleries are not especially
[excited](http://www.knoedlergallery.com/) about being on the Web even now,
ten years later. They don't want to have their stock visible to any random
visitor, like an antique store. [2]
Besides which, art dealers are the most technophobic people on earth. They
didn't become art dealers after a difficult choice between that and a career
in the hard sciences. Most of them had never seen the Web before we came to
tell them why they should be on it. Some didn't even have computers. It
doesn't do justice to the situation to describe it as a hard _sell_; we soon
sank to building sites for free, and it was hard to convince galleries even to
do that.
Gradually it dawned on us that instead of trying to make Web sites for people
who didn't want them, we could make sites for people who did. In fact,
software that would let people who wanted sites make their own. So we ditched
Artix and started a new company, Viaweb, to make software for building online
stores. That one succeeded.
We're in good company here. Microsoft was not the first company Paul Allen and
Bill Gates started either. The first was called Traf-o-data. It does not seem
to have done as well as Micro-soft.
In Robert's defense, he was skeptical about Artix. I dragged him into it. [3]
But there were moments when he was optimistic. And if we, who were 29 and 30
at the time, could get excited about such a thoroughly boneheaded idea, we
should not be surprised that hackers aged 21 or 22 are pitching us ideas with
little hope of making money.
**The Still Life Effect**
Why does this happen? Why do good hackers have bad business ideas?
Let's look at our case. One reason we had such a lame idea was that it was the
first thing we thought of. I was in New York trying to be a starving artist at
the time (the starving part is actually quite easy), so I was haunting
galleries anyway. When I learned about the Web, it seemed natural to mix the
two. Make Web sites for galleries—that's the ticket!
If you're going to spend years working on something, you'd think it might be
wise to spend at least a couple days considering different ideas, instead of
going with the first that comes into your head. You'd think. But people don't.
In fact, this is a constant problem when you're painting still lifes. You
plonk down a bunch of stuff on a table, and maybe spend five or ten minutes
rearranging it to look interesting. But you're so impatient to get started
painting that ten minutes of rearranging feels very long. So you start
painting. Three days later, having spent twenty hours staring at it, you're
kicking yourself for having set up such an awkward and boring composition, but
by then it's too late.
Part of the problem is that big projects tend to grow out of small ones. You
set up a still life to make a quick sketch when you have a spare hour, and
days later you're still working on it. I once spent a month painting three
versions of a still life I set up in about four minutes. At each point (a day,
a week, a month) I thought I'd already put in so much time that it was too
late to change.
So the biggest cause of bad ideas is the still life effect: you come up with a
random idea, plunge into it, and then at each point (a day, a week, a month)
feel you've put so much time into it that this must be _the_ idea.
How do we fix that? I don't think we should discard plunging. Plunging into an
idea is a good thing. The solution is at the other end: to realize that having
invested time in something doesn't make it good.
This is clearest in the case of names. Viaweb was originally called Webgen,
but we discovered someone else had a product called that. We were so attached
to our name that we offered him _5% of the company_ if he'd let us have it.
But he wouldn't, so we had to think of another. [4] The best we could do was
Viaweb, which we disliked at first. It was like having a new mother. But
within three days we loved it, and Webgen sounded lame and old-fashioned.
If it's hard to change something so simple as a name, imagine how hard it is
to garbage-collect an idea. A name only has one point of attachment into your
head. An idea for a company gets woven into your thoughts. So you must
consciously discount for that. Plunge in, by all means, but remember later to
look at your idea in the harsh light of morning and ask: is this something
people will pay for? Is this, of all the things we could make, the thing
people will pay most for?
**Muck**
The second mistake we made with Artix is also very common. Putting galleries
on the Web seemed cool.
One of the most valuable things my father taught me is an old Yorkshire
saying: where there's muck, there's brass. Meaning that unpleasant work pays.
And more to the point here, vice versa. Work people like doesn't pay well, for
reasons of supply and demand. The most extreme case is developing programming
languages, which doesn't pay at all, because people like it so much they do it
for free.
When we started Artix, I was still ambivalent about business. I wanted to keep
one foot in the art world. Big, big mistake. Going into business is like a
hang-glider launch: you'd better do it wholeheartedly, or not at all. The
purpose of a company, and a startup especially, is to make money. You can't
have divided loyalties.
Which is not to say that you have to do the most disgusting sort of work, like
spamming, or starting a company whose only purpose is patent litigation. What
I mean is, if you're starting a company that will do something cool, the aim
had better be to make money and maybe be cool, not to be cool and maybe make
money.
It's hard enough to make money that you can't do it by accident. Unless it's
your first priority, it's unlikely to happen at all.
**Hyenas**
When I probe our motives with Artix, I see a third mistake: timidity. If you'd
proposed at the time that we go into the e-commerce business, we'd have found
the idea terrifying. Surely a field like that would be dominated by fearsome
startups with five million dollars of VC money each. Whereas we felt pretty
sure that we could hold our own in the slightly less competitive business of
generating Web sites for art galleries.
We erred ridiculously far on the side of safety. As it turns out, VC-backed
startups are not that fearsome. They're too busy trying to spend all that
[money](venturecapital.html) to get software written. In 1995, the e-commerce
business was very competitive as measured in press releases, but not as
measured in software. And really it never was. The big fish like Open Market
(rest their souls) were just consulting companies pretending to be product
companies [5], and the offerings at our end of the market were a couple
hundred lines of Perl scripts. Or rather, they could have been implemented in
a couple hundred lines of Perl; in fact they were probably tens of thousands
of lines of C++ or Java. Once we actually took the plunge into e-commerce, it turned
out to be surprisingly easy to compete.
So why were we afraid? We felt we were good at programming, but we lacked
confidence in our ability to do a mysterious, undifferentiated thing we called
"business." In fact there is no such thing as "business." There's selling,
promotion, figuring out what people want, deciding how much to charge,
customer support, paying your bills, getting customers to pay you, getting
incorporated, raising money, and so on. And the combination is not as hard as
it seems, because some tasks (like raising money and getting incorporated) are
an O(1) pain in the ass, whether you're big or small, and others (like selling
and promotion) depend more on energy and imagination than any kind of special
training.
Artix was like a hyena, content to survive on carrion because we were afraid
of the lions. Except the lions turned out not to have any teeth, and the
business of putting galleries online barely qualified as carrion.
**A Familiar Problem**
Sum up all these sources of error, and it's no wonder we had such a bad idea
for a company. We did the first thing we thought of; we were ambivalent about
being in business at all; and we deliberately chose an impoverished market to
avoid competition.
Looking at the applications for the Summer Founders Program, I see signs of
all three. But the first is by far the biggest problem. Most of the groups
applying have not stopped to ask: of all the things we could do, is _this_ the
one with the best chance of making money?
If they'd already been through their Artix phase, they'd have learned to ask
that. After the reception we got from art dealers, we were ready to. This
time, we thought, let's make something people want.
Reading the _Wall Street Journal_ for a week should give anyone ideas for two
or three new startups. The articles are full of descriptions of problems that
need to be solved. But most of the applicants don't seem to have looked far
for ideas.
We expected the most common proposal to be for multiplayer games. We were not
far off: this was the second most common. The most common was some combination
of a blog, a calendar, a dating site, and Friendster. Maybe there is some new
killer app to be discovered here, but it seems perverse to go poking around in
this fog when there are valuable, unsolved problems lying about in the open
for anyone to see. Why did no one propose a new scheme for micropayments? An
ambitious project, perhaps, but I can't believe we've considered every
alternative. And newspapers and magazines are (literally) dying for a
solution.
Why did so few applicants really think about what customers want? I think the
problem with many, as with people in their early twenties generally, is that
they've been trained their whole lives to jump through predefined hoops.
They've spent 15-20 years solving problems other people have set for them. And
how much time deciding what problems would be good to solve? Two or three
course projects? They're good at solving problems, but bad at choosing them.
But that, I'm convinced, is just the effect of training. Or more precisely,
the effect of grading. To make grading efficient, everyone has to solve the
same problem, and that means it has to be decided in advance. It would be
great if schools taught students how to choose problems as well as how to
solve them, but I don't know how you'd run such a class in practice.
**Copper and Tin**
The good news is, choosing problems is something that can be learned. I know
that from experience. Hackers can learn to make things customers want. [6]
This is a controversial view. One expert on "entrepreneurship" told me that
any startup had to include business people, because only they could focus on
what customers wanted. I'll probably alienate this guy forever by quoting him,
but I have to risk it, because his email was such a perfect example of this
view:
> 80% of MIT spinoffs succeed _provided_ they have at least one management
> person in the team at the start. The business person represents the "voice
> of the customer" and that's what keeps the engineers and product development
> on track.
This is, in my opinion, a crock. Hackers are perfectly capable of hearing the
voice of the customer without a business person to amplify the signal for
them. Larry Page and Sergey Brin were grad students in computer science, which
presumably makes them "engineers." Do you suppose Google is only good because
they had some business guy whispering in their ears what customers wanted? It
seems to me the business guys who did the most for Google were the ones who
obligingly flew Altavista into a hillside just as Google was getting started.
The hard part about figuring out what customers want is figuring out that you
need to figure it out. But that's something you can learn quickly. It's like
seeing the other interpretation of an ambiguous picture. As soon as someone
tells you there's a rabbit as well as a duck, it's hard not to see it.
And compared to the sort of problems hackers are used to solving, giving
customers what they want is easy. Anyone who can write an optimizing compiler
can design a UI that doesn't confuse users, once they _choose_ to focus on
that problem. And once you apply that kind of brain power to petty but
profitable questions, you can create wealth very rapidly.
That's the essence of a startup: having brilliant people do work that's
beneath them. Big companies try to hire the right person for the job. Startups
win because they don't—because they take people so smart that they would in a
big company be doing "research," and set them to work instead on problems of
the most immediate and mundane sort. Think Einstein designing refrigerators.
[7]
If you want to learn what people want, read Dale Carnegie's _How to Win
Friends and Influence People._ [8] When a friend recommended this book, I
couldn't believe he was serious. But he insisted it was good, so I read it,
and he was right. It deals with the most difficult problem in human
experience: how to see things from other people's point of view, instead of
thinking only of yourself.
Most smart people don't do that very well. But adding this ability to raw
brainpower is like adding tin to copper. The result is bronze, which is so
much harder that it seems a different metal.
A hacker who has learned what to make, and not just how to make, is
extraordinarily powerful. And not just at making money: look what a small
group of volunteers has achieved with Firefox.
Doing an Artix teaches you to make something people want in the same way that
not drinking anything would teach you how much you depend on water. But it
would be more convenient for all involved if the Summer Founders didn't learn
this on our dime—if they could skip the Artix phase and go right on to make
something customers wanted. That, I think, is going to be the real experiment
this summer. How long will it take them to grasp this?
We decided we ought to have T-Shirts for the SFP, and we'd been thinking about
what to print on the back. Till now we'd been planning to use
> If you can read this, I should be working.
but now we've decided it's going to be
> Make something people want.
**Notes**
[1] SFP applicants: please don't assume that not being accepted means we think
your idea is bad. Because we want to keep the number of startups small this
first summer, we're going to have to turn down some good proposals too.
[2] Dealers try to give each customer the impression that the stuff they're
showing him is something special that only a few people have seen, when in
fact it may have been sitting in their racks for years while they tried to
unload it on buyer after buyer.
[3] On the other hand, he was skeptical about Viaweb too. I have a precise
measure of that, because at one point in the first couple months we made a
bet: if he ever made a million dollars out of Viaweb, he'd get his ear
pierced. We didn't let him [off](pierced.html), either.
[4] I wrote a program to generate all the combinations of "Web" plus a three
letter word. I learned from this that most three letter words are bad: Webpig,
Webdog, Webfat, Webzit, Webfug. But one of them was Webvia; I swapped them to
make Viaweb.
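As a sketch of what such a program might have looked like (the original is long gone, so this is a guess, written in Python and assuming a standard Unix wordlist):

```python
# Hypothetical reconstruction of the name generator: "Web" plus
# every three-letter word from a system wordlist.
with open("/usr/share/dict/words") as f:
    words = {w.strip().lower() for w in f
             if len(w.strip()) == 3 and w.strip().isalpha()}

for w in sorted(words):
    print("Web" + w)   # Webpig, Webdog, ... and Webvia, swapped to make Viaweb
```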
[5] It's much easier to sell services than a product, just as it's easier to
make a living playing at weddings than by selling recordings. But the margins
are greater on products. So during the Bubble a lot of companies used
consulting to generate revenues they could attribute to the sale of products,
because it made a better story for an IPO.
[6] Trevor Blackwell presents the following recipe for a startup: "Watch
people who have money to spend, see what they're wasting their time on, cook
up a solution, and try selling it to them. It's surprising how small a problem
can be and still provide a profitable market for a solution."
[7] You need to offer especially large rewards to get great people to do
tedious work. That's why startups always pay equity rather than just salary.
[8] Buy an
[old](http://dogbert.abebooks.com/servlet/SearchResults?bx=on&sts=t&ds=30&bi=0&an=carnegie&kn=1938+OR+1939+OR+1940+OR+1941+OR+1942+OR+1943+OR+1944+OR+1945+OR+1946+OR+1947+OR+1948&tn=influence+friends&sortby=2)
copy from the 1940s or 50s instead of the current edition, which has been
rewritten to suit present fashions. The original edition contained a few unPC
ideas, but it's always better to read an original book, bearing in mind that
it's a book from a past era, than to read a new version sanitized for your
protection.
**Thanks** to Bill Birch, Trevor Blackwell, Jessica Livingston, and Robert
Morris for reading drafts of this.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
September 2012
I've done several types of work over the years but I don't know another as
counterintuitive as startup investing.
The two most important things to understand about startup investing, as a
business, are (1) that effectively all the returns are concentrated in a few
big winners, and (2) that the best ideas look initially like bad ideas.
The first rule I knew intellectually, but didn't really grasp till it happened
to us. The total value of the companies we've funded is around 10 billion,
give or take a few. But just two companies, Dropbox and Airbnb, account for
about three quarters of it.
In startups, the big winners are big to a degree that violates our
expectations about variation. I don't know whether these expectations are
innate or learned, but whatever the cause, we are just not prepared for the
1000x variation in outcomes that one finds in startup investing.
That yields all sorts of strange consequences. For example, in purely
financial terms, there is probably at most one company in each YC batch that
will have a significant effect on our returns, and the rest are just a cost of
doing business. [1] I haven't really assimilated that fact, partly because
it's so counterintuitive, and partly because we're not doing this just for
financial reasons; YC would be a pretty lonely place if we only had one
company per batch. And yet it's true.
To succeed in a domain that violates your intuitions, you need to be able to
turn them off the way a pilot does when flying through clouds. [2] You need to
do what you know intellectually to be right, even though it feels wrong.
It's a constant battle for us. It's hard to make ourselves take enough risks.
When you interview a startup and think "they seem likely to succeed," it's
hard not to fund them. And yet, financially at least, there is only one kind
of success: they're either going to be one of the really big winners or not,
and if not it doesn't matter whether you fund them, because even if they
succeed the effect on your returns will be insignificant. In the same day of
interviews you might meet some smart 19 year olds who aren't even sure what
they want to work on. Their chances of succeeding seem small. But again, it's
not their chances of succeeding that matter but their chances of succeeding
really big. The probability that any group will succeed really big is
microscopically small, but the probability that those 19 year olds will do so
might be higher than that of the other, safer group.
The probability that a startup will make it big is not simply a constant
fraction of the probability that they will succeed at all. If it were, you
could fund everyone who seemed likely to succeed at all, and you'd get that
fraction of big hits. Unfortunately picking winners is harder than that. You
have to ignore the elephant in front of you, the likelihood they'll succeed,
and focus instead on the separate and almost invisibly intangible question of
whether they'll succeed really big.
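To make that concrete, here's a toy calculation with invented numbers (not YC data): a "safe" group with a good chance of modest success versus a risky group with a slightly better chance of a huge outcome.

```python
# Invented numbers, purely to illustrate the shape of the argument:
# expected return is dominated by the small chance of a huge outcome.
def expected_return(p_success, p_big, big=10000, small=2):
    # p_big: chance of a 10,000x winner; the other successes
    # return only a modest multiple.
    return p_big * big + (p_success - p_big) * small

print(expected_return(p_success=0.5, p_big=0.001))  # "safe":  ~11x
print(expected_return(p_success=0.1, p_big=0.005))  # "risky": ~50x
```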
**Harder**
That's made harder by the fact that the best startup ideas seem at first like
bad ideas. I've written about this before: if a good idea were obviously good,
someone else would already have done it. So the most successful founders tend
to work on ideas that few beside them realize are good. Which is not that far
from a description of insanity, till you reach the point where you see
results.
The first time Peter Thiel spoke at YC he drew a Venn diagram that illustrates
the situation perfectly. He drew two intersecting circles, one labelled "seems
like a bad idea" and the other "is a good idea." The intersection is the sweet
spot for startups.
This concept is a simple one and yet seeing it as a Venn diagram is
illuminating. It reminds you that there is an intersection—that there are good
ideas that seem bad. It also reminds you that the vast majority of ideas that
seem bad are bad.
The fact that the best ideas seem like bad ideas makes it even harder to
recognize the big winners. It means the probability of a startup making it
really big is not merely not a constant fraction of the probability that it
will succeed, but that the startups with a high probability of the former will
seem to have a disproportionately low probability of the latter.
History tends to get rewritten by big successes, so that in retrospect it
seems obvious they were going to make it big. For that reason one of my most
valuable memories is how lame Facebook sounded to me when I first heard about
it. A site for college students to waste time? It seemed the perfect bad idea:
a site (1) for a niche market (2) with no money (3) to do something that
didn't matter.
One could have described Microsoft and Apple in exactly the same terms. [3]
**Harder Still**
Wait, it gets worse. You not only have to solve this hard problem, but you
have to do it with no indication of whether you're succeeding. When you pick a
big winner, you won't know it for two years.
Meanwhile, the one thing you _can_ measure is dangerously misleading. The one
thing we can track precisely is how well the startups in each batch do at
fundraising after Demo Day. But we know that's the wrong metric. There's no
correlation between the percentage of startups that raise money and the metric
that does matter financially, whether that batch of startups contains a big
winner or not.
Except an inverse one. That's the scary thing: fundraising is not merely a
useless metric, but positively misleading. We're in a business where we need
to pick unpromising-looking outliers, and the huge scale of the successes
means we can afford to spread our net very widely. The big winners could
generate 10,000x returns. That means for each big winner we could pick a
thousand companies that returned nothing and still end up 10x ahead.
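The arithmetic, spelled out with the same round numbers:

```python
# One 10,000x winner among a thousand and one equal-sized investments
# still returns roughly 10x the whole portfolio.
winner_multiple = 10_000
companies = 1_001                    # 1 big winner + 1,000 that return nothing
print(winner_multiple / companies)   # ~9.99x
```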
If we ever got to the point where 100% of the startups we funded were able to
raise money after Demo Day, it would almost certainly mean we were being too
conservative. [4]
It takes a conscious effort not to do that too. After 15 cycles of preparing
startups for investors and then watching how they do, I can now look at a
group we're interviewing through Demo Day investors' eyes. But those are the
wrong eyes to look through!
We can afford to take at least 10x as much risk as Demo Day investors. And
since risk is usually proportionate to reward, if you can afford to take more
risk you should. What would it mean to take 10x more risk than Demo Day
investors? We'd have to be willing to fund 10x more startups than they would.
Which means that even if we're generous to ourselves and assume that YC can on
average triple a startup's expected value, we'd be taking the right amount of
risk if only 30% of the startups were able to raise significant funding after
Demo Day.
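Rendered as arithmetic, using the numbers stipulated above:

```python
# If we can take 10x the risk of Demo Day investors, and we assume YC
# triples a startup's expected value, the right fraction clearing the
# investors' bar works out to roughly:
risk_multiple = 10     # fund 10x as many startups as investors would
value_boost = 3        # assume YC triples expected value
print(value_boost / risk_multiple)   # 0.3, i.e. 30% fundable
```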
I don't know what fraction of them currently raise more after Demo Day. I
deliberately avoid calculating that number, because if you start measuring
something you start optimizing it, and I know it's the wrong thing to
optimize. [5] But the percentage is certainly way over 30%. And frankly the
thought of a 30% success rate at fundraising makes my stomach clench. A Demo
Day where only 30% of the startups were fundable would be a shambles. Everyone
would agree that YC had jumped the shark. We ourselves would feel that YC had
jumped the shark. And yet we'd all be wrong.
For better or worse that's never going to be more than a thought experiment.
We could never stand it. How about that for counterintuitive? I can lay out
what I know to be the right thing to do, and still not do it. I can make up
all sorts of plausible justifications. It would hurt YC's brand (at least
among the innumerate) if we invested in huge numbers of risky startups that
flamed out. It might dilute the value of the alumni network. Perhaps most
convincingly, it would be demoralizing for us to be up to our chins in failure
all the time. But I know the real reason we're so conservative is that we just
haven't assimilated the fact of 1000x variation in returns.
We'll probably never be able to bring ourselves to take risks proportionate to
the returns in this business. The best we can hope for is that when we
interview a group and find ourselves thinking "they seem like good founders,
but what are investors going to think of this crazy idea?" we'll continue to
be able to say "who cares what investors think?" That's what we thought about
Airbnb, and if we want to fund more Airbnbs we have to stay good at thinking
it.
**Notes**
[1] I'm not saying that the big winners are all that matters, just that
they're all that matters financially for investors. Since we're not doing YC
mainly for financial reasons, the big winners aren't all that matters to us.
We're delighted to have funded Reddit, for example. Even though we made
comparatively little from it, Reddit has had a big effect on the world, and it
introduced us to Steve Huffman and Alexis Ohanian, both of whom have become
good friends.
Nor do we push founders to try to become one of the big winners if they don't
want to. We didn't "swing for the fences" in our own startup (Viaweb, which
was acquired for $50 million), and it would feel pretty bogus to press
founders to do something we didn't do. Our rule is that it's up to the
founders. Some want to take over the world, and some just want that first few
million. But we invest in so many companies that we don't have to sweat any
one outcome. In fact, we don't have to sweat whether startups have exits at
all. The biggest exits are the only ones that matter financially, and those
are guaranteed in the sense that if a company becomes big enough, a market for
its shares will inevitably arise. Since the remaining outcomes don't have a
significant effect on returns, it's cool with us if the founders want to sell
early for a small amount, or grow slowly and never sell (i.e. become a so-
called lifestyle business), or even shut the company down. We're sometimes
disappointed when a startup we had high hopes for doesn't do well, but this
disappointment is mostly the ordinary variety that anyone feels when that
happens.
[2] Without visual cues (e.g. the horizon) you can't distinguish between
gravity and acceleration. Which means if you're flying through clouds you
can't tell what the attitude of the aircraft is. You could feel like you're
flying straight and level while in fact you're descending in a spiral. The
solution is to ignore what your body is telling you and listen only to your
instruments. But it turns out to be very hard to ignore what your body is
telling you. Every pilot knows about this
[problem](http://en.wikipedia.org/wiki/Spatial_disorientation) and yet it is
still a leading cause of accidents.
[3] Not all big hits follow this pattern though. The reason Google seemed a
bad idea was that there were already lots of search engines and there didn't
seem to be room for another.
[4] A startup's success at fundraising is a function of two things: what
they're selling and how good they are at selling it. And while we can teach
startups a lot about how to appeal to investors, even the most convincing
pitch can't sell an idea that investors don't like. I was genuinely worried
that Airbnb, for example, would not be able to raise money after Demo Day. I
couldn't convince [Fred Wilson](airbnb.html) to fund them. They might not have
raised money at all but for the coincidence that Greg McAdoo, our contact at
Sequoia, was one of a handful of VCs who understood the vacation rental
business, having spent much of the previous two years investigating it.
[5] I calculated it once for the last batch before a consortium of investors
started offering investment automatically to every startup we funded, summer
2010. At the time it was 94% (33 of 35 companies that tried to raise money
succeeded, and one didn't try because they were already profitable).
Presumably it's lower now because of that investment; in the old days it was
raise after Demo Day or die.
**Thanks** to Sam Altman, Paul Buchheit, Patrick Collison, Jessica Livingston,
Geoff Ralston, and Harj Taggar for reading drafts of this.
November 2004
_(This is a new essay for the Japanese edition of [Hackers &
Painters](http://www.amazon.com/exec/obidos/tg/detail/-/0596006624). It tries
to explain why Americans make some things well and others badly.)_
A few years ago an Italian friend of mine travelled by train from Boston to
Providence. She had only been in America for a couple weeks and hadn't seen
much of the country yet. She arrived looking astonished. "It's so _ugly!"_
People from other rich countries can scarcely imagine the squalor of the man-
made bits of America. In travel books they show you mostly natural
environments: the Grand Canyon, whitewater rafting, horses in a field. If you
see pictures with man-made things in them, it will be either a view of the New
York skyline shot from a discreet distance, or a carefully cropped image of a
seacoast town in Maine.
How can it be, visitors must wonder. How can the richest country in the world
look like this?
Oddly enough, it may not be a coincidence. Americans are good at some things
and bad at others. We're good at making movies and software, and bad at making
cars and cities. And I think we may be good at what we're good at for the same
reason we're bad at what we're bad at. We're impatient. In America, if you
want to do something, you don't worry that it might come out badly, or upset
delicate social balances, or that people might think you're getting above
yourself. If you want to do something, as Nike says, _just do it._
This works well in some fields and badly in others. I suspect it works in
movies and software because they're both messy processes. "Systematic" is the
last word I'd use to describe the way [good programmers](gh.html) write
software. Code is not something they assemble painstakingly after careful
planning, like the pyramids. It's something they plunge into, working fast and
constantly changing their minds, like a charcoal sketch.
In software, paradoxical as it sounds, good craftsmanship means working fast.
If you work slowly and meticulously, you merely end up with a very fine
implementation of your initial, mistaken idea. Working slowly and meticulously
is premature optimization. Better to get a prototype done fast, and see what
new ideas it gives you.
It sounds like making movies works a lot like making software. Every movie is
a Frankenstein, full of imperfections and usually quite different from what
was originally envisioned. But interesting, and finished fairly quickly.
I think we get away with this in movies and software because they're both
malleable mediums. Boldness pays. And if at the last minute two parts don't
quite fit, you can figure out some hack that will at least conceal the
problem.
Not so with cars, or cities. They are all too physical. If the car business
worked like software or movies, you'd surpass your competitors by making a car
that weighed only fifty pounds, or folded up to the size of a motorcycle when
you wanted to park it. But with physical products there are more constraints.
You don't win by dramatic innovations so much as by good taste and attention
to detail.
The trouble is, the very word "taste" sounds slightly ridiculous to American
ears. It seems pretentious, or frivolous, or even effeminate. Blue staters
think it's "subjective," and red staters think it's for sissies. So anyone in
America who really cares about design will be sailing upwind.
Twenty years ago we used to hear that the problem with the US car industry was
the workers. We don't hear that any more now that Japanese companies are
building cars in the US. The problem with American cars is bad design. You can
see that just by looking at them.
All that extra sheet metal on the [AMC Matador](matador.html) wasn't added by
the workers. The problem with this car, as with American cars today, is that
it was designed by marketing people instead of designers.
Why do the Japanese make better cars than us? Some say it's because their
culture encourages cooperation. That may come into it. But in this case it
seems more to the point that their culture prizes design and craftsmanship.
For centuries the Japanese have made finer things than we have in the West.
When you look at swords they made in 1200, you just can't believe the date on
the label is right. Presumably their cars fit together more precisely than
ours for the same reason their joinery always has. They're obsessed with
making things well.
Not us. When we make something in America, our aim is just to get the job
done. Once we reach that point, we take one of two routes. We can stop there,
and have something crude but serviceable, like a Vise-grip. Or we can improve
it, which usually means encrusting it with gratuitous ornament. When we want
to make a car "better," we stick [tail fins](59eldorado.html) on it, or make
it [longer](75eldorado.html), or make the [windows smaller](04magnum.html),
depending on the current fashion.
Ditto for houses. In America you can have either a flimsy box banged together
out of two by fours and drywall, or a McMansion-- a flimsy box banged together
out of two by fours and drywall, but larger, more dramatic-looking, and full
of expensive fittings. Rich people don't get better design or craftsmanship;
they just get a larger, more conspicuous version of the standard house.
We don't especially prize design or craftsmanship here. What we like is speed,
and we're willing to do something in an ugly way to get it done fast. In some
fields, like software or movies, this is a net win.
But it's not just that software and movies are malleable mediums. In those
businesses, the designers (though they're not generally called that) have more
power. Software companies, at least successful ones, tend to be run by
programmers. And in the film industry, though producers may second-guess
directors, the director controls most of what appears on the screen. And so
American software and movies, and Japanese cars, all have this in common: the
people in charge care about design-- the former because the designers are in
charge, and the latter because the whole culture cares about design.
I think most Japanese executives would be horrified at the idea of making a
bad car. Whereas American executives, in their hearts, still believe the most
important thing about a car is the image it projects. Make a good car? What's
"good?" It's so _subjective._ If you want to know how to design a car, ask a
focus group.
Instead of relying on their own internal design compass (like Henry Ford did),
American car companies try to make what marketing people think consumers want.
But it isn't working. American cars continue to lose market share. And the
reason is that the customer doesn't want what he thinks he wants.
Letting focus groups design your cars for you only wins in the short term. In
the long term, it pays to bet on good design. The focus group may say they
want the meretricious feature du jour, but what they want even more is to
imitate sophisticated buyers, and they, though a small minority, really do
care about good design. Eventually the pimps and drug dealers notice that the
doctors and lawyers have switched from Cadillac to Lexus, and do the same.
Apple is an interesting counterexample to the general American trend. If you
want to buy a nice CD player, you'll probably buy a Japanese one. But if you
want to buy an MP3 player, you'll probably buy an iPod. What happened? Why
doesn't Sony dominate MP3 players? Because Apple is in the consumer
electronics business now, and unlike other American companies, they're
obsessed with good design. Or more precisely, their CEO is.
I just got an iPod, and it's not just nice. It's _surprisingly_ nice. For it
to surprise me, it must be satisfying expectations I didn't know I had. No
focus group is going to discover those. Only a great designer can.
Cars aren't the worst thing we make in America. Where the just-do-it model
fails most dramatically is in our cities-- or rather, [exurbs](denver.html).
If real estate developers operated on a large enough scale, if they built
whole towns, market forces would compel them to build towns that didn't suck.
But they only build a couple office buildings or suburban streets at a time,
and the result is so depressing that the inhabitants consider it a great treat
to fly to Europe and spend a couple weeks living what is, for people there,
just everyday life. [1]
But the just-do-it model does have advantages. It seems the clear winner for
generating wealth and technical innovations (which are practically the same
thing). I think speed is the reason. It's hard to create wealth by making a
commodity. The real value is in things that are new, and if you want to be the
first to make something, it helps to work fast. For better or worse, the just-
do-it model is fast, whether you're Dan Bricklin writing the prototype of
VisiCalc in a weekend, or a real estate developer building a block of shoddy
condos in a month.
If I had to choose between the just-do-it model and the careful model, I'd
probably choose just-do-it. But do we have to choose? Could we have it both
ways? Could Americans have nice places to live without undermining the
impatient, individualistic spirit that makes us good at software? Could other
countries introduce more individualism into their technology companies and
research labs without having it metastasize as strip malls? I'm optimistic.
It's harder to say about other countries, but in the US, at least, I think we
can have both.
Apple is an encouraging example. They've managed to preserve enough of the
impatient, hackerly spirit you need to write software. And yet when you pick
up a new Apple laptop, well, it doesn't seem American. It's too perfect. It
seems as if it must have been made by a Swedish or a Japanese company.
In many technologies, version 2 has higher resolution. Why not in design
generally? I think we'll gradually see national characters superseded by
occupational characters: hackers in Japan will be allowed to behave with a
[willfulness](gba.html) that would now seem unJapanese, and products in
America will be designed with an insistence on [taste](taste.html) that would
now seem unAmerican. Perhaps the most successful countries, in the future,
will be those most willing to ignore what are now considered national
characters, and do each kind of work in the way that works best. Race you.
**Notes**
[1] Japanese cities are ugly too, but for different reasons. Japan is prone to
earthquakes, so buildings are traditionally seen as temporary; there is no
grand tradition of city planning like the one Europeans inherited from Rome.
The other cause is the notoriously corrupt relationship between the government
and construction companies.
**Thanks** to Trevor Blackwell, Barry Eisler, Sarah Harlin, Shiro Kawai,
Jessica Livingston, Jackie McDonough, Robert Morris, and Eric Raymond for
reading drafts of this.
October 2007
After the last [talk](webstartups.html) I gave, one of the organizers got up
on the stage to deliver an impromptu rebuttal. That never happened before. I
only heard the first few sentences, but that was enough to tell what I said
that upset him: that startups would do better if they moved to Silicon Valley.
This conference was in London, and most of the audience seemed to be from the
UK. So saying startups should move to Silicon Valley seemed like a
nationalistic remark: an obnoxious American telling them that if they wanted
to do things right they should all just move to America.
Actually I'm less American than I seem. I didn't say so, but I'm British by
birth. And just as Jews are ex officio allowed to tell Jewish jokes, I don't
feel like I have to bother being diplomatic with a British audience.
The idea that startups would do better to move to Silicon Valley is not even a
nationalistic one. [1] It's the same thing I say to startups in the US. Y
Combinator alternates between coasts every 6 months. Every other funding cycle
is in Boston. And even though Boston is the second biggest startup hub in the
US (and the world), we tell the startups from those cycles that their best bet
is to move to Silicon Valley. If that's true of Boston, it's even more true of
every other city.
This is about cities, not countries.
And I think I can prove I'm right. You can easily reduce the opposing argument
ad what most people would agree was absurdum. Few would be willing to claim
that it doesn't matter at all where a startup is—that a startup operating out
of a small agricultural town wouldn't benefit from moving to a startup hub.
Most people could see how it might be helpful to be in a place where there was
infrastructure for startups, accumulated knowledge about how to make them
work, and other people trying to do it. And yet whatever argument you use to
prove that startups don't need to move from London to Silicon Valley could
equally well be used to prove startups don't need to move from smaller towns
to London.
The difference between cities is a matter of degree. And if, as nearly
everyone who knows agrees, startups are better off in Silicon Valley than
Boston, then they're better off in Silicon Valley than everywhere else too.
I realize I might seem to have a vested interest in this conclusion, because
startups that move to the US might do it through Y Combinator. But the
American startups we've funded will attest that I say the same thing to them.
I'm not claiming of course that every startup has to go to Silicon Valley to
succeed. Just that all other things being equal, the more of a startup hub a
place is, the better startups will do there. But other considerations can
outweigh the advantages of moving. I'm not saying founders with families
should uproot them to move halfway around the world; that might be too much of
a distraction.
Immigration difficulties might be another reason to stay put. Dealing with
immigration problems is like raising money: for some reason it seems to
consume all your attention. A startup can't afford much of that. One Canadian
startup we funded spent about 6 months working on moving to the US. Eventually
they just gave up, because they couldn't afford to take so much time away from
working on their software.
(If another country wanted to establish a rival to Silicon Valley, the single
best thing they could do might be to create a special visa for startup
founders. US immigration policy is one of Silicon Valley's biggest
weaknesses.)
If your startup is connected to a specific industry, you may be better off in
one of its centers. A startup doing something related to entertainment might
want to be in New York or LA.
And finally, if a good investor has committed to fund you if you stay where
you are, you should probably stay. Finding investors is hard. You generally
shouldn't pass up a definite funding offer to move. [2]
In fact, the quality of the investors may be the main advantage of startup
hubs. Silicon Valley investors are noticeably more aggressive than Boston
ones. Over and over, I've seen startups we've funded snatched by west coast
investors out from under the noses of Boston investors who saw them first but
acted too slowly. At this year's Boston Demo Day, I told the audience that
this happened every year, so if they saw a startup they liked, they should
make them an offer. And yet within a month it had happened again: an
aggressive west coast VC who had met the founder of a YC-funded startup a week
before beat out a Boston VC who had known him for years. By the time the
Boston VC grasped what was happening, the deal was already gone.
Boston investors will admit they're more conservative. Some want to believe
this comes from the city's prudent Yankee character. But Occam's razor
suggests the truth is less flattering. Boston investors are probably more
conservative than Silicon Valley investors for the same reason Chicago
investors are more conservative than Boston ones. They don't understand
startups as well.
West coast investors aren't bolder because they're irresponsible cowboys, or
because the good weather makes them optimistic. They're bolder because they
know what they're doing. They're the skiers who ski on the diamond slopes.
Boldness is the essence of venture investing. The way you get big returns is
not by trying to avoid losses, but by trying to ensure you get some of the big
hits. And the big hits often look risky at first.
Like Facebook. Facebook was started in Boston. Boston VCs had the first shot
at them. But they said no, so Facebook moved to Silicon Valley and raised
money there. The partner who turned them down now says that "may turn out to
have been a mistake."
Empirically, boldness wins. If the aggressive ways of west coast investors are
going to come back to bite them, it has been a long time coming. Silicon
Valley has been pulling ahead of Boston since the 1970s. If there was going to
be a comeuppance for the west coast investors, the bursting of the Bubble
would have been it. But since then the west coast has just pulled further
ahead.
West coast investors are confident enough of their judgement to act boldly;
east coast investors, not so much; but anyone who thinks east coast investors
act that way out of prudence should see the frantic reactions of an east coast
VC in the process of losing a deal to a west coast one.
In addition to the concentration that comes from specialization, startup hubs
are also markets. And markets are usually centralized. Even now, when traders
could be anywhere, they cluster in a few cities. It's hard to say exactly what
it is about face to face contact that makes deals happen, but whatever it is,
it hasn't yet been duplicated by technology.
Walk down University Ave at the right time, and you might overhear five
different people talking on the phone about deals. In fact, this is part of
the reason Y Combinator is in Boston half the time: it's hard to stand that
year round. But though it can sometimes be annoying to be surrounded by people
who only think about one thing, it's the place to be if that one thing is what
you're trying to do.
I was talking recently to someone who works on search at Google. He knew a lot
of people at Yahoo, so he was in a good position to compare the two companies.
I asked him why Google was better at search. He said it wasn't anything
specific Google did, but simply that they understood search so much better.
And that's why startups thrive in startup hubs like Silicon Valley. Startups
are a very specialized business, as specialized as diamond cutting. And in
startup hubs they understand it.
**Notes**
[1] The nationalistic idea is the converse: that startups should stay in a
certain city because of the country it's in. If you really have a "one world"
viewpoint, deciding to move from London to Silicon Valley is no different from
deciding to move from Chicago to Silicon Valley.
[2] An investor who merely seems like he will fund you, however, you can
ignore. Seeming like they will fund you one day is the way investors say No.
**Thanks** to Sam Altman, Jessica Livingston, Harjeet Taggar, and Kulveer
Taggar for reading drafts of this.
[Comment](http://news.ycombinator.com/item?id=65815) on this essay.
July 2007
I have too much stuff. Most people in America do. In fact, the poorer people
are, the more stuff they seem to have. Hardly anyone is so poor that they
can't afford a front yard full of old cars.
It wasn't always this way. Stuff used to be rare and valuable. You can still
see evidence of that if you look for it. For example, in my house in
Cambridge, which was built in 1876, the bedrooms don't have closets. In those
days people's stuff fit in a chest of drawers. Even as recently as a few
decades ago there was a lot less stuff. When I look back at photos from the
1970s, I'm surprised how empty houses look. As a kid I had what I thought was
a huge fleet of toy cars, but they'd be dwarfed by the number of toys my
nephews have. All together my Matchboxes and Corgis took up about a third of
the surface of my bed. In my nephews' rooms the bed is the only clear space.
Stuff has gotten a lot cheaper, but our attitudes toward it haven't changed
correspondingly. We overvalue stuff.
That was a big problem for me when I had no money. I felt poor, and stuff
seemed valuable, so almost instinctively I accumulated it. Friends would leave
something behind when they moved, or I'd see something as I was walking down
the street on trash night (beware of anything you find yourself describing as
"perfectly good"), or I'd find something in almost new condition for a tenth
its retail price at a garage sale. And pow, more stuff.
In fact these free or nearly free things weren't bargains, because they were
worth even less than they cost. Most of the stuff I accumulated was worthless,
because I didn't need it.
What I didn't understand was that the value of some new acquisition wasn't the
difference between its retail price and what I paid for it. It was the value I
derived from it. Stuff is an extremely illiquid asset. Unless you have some
plan for selling that valuable thing you got so cheaply, what difference does
it make what it's "worth?" The only way you're ever going to extract any value
from it is to use it. And if you don't have any immediate use for it, you
probably never will.
Companies that sell stuff have spent huge sums training us to think stuff is
still valuable. But it would be closer to the truth to treat stuff as
worthless.
In fact, worse than worthless, because once you've accumulated a certain
amount of stuff, it starts to own you rather than the other way around. I know
of one couple who couldn't retire to the town they preferred because they
couldn't afford a place there big enough for all their stuff. Their house
isn't theirs; it's their stuff's.
And unless you're extremely organized, a house full of stuff can be very
depressing. A cluttered room saps one's spirits. One reason, obviously, is
that there's less room for people in a room full of stuff. But there's more
going on than that. I think humans constantly scan their environment to build
a mental model of what's around them. And the harder a scene is to parse, the
less energy you have left for conscious thoughts. A cluttered room is
literally exhausting.
(This could explain why clutter doesn't seem to bother kids as much as adults.
Kids are less perceptive. They build a coarser model of their surroundings,
and this consumes less energy.)
I first realized the worthlessness of stuff when I lived in Italy for a year.
All I took with me was one large backpack of stuff. The rest of my stuff I
left in my landlady's attic back in the US. And you know what? All I missed
were some of the books. By the end of the year I couldn't even remember what
else I had stored in that attic.
And yet when I got back I didn't discard so much as a box of it. Throw away a
perfectly good rotary telephone? I might need that one day.
The really painful thing to recall is not just that I accumulated all this
useless stuff, but that I often spent money I desperately needed on stuff that
I didn't.
Why would I do that? Because the people whose job is to sell you stuff are
really, really good at it. The average 25 year old is no match for companies
that have spent years figuring out how to get you to spend money on stuff.
They make the experience of buying stuff so pleasant that "shopping" becomes a
leisure activity.
How do you protect yourself from these people? It can't be easy. I'm a fairly
skeptical person, and their tricks worked on me well into my thirties. But one
thing that might work is to ask yourself, before buying something, "is this
going to make my life noticeably better?"
A friend of mine cured herself of a clothes buying habit by asking herself
before she bought anything "Am I going to wear this all the time?" If she
couldn't convince herself that something she was thinking of buying would
become one of those few things she wore all the time, she wouldn't buy it. I
think that would work for any kind of purchase. Before you buy anything, ask
yourself: will this be something I use constantly? Or is it just something
nice? Or worse still, a mere bargain?
The worst stuff in this respect may be stuff you don't use much because it's
too good. Nothing owns you like fragile stuff. For example, the "good china"
so many households have, and whose defining quality is not so much that it's
fun to use, but that one must be especially careful not to break it.
Another way to resist acquiring stuff is to think of the overall cost of
owning it. The purchase price is just the beginning. You're going to have to
_think_ about that thing for years—perhaps for the rest of your life. Every
thing you own takes energy away from you. Some give more than they take. Those
are the only things worth having.
I've now stopped accumulating stuff. Except books—but books are different.
Books are more like a fluid than individual objects. It's not especially
inconvenient to own several thousand books, whereas if you owned several
thousand random possessions you'd be a local celebrity. But except for books,
I now actively avoid stuff. If I want to spend money on some kind of treat,
I'll take services over goods any day.
I'm not claiming this is because I've achieved some kind of zenlike detachment
from material things. I'm talking about something more mundane. A historical
change has taken place, and I've now realized it. Stuff used to be valuable,
and now it's not.
In industrialized countries the same thing happened with food in the middle of
the twentieth century. As food got cheaper (or we got richer; they're
indistinguishable), eating too much started to be a bigger danger than eating
too little. We've now reached that point with stuff. For most people, rich or
poor, stuff has become a burden.
The good news is, if you're carrying a burden without knowing it, your life
could be better than you realize. Imagine walking around for years with five
pound ankle weights, then suddenly having them removed.
September 2001
_(This article explains why much of the next generation of software may be
server-based, what that will mean for programmers, and why this new kind of
software is a great opportunity for startups. It's derived from a talk at BBN
Labs.)_
In the summer of 1995, my friend Robert Morris and I decided to start a
startup. The PR campaign leading up to Netscape's IPO was running full blast
then, and there was a lot of talk in the press about online commerce. At the
time there might have been thirty actual stores on the Web, all made by hand.
If there were going to be a lot of online stores, there would need to be
software for making them, so we decided to write some.
For the first week or so we intended to make this an ordinary desktop
application. Then one day we had the idea of making the software run on our
Web server, using the browser as an interface. We tried rewriting the software
to work over the Web, and it was clear that this was the way to go. If we
wrote our software to run on the server, it would be a lot easier for the
users and for us as well.
This turned out to be a good plan. Now, as [Yahoo
Store](http://store.yahoo.com), this software is the most popular online store
builder, with about 14,000 users.
When we started Viaweb, hardly anyone understood what we meant when we said
that the software ran on the server. It was not until Hotmail was launched a
year later that people started to get it. Now everyone knows that this is a
valid approach. There is a name now for what we were: an Application Service
Provider, or ASP.
I think that a lot of the next generation of software will be written on this
model. Even Microsoft, who have the most to lose, seem to see the inevitability
of moving some things off the desktop. If software moves off the desktop and
onto servers, it will mean a very different world for developers. This article
describes the surprising things we saw, as some of the first visitors to this
new world. To the extent software does move onto servers, what I'm describing
here is the future.
**The Next Thing?**
When we look back on the desktop software era, I think we'll marvel at the
inconveniences people put up with, just as we marvel now at what early car
owners put up with. For the first twenty or thirty years, you had to be a car
expert to own a car. But cars were such a big win that lots of people who
weren't car experts wanted to have them as well.
Computers are in this phase now. When you own a desktop computer, you end up
learning a lot more than you wanted to know about what's happening inside it.
But more than half the households in the US own one. My mother has a computer
that she uses for email and for keeping accounts. About a year ago she was
alarmed to receive a letter from Apple, offering her a discount on a new
version of the operating system. There's something wrong when a sixty-five
year old woman who wants to use a computer for email and accounts has to think
about installing new operating systems. Ordinary users shouldn't even know the
words "operating system," much less "device driver" or "patch."
There is now another way to deliver software that will save users from
becoming system administrators. Web-based applications are programs that run
on Web servers and use Web pages as the user interface. For the average user
this new kind of software will be easier, cheaper, more mobile, more reliable,
and often more powerful than desktop software.
With Web-based software, most users won't have to think about anything except
the applications they use. All the messy, changing stuff will be sitting on a
server somewhere, maintained by the kind of people who are good at that kind
of thing. And so you won't ordinarily need a computer, per se, to use
software. All you'll need will be something with a keyboard, a screen, and a
Web browser. Maybe it will have wireless Internet access. Maybe it will also
be your cell phone. Whatever it is, it will be consumer electronics: something
that costs about $200, and that people choose mostly based on how the case
looks. You'll pay more for Internet services than you do for the hardware,
just as you do now with telephones. [1]
It will take about a tenth of a second for a click to get to the server and
back, so users of heavily interactive software, like Photoshop, will still
want to have the computations happening on the desktop. But if you look at the
kind of things most people use computers for, a tenth of a second latency
would not be a problem. My mother doesn't really need a desktop computer, and
there are a lot of people like her.
**The Win for Users**
Near my house there is a car with a bumper sticker that reads "death before
inconvenience." Most people, most of the time, will take whatever choice
requires least work. If Web-based software wins, it will be because it's more
convenient. And it looks as if it will be, for users and developers both.
To use a purely Web-based application, all you need is a browser connected to
the Internet. So you can use a Web-based application anywhere. When you
install software on your desktop computer, you can only use it on that
computer. Worse still, your files are trapped on that computer. The
inconvenience of this model becomes more and more evident as people get used
to networks.
The thin end of the wedge here was Web-based email. Millions of people now
realize that you should have access to email messages no matter where you are.
And if you can see your email, why not your calendar? If you can discuss a
document with your colleagues, why can't you edit it? Why should any of your
data be trapped on some computer sitting on a faraway desk?
The whole idea of "your computer" is going away, and being replaced with "your
data." You should be able to get at your data from any computer. Or rather,
any client, and a client doesn't have to be a computer.
Clients shouldn't store data; they should be like telephones. In fact they may
become telephones, or vice versa. And as clients get smaller, you have another
reason not to keep your data on them: something you carry around with you can
be lost or stolen. Leaving your PDA in a taxi is like a disk crash, except
that your data is handed to [someone
else](http://news.zdnet.co.uk/business/0,39020645,2077931,00.htm) instead of
being vaporized.
With purely Web-based software, neither your data nor the applications are
kept on the client. So you don't have to install anything to use it. And when
there's no installation, you don't have to worry about installation going
wrong. There can't be incompatibilities between the application and your
operating system, because the software doesn't run on your operating system.
Because it needs no installation, it will be easy, and common, to try Web-
based software before you "buy" it. You should expect to be able to test-drive
any Web-based application for free, just by going to the site where it's
offered. At Viaweb our whole site was like a big arrow pointing users to the
test drive.
After trying the demo, signing up for the service should require nothing more
than filling out a brief form (the briefer the better). And that should be the
last work the user has to do. With Web-based software, you should get new
releases without paying extra, or doing any work, or possibly even knowing
about it.
Upgrades won't be the big shocks they are now. Over time applications will
quietly grow more powerful. This will take some effort on the part of the
developers. They will have to design software so that it can be updated
without confusing the users. That's a new problem, but there are ways to solve
it.
With Web-based applications, everyone uses the same version, and bugs can be
fixed as soon as they're discovered. So Web-based software should have far
fewer bugs than desktop software. At Viaweb, I doubt we ever had ten known
bugs at any one time. That's orders of magnitude better than desktop software.
Web-based applications can be used by several people at the same time. This is
an obvious win for collaborative applications, but I bet users will start to
want this in most applications once they realize it's possible. It will often
be useful to let two people edit the same document, for example. Viaweb let
multiple users edit a site simultaneously, more because that was the right way
to write the software than because we expected users to want to, but it turned
out that many did.
When you use a Web-based application, your data will be safer. Disk crashes
won't be a thing of the past, but users won't hear about them anymore. They'll
happen within server farms. And companies offering Web-based applications will
actually do backups-- not only because they'll have real system administrators
worrying about such things, but because an ASP that does lose people's data
will be in big, big trouble. When people lose their own data in a disk crash,
they can't get that mad, because they only have themselves to be mad at. When
a company loses their data for them, they'll get a lot madder.
Finally, Web-based software should be less vulnerable to viruses. If the
client doesn't run anything except a browser, there's less chance of running
viruses, and no data locally to damage. And a program that attacked the
servers themselves should find them very well defended. [2]
For users, Web-based software will be _less stressful._ I think if you looked
inside the average Windows user you'd find a huge and pretty much untapped
desire for software meeting that description. Unleashed, it could be a
powerful force.
**City of Code**
To developers, the most conspicuous difference between Web-based and desktop
software is that a Web-based application is not a single piece of code. It
will be a collection of programs of different types rather than a single big
binary. And so designing Web-based software is like designing a city rather
than a building: as well as buildings you need roads, street signs, utilities,
police and fire departments, and plans for both growth and various kinds of
disasters.
At Viaweb, software included fairly big applications that users talked to
directly, programs that those programs used, programs that ran constantly in
the background looking for problems, programs that tried to restart things if
they broke, programs that ran occasionally to compile statistics or build
indexes for searches, programs we ran explicitly to garbage-collect resources
or to move or restore data, programs that pretended to be users (to measure
performance or expose bugs), programs for diagnosing network troubles,
programs for doing backups, interfaces to outside services, software that
drove an impressive collection of dials displaying real-time server statistics
(a hit with visitors, but indispensable for us too), modifications (including
bug fixes) to open-source software, and a great many configuration files and
settings. Trevor Blackwell wrote a spectacular program for moving stores to
new servers across the country, without shutting them down, after we were
bought by Yahoo. Programs paged us, sent faxes and email to users, conducted
transactions with credit card processors, and talked to one another through
sockets, pipes, http requests, ssh, udp packets, shared memory, and files.
Some of Viaweb even consisted of the absence of programs, since one of the
keys to Unix security is not to run unnecessary utilities that people might
use to break into your servers.
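To give the flavor of one of the smaller pieces, here is a toy watchdog in Python. It is not Viaweb's actual code, and the command it supervises is made up; it just shows the general shape of a program that restarts things if they break:

```python
# Toy watchdog: keep a server process running, restarting it if it dies.
# The command is invented; real monitors would be more elaborate.
import subprocess, time

CMD = ["./server"]            # hypothetical server binary

proc = subprocess.Popen(CMD)
while True:
    time.sleep(5)                         # poll every few seconds
    if proc.poll() is not None:           # the process has exited
        print("server exited with", proc.returncode, "- restarting")
        proc = subprocess.Popen(CMD)
```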
It did not end with software. We spent a lot of time thinking about server
configurations. We built the servers ourselves, from components-- partly to
save money, and partly to get exactly what we wanted. We had to think about
whether our upstream ISP had fast enough connections to all the backbones. We
serially
[dated](http://groups.google.com/groups?selm=6hdipo%243o0%241%40FreeBSD.csie.NCTU.edu.tw)
RAID suppliers.
But hardware is not just something to worry about. When you control it you can
do more for users. With a desktop application, you can specify certain minimum
hardware, but you can't add more. If you administer the servers, you can in
one step enable all your users to page people, or send faxes, or send commands
by phone, or process credit cards, etc, just by installing the relevant
hardware. We always looked for new ways to add features with hardware, not
just because it pleased users, but also as a way to distinguish ourselves from
competitors who (either because they sold desktop software, or resold Web-
based applications through ISPs) didn't have direct control over the hardware.
Because the software in a Web-based application will be a collection of
programs rather than a single binary, it can be written in any number of
different languages. When you're writing desktop software, you're practically
forced to write the application in the same language as the underlying
operating system-- meaning C and C++. And so these languages (especially among
nontechnical people like managers and VCs) got to be considered as the
languages for "serious" software development. But that was just an artifact of
the way desktop software had to be delivered. For server-based software you
can use any language you want. [3] Today a lot of the top hackers are using
languages far removed from C and C++: Perl, Python, and even Lisp.
With server-based software, no one can tell you what language to use, because
you control the whole system, right down to the hardware. Different languages
are good for different tasks. You can use whichever is best for each. And when
you have competitors, "you can" means "you must" (we'll return to this later),
because if you don't take advantage of this possibility, your competitors
will.
Most of our competitors used C and C++, and this made their software visibly
inferior because (among other things), they had no way around the
statelessness of CGI scripts. If you were going to change something, all the
changes had to happen on one page, with an Update button at the bottom. As
I've written elsewhere, by using [Lisp](avg.html), which many people still
consider a research language, we could make the Viaweb editor behave more like
desktop software.
**Releases**
One of the most important changes in this new world is the way you do
releases. In the desktop software business, doing a release is a huge trauma,
in which the whole company sweats and strains to push out a single, giant
piece of code. Obvious comparisons suggest themselves, both to the process and
the resulting product.
With server-based software, you can make changes almost as you would in a
program you were writing for yourself. You release software as a series of
incremental changes instead of an occasional big explosion. A typical desktop
software company might do one or two releases a year. At Viaweb we often did
three to five releases a day.
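A release at that cadence can be as simple as pushing the changed files and restarting the one affected program. A minimal sketch, with the paths, hostname, and restart command all invented for illustration:

```python
# Minimal incremental release: sync only what changed, then restart the
# one affected program. Everything here is illustrative, not Viaweb's setup.
import subprocess

subprocess.run(["rsync", "-az", "app/", "server:/srv/app/"], check=True)
subprocess.run(["ssh", "server", "/srv/app/restart"], check=True)
```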
When you switch to this new model, you realize how much software development
is affected by the way it is released. Many of the nastiest problems you see
in the desktop software business are due to the catastrophic nature of releases.
When you release only one new version a year, you tend to deal with bugs
wholesale. Some time before the release date you assemble a new version in
which half the code has been torn out and replaced, introducing countless
bugs. Then a squad of QA people step in and start counting them, and the
programmers work down the list, fixing them. They do not generally get to the
end of the list, and indeed, no one is sure where the end is. It's like
fishing rubble out of a pond. You never really know what's happening inside
the software. At best you end up with a statistical sort of correctness.
With server-based software, most of the change is small and incremental. That
in itself is less likely to introduce bugs. It also means you know what to
test most carefully when you're about to release software: the last thing you
changed. You end up with a much firmer grip on the code. As a general rule,
you do know what's happening inside it. You don't have the source code
memorized, of course, but when you read the source you do it like a pilot
scanning the instrument panel, not like a detective trying to unravel some
mystery.
Desktop software breeds a certain fatalism about bugs. You know that you're
shipping something loaded with bugs, and you've even set up mechanisms to
compensate for it (e.g. patch releases). So why worry about a few more? Soon
you're releasing whole features you know are broken.
[Apple](http://news.cnet.com/news/0-1006-200-5195914.html) did this earlier
this year. They felt under pressure to release their new OS, whose release
date had already slipped four times, but some of the software (support for CDs
and DVDs) wasn't ready. The solution? They released the OS without the
unfinished parts, and users will have to install them later.
With Web-based software, you never have to release software before it works,
and you can release it as soon as it does work.
The industry veteran may be thinking, it's a fine-sounding idea to say that
you never have to release software before it works, but what happens when
you've promised to deliver a new version of your software by a certain date?
With Web-based software, you wouldn't make such a promise, because there are
no versions. Your software changes gradually and continuously. Some changes
might be bigger than others, but the idea of versions just doesn't naturally
fit onto Web-based software.
If anyone remembers Viaweb this might sound odd, because we were always
announcing new versions. This was done entirely for PR purposes. The trade
press, we learned, thinks in version numbers. They will give you major
coverage for a major release, meaning a new first digit on the version number,
and generally a paragraph at most for a point release, meaning a new digit
after the decimal point.
Some of our competitors were offering desktop software and actually had
version numbers. And for these releases, whose very existence seemed to us
evidence of their backwardness, they would get all kinds of publicity. We
didn't want to miss out, so we started giving version numbers to our software
too. When we wanted some publicity, we'd make a list of all the features we'd
added since the last "release," stick a new version number on the software,
and issue a press release saying that the new version was available
immediately. Amazingly, no one ever called us on it.
By the time we were bought, we had done this three times, so we were on
Version 4. Version 4.1 if I remember correctly. After Viaweb became Yahoo
Store, there was no longer such a desperate need for publicity, so although
the software continued to evolve, the whole idea of version numbers was
quietly dropped.
**Bugs**
The other major technical advantage of Web-based software is that you can
reproduce most bugs. You have the users' data right there on your disk. If
someone breaks your software, you don't have to try to guess what's going on,
as you would with desktop software: you should be able to reproduce the error
while they're on the phone with you. You might even know about it already, if
you have code for noticing errors built into your application.
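Here's a minimal sketch of what such error-noticing code might look like. It's illustrative only, not Viaweb's actual code (which was written in Lisp); the wrapper and the logging policy are assumptions:

```python
# Hypothetical sketch: wrap each request handler so that any unhandled
# exception is recorded along with the request that caused it.

import logging
import traceback

logging.basicConfig(filename="errors.log", level=logging.ERROR)

def with_error_noticing(handler):
    def wrapped(request):
        try:
            return handler(request)
        except Exception:
            # The users' data is already on your disk, so logging the
            # failing request is often enough to reproduce the bug.
            logging.error("handler=%s request=%r\n%s", handler.__name__,
                          request, traceback.format_exc())
            return "Something went wrong; we've been notified."
    return wrapped
```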
Web-based software gets used round the clock, so everything you do is
immediately put through the wringer. Bugs turn up quickly.
Software companies are sometimes accused of letting the users debug their
software. And that is just what I'm advocating. For Web-based software it's
actually a good plan, because the bugs are fewer and transient. When you
release software gradually you get far fewer bugs to start with. And when you
can reproduce errors and release changes instantly, you can find and fix most
bugs as soon as they appear. We never had enough bugs at any one time to
bother with a formal bug-tracking system.
You should test changes before you release them, of course, so no major bugs
should get released. Those few that inevitably slip through will involve
borderline cases and will only affect the few users that encounter them before
someone calls in to complain. As long as you fix bugs right away, the net
effect, for the average user, is far fewer bugs. I doubt the average Viaweb
user ever saw a bug.
Fixing fresh bugs is easier than fixing old ones. It's usually fairly quick to
find a bug in code you just wrote. When it turns up you often know what's
wrong before you even look at the source, because you were already worrying
about it subconsciously. Fixing a bug in something you wrote six months ago
(the average case if you release once a year) is a lot more work. And since
you don't understand the code as well, you're more likely to fix it in an ugly
way, or even introduce more bugs. [4]
When you catch bugs early, you also get fewer compound bugs. Compound bugs are
two separate bugs that interact: you trip going downstairs, and when you reach
for the handrail it comes off in your hand. In software this kind of bug is
the hardest to find, and also tends to have the worst consequences. [5] The
traditional "break everything and then filter out the bugs" approach
inherently yields a lot of compound bugs. And software that's released in a
series of small changes inherently tends not to. The floors are constantly
being swept clean of any loose objects that might later get stuck in
something.
It helps if you use a technique called functional programming. Functional
programming means avoiding side-effects. It's something you're more likely to
see in research papers than commercial software, but for Web-based
applications it turns out to be really useful. It's hard to write entire
programs as purely functional code, but you can write substantial chunks this
way. It makes those parts of your software easier to test, because they have
no state, and that is very convenient in a situation where you are constantly
making and testing small modifications. I wrote much of Viaweb's editor in
this style, and we made our scripting language,
[RTML](http://store.yahoo.com/rtml.html), a purely functional language.
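To make the testing point concrete, here's a toy example in Python rather than the Lisp Viaweb used; the function is invented for illustration:

```python
# A pure function: the output depends only on the arguments, and no
# state is read or written.
def render_price(cents):
    return "$%d.%02d" % (cents // 100, cents % 100)

# Because there's no state, testing needs no setup or teardown -- one
# assert per case fully specifies the behavior. That's what makes pure
# code convenient when you're constantly making small modifications.
assert render_price(1999) == "$19.99"
assert render_price(5) == "$0.05"
```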
People from the desktop software business will find this hard to credit, but
at Viaweb bugs became almost a game. Since most released bugs involved
borderline cases, the users who encountered them were likely to be advanced
users, pushing the envelope. Advanced users are more forgiving about bugs,
especially since you probably introduced them in the course of adding some
feature they were asking for. In fact, because bugs were rare and you had to
be doing sophisticated things to see them, advanced users were often proud to
catch one. They would call support in a spirit more of triumph than anger, as
if they had scored points off us.
**Support**
When you can reproduce errors, it changes your approach to customer support.
At most software companies, support is offered as a way to make customers feel
better. They're either calling you about a known bug, or they're just doing
something wrong and you have to figure out what. In either case there's not
much you can learn from them. And so you tend to view support calls as a pain
in the ass that you want to isolate from your developers as much as possible.
This was not how things worked at Viaweb. At Viaweb, support was free, because
we wanted to hear from customers. If someone had a problem, we wanted to know
about it right away so that we could reproduce the error and release a fix.
So at Viaweb the developers were always in close contact with support. The
customer support people were about thirty feet away from the programmers, and
knew that they could always interrupt anything with a report of a genuine bug.
We would leave a board meeting to fix a serious bug.
Our approach to support made everyone happier. The customers were delighted.
Just imagine how it would feel to call a support line and be treated as
someone bringing important news. The customer support people liked it because
it meant they could help the users, instead of reading scripts to them. And
the programmers liked it because they could reproduce bugs instead of just
hearing vague second-hand reports about them.
Our policy of fixing bugs on the fly changed the relationship between customer
support people and hackers. At most software companies, support people are
underpaid human shields, and hackers are little copies of God the Father,
creators of the world. Whatever the procedure for reporting bugs, it is likely
to be one-directional: support people who hear about bugs fill out some form
that eventually gets passed on (possibly via QA) to programmers, who put it on
their list of things to do. It was very different at Viaweb. Within a minute
of hearing about a bug from a customer, the support people could be standing
next to a programmer hearing him say "Shit, you're right, it's a bug." It
delighted the support people to hear that "you're right" from the hackers.
They used to bring us bugs with the same expectant air as a cat bringing you a
mouse it has just killed. It also made them more careful in judging the
seriousness of a bug, because now their honor was on the line.
After we were bought by Yahoo, the customer support people were moved far away
from the programmers. It was only then that we realized that they were
effectively QA and to some extent marketing as well. In addition to catching
bugs, they were the keepers of the knowledge of vaguer, buglike things, like
features that confused users. [6] They were also a kind of proxy focus group;
we could ask them which of two new features users wanted more, and they were
always right.
**Morale**
Being able to release software immediately is a big motivator. Often as I was
walking to work I would think of some change I wanted to make to the software,
and do it that day. This worked for bigger features as well. Even if something
was going to take two weeks to write (few projects took longer), I knew I
could see the effect in the software as soon as it was done.
If I'd had to wait a year for the next release, I would have shelved most of
these ideas, for a while at least. The thing about ideas, though, is that they
lead to more ideas. Have you ever noticed that when you sit down to write
something, half the ideas that end up in it are ones you thought of while
writing it? The same thing happens with software. Working to implement one
idea gives you more ideas. So shelving an idea costs you not only that delay
in implementing it, but also all the ideas that implementing it would have led
to. In fact, shelving an idea probably even inhibits new ideas: as you start
to think of some new feature, you catch sight of the shelf and think "but I
already have a lot of new things I want to do for the next release."
What big companies do instead of implementing features is plan them. At Viaweb
we sometimes ran into trouble on this account. Investors and analysts would
ask us what we had planned for the future. The truthful answer would have
been, we didn't have any plans. We had general ideas about things we wanted to
improve, but if we knew how, we would have done it already. What were we going
to do in the next six months? Whatever looked like the biggest win. I don't
know if I ever dared give this answer, but that was the truth. Plans are just
another word for ideas on the shelf. When we thought of good ideas, we
implemented them.
At Viaweb, as at many software companies, most code had one definite owner.
But when you owned something you really owned it: no one except the owner of a
piece of software had to approve (or even know about) a release. There was no
protection against breakage except the fear of looking like an idiot to one's
peers, and that was more than enough. I may have given the impression that we
just blithely plowed forward writing code. We did go fast, but we thought very
carefully before we released software onto those servers. And paying attention
is more important to reliability than moving slowly. Because he pays close
attention, a Navy pilot can land a 40,000 lb. aircraft at 140 miles per hour
on a pitching carrier deck, at night, more safely than the average teenager
can cut a bagel.
This way of writing software is a double-edged sword of course. It works a lot
better for a small team of good, trusted programmers than it would for a big
company of mediocre ones, where bad ideas are caught by committees instead of
the people who had them.
**Brooks in Reverse**
Fortunately, Web-based software does require fewer programmers. I once worked
for a medium-sized desktop software company that had over 100 people working
in engineering as a whole. Only 13 of these were in product development. All
the rest were working on releases, ports, and so on. With Web-based software,
all you need (at most) are the 13 people, because there are no releases,
ports, and so on.
Viaweb was written by just three people. [7] I was always under pressure to
hire more, because we wanted to get bought, and we knew that buyers would have
a hard time paying a high price for a company with only three programmers.
(Solution: we hired more, but created new projects for them.)
When you can write software with fewer programmers, it saves you more than
money. As Fred Brooks pointed out in _The Mythical Man-Month,_ adding people
to a project tends to slow it down. The number of possible connections between
developers grows quadratically with the size of the group: n programmers have
n(n-1)/2 possible pairs. The larger the group, the more time they'll spend in
meetings negotiating how their software will work together, and the more bugs
they'll get from unforeseen interactions. Fortunately, this process also works
in reverse: as groups get smaller, software development gets disproportionately
more efficient. I can't
remember the programmers at Viaweb ever having an actual meeting. We never had
more to say at any one time than we could say as we were walking to lunch.
If there is a downside here, it is that all the programmers have to be, to
some degree, system administrators as well. When you're hosting software, someone
has to be watching the servers, and in practice the only people who can do
this properly are the ones who wrote the software. At Viaweb our system had so
many components and changed so frequently that there was no definite border
between software and infrastructure. Arbitrarily declaring such a border would
have constrained our design choices. And so although we were constantly hoping
that one day ("in a couple months") everything would be stable enough that we
could hire someone whose job was just to worry about the servers, it never
happened.
I don't think it could be any other way, as long as you're still actively
developing the product. Web-based software is never going to be something you
write, check in, and go home. It's a live thing, running on your servers right
now. A bad bug might not just crash one user's process; it could crash them
all. If a bug in your code corrupts some data on disk, you have to fix it. And
so on. We found that you don't have to watch the servers every minute (after
the first year or so), but you definitely want to keep an eye on things you've
changed recently. You don't release code late at night and then go home.
**Watching Users**
With server-based software, you're in closer touch with your code. You can
also be in closer touch with your users. Intuit is famous for introducing
themselves to customers at retail stores and asking to follow them home. If
you've ever watched someone use your software for the first time, you know
what surprises must have awaited them.
Software should do what users think it will. But you can't have any idea what
users will be thinking, believe me, until you watch them. And server-based
software gives you unprecedented information about their behavior. You're not
limited to small, artificial focus groups. You can see every click made by
every user. You have to consider carefully what you're going to look at,
because you don't want to violate users' privacy, but even the most general
statistical sampling can be very useful.
When you have the users on your server, you don't have to rely on benchmarks,
for example. Benchmarks are simulated users. With server-based software, you
can watch actual users. To decide what to optimize, just log into a server and
see what's consuming all the CPU. And you know when to stop optimizing too: we
eventually got the Viaweb editor to the point where it was memory-bound rather
than CPU-bound, and since there was nothing we could do to decrease the size
of users' data (well, nothing easy), we knew we might as well stop there.
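As a sketch of what looking at the CPU can mean in code (this wrapper is my illustration, not how Viaweb actually did it), you can accumulate a profile over real requests and read off where the time goes:

```python
# Illustrative only: profile handlers as real users exercise them,
# then report the functions that actually dominate.

import cProfile
import pstats

def profiled(handler):
    prof = cProfile.Profile()
    def wrapped(request):
        prof.enable()
        try:
            return handler(request)
        finally:
            prof.disable()
    # Call wrapped.report() now and then to see the top offenders.
    wrapped.report = lambda: (
        pstats.Stats(prof).sort_stats("cumulative").print_stats(10))
    return wrapped
```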
Efficiency matters for server-based software, because you're paying for the
hardware. Your capital cost per user is the cost of a server divided by the
number of users it can support, so if you can make your software very
efficient you can undersell competitors and still make a profit. At Viaweb we got the capital
cost per user down to about $5. It would be less now, probably less than the
cost of sending them the first month's bill. Hardware is free now, if your
software is reasonably efficient.
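The arithmetic behind a figure like that is simple. The server price and user count below are made-up numbers, chosen only to reproduce the $5 result the essay cites:

```python
server_cost = 2500        # dollars per server (hypothetical)
users_per_server = 500    # users one server supports (hypothetical)
print(server_cost / users_per_server)  # 5.0 dollars of capital cost per user
```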
Watching users can guide you in design as well as optimization. Viaweb had a
scripting language called RTML that let advanced users define their own page
styles. We found that RTML became a kind of suggestion box, because users only
used it when the predefined page styles couldn't do what they wanted.
Originally the editor put button bars across the page, for example, but after
a number of users used RTML to put buttons down the left
[side](https://sep.turbifycdn.com/ca/I/paulgraham_1656_3563), we made that an
option (in fact the default) in the predefined page styles.
Finally, by watching users you can often tell when they're in trouble. And
since the customer is always right, that's a sign of something you need to
fix. At Viaweb the key to getting users was the online test drive. It was not
just a series of slides built by marketing people. In our test drive, users
actually used the software. It took about five minutes, and at the end of it
they had built a real, working store.
The test drive was the way we got nearly all our new users. I think it will be
the same for most Web-based applications. If users can get through a test
drive successfully, they'll like the product. If they get confused or bored,
they won't. So anything we could do to get more people through the test drive
would increase our growth rate.
I studied click trails of people taking the test drive and found that at a
certain step they would get confused and click on the browser's Back button.
(If you try writing Web-based applications, you'll find that the Back button
becomes one of your most interesting philosophical problems.) So I added a
message at that point, telling users that they were nearly finished, and
reminding them not to click on the Back button. Another great thing about Web-
based software is that you get instant feedback from changes: the number of
people completing the test drive rose immediately from 60% to 90%. And since
the number of new users was a function of the number of completed test drives,
our revenue growth increased by 50%, just from that change.
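Spelled out, the arithmetic works like this (assuming, as the paragraph above says, that new users are proportional to completed test drives):

```python
# Raising completion from 60% to 90% multiplies signups by 1.5 on the
# same traffic, i.e. the growth rate rises by 50%.
before, after = 0.60, 0.90
print(after / before - 1)  # 0.5
```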
**Money**
In the early 1990s I read an article in which someone said that software was a
subscription business. At first this seemed a very cynical statement. But
later I realized that it reflects reality: software development is an ongoing
process. I think it's cleaner if you openly charge subscription fees, instead
of forcing people to keep buying and installing new versions so that they'll
keep paying you. And fortunately, subscriptions are the natural way to bill
for Web-based applications.
Hosting applications is an area where companies will play a role that is not
likely to be filled by freeware. Hosting applications is a lot of stress, and
has real expenses. No one is going to want to do it for free.
For companies, Web-based applications are an ideal source of revenue. Instead
of starting each quarter with a blank slate, you have a recurring revenue
stream. Because your software evolves gradually, you don't have to worry that
a new model will flop; there never need be a new model, per se, and if you do
something to the software that users hate, you'll know right away. You have no
trouble with uncollectable bills; if someone won't pay you can just turn off
the service. And there is no possibility of piracy.
That last "advantage" may turn out to be a problem. Some amount of piracy is
to the advantage of software companies. If some user really would not have
bought your software at any price, you haven't lost anything if he uses a
pirated copy. In fact you gain, because he is one more user helping to make
your software the standard-- or who might buy a copy later, when he graduates
from high school.
When they can, companies like to do something called price discrimination,
which means charging each customer as much as they can afford. [8] Software is
particularly suitable for price discrimination, because the marginal cost is
close to zero. This is why some software costs more to run on Suns than on
Intel boxes: a company that uses Suns is not interested in saving money and
can safely be charged more. Piracy is effectively the lowest tier of price
discrimination. I think that software companies understand this and
deliberately turn a blind eye to some kinds of piracy. [9] With server-based
software they are going to have to come up with some other solution.
Web-based software sells well, especially in comparison to desktop software,
because it's easy to buy. You might think that people decide to buy something,
and then buy it, as two separate steps. That's what I thought before Viaweb,
to the extent I thought about the question at all. In fact the second step can
propagate back into the first: if something is hard to buy, people will change
their mind about whether they wanted it. And vice versa: you'll sell more of
something when it's easy to buy. I buy more books because Amazon exists. Web-
based software is just about the easiest thing in the world to buy, especially
if you have just done an online demo. Users should not have to do much more
than enter a credit card number. (Make them do more at your peril.)
Sometimes Web-based software is offered through ISPs acting as resellers. This
is a bad idea. You have to be administering the servers, because you need to
be constantly improving both hardware and software. If you give up direct
control of the servers, you give up most of the advantages of developing Web-
based applications.
Several of our competitors shot themselves in the foot this way-- usually, I
think, because they were overrun by suits who were excited about this huge
potential channel, and didn't realize that it would ruin the product they
hoped to sell through it. Selling Web-based software through ISPs is like
selling sushi through vending machines.
**Customers**
Who will the customers be? At Viaweb they were initially individuals and
smaller companies, and I think this will be the rule with Web-based
applications. These are the users who are ready to try new things, partly
because they're more flexible, and partly because they want the lower costs of
new technology.
Web-based applications will often be the best thing for big companies too
(though they'll be slow to realize it). The best intranet is the Internet. If
a company uses true Web-based applications, the software will work better, the
servers will be better administered, and employees will have access to the
system from anywhere.
The argument against this approach usually hinges on security: if access is
easier for employees, it will be for bad guys too. Some larger merchants were
reluctant to use Viaweb because they thought customers' credit card
information would be safer on their own servers. It was not easy to make this
point diplomatically, but in fact the data was almost certainly safer in our
hands than theirs. Who can hire better people to manage security, a technology
startup whose whole business is running servers, or a clothing retailer? Not
only did we have better people worrying about security, we worried more about
it. If someone broke into the clothing retailer's servers, it would affect at
most one merchant, could probably be hushed up, and in the worst case might
get one person fired. If someone broke into ours, it could affect thousands of
merchants, would probably end up as news on CNet, and could put us out of
business.
If you want to keep your money safe, do you keep it under your mattress at
home, or put it in a bank? This argument applies to every aspect of server
administration: not just security, but uptime, bandwidth, load management,
backups, etc. Our existence depended on doing these things right. Server
problems were the big no-no for us, like a dangerous toy would be for a toy
maker, or a salmonella outbreak for a food processor.
A big company that uses Web-based applications is to that extent outsourcing
IT. Drastic as it sounds, I think this is generally a good idea. Companies are
likely to get better service this way than they would from in-house system
administrators. System administrators can become cranky and unresponsive
because they're not directly exposed to competitive pressure: a salesman has
to deal with customers, and a developer has to deal with competitors'
software, but a system administrator, like an old bachelor, has few external
forces to keep him in line. [10] At Viaweb we had external forces in plenty to
keep us in line. The people calling us were customers, not just co-workers. If
a server got wedged, we jumped; just thinking about it gives me a jolt of
adrenaline, years later.
So Web-based applications will ordinarily be the right answer for big
companies too. They will be the last to realize it, however, just as they were
with desktop computers. And partly for the same reason: it will be worth a lot
of money to convince big companies that they need something more expensive.
There is always a tendency for rich customers to buy expensive solutions, even
when cheap solutions are better, because the people offering expensive
solutions can spend more to sell them. At Viaweb we were always up against
this. We lost several high-end merchants to Web consulting firms who convinced
them they'd be better off if they paid half a million dollars for a custom-
made online store on their own server. They were, as a rule, not better off,
as more than one discovered when Christmas shopping season came around and
loads rose on their server. Viaweb was a lot more sophisticated than what most
of these merchants got, but we couldn't afford to tell them. At $300 a month,
we couldn't afford to send a team of well-dressed and authoritative-sounding
people to make presentations to customers.
A large part of what big companies pay extra for is the cost of selling
expensive things to them. (If the Defense Department pays a thousand dollars
for toilet seats, it's partly because it costs a lot to sell toilet seats for
a thousand dollars.) And this is one reason intranet software will continue to
thrive, even though it is probably a bad idea. It's simply more expensive.
There is nothing you can do about this conundrum, so the best plan is to go
for the smaller customers first. The rest will come in time.
**Son of Server**
Running software on the server is nothing new. In fact it's the old model:
mainframe applications are all server-based. If server-based software is such
a good idea, why did it lose last time? Why did desktop computers eclipse
mainframes?
At first desktop computers didn't look like much of a threat. The first users
were all hackers-- or hobbyists, as they were called then. They liked
microcomputers because they were cheap. For the first time, you could have
your own computer. The phrase "personal computer" is part of the language now,
but when it was first used it had a deliberately audacious sound, like the
phrase "personal satellite" would today.
Why did desktop computers take over? I think it was because they had better
software. And I think the reason microcomputer software was better was that it
could be written by small companies.
I don't think many people realize how fragile and tentative startups are in
the earliest stage. Many startups begin almost by accident-- as a couple guys,
either with day jobs or in school, writing a prototype of something that
might, if it looks promising, turn into a company. At this larval stage, any
significant obstacle will stop the startup dead in its tracks. Writing
mainframe software required too much commitment up front. Development machines
were expensive, and because the customers would be big companies, you'd need
an impressive-looking sales force to sell it to them. Starting a startup to
write mainframe software would be a much more serious undertaking than just
hacking something together on your Apple II in the evenings. And so you didn't
get a lot of startups writing mainframe applications.
The arrival of desktop computers inspired a lot of new software, because
writing applications for them seemed an attainable goal to larval startups.
Development was cheap, and the customers would be individual people that you
could reach through computer stores or even by mail-order.
The application that pushed desktop computers out into the mainstream was
[VisiCalc](http://www.bricklin.com/visicalc.htm), the first spreadsheet. It
was written by two guys working in an attic, and yet did things no mainframe
software could do. [11] VisiCalc was such an advance, in its time, that people
bought Apple IIs just to run it. And this was the beginning of a trend:
desktop computers won because startups wrote software for them.
It looks as if server-based software will be good this time around, because
startups will write it. Computers are so cheap now that you can get started,
as we did, using a desktop computer as a server. Inexpensive processors have
eaten the workstation market (you rarely even hear the word now) and are most
of the way through the server market; Yahoo's servers, which deal with loads
as high as any on the Internet, all have the same inexpensive Intel processors
that you have in your desktop machine. And once you've written the software,
all you need to sell it is a Web site. Nearly all our users came direct to our
site through word of mouth and references in the press. [12]
Viaweb was a typical larval startup. We were terrified of starting a company,
and for the first few months comforted ourselves by treating the whole thing
as an experiment that we might call off at any moment. Fortunately, there were
few obstacles except technical ones. While we were writing the software, our
Web server was the same desktop machine we used for development, connected to
the outside world by a dialup line. Our only expenses in that phase were food
and rent.
There is all the more reason for startups to write Web-based software now,
because writing desktop software has become a lot less fun. If you want to
write desktop software now you do it on Microsoft's terms, calling their APIs
and working around their buggy OS. And if you manage to write something that
takes off, you may find that you were merely doing market research for
Microsoft.
If a company wants to make a platform that startups will build on, they have
to make it something that hackers themselves will want to use. That means it
has to be inexpensive and well-designed. The Mac was popular with hackers when
it first came out, and a lot of them wrote software for it. [13] You see this
less with Windows, because hackers don't use it. The kind of people who are
good at writing software tend to be running Linux or FreeBSD now.
I don't think we would have started a startup to write desktop software,
because desktop software has to run on Windows, and before we could write
software for Windows we'd have to use it. The Web let us do an end-run around
Windows, and deliver software running on Unix direct to users through the
browser. That is a liberating prospect, a lot like the arrival of PCs twenty-
five years ago.
**Microsoft**
Back when desktop computers arrived, IBM was the giant that everyone was
afraid of. It's hard to imagine now, but I remember the feeling very well. Now
the frightening giant is Microsoft, and I don't think they are as blind to the
threat facing them as IBM was. After all, Microsoft deliberately built their
business in IBM's blind spot.
I mentioned earlier that my mother doesn't really need a desktop computer.
Most users probably don't. That's a problem for Microsoft, and they know it.
If applications run on remote servers, no one needs Windows. What will
Microsoft do? Will they be able to use their control of the desktop to
prevent, or constrain, this new generation of software?
My guess is that Microsoft will develop some kind of server/desktop hybrid,
where the operating system works together with servers they control. At a
minimum, files will be centrally available for users who want that. I don't
expect Microsoft to go all the way to the extreme of doing the computations on
the server, with only a browser for a client, if they can avoid it. If you
only need a browser for a client, you don't need Microsoft on the client, and
if Microsoft doesn't control the client, they can't push users towards their
server-based applications.
I think Microsoft will have a hard time keeping the genie in the bottle. There
will be too many different types of clients for them to control them all. And
if Microsoft's applications only work with some clients, competitors will be
able to trump them by offering applications that work from any client. [14]
In a world of Web-based applications, there is no automatic place for
Microsoft. They may succeed in making themselves a place, but I don't think
they'll dominate this new world as they did the world of desktop applications.
It's not so much that a competitor will trip them up as that they will trip
over themselves. With the rise of Web-based software, they will be facing not
just technical problems but their own wishful thinking. What they need to do
is cannibalize their existing business, and I can't see them facing that. The
same single-mindedness that has brought them this far will now be working
against them. IBM was in exactly the same situation, and they could not master
it. IBM made a late and half-hearted entry into the microcomputer business
because they were ambivalent about threatening their cash cow, mainframe
computing. Microsoft will likewise be hampered by wanting to save the desktop.
A cash cow can be a damned heavy monkey on your back.
I'm not saying that no one will dominate server-based applications. Someone
probably will eventually. But I think that there will be a good long period of
cheerful chaos, just as there was in the early days of microcomputers. That
was a good time for startups. Lots of small companies flourished, and did it
by making cool things.
**Startups but More So**
The classic startup is fast and informal, with few people and little money.
Those few people work very hard, and technology magnifies the effect of the
decisions they make. If they win, they win big.
In a startup writing Web-based applications, everything you associate with
startups is taken to an extreme. You can write and launch a product with even
fewer people and even less money. You have to be even faster, and you can get
away with being more informal. You can literally launch your product as three
guys sitting in the living room of an apartment, and a server collocated at an
ISP. We did.
Over time the teams have gotten smaller, faster, and more informal. In 1960,
software development meant a roomful of men with horn-rimmed glasses and
narrow black neckties, industriously writing ten lines of code a day on IBM
coding forms. In 1980, it was a team of eight to ten people wearing jeans to
the office and typing into vt100s. Now it's a couple of guys sitting in a
living room with laptops. (And jeans turn out not to be the last word in
informality.)
Startups are stressful, and this, unfortunately, is also taken to an extreme
with Web-based applications. Many software companies, especially at the
beginning, have periods when the developers sleep under their desks and so
on. The alarming thing about Web-based software is that there is nothing to
prevent this becoming the default. The stories about sleeping under desks
usually end: then at last we shipped it and we all went home and slept for a
week. Web-based software never ships. You can work 16-hour days for as long as
you want to. And because you can, and your competitors can, you tend to be
forced to. You can, so you must. It's Parkinson's Law running in reverse.
The worst thing is not the hours but the responsibility. Programmers and
system administrators traditionally each have their own separate worries.
Programmers have to worry about bugs, and system administrators have to worry
about infrastructure. Programmers may spend a long day up to their elbows in
source code, but at some point they get to go home and forget about it. System
administrators never quite leave the job behind, but when they do get paged at
4:00 AM, they don't usually have to do anything very complicated. With Web-
based applications, these two kinds of stress get combined. The programmers
become system administrators, but without the sharply defined limits that
ordinarily make the job bearable.
At Viaweb we spent the first six months just writing software. We worked the
usual long hours of an early startup. In a desktop software company, this
would have been the part where we were working hard, but it felt like a
vacation compared to the next phase, when we took users onto our server. The
second biggest benefit of selling Viaweb to Yahoo (after the money) was to be
able to dump ultimate responsibility for the whole thing onto the shoulders of
a big company.
Desktop software forces users to become system administrators. Web-based
software forces programmers to. There is less stress in total, but more for
the programmers. That's not necessarily bad news. If you're a startup
competing with a big company, it's good news. [15] Web-based applications
offer a straightforward way to outwork your competitors. No startup asks for
more.
**Just Good Enough**
One thing that might deter you from writing Web-based applications is the
lameness of Web pages as a UI. That is a problem, I admit. There were a few
things we would have _really_ liked to add to HTML and HTTP. What matters,
though, is that Web pages are just good enough.
There is a parallel here with the first microcomputers. The processors in
those machines weren't actually intended to be the CPUs of computers. They
were designed to be used in things like traffic lights. But guys like Ed
Roberts, who designed the [Altair](http://en.wikipedia.org/wiki/Altair_8800),
realized that they were just good enough. You could combine one of these chips
with some memory (256 bytes in the first Altair), and front panel switches,
and you'd have a working computer. Being able to have your own computer was so
exciting that there were plenty of people who wanted to buy them, however
limited.
Web pages weren't designed to be a UI for applications, but they're just good
enough. And for a significant number of users, software that you can use from
any browser will be enough of a win in itself to outweigh any awkwardness in
the UI. Maybe you can't write the best-looking spreadsheet using HTML, but you
can write a spreadsheet that several people can use simultaneously from
different locations without special client software, or that can incorporate
live data feeds, or that can page you when certain conditions are triggered.
More importantly, you can write new kinds of applications that don't even have
names yet. VisiCalc was not merely a microcomputer version of a mainframe
application, after all-- it was a new type of application.
Of course, server-based applications don't have to be Web-based. You could
have some other kind of client. But I'm pretty sure that's a bad idea. It
would be very convenient if you could assume that everyone would install your
client-- so convenient that you could easily convince yourself that they all
would-- but if they don't, you're hosed. Because Web-based software assumes
nothing about the client, it will work anywhere the Web works. That's a big
advantage already, and the advantage will grow as new Web devices proliferate.
Users will like you because your software just works, and your life will be
easier because you won't have to tweak it for every new client. [16]
I feel like I've watched the evolution of the Web as closely as anyone, and I
can't predict what's going to happen with clients. Convergence is probably
coming, but where? I can't pick a winner. One thing I can predict is conflict
between AOL and Microsoft. Whatever Microsoft's .NET turns out to be, it will
probably involve connecting the desktop to servers. Unless AOL fights back,
they will either be pushed aside or turned into a pipe between Microsoft
client and server software. If Microsoft and AOL get into a client war, the
only thing sure to work on both will be browsing the Web, meaning Web-based
applications will be the only kind that work everywhere.
How will it all play out? I don't know. And you don't have to know if you bet
on Web-based applications. No one can break that without breaking browsing.
The Web may not be the only way to deliver software, but it's one that works
now and will continue to work for a long time. Web-based applications are
cheap to develop, and easy for even the smallest startup to deliver. They're a
lot of work, and of a particularly stressful kind, but that only makes the
odds better for startups.
**Why Not?**
E. B. White was amused to learn from a farmer friend that many electrified
fences don't have any current running through them. The cows apparently learn
to stay away from them, and after that you don't need the current. "Rise up,
cows!" he wrote, "Take your liberty while despots snore!"
If you're a hacker who has thought of one day starting a startup, there are
probably two things keeping you from doing it. One is that you don't know
anything about business. The other is that you're afraid of competition.
Neither of these fences has any current in it.
There are only two things you have to know about business: build something
users love, and make more than you spend. If you get these two right, you'll
be ahead of most startups. You can figure out the rest as you go.
You may not at first make more than you spend, but as long as the gap is
closing fast enough you'll be ok. If you start out underfunded, it will at
least encourage a habit of frugality. The less you spend, the easier it is to
make more than you spend. Fortunately, it can be very cheap to launch a Web-
based application. We launched on under $10,000, and it would be even cheaper
today. We had to spend thousands on a server, and thousands more to get SSL.
(The only company selling SSL software at the time was Netscape.) Now you can
rent a much more powerful server, with SSL included, for less than we paid for
bandwidth alone. You could launch a Web-based application now for less than
the cost of a fancy office chair.
As for building something users love, here are some general tips. Start by
making something clean and simple that you would want to use yourself. Get a
version 1.0 out fast, then continue to improve the software, listening closely
to the users as you do. The customer is always right, but different customers
are right about different things; the least sophisticated users show you what
you need to simplify and clarify, and the most sophisticated tell you what
features you need to add. The best thing software can be is easy, but the way
to do this is to get the defaults right, not to limit users' choices. Don't
get complacent if your competitors' software is lame; the standard to compare
your software to is what it could be, not what your current competitors happen
to have. Use your software yourself, all the time. Viaweb was supposed to be
an online store builder, but we used it to make our own site too. Don't listen
to marketing people or designers or product managers just because of their job
titles. If they have good ideas, use them, but it's up to you to decide;
software has to be designed by hackers who understand design, not designers
who know a little about software. If you can't design software as well as
implement it, don't start a startup.
Now let's talk about competition. What you're afraid of is presumably not
groups of hackers like you, but actual companies, with offices and business
plans and salesmen and so on, right? Well, they are more afraid of you than
you are of them, and they're right. It's a lot easier for a couple of hackers
to figure out how to rent office space or hire sales people than it is for a
company of any size to get software written. I've been on both sides, and I
know. When Viaweb was bought by Yahoo, I suddenly found myself working for a
big company, and it was like trying to run through waist-deep water.
I don't mean to disparage Yahoo. They had some good hackers, and the top
management were real butt-kickers. For a big company, they were exceptional.
But they were still only about a tenth as productive as a small startup. No
big company can do much better than that. What's scary about Microsoft is that
a company so big can develop software at all. They're like a mountain that can
walk.
Don't be intimidated. You can do as much that Microsoft can't as they can do
that you can't. And no one can stop you. You don't have to ask anyone's
permission to develop Web-based applications. You don't have to do licensing
deals, or get shelf space in retail stores, or grovel to have your application
bundled with the OS. You can deliver software right to the browser, and no one
can get between you and potential users without preventing them from browsing
the Web.
You may not believe it, but I promise you, Microsoft is scared of you. The
complacent middle managers may not be, but Bill is, because he was you once,
back in 1975, the last time a new way of delivering software appeared.
**Notes**
[1] Realizing that much of the money is in the services, companies building
lightweight clients have usually tried to combine the hardware with an [online
service](http://news.cnet.com/news/0-1006-200-3622600.html). This approach has
not worked well, partly because you need two different kinds of companies to
build consumer electronics and to run an online service, and partly because
users hate the idea. Giving away the razor and making money on the blades may
work for Gillette, but a razor is a much smaller commitment than a Web terminal.
Cell phone handset makers are satisfied to sell hardware without trying to
capture the service revenue as well. That should probably be the model for
Internet clients too. If someone just sold a nice-looking little box with a
Web browser that you could use to connect through any ISP, every technophobe
in the country would buy one.
[2] Security always depends more on not screwing up than any design decision,
but the nature of server-based software will make developers pay more
attention to not screwing up. Compromising a server could cause such damage
that ASPs (that want to stay in business) are likely to be careful about
security.
[3] In 1995, when we started Viaweb, Java applets were supposed to be the
technology everyone was going to use to develop server-based applications.
Applets seemed to us an old-fashioned idea. Download programs to run on the
client? Simpler just to go all the way and run the programs on the server. We
wasted little time on applets, but countless other startups must have been
lured into this tar pit. Few can have escaped alive, or Microsoft could not
have gotten away with dropping Java in the most recent version of Explorer.
[4] This point is due to Trevor Blackwell, who adds "the cost of writing
software goes up more than linearly with its size. Perhaps this is mainly due
to fixing old bugs, and the cost can be more linear if all bugs are found
quickly."
[5] The hardest kind of bug to find may be a variant of compound bug where one
bug happens to compensate for another. When you fix one bug, the other becomes
visible. But it will seem as if the fix is at fault, since that was the last
thing you changed.
[6] Within Viaweb we once had a contest to describe the worst thing about our
software. Two customer support people tied for first prize with entries I
still shiver to recall. We fixed both problems immediately.
[7] Robert Morris wrote the ordering system, which shoppers used to place
orders. Trevor Blackwell wrote the image generator and the manager, which
merchants used to retrieve orders, view statistics, and configure domain names
etc. I wrote the editor, which merchants used to build their sites. The
ordering system and image generator were written in C and C++, the manager
mostly in Perl, and the editor in [Lisp](avg.html).
[8] Price discrimination is so pervasive (how often have you heard a retailer
claim that their buying power meant lower prices for you?) that I was
surprised to find it was outlawed in the U.S. by the Robinson-Patman Act of
1936. This law does not appear to be vigorously enforced.
[9] In _No Logo,_ Naomi Klein says that clothing brands favored by "urban
youth" do not try too hard to prevent shoplifting because in their target
market the shoplifters are also the fashion leaders.
[10] Companies often wonder what to outsource and what not to. One possible
answer: outsource any job that's not directly exposed to competitive pressure,
because outsourcing it will thereby expose it to competitive pressure.
[11] The two guys were Dan Bricklin and Bob Frankston. Dan wrote a prototype
in Basic in a couple days, then over the course of the next year they worked
together (mostly at night) to make a more powerful version written in 6502
machine language. Dan was at Harvard Business School at the time and Bob
nominally had a day job writing software. "There was no great risk in doing a
business," Bob wrote, "If it failed it failed. No big deal."
[12] It's not quite as easy as I make it sound. It took a painfully long time
for word of mouth to get going, and we did not start to get a lot of press
coverage until we hired a [PR firm](http://www.schwartz-pr.com) (admittedly
the best in the business) for $16,000 per month. However, it was true that the
only significant channel was our own Web site.
[13] If the Mac was so great, why did it lose? Cost, again. Microsoft
concentrated on the software business, and unleashed a swarm of cheap
component suppliers on Apple hardware. It did not help, either, that suits
took over during a critical period.
[14] One thing that would help Web-based applications, and help keep the next
generation of software from being overshadowed by Microsoft, would be a good
open-source browser. Mozilla is open-source but seems to have suffered from
having been corporate software for so long. A small, fast browser that was
actively maintained would be a great thing in itself, and would probably also
encourage companies to build little Web appliances.
Among other things, a proper open-source browser would cause HTTP and HTML to
continue to evolve (as e.g. Perl has). It would help Web-based applications
greatly to be able to distinguish between selecting a link and following it;
all you'd need to do this would be a trivial enhancement of HTTP, to allow
multiple URLs in a request. Cascading menus would also be good.
If you want to change the world, write a new Mosaic. Think it's too late? In
1998 a lot of people thought it was too late to launch a new search engine,
but Google proved them wrong. There is always room for something new if the
current options suck enough. Make sure it works on all the free OSes first--
new things start with their users.
[15] Trevor Blackwell, who probably knows more about this from personal
experience than anyone, writes:
"I would go farther in saying that because server-based software is so hard on
the programmers, it causes a fundamental economic shift away from large
companies. It requires the kind of intensity and dedication from programmers
that they will only be willing to provide when it's their own company.
Software companies can hire skilled people to work in a not-too-demanding
environment, and can hire unskilled people to endure hardships, but they can't
hire highly skilled people to bust their asses. Since capital is no longer
needed, big companies have little to bring to the table."
[16] In the original version of this essay, I advised avoiding Javascript.
That was a good plan in 2001, but Javascript now works.
**Thanks** to Sarah Harlin, Trevor Blackwell, Robert Morris, Eric Raymond, Ken
Anderson, and Dan Giffin for reading drafts of this paper; to Dan Bricklin and
Bob Frankston for information about VisiCalc; and again to Ken Anderson for
inviting me to speak at BBN.
You'll find this essay and 14 others in [**_Hackers &
Painters_**](hackpaint.html).
March 2005
_(In the process of answering an email, I accidentally wrote a tiny essay
about writing. I usually spend weeks on an essay. This one took 67 minutes—23
of writing, and 44 of rewriting.)_
I think it's far more important to write well than most people realize.
Writing doesn't just communicate ideas; it generates them. If you're bad at
writing and don't like to do it, you'll miss out on most of the ideas writing
would have generated.
As for how to write well, here's the short version: Write a bad version 1 as
fast as you can; rewrite it over and over; cut ~~out~~ everything unnecessary;
write in a conversational tone; develop a nose for bad writing, so you can see
and fix it in yours; imitate writers you like; if you can't get started, tell
someone what you plan to write about, then write down what you said; expect
80% of the ideas in an essay to happen after you start writing it, and 50% of
those you start with to be wrong; be confident enough to cut; have friends you
trust read your stuff and tell you which bits are confusing or drag; don't
(always) make detailed outlines; mull ideas over for a few days before
writing; carry a small notebook or scrap paper with you; start writing when
you think of the first sentence; if a deadline forces you to start before
that, just say the most important sentence first; write about stuff you like;
don't try to sound impressive; don't hesitate to change the topic on the fly;
use footnotes to contain digressions; use anaphora to knit sentences together;
read your essays out loud to see (a) where you stumble over awkward phrases
and (b) which bits are boring (the paragraphs you dread reading); try to tell
the reader something new and useful; work in fairly big quanta of time; when
you restart, begin by rereading what you have so far; when you finish, leave
yourself something easy to start with; accumulate notes for topics you plan to
cover at the bottom of the file; don't feel obliged to cover any of them;
write for a reader who won't read the essay as carefully as you do, just as
pop songs are designed to sound ok on crappy car radios; if you say anything
mistaken, fix it immediately; ask friends which sentence you'll regret most;
go back and tone down harsh remarks; publish stuff online, because an audience
makes you write more, and thus generate more ideas; print out drafts instead
of just looking at them on the screen; use simple, Germanic words; learn to
distinguish surprises from digressions; learn to recognize the approach of an
ending, and when one appears, grab it.
November 2012
The way to get startup ideas is not to try to think of startup ideas. It's to
look for problems, preferably problems you have yourself.
The very best startup ideas tend to have three things in common: they're
something the founders themselves want, that they themselves can build, and
that few others realize are worth doing. Microsoft, Apple, Yahoo, Google, and
Facebook all began this way.
**Problems**
Why is it so important to work on a problem you have? Among other things, it
ensures the problem really exists. It sounds obvious to say you should only
work on problems that exist. And yet by far the most common mistake startups
make is to solve problems no one has.
I made it myself. In 1995 I started a company to put art galleries online. But
galleries didn't want to be online. It's not how the art business works. So
why did I spend 6 months working on this stupid idea? Because I didn't pay
attention to users. I invented a model of the world that didn't correspond to
reality, and worked from that. I didn't notice my model was wrong until I
tried to convince users to pay for what we'd built. Even then I took
embarrassingly long to catch on. I was attached to my model of the world, and
I'd spent a lot of time on the software. They had to want it!
Why do so many founders build things no one wants? Because they begin by
trying to think of startup ideas. That m.o. is doubly dangerous: it doesn't
merely yield few good ideas; it yields bad ideas that sound plausible enough
to fool you into working on them.
At YC we call these "made-up" or "sitcom" startup ideas. Imagine one of the
characters on a TV show was starting a startup. The writers would have to
invent something for it to do. But coming up with good startup ideas is hard.
It's not something you can do for the asking. So (unless they got amazingly
lucky) the writers would come up with an idea that sounded plausible, but was
actually bad.
For example, a social network for pet owners. It doesn't sound obviously
mistaken. Millions of people have pets. Often they care a lot about their pets
and spend a lot of money on them. Surely many of these people would like a
site where they could talk to other pet owners. Not all of them perhaps, but
if just 2 or 3 percent were regular visitors, you could have millions of
users. You could serve them targeted offers, and maybe charge for premium
features. [1]
The danger of an idea like this is that when you run it by your friends with
pets, they don't say "I would _never_ use this." They say "Yeah, maybe I could
see using something like that." Even when the startup launches, it will sound
plausible to a lot of people. They don't want to use it themselves, at least
not right now, but they could imagine other people wanting it. Sum that
reaction across the entire population, and you have zero users. [2]
**Well**
When a startup launches, there have to be at least some users who really need
what they're making — not just people who could see themselves using it one
day, but who want it urgently. Usually this initial group of users is small,
for the simple reason that if there were something that large numbers of
people urgently needed and that could be built with the amount of effort a
startup usually puts into a version one, it would probably already exist.
Which means you have to compromise on one dimension: you can either build
something a large number of people want a small amount, or something a small
number of people want a large amount. Choose the latter. Not all ideas of that
type are good startup ideas, but nearly all good startup ideas are of that
type.
Imagine a graph whose x axis represents all the people who might want what
you're making and whose y axis represents how much they want it. If you invert
the scale on the y axis, you can envision companies as holes. Google is an
immense crater: hundreds of millions of people use it, and they need it a lot.
A startup just starting out can't expect to excavate that much volume. So you
have two choices about the shape of hole you start with. You can either dig a
hole that's broad but shallow, or one that's narrow and deep, like a well.
Made-up startup ideas are usually of the first type. Lots of people are mildly
interested in a social network for pet owners.
Nearly all good startup ideas are of the second type. Microsoft was a well
when they made Altair Basic. There were only a couple thousand Altair owners,
but without this software they were programming in machine language. Thirty
years later Facebook had the same shape. Their first site was exclusively for
Harvard students, of which there are only a few thousand, but those few
thousand users wanted it a lot.
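To make the shape of demand concrete, here's a minimal toy model. The numbers and the adoption threshold are invented for illustration, not taken from anything measured: it just assumes a person adopts a crappy version one from an unknown startup only if their desire for it exceeds some fixed friction.

```python
# Toy model of broad-but-shallow vs. narrow-but-deep demand.
# All numbers are made up for illustration.

def v1_users(breadth, desire, friction=0.5):
    """People adopt a crappy version one only if their desire
    for it exceeds the friction of switching to it."""
    return breadth if desire > friction else 0

# Broad but shallow: millions mildly interested (the pet-owner network).
print(v1_users(breadth=10_000_000, desire=0.1))  # 0

# Narrow but deep: a couple thousand Altair owners who badly need Basic.
print(v1_users(breadth=2_000, desire=0.9))       # 2000
```

Summing mild interest across a huge population still yields zero actual users, which is the quantitative sense in which made-up ideas fail.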
When you have an idea for a startup, ask yourself: who wants this right now?
Who wants this so much that they'll use it even when it's a crappy version one
made by a two-person startup they've never heard of? If you can't answer that,
the idea is probably bad. [3]
You don't need the narrowness of the well per se. It's depth you need; you get
narrowness as a byproduct of optimizing for depth (and speed). But you almost
always do get it. In practice the link between depth and narrowness is so
strong that it's a good sign when you know that an idea will appeal strongly
to a specific group or type of user.
But while demand shaped like a well is almost a necessary condition for a good
startup idea, it's not a sufficient one. If Mark Zuckerberg had built
something that could only ever have appealed to Harvard students, it would not
have been a good startup idea. Facebook was a good idea because it started
with a small market there was a fast path out of. Colleges are similar enough
that if you build a facebook that works at Harvard, it will work at any
college. So you spread rapidly through all the colleges. Once you have all the
college students, you get everyone else simply by letting them in.
Similarly for Microsoft: Basic for the Altair; Basic for other machines; other
languages besides Basic; operating systems; applications; IPO.
**Self**
How do you tell whether there's a path out of an idea? How do you tell whether
something is the germ of a giant company, or just a niche product? Often you
can't. The founders of Airbnb didn't realize at first how big a market they
were tapping. Initially they had a much narrower idea. They were going to let
hosts rent out space on their floors during conventions. They didn't foresee
the expansion of this idea; it forced itself upon them gradually. All they
knew at first was that they were onto something. That's probably as much as
Bill Gates or Mark Zuckerberg knew at first.
Occasionally it's obvious from the beginning when there's a path out of the
initial niche. And sometimes I can see a path that's not immediately obvious;
that's one of our specialties at YC. But there are limits to how well this can
be done, no matter how much experience you have. The most important thing to
understand about paths out of the initial idea is the meta-fact that these are
hard to see.
So if you can't predict whether there's a path out of an idea, how do you
choose between ideas? The truth is disappointing but interesting: if you're
the right sort of person, you have the right sort of hunches. If you're at the
leading edge of a field that's changing fast, when you have a hunch that
something is worth doing, you're more likely to be right.
In _Zen and the Art of Motorcycle Maintenance_ , Robert Pirsig says:
> You want to know how to paint a perfect painting? It's easy. Make yourself
> perfect and then just paint naturally.
I've wondered about that passage since I read it in high school. I'm not sure
how useful his advice is for painting specifically, but it fits this situation
well. Empirically, the way to have good startup ideas is to become the sort of
person who has them.
Being at the leading edge of a field doesn't mean you have to be one of the
people pushing it forward. You can also be at the leading edge as a user. It
was not so much because he was a programmer that Facebook seemed a good idea
to Mark Zuckerberg as because he used computers so much. If you'd asked most
40-year-olds in 2004 whether they'd like to publish their lives semi-publicly
on the Internet, they'd have been horrified at the idea. But Mark already
lived online; to him it seemed natural.
Paul Buchheit says that people at the leading edge of a rapidly changing field
"live in the future." Combine that with Pirsig and you get:
> Live in the future, then build what's missing.
That describes the way many if not most of the biggest startups got started.
Neither Apple nor Yahoo nor Google nor Facebook was even supposed to be a
company at first. They grew out of things their founders built because there
seemed to be a gap in the world.
If you look at the way successful founders have had their ideas, it's
generally the result of some external stimulus hitting a prepared mind. Bill
Gates and Paul Allen hear about the Altair and think "I bet we could write a
Basic interpreter for it." Drew Houston realizes he's forgotten his USB stick
and thinks "I really need to make my files live online." Lots of people heard
about the Altair. Lots forgot USB sticks. The reason those stimuli caused
those founders to start companies was that their experiences had prepared them
to notice the opportunities they represented.
The verb you want to be using with respect to startup ideas is not "think up"
but "notice." At YC we call ideas that grow naturally out of the founders' own
experiences "organic" startup ideas. The most successful startups almost all
begin this way.
That may not have been what you wanted to hear. You may have expected recipes
for coming up with startup ideas, and instead I'm telling you that the key is
to have a mind that's prepared in the right way. But disappointing though it
may be, this is the truth. And it is a recipe of a sort, just one that in the
worst case takes a year rather than a weekend.
If you're not at the leading edge of some rapidly changing field, you can get
to one. For example, anyone reasonably smart can probably get to an edge of
programming (e.g. building mobile apps) in a year. Since a successful startup
will consume at least 3-5 years of your life, a year's preparation would be a
reasonable investment. Especially if you're also looking for a cofounder. [4]
You don't have to learn programming to be at the leading edge of a domain
that's changing fast. Other domains change fast. But while learning to hack is
not necessary, it is for the foreseeable future sufficient. As Marc Andreessen
put it, software is eating the world, and this trend has decades left to run.
Knowing how to hack also means that when you have ideas, you'll be able to
implement them. That's not absolutely necessary (Jeff Bezos couldn't) but it's
an advantage. It's a big advantage, when you're considering an idea like
putting a college facebook online, if instead of merely thinking "That's an
interesting idea," you can think instead "That's an interesting idea. I'll try
building an initial version tonight." It's even better when you're both a
programmer and the target user, because then the cycle of generating new
versions and testing them on users can happen inside one head.
**Noticing**
Once you're living in the future in some respect, the way to notice startup
ideas is to look for things that seem to be missing. If you're really at the
leading edge of a rapidly changing field, there will be things that are
obviously missing. What won't be obvious is that they're startup ideas. So if
you want to find startup ideas, don't merely turn on the filter "What's
missing?" Also turn off every other filter, particularly "Could this be a big
company?" There's plenty of time to apply that test later. But if you're
thinking about that initially, it may not only filter out lots of good ideas,
but also cause you to focus on bad ones.
Most things that are missing will take some time to see. You almost have to
trick yourself into seeing the ideas around you.
But you _know_ the ideas are out there. This is not one of those problems
where there might not be an answer. It's impossibly unlikely that this is the
exact moment when technological progress stops. You can be sure people are
going to build things in the next few years that will make you think "What did
I do before x?"
And when these problems get solved, they will probably seem flamingly obvious
in retrospect. What you need to do is turn off the filters that usually
prevent you from seeing them. The most powerful is simply taking the current
state of the world for granted. Even the most radically open-minded of us
mostly do that. You couldn't get from your bed to the front door if you
stopped to question everything.
But if you're looking for startup ideas you can sacrifice some of the
efficiency of taking the status quo for granted and start to question things.
Why is your inbox overflowing? Because you get a lot of email, or because it's
hard to get email out of your inbox? Why do you get so much email? What
problems are people trying to solve by sending you email? Are there better
ways to solve them? And why is it hard to get emails out of your inbox? Why do
you keep emails around after you've read them? Is an inbox the optimal tool
for that?
Pay particular attention to things that chafe you. The advantage of taking the
status quo for granted is not just that it makes life (locally) more
efficient, but also that it makes life more tolerable. If you knew about all
the things we'll get in the next 50 years but don't have yet, you'd find
present day life pretty constraining, just as someone from the present would
if they were sent back 50 years in a time machine. When something annoys you,
it could be because you're living in the future.
When you find the right sort of problem, you should probably be able to
describe it as _obvious_ , at least to you. When we started Viaweb, all the
online stores were built by hand, by web designers making individual HTML
pages. It was obvious to us as programmers that these sites would have to be
generated by software. [5]
Which means, strangely enough, that coming up with startup ideas is a question
of seeing the obvious. That suggests how weird this process is: you're trying
to see things that are obvious, and yet that you hadn't seen.
Since what you need to do here is loosen up your own mind, it may be best not
to make too much of a direct frontal attack on the problem — i.e. to sit down
and try to think of ideas. The best plan may be just to keep a background
process running, looking for things that seem to be missing. Work on hard
problems, driven mainly by curiosity, but have a second self watching over
your shoulder, taking note of gaps and anomalies. [6]
Give yourself some time. You have a lot of control over the rate at which you
turn yours into a prepared mind, but you have less control over the stimuli
that spark ideas when they hit it. If Bill Gates and Paul Allen had
constrained themselves to come up with a startup idea in one month, what if
they'd chosen a month before the Altair appeared? They probably would have
worked on a less promising idea. Drew Houston did work on a less promising
idea before Dropbox: an SAT prep startup. But Dropbox was a much better idea,
both in the absolute sense and also as a match for his skills. [7]
A good way to trick yourself into noticing ideas is to work on projects that
seem like they'd be cool. If you do that, you'll naturally tend to build
things that are missing. It wouldn't seem as interesting to build something
that already existed.
Just as trying to think up startup ideas tends to produce bad ones, working on
things that could be dismissed as "toys" often produces good ones. When
something is described as a toy, that means it has everything an idea needs
except being important. It's cool; users love it; it just doesn't matter. But
if you're living in the future and you build something cool that users love,
it may matter more than outsiders think. Microcomputers seemed like toys when
Apple and Microsoft started working on them. I'm old enough to remember that
era; the usual term for people with their own microcomputers was "hobbyists."
BackRub seemed like an inconsequential science project. The Facebook was just
a way for undergrads to stalk one another.
At YC we're excited when we meet startups working on things that we could
imagine know-it-alls on forums dismissing as toys. To us that's positive
evidence an idea is good.
If you can afford to take a long view (and arguably you can't afford not to),
you can turn "Live in the future and build what's missing" into something even
better:
> Live in the future and build what seems interesting.
**School**
That's what I'd advise college students to do, rather than trying to learn
about "entrepreneurship." "Entrepreneurship" is something you learn best by
doing it. The examples of the most successful founders make that clear. What
you should be spending your time on in college is ratcheting yourself into the
future. College is an incomparable opportunity to do that. What a waste to
sacrifice an opportunity to solve the hard part of starting a startup —
becoming the sort of person who can have organic startup ideas — by spending
time learning about the easy part. Especially since you won't even really
learn about it, any more than you'd learn about sex in a class. All you'll
learn is the words for things.
The clash of domains is a particularly fruitful source of ideas. If you know a
lot about programming and you start learning about some other field, you'll
probably see problems that software could solve. In fact, you're doubly likely
to find good problems in another domain: (a) the inhabitants of that domain
are not as likely as software people to have already solved their problems
with software, and (b) since you come into the new domain totally ignorant,
you don't even know what the status quo is to take it for granted.
So if you're a CS major and you want to start a startup, instead of taking a
class on entrepreneurship you're better off taking a class on, say, genetics.
Or better still, go work for a biotech company. CS majors normally get summer
jobs at computer hardware or software companies. But if you want to find
startup ideas, you might do better to get a summer job in some unrelated
field. [8]
Or don't take any extra classes, and just build things. It's no coincidence
that Microsoft and Facebook both got started in January. At Harvard that is
(or was) Reading Period, when students have no classes to attend because
they're supposed to be studying for finals. [9]
But don't feel like you have to build things that will become startups. That's
premature optimization. Just build things. Preferably with other students.
It's not just the classes that make a university such a good place to crank
oneself into the future. You're also surrounded by other people trying to do
the same thing. If you work together with them on projects, you'll end up
producing not just organic ideas, but organic ideas with organic founding
teams — and that, empirically, is the best combination.
Beware of research. If an undergrad writes something all his friends start
using, it's quite likely to represent a good startup idea. Whereas a PhD
dissertation is extremely unlikely to. For some reason, the more a project has
to count as research, the less likely it is to be something that could be
turned into a startup. [10] I think the reason is that the subset of ideas
that count as research is so narrow that it's unlikely that a project that
satisfied that constraint would also satisfy the orthogonal constraint of
solving users' problems. Whereas when students (or professors) build something
as a side-project, they automatically gravitate toward solving users' problems
— perhaps even with an additional energy that comes from being freed from the
constraints of research.
**Competition**
Because a good idea should seem obvious, when you have one you'll tend to feel
that you're late. Don't let that deter you. Worrying that you're late is one
of the signs of a good idea. Ten minutes of searching the web will usually
settle the question. Even if you find someone else working on the same thing,
you're probably not too late. It's exceptionally rare for startups to be
killed by competitors — so rare that you can almost discount the possibility.
So unless you discover a competitor with the sort of lock-in that would
prevent users from choosing you, don't discard the idea.
If you're uncertain, ask users. The question of whether you're too late is
subsumed by the question of whether anyone urgently needs what you plan to
make. If you have something that no competitor does and that some subset of
users urgently need, you have a beachhead. [11]
The question then is whether that beachhead is big enough. Or more
importantly, who's in it: if the beachhead consists of people doing something
lots more people will be doing in the future, then it's probably big enough no
matter how small it is. For example, if you're building something
differentiated from competitors by the fact that it works on phones, but it
only works on the newest phones, that's probably a big enough beachhead.
Err on the side of doing things where you'll face competitors. Inexperienced
founders usually give competitors more credit than they deserve. Whether you
succeed depends far more on you than on your competitors. So better a good
idea with competitors than a bad one without.
You don't need to worry about entering a "crowded market" so long as you have
a thesis about what everyone else in it is overlooking. In fact that's a very
promising starting point. Google was that type of idea. Your thesis has to be
more precise than "we're going to make an x that doesn't suck" though. You
have to be able to phrase it in terms of something the incumbents are
overlooking. Best of all is when you can say that they didn't have the courage
of their convictions, and that your plan is what they'd have done if they'd
followed through on their own insights. Google was that type of idea too. The
search engines that preceded them shied away from the most radical
implications of what they were doing — particularly that the better a job they
did, the faster users would leave.
A crowded market is actually a good sign, because it means both that there's
demand and that none of the existing solutions are good enough. A startup
can't hope to enter a market that's obviously big and yet in which they have
no competitors. So any startup that succeeds is either going to be entering a
market with existing competitors, but armed with some secret weapon that will
get them all the users (like Google), or entering a market that looks small
but which will turn out to be big (like Microsoft). [12]
**Filters**
There are two more filters you'll need to turn off if you want to notice
startup ideas: the unsexy filter and the schlep filter.
Most programmers wish they could start a startup by just writing some
brilliant code, pushing it to a server, and having users pay them lots of
money. They'd prefer not to deal with tedious problems or get involved in
messy ways with the real world. Which is a reasonable preference, because such
things slow you down. But this preference is so widespread that the space of
convenient startup ideas has been stripped pretty clean. If you let your mind
wander a few blocks down the street to the messy, tedious ideas, you'll find
valuable ones just sitting there waiting to be implemented.
The schlep filter is so dangerous that I wrote a separate essay about the
condition it induces, which I called [schlep blindness](schlep.html). I gave
Stripe as an example of a startup that benefited from turning off this filter,
and a pretty striking example it is. Thousands of programmers were in a
position to see this idea; thousands of programmers knew how painful it was to
process payments before Stripe. But when they looked for startup ideas they
didn't see this one, because unconsciously they shrank from having to deal
with payments. And dealing with payments is a schlep for Stripe, but not an
intolerable one. In fact they might have had net less pain; because the fear
of dealing with payments kept most people away from this idea, Stripe has had
comparatively smooth sailing in other areas that are sometimes painful, like
user acquisition. They didn't have to try very hard to make themselves heard
by users, because users were desperately waiting for what they were building.
The unsexy filter is similar to the schlep filter, except it keeps you from
working on problems you despise rather than ones you fear. We overcame this
one to work on Viaweb. There were interesting things about the architecture of
our software, but we weren't interested in ecommerce per se. We could see the
problem was one that needed to be solved though.
Turning off the schlep filter is more important than turning off the unsexy
filter, because the schlep filter is more likely to be an illusion. And even
to the degree it isn't, it's a worse form of self-indulgence. Starting a
successful startup is going to be fairly laborious no matter what. Even if the
product doesn't entail a lot of schleps, you'll still have plenty dealing with
investors, hiring and firing people, and so on. So if there's some idea you
think would be cool but you're kept away from by fear of the schleps involved,
don't worry: any sufficiently good idea will have as many.
The unsexy filter, while still a source of error, is not as entirely useless
as the schlep filter. If you're at the leading edge of a field that's changing
rapidly, your ideas about what's sexy will be somewhat correlated with what's
valuable in practice. Particularly as you get older and more experienced. Plus
if you find an idea sexy, you'll work on it more enthusiastically. [13]
**Recipes**
While the best way to discover startup ideas is to become the sort of person
who has them and then build whatever interests you, sometimes you don't have
that luxury. Sometimes you need an idea now. For example, if you're working on
a startup and your initial idea turns out to be bad.
For the rest of this essay I'll talk about tricks for coming up with startup
ideas on demand. Although empirically you're better off using the organic
strategy, you could succeed this way. You just have to be more disciplined.
When you use the organic method, you don't even notice an idea unless it's
evidence that something is truly missing. But when you make a conscious effort
to think of startup ideas, you have to replace this natural constraint with
self-discipline. You'll see a lot more ideas, most of them bad, so you need to
be able to filter them.
One of the biggest dangers of not using the organic method is the example of
the organic method. Organic ideas feel like inspirations. There are a lot of
stories about successful startups that began when the founders had what seemed
a crazy idea but "just knew" it was promising. When you feel that about an
idea you've had while trying to come up with startup ideas, you're probably
mistaken.
When searching for ideas, look in areas where you have some expertise. If
you're a database expert, don't build a chat app for teenagers (unless you're
also a teenager). Maybe it's a good idea, but you can't trust your judgment
about that, so ignore it. There have to be other ideas that involve databases,
and whose quality you can judge. Do you find it hard to come up with good
ideas involving databases? That's because your expertise raises your
standards. Your ideas about chat apps are just as bad, but you're giving
yourself a Dunning-Kruger pass in that domain.
The place to start looking for ideas is things you need. There _must_ be
things you need. [14]
One good trick is to ask yourself whether in your previous job you ever found
yourself saying "Why doesn't someone make x? If someone made x we'd buy it in
a second." If you can think of any x people said that about, you probably have
an idea. You know there's demand, and people don't say that about things that
are impossible to build.
More generally, try asking yourself whether there's something unusual about
you that makes your needs different from most other people's. You're probably
not the only one. It's especially good if you're different in a way people
will increasingly be.
If you're changing ideas, one unusual thing about you is the idea you'd
previously been working on. Did you discover any needs while working on it?
Several well-known startups began this way. Hotmail began as something its
founders wrote to talk about their previous startup idea while they were
working at their day jobs. [15]
A particularly promising way to be unusual is to be young. Some of the most
valuable new ideas take root first among people in their teens and early
twenties. And while young founders are at a disadvantage in some respects,
they're the only ones who really understand their peers. It would have been
very hard for someone who wasn't a college student to start Facebook. So if
you're a young founder (under 23 say), are there things you and your friends
would like to do that current technology won't let you?
The next best thing to an unmet need of your own is an unmet need of someone
else. Try talking to everyone you can about the gaps they find in the world.
What's missing? What would they like to do that they can't? What's tedious or
annoying, particularly in their work? Let the conversation get general; don't
be trying too hard to find startup ideas. You're just looking for something to
spark a thought. Maybe you'll notice a problem they didn't consciously realize
they had, because you know how to solve it.
When you find an unmet need that isn't your own, it may be somewhat blurry at
first. The person who needs something may not know exactly what they need. In
that case I often recommend that founders act like consultants — that they do
what they'd do if they'd been retained to solve the problems of this one user.
People's problems are similar enough that nearly all the code you write this
way will be reusable, and whatever isn't will be a small price to pay for
starting out certain that you've reached the bottom of the well. [16]
One way to ensure you do a good job solving other people's problems is to make
them your own. When Rajat Suri of E la Carte decided to write software for
restaurants, he got a job as a waiter to learn how restaurants worked. That
may seem like taking things to extremes, but startups are extreme. We love it
when founders do such things.
In fact, one strategy I recommend to people who need a new idea is not merely
to turn off their schlep and unsexy filters, but to seek out ideas that are
unsexy or involve schleps. Don't try to start Twitter. Those ideas are so rare
that you can't find them by looking for them. Make something unsexy that
people will pay you for.
A good trick for bypassing the schlep and to some extent the unsexy filter is
to ask what you wish someone else would build, so that you could use it. What
would you pay for right now?
Since startups often garbage-collect broken companies and industries, it can
be a good trick to look for those that are dying, or deserve to, and try to
imagine what kind of company would profit from their demise. For example,
journalism is in free fall at the moment. But there may still be money to be
made from something like journalism. What sort of company might cause people
in the future to say "this replaced journalism" on some axis?
But imagine asking that in the future, not now. When one company or industry
replaces another, it usually comes in from the side. So don't look for a
replacement for x; look for something that people will later say turned out to
be a replacement for x. And be imaginative about the axis along which the
replacement occurs. Traditional journalism, for example, is a way for readers
to get information and to kill time, a way for writers to make money and to
get attention, and a vehicle for several different types of advertising. It
could be replaced on any of these axes (it has already started to be on most).
When startups consume incumbents, they usually start by serving some small but
important market that the big players ignore. It's particularly good if
there's an admixture of disdain in the big players' attitude, because that
often misleads them. For example, after Steve Wozniak built the computer that
became the Apple I, he felt obliged to give his then-employer Hewlett-Packard
the option to produce it. Fortunately for him, they turned it down, and one of
the reasons they did was that it used a TV for a monitor, which seemed
intolerably déclassé to a high-end hardware company like HP was at the time.
[17]
Are there groups of [scruffy](marginal.html) but sophisticated users like the
early microcomputer "hobbyists" that are currently being ignored by the big
players? A startup with its sights set on bigger things can often capture a
small market easily by expending an effort that wouldn't be justified by that
market alone.
Similarly, since the most successful startups generally ride some wave bigger
than themselves, it could be a good trick to look for waves and ask how one
could benefit from them. The prices of gene sequencing and 3D printing are
both experiencing Moore's Law-like declines. What new things will we be able
to do in the new world we'll have in a few years? What are we unconsciously
ruling out as impossible that will soon be possible?
**Organic**
But talking about looking explicitly for waves makes it clear that such
recipes are plan B for getting startup ideas. Looking for waves is essentially
a way to simulate the organic method. If you're at the leading edge of some
rapidly changing field, you don't have to look for waves; you are the wave.
Finding startup ideas is a subtle business, and that's why most people who try
fail so miserably. It doesn't work well simply to try to think of startup
ideas. If you do that, you get bad ones that sound dangerously plausible. The
best approach is more indirect: if you have the right sort of background, good
startup ideas will seem obvious to you. But even then, not immediately. It
takes time to come across situations where you notice something missing. And
often these gaps won't seem to be ideas for companies, just things that would
be interesting to build. Which is why it's good to have the time and the
inclination to build things just because they're interesting.
Live in the future and build what seems interesting. Strange as it sounds,
that's the real recipe.
**Notes**
[1] This form of bad idea has been around as long as the web. It was common in
the 1990s, except then people who had it used to say they were going to create
a portal for x instead of a social network for x. Structurally the idea is
stone soup: you post a sign saying "this is the place for people interested in
x," and all those people show up and you make money from them. What lures
founders into this sort of idea are statistics about the millions of people
who might be interested in each type of x. What they forget is that any given
person might have 20 affinities by this standard, and no one is going to visit
20 different communities regularly.
[2] I'm not saying, incidentally, that I know for sure a social network for
pet owners is a bad idea. I know it's a bad idea the way I know randomly
generated DNA would not produce a viable organism. The set of plausible
sounding startup ideas is many times larger than the set of good ones, and
many of the good ones don't even sound that plausible. So if all you know
about a startup idea is that it sounds plausible, you have to assume it's bad.
[3] More precisely, the users' need has to give them sufficient activation
energy to start using whatever you make, which can vary a lot. For example,
the activation energy for enterprise software sold through traditional
channels is very high, so you'd have to be a _lot_ better to get users to
switch. Whereas the activation energy required to switch to a new search
engine is low. Which in turn is why search engines are so much better than
enterprise software.
[4] This gets harder as you get older. While the space of ideas doesn't have
dangerous local maxima, the space of careers does. There are fairly high walls
between most of the paths people take through life, and the older you get, the
higher the walls become.
[5] It was also obvious to us that the web was going to be a big deal. Few
non-programmers grasped that in 1995, but the programmers had seen what GUIs
had done for desktop computers.
[6] Maybe it would work to have this second self keep a journal, and each
night to make a brief entry listing the gaps and anomalies you'd noticed that
day. Not startup ideas, just the raw gaps and anomalies.
[7] Sam Altman points out that taking time to come up with an idea is not
merely a better strategy in an absolute sense, but also like an undervalued
stock in that so few founders do it.
There's comparatively little competition for the best ideas, because few
founders are willing to put in the time required to notice them. Whereas there
is a great deal of competition for mediocre ideas, because when people make up
startup ideas, they tend to make up the same ones.
[8] For the computer hardware and software companies, summer jobs are the
first phase of the recruiting funnel. But if you're good you can skip the
first phase. If you're good you'll have no trouble getting hired by these
companies when you graduate, regardless of how you spent your summers.
[9] The empirical evidence suggests that if colleges want to help their
students start startups, the best thing they can do is leave them alone in the
right way.
[10] I'm speaking here of IT startups; in biotech things are different.
[11] This is an instance of a more general rule: focus on users, not
competitors. The most important information about competitors is what you
learn via users anyway.
[12] In practice most successful startups have elements of both. And you can
describe each strategy in terms of the other by adjusting the boundaries of
what you call the market. But it's useful to consider these two ideas
separately.
[13] I almost hesitate to raise that point though. Startups are businesses;
the point of a business is to make money; and with that additional constraint,
you can't expect you'll be able to spend all your time working on what
interests you most.
[14] The need has to be a strong one. You can retroactively describe any made-
up idea as something you need. But do you really need that recipe site or
local event aggregator as much as Drew Houston needed Dropbox, or Brian Chesky
and Joe Gebbia needed Airbnb?
Quite often at YC I find myself asking founders "Would you use this thing
yourself, if you hadn't written it?" and you'd be surprised how often the
answer is no.
[15] Paul Buchheit points out that trying to sell something bad can be a
source of better ideas:
"The best technique I've found for dealing with YC companies that have bad
ideas is to tell them to go sell the product ASAP (before wasting time
building it). Not only do they learn that nobody wants what they are building,
they very often come back with a real idea that they discovered in the process
of trying to sell the bad idea."
[16] Here's a recipe that might produce the next Facebook, if you're college
students. If you have a connection to one of the more powerful sororities at
your school, approach the queen bees thereof and offer to be their personal IT
consultants, building anything they could imagine needing in their social
lives that didn't already exist. Anything that got built this way would be
very promising, because such users are not just the most demanding but also
the perfect point to spread from.
I have no idea whether this would work.
[17] And the reason it used a TV for a monitor is that Steve Wozniak started
out by solving his own problems. He, like most of his peers, couldn't afford a
monitor.
**Thanks** to Sam Altman, Mike Arrington, Paul Buchheit, John Collison,
Patrick Collison, Garry Tan, and Harj Taggar for reading drafts of this, and
Marc Andreessen, Joe Gebbia, Reid Hoffman, Shel Kaphan, Mike Moritz and Kevin
Systrom for answering my questions about startup history.
April 2009
_Inc_ recently asked me who I thought were the 5 most interesting startup
founders of the last 30 years. How do you decide who's the most interesting?
The best test seemed to be influence: who are the 5 who've influenced me most?
Who do I use as examples when I'm talking to companies we fund? Who do I find
myself quoting?
**1. Steve Jobs**
I'd guess Steve is the most influential founder not just for me but for most
people you could ask. A lot of startup culture is Apple culture. He was the
original young founder. And while the concept of "insanely great" already
existed in the arts, it was a novel idea to introduce into a company in the
1980s.
More remarkable still, he's stayed interesting for 30 years. People await new
Apple products the way they'd await new books by a popular novelist. Steve may
not literally design them, but they wouldn't happen if he weren't CEO.
Steve is clever and driven, but so are a lot of people in the Valley. What
makes him unique is his [sense of design](taste.html). Before him, most
companies treated design as a frivolous extra. Apple's competitors now know
better.
**2. TJ Rodgers**
TJ Rodgers isn't as famous as Steve Jobs, but he may be the best writer among
Silicon Valley CEOs. I've probably learned more from him about the startup way
of thinking than from anyone else. Not so much from specific things he's
written as by reconstructing the mind that produced them: brutally candid;
aggressively garbage-collecting outdated ideas; and yet driven by pragmatism
rather than ideology.
The first essay of his that I read was so electrifying that I remember exactly
where I was at the time. It was [High Technology Innovation: Free Markets or
Government Subsidies?](http://www.cypress.com/?rID=34993) and I was downstairs
in the Harvard Square T Station. It felt as if someone had flipped on a light
switch inside my head.
**3. Larry & Sergey**
I'm sorry to treat Larry and Sergey as one person. I've always thought that
was unfair to them. But it does seem as if Google was a collaboration.
Before Google, companies in Silicon Valley already knew it was important to
have the best hackers. So they claimed, at least. But Google pushed this idea
further than anyone had before. Their hypothesis seems to have been that, in
the initial stages at least, _all_ you need is good hackers: if you hire all
the smartest people and put them to work on a problem where their success can
be measured, you win. All the other stuff—which includes all the stuff that
business schools think business consists of—you can figure out along the way.
The results won't be perfect, but they'll be optimal. If this was their
hypothesis, it's now been verified experimentally.
**4. Paul Buchheit**
Few know this, but one person, Paul Buchheit, is responsible for three of the
best things Google has done. He was the original author of GMail, which is the
most impressive thing Google has after search. He also wrote the first
prototype of AdSense, and was the author of Google's mantra "Don't be evil."
PB made a point in a talk once that I now mention to every startup we fund:
that it's better, initially, to make a small number of users really love you
than a large number kind of like you. If I could tell startups only [ten
sentences](13sentences.html), this would be one of them.
Now he's cofounder of a startup called Friendfeed. It's only a year old, but
already everyone in the Valley is watching them. Someone responsible for three
of the biggest ideas at Google is going to come up with more.
**5. Sam Altman**
I was told I shouldn't mention founders of YC-funded companies in this list.
But Sam Altman can't be stopped by such flimsy rules. If he wants to be on
this list, he's going to be.
Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm
advising startups. On questions of design, I ask "What would Steve do?" but on
questions of strategy or ambition I ask "What would Sama do?"
What I learned from meeting Sama is that the doctrine of the elect applies to
startups. It applies way less than most people think: startup investing does
not consist of trying to pick winners the way you might in a horse race. But
there are a few people with such force of will that they're going to get
whatever they want.
May 2008
Great cities attract ambitious people. You can sense it when you walk around
one. In a hundred subtle ways, the city sends you a message: you could do
more; you should try harder.
The surprising thing is how different these messages can be. New York tells
you, above all: you should make more money. There are other messages too, of
course. You should be hipper. You should be better looking. But the clearest
message is that you should be richer.
What I like about Boston (or rather Cambridge) is that the message there is:
you should be smarter. You really should get around to reading all those books
you've been meaning to.
When you ask what message a city sends, you sometimes get surprising answers.
As much as they respect brains in Silicon Valley, the message the Valley sends
is: you should be more powerful.
That's not quite the same message New York sends. Power matters in New York
too of course, but New York is pretty impressed by a billion dollars even if
you merely inherited it. In Silicon Valley no one would care except a few real
estate agents. What matters in Silicon Valley is how much effect you have on
the world. The reason people there care about Larry and Sergey is not their
wealth but the fact that they control Google, which affects practically
everyone.
_____
How much does it matter what message a city sends? Empirically, the answer
seems to be: a lot. You might think that if you had enough strength of mind to
do great things, you'd be able to transcend your environment. Where you live
should make at most a couple percent difference. But if you look at the
historical evidence, it seems to matter more than that. Most people who did
great things were clumped together in a few places where that sort of thing
was done at the time.
You can see how powerful cities are from something I wrote about
[earlier](taste.html): the case of the Milanese Leonardo. Practically every
fifteenth century Italian painter you've heard of was from Florence, even
though Milan was just as big. People in Florence weren't genetically
different, so you have to assume there was someone born in Milan with as much
natural ability as Leonardo. What happened to him?
If even someone with the same natural ability as Leonardo couldn't beat the
force of environment, do you suppose you can?
I don't. I'm fairly stubborn, but I wouldn't try to fight this force. I'd
rather use it. So I've thought a lot about where to live.
I'd always imagined Berkeley would be the ideal place — that it would
basically be Cambridge with good weather. But when I finally tried living
there a couple years ago, it turned out not to be. The message Berkeley sends
is: you should live better. Life in Berkeley is very civilized. It's probably
the place in America where someone from Northern Europe would feel most at
home. But it's not humming with ambition.
In retrospect it shouldn't have been surprising that a place so pleasant would
attract people interested above all in quality of life. Cambridge with good
weather, it turns out, is not Cambridge. The people you find in Cambridge are
not there by accident. You have to make sacrifices to live there. It's
expensive and somewhat grubby, and the weather's often bad. So the kind of
people you find in Cambridge are the kind of people who want to live where the
smartest people are, even if that means living in an expensive, grubby place
with bad weather.
As of this writing, Cambridge seems to be the intellectual capital of the
world. I realize that seems a preposterous claim. What makes it true is that
it's more preposterous to claim about anywhere else. American universities
currently seem to be the best, judging from the flow of ambitious students.
And what US city has a stronger claim? New York? A fair number of smart
people, but diluted by a much larger number of neanderthals in suits. The Bay
Area has a lot of smart people too, but again, diluted; there are two great
universities, but they're far apart. Harvard and MIT are practically adjacent
by West Coast standards, and they're surrounded by about 20 other colleges and
universities. [1]
Cambridge as a result feels like a town whose main industry is ideas, while
New York's is finance and Silicon Valley's is startups.
_____
When you talk about cities in the sense we are, what you're really talking
about is collections of people. For a long time cities were the only large
collections of people, so you could use the two ideas interchangeably. But we
can see how much things are changing from the examples I've mentioned. New
York is a classic great city. But Cambridge is just part of a city, and
Silicon Valley is not even that. (San Jose is not, as it sometimes claims, the
capital of Silicon Valley. It's just 178 square miles at one end of it.)
Maybe the Internet will change things further. Maybe one day the most
important community you belong to will be a virtual one, and it won't matter
where you live physically. But I wouldn't bet on it. The physical world is
very high bandwidth, and some of the ways cities send you messages are quite
subtle.
One of the exhilarating things about coming back to Cambridge every spring is
walking through the streets at dusk, when you can see into the houses. When
you walk through Palo Alto in the evening, you see nothing but the blue glow
of TVs. In Cambridge you see shelves full of promising-looking books. Palo
Alto was probably much like Cambridge in 1960, but you'd never guess now that
there was a university nearby. Now it's just one of the richer neighborhoods
in Silicon Valley. [2]
A city speaks to you mostly by accident — in things you see through windows,
in conversations you overhear. It's not something you have to seek out, but
something you can't turn off. One of the occupational hazards of living in
Cambridge is overhearing the conversations of people who use interrogative
intonation in declarative sentences. But on average I'll take Cambridge
conversations over New York or Silicon Valley ones.
A friend who moved to Silicon Valley in the late 90s said the worst thing
about living there was the low quality of the eavesdropping. At the time I
thought she was being deliberately eccentric. Sure, it can be interesting to
eavesdrop on people, but is good quality eavesdropping so important that it
would affect where you chose to live? Now I understand what she meant. The
conversations you overhear tell you what sort of people you're among.
_____
No matter how determined you are, it's hard not to be influenced by the people
around you. It's not so much that you do whatever a city expects of you, but
that you get discouraged when no one around you cares about the same things
you do.
There's an imbalance between encouragement and discouragement like that
between gaining and losing money. Most people overvalue negative amounts of
money: they'll work much harder to avoid losing a dollar than to gain one.
Similarly, although there are plenty of people strong enough to resist doing
something just because that's what one is supposed to do where they happen to
be, there are few strong enough to keep working on something no one around
them cares about.
Because ambitions are to some extent incompatible and admiration is a zero-sum
game, each city tends to focus on one type of ambition. The reason Cambridge
is the intellectual capital is not just that there's a concentration of smart
people there, but that there's nothing _else_ people there care about more.
Professors in New York and the Bay Area are second class citizens — till they
start hedge funds or startups respectively.
This suggests an answer to a question people in New York have wondered about
since the Bubble: whether New York could grow into a startup hub to rival
Silicon Valley. One reason that's unlikely is that someone starting a startup
in New York would feel like a second class citizen. [3] There's already
something else people in New York admire more.
In the long term, that could be a bad thing for New York. The power of an
important new technology does eventually convert to money. So by caring more
about money and less about power than Silicon Valley, New York is recognizing
the same thing, but slower. [4] And in fact it has been losing to Silicon
Valley at its own game: the ratio of New York to California residents in the
Forbes 400 has decreased from 1.45 (81:56) when the list was first published
in 1982 to .83 (73:88) in 2007.
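If you want to check that arithmetic, here is the trivial calculation behind the two ratios, a sketch using only the Forbes 400 counts quoted above:

```python
# Ratio of New York to California residents in the Forbes 400,
# using the counts quoted in the essay.
ny_1982, ca_1982 = 81, 56
ny_2007, ca_2007 = 73, 88

print(round(ny_1982 / ca_1982, 2))  # 1.45
print(round(ny_2007 / ca_2007, 2))  # 0.83
```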
_____
Not all cities send a message. Only those that are centers for some type of
ambition do. And it can be hard to tell exactly what message a city sends
without living there. I understand the messages of New York, Cambridge, and
Silicon Valley because I've lived for several years in each of them. DC and LA
seem to send messages too, but I haven't spent long enough in either to say
for sure what they are.
The big thing in LA seems to be fame. There's an A List of people who are most
in demand right now, and what's most admired is to be on it, or friends with
those who are. Beneath that, the message is much like New York's, though
perhaps with more emphasis on physical attractiveness.
In DC the message seems to be that the most important thing is who you know.
You want to be an insider. In practice this seems to work much as in LA.
There's an A List and you want to be on it or close to those who are. The only
difference is how the A List is selected. And even that is not that different.
At the moment, San Francisco's message seems to be the same as Berkeley's: you
should live better. But this will change if enough startups choose SF over the
Valley. During the Bubble that was a predictor of failure — a self-indulgent
choice, like buying expensive office furniture. Even now I'm suspicious when
startups choose SF. But if enough good ones do, it stops being a self-
indulgent choice, because the center of gravity of Silicon Valley will shift
there.
I haven't found anything like Cambridge for intellectual ambition. Oxford and
Cambridge (England) feel like Ithaca or Hanover: the message is there, but not
as strong.
Paris was once a great intellectual center. If you went there in 1300, it
might have sent the message Cambridge does now. But I tried living there for a
bit last year, and the ambitions of the inhabitants are not intellectual ones.
The message Paris sends now is: do things with style. I liked that, actually.
Paris is the only city I've lived in where people genuinely cared about art.
In America only a few rich people buy original art, and even the more
sophisticated ones rarely get past judging it by the brand name of the artist.
But looking through windows at dusk in Paris you can see that people there
actually care what paintings look like. Visually, Paris has the best
eavesdropping I know. [5]
There's one more message I've heard from cities: in London you can still
(barely) hear the message that one should be more aristocratic. If you listen
for it you can also hear it in Paris, New York, and Boston. But this message
is everywhere very faint. It would have been strong 100 years ago, but now I
probably wouldn't have picked it up at all if I hadn't deliberately tuned in
to that wavelength to see if there was any signal left.
_____
So far the complete list of messages I've picked up from cities is: wealth,
style, hipness, physical attractiveness, fame, political power, economic
power, intelligence, social class, and quality of life.
My immediate reaction to this list is that it makes me slightly queasy. I'd
always considered ambition a good thing, but I realize now that was because
I'd always implicitly understood it to mean ambition in the areas I cared
about. When you list everything ambitious people are ambitious about, it's not
so pretty.
On closer examination I see a couple things on the list that are surprising in
the light of history. For example, physical attractiveness wouldn't have been
there 100 years ago (though it might have been 2400 years ago). It has always
mattered for women, but in the late twentieth century it seems to have started
to matter for men as well. I'm not sure why — probably some combination of the
increasing power of women, the increasing influence of actors as models, and
the fact that so many people work in offices now: you can't show off by
wearing clothes too fancy to wear in a factory, so you have to show off with
your body instead.
Hipness is another thing you wouldn't have seen on the list 100 years ago. Or
wouldn't you? What it means is to know what's what. So maybe it has simply
replaced the component of social class that consisted of being "au fait." That
could explain why hipness seems particularly admired in London: it's version 2
of the traditional English delight in obscure codes that only insiders
understand.
Economic power would have been on the list 100 years ago, but what we mean by
it is changing. It used to mean the control of vast human and material
resources. But increasingly it means the ability to direct the course of
technology, and some of the people in a position to do that are not even rich
— leaders of important open source projects, for example. The Captains of
Industry of times past had laboratories full of clever people cooking up new
technologies for them. The new breed are themselves those people.
As this force gets more attention, another is dropping off the list: social
class. I think the two changes are related. Economic power, wealth, and social
class are just names for the same thing at different stages in its life:
economic power converts to wealth, and wealth to social class. So the focus of
admiration is simply shifting upstream.
_____
Does anyone who wants to do great work have to live in a great city? No; all
great cities inspire some sort of ambition, but they aren't the only places
that do. For some kinds of work, all you need is a handful of talented
colleagues.
What cities provide is an audience, and a funnel for peers. These aren't so
critical in something like math or physics, where no audience matters except
your peers, and judging ability is sufficiently straightforward that hiring
and admissions committees can do it reliably. In a field like math or physics
all you need is a department with the right colleagues in it. It could be
anywhere — in Los Alamos, New Mexico, for example.
It's in fields like the arts or writing or technology that the larger
environment matters. In these the best practitioners aren't conveniently
collected in a few top university departments and research labs — partly
because talent is harder to judge, and partly because people pay for these
things, so one doesn't need to rely on teaching or research funding to support
oneself. It's in these more chaotic fields that it helps most to be in a great
city: you need the encouragement of feeling that people around you care about
the kind of work you do, and since you have to find peers for yourself, you
need the much larger intake mechanism of a great city.
You don't have to live in a great city your whole life to benefit from it. The
critical years seem to be the early and middle ones of your career. Clearly
you don't have to grow up in a great city. Nor does it seem to matter if you
go to college in one. To most college students a world of a few thousand
people seems big enough. Plus in college you don't yet have to face the
hardest kind of work — discovering new problems to solve.
It's when you move on to the next and much harder step that it helps most to
be in a place where you can find peers and encouragement. You seem to be able
to leave, if you want, once you've found both. The Impressionists show the
typical pattern: they were born all over France (Pissarro was born in the
Caribbean) and died all over France, but what defined them were the years
they spent together in Paris.
_____
Unless you're sure what you want to do and where the leading center for it is,
your best bet is probably to try living in several places when you're young.
You can never tell what message a city sends till you live there, or even
whether it still sends one. Often your information will be wrong: I tried
living in Florence when I was 25, thinking it would be an art center, but it
turned out I was 450 years too late.
Even when a city is still a live center of ambition, you won't know for sure
whether its message will resonate with you till you hear it. When I moved to
New York, I was very excited at first. It's an exciting place. So it took me
quite a while to realize I just wasn't like the people there. I kept searching
for the Cambridge of New York. It turned out it was way, way uptown: an hour
uptown by air.
Some people know at 16 what sort of work they're going to do, but in most
ambitious kids, ambition seems to precede anything specific to be ambitious
about. They know they want to do something great. They just haven't decided
yet whether they're going to be a rock star or a brain surgeon. There's
nothing wrong with that. But it means if you have this most common type of
ambition, you'll probably have to figure out where to live by trial and error.
You'll probably have to find the city where you feel at home to know what sort
of ambition you have.
**Notes**
[1] This is one of the advantages of not having the universities in your
country controlled by the government. When governments decide how to allocate
resources, political deal-making causes things to be spread out
geographically. No central government would put its two best universities in
the same town, unless it was the capital (which would cause other problems).
But scholars seem to like to cluster together as much as people in any other
field, and when given the freedom to they derive the same advantages from it.
[2] There are still a few old professors in Palo Alto, but one by one they die
and their houses are transformed by developers into McMansions and sold to VPs
of Bus Dev.
[3] How many times have you read about startup founders who continued to live
inexpensively as their companies took off? Who continued to dress in jeans and
t-shirts, to drive the old car they had in grad school, and so on? If you did
that in New York, people would treat you like shit. If you walk into a fancy
restaurant in San Francisco wearing jeans and a t-shirt, they're nice to
you; who knows who you might be? Not in New York.
One sign of a city's potential as a technology center is the number of
restaurants that still require jackets for men. According to Zagat's there are
none in San Francisco, LA, Boston, or Seattle, 4 in DC, 6 in Chicago, 8 in
London, 13 in New York, and 20 in Paris.
(Zagat's lists the Ritz Carlton Dining Room in SF as requiring jackets but I
couldn't believe it, so I called to check and in fact they don't. Apparently
there's only one restaurant left on the entire West Coast that still requires
jackets: The French Laundry in Napa Valley.)
[4] Ideas are one step upstream from economic power, so it's conceivable that
intellectual centers like Cambridge will one day have an edge over Silicon
Valley like the one the Valley has over New York.
This seems unlikely at the moment; if anything Boston is falling further and
further behind. The only reason I even mention the possibility is that the
path from ideas to startups has recently been getting smoother. It's a lot
easier now for a couple of hackers with no business experience to start a
startup than it was 10 years ago. If you extrapolate another 20 years, maybe
the balance of power will start to shift back. I wouldn't bet on it, but I
wouldn't bet against it either.
[5] If Paris is where people care most about art, why is New York the center
of gravity of the art business? Because in the twentieth century, art as brand
split apart from art as stuff. New York is where the richest buyers are, but
all they demand from art is brand, and since you can base brand on anything
with a sufficiently identifiable style, you may as well use the local stuff.
**Thanks** to Trevor Blackwell, Sarah Harlin, Jessica Livingston, Jackie
McDonough, Robert Morris, and David Sloo for reading drafts of this.
August 2015
I recently got an email from a founder that helped me understand something
important: why it's safe for startup founders to be nice people.
I grew up with a cartoon idea of a very successful businessman (in the cartoon
it was always a man): a rapacious, cigar-smoking, table-thumping guy in his
fifties who wins by exercising power, and isn't too fussy about how. As I've
written before, one of the things that has surprised me most about startups is
[how few](mean.html) of the most successful founders are like that. Maybe
successful people in other industries are; I don't know; but not startup
founders. [1]
I knew this empirically, but I never saw the math of why till I got this
founder's email. In it he said he worried that he was fundamentally soft-
hearted and tended to give away too much for free. He thought perhaps he
needed "a little dose of sociopath-ness."
I told him not to worry about it, because so long as he built something good
enough to spread by word of mouth, he'd have a superlinear growth curve. If he
was bad at extracting money from people, at worst this curve would be some
constant multiple less than 1 of what it might have been. But a constant
multiple of any curve is exactly the same shape. The numbers on the Y axis are
smaller, but the curve is just as steep, and when anything grows at the rate
of a successful startup, the Y axis will take care of itself.
Some examples will make this clear. Suppose your company is making $1000 a
month now, and you've made something so great that it's growing at 5% a week.
Two years from now, you'll be making about $160k a month.
Now suppose you're so un-rapacious that you only extract half as much from
your users as you could. That means two years later you'll be making $80k a
month instead of $160k. How far behind are you? How long will it take to catch
up with where you'd have been if you were extracting every penny? A mere 15
weeks. After two years, the un-rapacious founder is only 3.5 months behind the
rapacious one. [2]
If you're going to optimize a number, the one to choose is your [growth
rate](growth.html). Suppose as before that you only extract half as much from
users as you could, but that you're able to grow 6% a week instead of 5%. Now
how are you doing compared to the rapacious founder after two years? You're
already ahead—$214k a month versus $160k—and pulling away fast. In another
year you'll be making $4.4 million a month to the rapacious founder's $2
million.
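The arithmetic is easy to check. Here's a minimal sketch in Python using the
illustrative figures from the examples above (none of these numbers are data
about real startups):

```python
import math

# The growth arithmetic from the examples above, with the same
# illustrative figures: $1000/month now, 5% or 6% weekly growth.

def monthly_revenue(start, weekly_growth, weeks):
    """Monthly revenue after compounding weekly growth for `weeks` weeks."""
    return start * (1 + weekly_growth) ** weeks

TWO_YEARS = 104  # weeks

rapacious = monthly_revenue(1000, 0.05, TWO_YEARS)  # ~ $160k/month
nice_5 = monthly_revenue(500, 0.05, TWO_YEARS)      # extracting half: ~ $80k/month

# How long for the nice founder to catch up? Solve 1.05^n = 2:
catch_up = math.log(2) / math.log(1.05)             # ~ 14.2, i.e. about 15 weeks

nice_6 = monthly_revenue(500, 0.06, TWO_YEARS)      # ~ $214k/month, already ahead

# One year further out (156 weeks):
rapacious_3y = monthly_revenue(1000, 0.05, 156)     # ~ $2.0 million/month
nice_3y = monthly_revenue(500, 0.06, 156)           # ~ $4.4 million/month
```

The catch-up figure is just log(2)/log(1.05), which is where the 15 weeks in
note [2] comes from.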
Obviously one case where it would help to be rapacious is when growth depends
on that. What makes startups different is that usually it doesn't. Startups
usually win by making something so great that people recommend it to their
friends. And being rapacious not only doesn't help you do that, but probably
hurts. [3]
The reason startup founders can safely be nice is that making great things is
compounded, and rapacity isn't.
So if you're a founder, here's a deal you can make with yourself that will
both make you happy and make your company successful. Tell yourself you can be
as nice as you want, so long as you work hard on your growth rate to
compensate. Most successful startups make that tradeoff unconsciously. Maybe
if you do it consciously you'll do it even better.
**Notes**
[1] Many think successful startup founders are driven by money. In fact the
secret weapon of the most successful founders is that they aren't. If they
were, they'd have taken one of the acquisition offers that every fast-growing
startup gets on the way up. What drives the most successful founders is the
same thing that drives most people who make things: the company is their
project.
[2] In fact since 2 ≈ 1.05^15, the un-rapacious founder is always 15 weeks
behind the rapacious one.
[3] The other reason it might help to be good at squeezing money out of
customers is that startups usually lose money at first, and making more per
customer makes it easier to get to profitability before your initial funding
runs out. But while it is very common for startups to [die](pinch.html) from
running through their initial funding and then being unable to raise more, the
underlying cause is usually slow growth or excessive spending rather than
insufficient effort to extract money from existing customers.
**Thanks** to Sam Altman, Harj Taggar, Jessica Livingston, and Geoff Ralston
for reading drafts of this, and to Randall Bennett for being such a nice guy.
September 2004
Remember the essays you had to write in high school? Topic sentence,
introductory paragraph, supporting paragraphs, conclusion. The conclusion
being, say, that Ahab in _Moby Dick_ was a Christ-like figure.
Oy. So I'm going to try to give the other side of the story: what an essay
really is, and how you write one. Or at least, how I write one.
**Mods**
The most obvious difference between real essays and the things one has to
write in school is that real essays are not exclusively about English
literature. Certainly schools should teach students how to write. But due to a
series of historical accidents the teaching of writing has gotten mixed
together with the study of literature. And so all over the country students
are writing not about how a baseball team with a small budget might compete
with the Yankees, or the role of color in fashion, or what constitutes a good
dessert, but about symbolism in Dickens.
With the result that writing is made to seem boring and pointless. Who cares
about symbolism in Dickens? Dickens himself would be more interested in an
essay about color or baseball.
How did things get this way? To answer that we have to go back almost a
thousand years. Around 1100, Europe at last began to catch its breath after
centuries of chaos, and once they had the luxury of curiosity they
rediscovered what we call "the classics." The effect was rather as if we were
visited by beings from another solar system. These earlier civilizations were
so much more sophisticated that for the next several centuries the main work
of European scholars, in almost every field, was to assimilate what they knew.
During this period the study of ancient texts acquired great prestige. It
seemed the essence of what scholars did. As European scholarship gained
momentum it became less and less important; by 1350 someone who wanted to
learn about science could find better teachers than Aristotle in his own era.
[1] But schools change slower than scholarship. In the 19th century the study
of ancient texts was still the backbone of the curriculum.
The time was then ripe for the question: if the study of ancient texts is a
valid field for scholarship, why not modern texts? The answer, of course, is
that the original raison d'être of classical scholarship was a kind of
intellectual archaeology that does not need to be done in the case of
contemporary authors. But for obvious reasons no one wanted to give that
answer. The archaeological work being mostly done, it implied that those
studying the classics were, if not wasting their time, at least working on
problems of minor importance.
And so began the study of modern literature. There was a good deal of
resistance at first. The first courses in English literature seem to have been
offered by the newer colleges, particularly American ones. Dartmouth, the
University of Vermont, Amherst, and University College, London taught English
literature in the 1820s. But Harvard didn't have a professor of English
literature until 1876, and Oxford not till 1885. (Oxford had a chair of
Chinese before it had one of English.) [2]
What tipped the scales, at least in the US, seems to have been the idea that
professors should do research as well as teach. This idea (along with the PhD,
the department, and indeed the whole concept of the modern university) was
imported from Germany in the late 19th century. Beginning at Johns Hopkins in
1876, the new model spread rapidly.
Writing was one of the casualties. Colleges had long taught English
composition. But how do you do research on composition? The professors who
taught math could be required to do original math, the professors who taught
history could be required to write scholarly articles about history, but what
about the professors who taught rhetoric or composition? What should they do
research on? The closest thing seemed to be English literature. [3]
And so in the late 19th century the teaching of writing was inherited by
English professors. This had two drawbacks: (a) an expert on literature need
not himself be a good writer, any more than an art historian has to be a good
painter, and (b) the subject of writing now tends to be literature, since
that's what the professor is interested in.
High schools imitate universities. The seeds of our miserable high school
experiences were sown in 1892, when the National Education Association
"formally recommended that literature and composition be unified in the high
school course." [4] The 'riting component of the 3 Rs then morphed into
English, with the bizarre consequence that high school students now had to
write about English literature-- to write, without even realizing it,
imitations of whatever English professors had been publishing in their
journals a few decades before.
It's no wonder if this seems to the student a pointless exercise, because
we're now three steps removed from real work: the students are imitating
English professors, who are imitating classical scholars, who are merely the
inheritors of a tradition growing out of what was, 700 years ago, fascinating
and urgently needed work.
**No Defense**
The other big difference between a real essay and the things they make you
write in school is that a real essay doesn't take a position and then defend
it. That principle, like the idea that we ought to be writing about
literature, turns out to be another intellectual hangover of long forgotten
origins.
It's often mistakenly believed that medieval universities were mostly
seminaries. In fact they were more law schools. And at least in our tradition
lawyers are advocates, trained to take either side of an argument and make as
good a case for it as they can. Whether cause or effect, this spirit pervaded
early universities. The study of rhetoric, the art of arguing persuasively,
was a third of the undergraduate curriculum. [5] And after the lecture the
most common form of discussion was the disputation. This is at least nominally
preserved in our present-day thesis defense: most people treat the words
thesis and dissertation as interchangeable, but originally, at least, a thesis
was a position one took and the dissertation was the argument by which one
defended it.
Defending a position may be a necessary evil in a legal dispute, but it's not
the best way to get at the truth, as I think lawyers would be the first to
admit. It's not just that you miss subtleties this way. The real problem is
that you can't change the question.
And yet this principle is built into the very structure of the things they
teach you to write in high school. The topic sentence is your thesis, chosen
in advance, the supporting paragraphs the blows you strike in the conflict,
and the conclusion-- uh, what is the conclusion? I was never sure about that
in high school. It seemed as if we were just supposed to restate what we said
in the first paragraph, but in different enough words that no one could tell.
Why bother? But when you understand the origins of this sort of "essay," you
can see where the conclusion comes from. It's the concluding remarks to the
jury.
Good writing should be convincing, certainly, but it should be convincing
because you got the right answers, not because you did a good job of arguing.
When I give a draft of an essay to friends, there are two things I want to
know: which parts bore them, and which seem unconvincing. The boring bits can
usually be fixed by cutting. But I don't try to fix the unconvincing bits by
arguing more cleverly. I need to talk the matter over.
At the very least I must have explained something badly. In that case, in the
course of the conversation I'll be forced to come up with a clearer
explanation, which I can just incorporate in the essay. More often than not I
have to change what I was saying as well. But the aim is never to be
convincing per se. As the reader gets smarter, convincing and true become
identical, so if I can convince smart readers I must be near the truth.
The sort of writing that attempts to persuade may be a valid (or at least
inevitable) form, but it's historically inaccurate to call it an essay. An
essay is something else.
**Trying**
To understand what a real essay is, we have to reach back into history again,
though this time not so far. To Michel de Montaigne, who in 1580 published a
book of what he called "essais." He was doing something quite different from
what lawyers do, and the difference is embodied in the name. _Essayer_ is the
French verb meaning "to try" and an _essai_ is an attempt. An essay is
something you write to try to figure something out.
Figure out what? You don't know yet. And so you can't begin with a thesis,
because you don't have one, and may never have one. An essay doesn't begin
with a statement, but with a question. In a real essay, you don't take a
position and defend it. You notice a door that's ajar, and you open it and
walk in to see what's inside.
If all you want to do is figure things out, why do you need to write anything,
though? Why not just sit and think? Well, there precisely is Montaigne's great
discovery. Expressing ideas helps to form them. Indeed, helps is far too weak
a word. Most of what ends up in my essays I only thought of when I sat down to
write them. That's why I write them.
In the things you write in school you are, in theory, merely explaining
yourself to the reader. In a real essay you're writing for yourself. You're
thinking out loud.
But not quite. Just as inviting people over forces you to clean up your
apartment, writing something that other people will read forces you to think
well. So it does matter to have an audience. The things I've written just for
myself are no good. They tend to peter out. When I run into difficulties, I
find I conclude with a few vague questions and then drift off to get a cup of
tea.
Many published essays peter out in the same way. Particularly the sort written
by the staff writers of newsmagazines. Outside writers tend to supply
editorials of the defend-a-position variety, which make a beeline toward a
rousing (and foreordained) conclusion. But the staff writers feel obliged to
write something "balanced." Since they're writing for a popular magazine, they
start with the most radioactively controversial questions, from which--
because they're writing for a popular magazine-- they then proceed to recoil
in terror. Abortion, for or against? This group says one thing. That group
says another. One thing is certain: the question is a complex one. (But don't
get mad at us. We didn't draw any conclusions.)
**The River**
Questions aren't enough. An essay has to come up with answers. They don't
always, of course. Sometimes you start with a promising question and get
nowhere. But those you don't publish. Those are like experiments that get
inconclusive results. An essay you publish ought to tell the reader something
he didn't already know.
But _what_ you tell him doesn't matter, so long as it's interesting. I'm
sometimes accused of meandering. In defend-a-position writing that would be a
flaw. There you're not concerned with truth. You already know where you're
going, and you want to go straight there, blustering through obstacles, and
hand-waving your way across swampy ground. But that's not what you're trying
to do in an essay. An essay is supposed to be a search for truth. It would be
suspicious if it didn't meander.
The Meander (aka Menderes) is a river in Turkey. As you might expect, it winds
all over the place. But it doesn't do this out of frivolity. The path it has
discovered is the most economical route to the sea. [6]
The river's algorithm is simple. At each step, flow down. For the essayist
this translates to: flow interesting. Of all the places to go next, choose the
most interesting. One can't have quite as little foresight as a river. I
always know generally what I want to write about. But not the specific
conclusions I want to reach; from paragraph to paragraph I let the ideas take
their course.
This doesn't always work. Sometimes, like a river, one runs up against a wall.
Then I do the same thing the river does: backtrack. At one point in this essay
I found that after following a certain thread I ran out of ideas. I had to go
back seven paragraphs and start over in another direction.
Fundamentally an essay is a train of thought-- but a cleaned-up train of
thought, as dialogue is cleaned-up conversation. Real thought, like real
conversation, is full of false starts. It would be exhausting to read. You
need to cut and fill to emphasize the central thread, like an illustrator
inking over a pencil drawing. But don't change so much that you lose the
spontaneity of the original.
Err on the side of the river. An essay is not a reference work. It's not
something you read looking for a specific answer, and feel cheated if you
don't find it. I'd much rather read an essay that went off in an unexpected
but interesting direction than one that plodded dutifully along a prescribed
course.
**Surprise**
So what's interesting? For me, interesting means surprise. Interfaces, as
Geoffrey James has said, should follow the principle of least astonishment. A
button that looks like it will make a machine stop should make it stop, not
speed up. Essays should do the opposite. Essays should aim for maximum
surprise.
I was afraid of flying for a long time and could only travel vicariously. When
friends came back from faraway places, it wasn't just out of politeness that I
asked what they saw. I really wanted to know. And I found the best way to get
information out of them was to ask what surprised them. How was the place
different from what they expected? This is an extremely useful question. You
can ask it of the most unobservant people, and it will extract information
they didn't even know they were recording.
Surprises are things that you not only didn't know, but that contradict things
you thought you knew. And so they're the most valuable sort of fact you can
get. They're like a food that's not merely healthy, but counteracts the
unhealthy effects of things you've already eaten.
How do you find surprises? Well, therein lies half the work of essay writing.
(The other half is expressing yourself well.) The trick is to use yourself as
a proxy for the reader. You should only write about things you've thought
about a lot. And anything you come across that surprises you, who've thought
about the topic a lot, will probably surprise most readers.
For example, in a recent [essay](gh.html) I pointed out that because you can
only judge computer programmers by working with them, no one knows who the
best programmers are overall. I didn't realize this when I began that essay,
and even now I find it kind of weird. That's what you're looking for.
So if you want to write essays, you need two ingredients: a few topics you've
thought about a lot, and some ability to ferret out the unexpected.
What should you think about? My guess is that it doesn't matter-- that
anything can be interesting if you get deeply enough into it. One possible
exception might be things that have deliberately had all the variation sucked
out of them, like working in fast food. In retrospect, was there anything
interesting about working at Baskin-Robbins? Well, it was interesting how
important color was to the customers. Kids a certain age would point into the
case and say that they wanted yellow. Did they want French Vanilla or Lemon?
They would just look at you blankly. They wanted yellow. And then there was
the mystery of why the perennial favorite Pralines 'n' Cream was so appealing.
(I think now it was the salt.) And the difference in the way fathers and
mothers bought ice cream for their kids: the fathers like benevolent kings
bestowing largesse, the mothers harried, giving in to pressure. So, yes, there
does seem to be some material even in fast food.
I didn't notice those things at the time, though. At sixteen I was about as
observant as a lump of rock. I can see more now in the fragments of memory I
preserve of that age than I could see at the time from having it all happening
live, right in front of me.
**Observation**
So the ability to ferret out the unexpected must not merely be an inborn one.
It must be something you can learn. How do you learn it?
To some extent it's like learning history. When you first read history, it's
just a whirl of names and dates. Nothing seems to stick. But the more you
learn, the more hooks you have for new facts to stick onto-- which means you
accumulate knowledge at an exponential rate. Once you remember that Normans
conquered England in 1066, it will catch your attention when you hear that
other Normans conquered southern Italy at about the same time. Which will make
you wonder about Normandy, and take note when a third book mentions that
Normans were not, like most of what is now called France, tribes that flowed
in as the Roman empire collapsed, but Vikings (norman = north man) who arrived
four centuries later in 911. Which makes it easier to remember that Dublin was
also established by Vikings in the 840s. Etc, etc squared.
Collecting surprises is a similar process. The more anomalies you've seen, the
more easily you'll notice new ones. Which means, oddly enough, that as you
grow older, life should become more and more surprising. When I was a kid, I
used to think adults had it all figured out. I had it backwards. Kids are the
ones who have it all figured out. They're just mistaken.
When it comes to surprises, the rich get richer. But (as with wealth) there
may be habits of mind that will help the process along. It's good to have a
habit of asking questions, especially questions beginning with Why. But not in
the random way that three year olds ask why. There are an infinite number of
questions. How do you find the fruitful ones?
I find it especially useful to ask why about things that seem wrong. For
example, why should there be a connection between humor and misfortune? Why do
we find it funny when a character, even one we like, slips on a banana peel?
There's a whole essay's worth of surprises there for sure.
If you want to notice things that seem wrong, you'll find a degree of
skepticism helpful. I take it as an axiom that we're only achieving 1% of what
we could. This helps counteract the rule that gets beaten into our heads as
children: that things are the way they are because that is how things have to
be. For example, everyone I've talked to while writing this essay felt the
same about English classes-- that the whole process seemed pointless. But none
of us had the balls at the time to hypothesize that it was, in fact, all a
mistake. We all thought there was just something we weren't getting.
I have a hunch you want to pay attention not just to things that seem wrong,
but things that seem wrong in a humorous way. I'm always pleased when I see
someone laugh as they read a draft of an essay. But why should I be? I'm
aiming for good ideas. Why should good ideas be funny? The connection may be
surprise. Surprises make us laugh, and surprises are what one wants to
deliver.
I write down things that surprise me in notebooks. I never actually get around
to reading them and using what I've written, but I do tend to reproduce the
same thoughts later. So the main value of notebooks may be what writing things
down leaves in your head.
People trying to be cool will find themselves at a disadvantage when
collecting surprises. To be surprised is to be mistaken. And the essence of
cool, as any fourteen year old could tell you, is _nil admirari._ When you're
mistaken, don't dwell on it; just act like nothing's wrong and maybe no one
will notice.
One of the keys to coolness is to avoid situations where inexperience may make
you look foolish. If you want to find surprises you should do the opposite.
Study lots of different things, because some of the most interesting surprises
are unexpected connections between different fields. For example, jam, bacon,
pickles, and cheese, which are among the most pleasing of foods, were all
originally intended as methods of preservation. And so were books and
paintings.
Whatever you study, include history-- but social and economic history, not
political history. History seems to me so important that it's misleading to
treat it as a mere field of study. Another way to describe it is _all the data
we have so far._
Among other things, studying history gives one confidence that there are good
ideas waiting to be discovered right under our noses. Swords evolved during
the Bronze Age out of daggers, which (like their flint predecessors) had a
hilt separate from the blade. Because swords are longer the hilts kept
breaking off. But it took five hundred years before someone thought of casting
hilt and blade as one piece.
**Disobedience**
Above all, make a habit of paying attention to things you're not supposed to,
either because they're "[inappropriate](say.html)," or not important, or not
what you're supposed to be working on. If you're curious about something,
trust your instincts. Follow the threads that attract your attention. If
there's something you're really interested in, you'll find they have an
uncanny way of leading back to it anyway, just as the conversation of people
who are especially proud of something always tends to lead back to it.
For example, I've always been fascinated by comb-overs, especially the extreme
sort that make a man look as if he's wearing a beret made of his own hair.
Surely this is a lowly sort of thing to be interested in-- the sort of
superficial quizzing best left to teenage girls. And yet there is something
underneath. The key question, I realized, is how does the comber-over not see
how odd he looks? And the answer is that he got to look that way
_incrementally._ What began as combing his hair a little carefully over a thin
patch has gradually, over 20 years, grown into a monstrosity. Gradualness is
very powerful. And that power can be used for constructive purposes too: just
as you can trick yourself into looking like a freak, you can trick yourself
into creating something so grand that you would never have dared to _plan_
such a thing. Indeed, this is just how most good software gets created. You
start by writing a stripped-down kernel (how hard can it be?) and gradually it
grows into a complete operating system. Hence the next leap: could you do the
same thing in painting, or in a novel?
See what you can extract from a frivolous question? If there's one piece of
advice I would give about writing essays, it would be: don't do as you're
told. Don't believe what you're supposed to. Don't write the essay readers
expect; one learns nothing from what one expects. And don't write the way they
taught you to in school.
The most important sort of disobedience is to write essays at all.
Fortunately, this sort of disobedience shows signs of becoming
[rampant](http://www.ojr.org/ojr/glaser/1056050270.php). It used to be that
only a tiny number of officially approved writers were allowed to write
essays. Magazines published few of them, and judged them less by what they
said than who wrote them; a magazine might publish a story by an unknown
writer if it was good enough, but if they published an essay on x it had to be
by someone who was at least forty and whose job title had x in it. Which is a
problem, because there are a lot of things insiders can't say precisely
because they're insiders.
The Internet is changing that. Anyone can publish an essay on the Web, and it
gets judged, as any writing should, by what it says, not who wrote it. Who are
you to write about x? You are whatever you wrote.
Popular magazines made the period between the spread of literacy and the
arrival of TV the golden age of the short story. The Web may well make this
the golden age of the essay. And that's certainly not something I realized
when I started writing this.
**Notes**
[1] I'm thinking of Oresme (c. 1323-82). But it's hard to pick a date, because
there was a sudden drop-off in scholarship just as Europeans finished
assimilating classical science. The cause may have been the plague of 1347;
the trend in scientific progress matches the population curve.
[2] Parker, William R. "Where Do College English Departments Come From?"
_College English_ 28 (1966-67), pp. 339-351. Reprinted in Gray, Donald J.
(ed). _The Department of English at Indiana University Bloomington 1868-1970._
Indiana University Publications.
Daniels, Robert V. _The University of Vermont: The First Two Hundred Years._
University of Vermont, 1991.
Mueller, Friedrich M. Letter to the _Pall Mall Gazette._ 1886/87. Reprinted in
Bacon, Alan (ed). _The Nineteenth-Century History of English Studies._
Ashgate, 1998.
[3] I'm compressing the story a bit. At first literature took a back seat to
philology, which (a) seemed more serious and (b) was popular in Germany, where
many of the leading scholars of that generation had been trained.
In some cases the writing teachers were transformed _in situ_ into English
professors. Francis James Child, who had been Boylston Professor of Rhetoric
at Harvard since 1851, became in 1876 the university's first professor of
English.
[4] Parker, _op. cit._ , p. 25.
[5] The undergraduate curriculum or _trivium_ (whence "trivial") consisted of
Latin grammar, rhetoric, and logic. Candidates for masters' degrees went on to
study the _quadrivium_ of arithmetic, geometry, music, and astronomy. Together
these were the seven liberal arts.
The study of rhetoric was inherited directly from Rome, where it was
considered the most important subject. It would not be far from the truth to
say that education in the classical world meant training landowners' sons to
speak well enough to defend their interests in political and legal disputes.
[6] Trevor Blackwell points out that this isn't strictly true, because the
outside edges of curves erode faster.
**Thanks** to Ken Anderson, Trevor Blackwell, Sarah Harlin, Jessica
Livingston, Jackie McDonough, and Robert Morris for reading drafts of this.
March 2009
_(This essay is derived from a talk at[AngelConf](http://angelconf.org).)_
When we sold our startup in 1998 I thought one day I'd do some angel
investing. Seven years later I still hadn't started. I put it off because it
seemed mysterious and complicated. It turns out to be easier than I expected,
and also more interesting.
The part I thought was hard, the mechanics of investing, really isn't. You
give a startup money and they give you stock. You'll probably get either
preferred stock, which means stock with extra rights like getting your money
back first in a sale, or convertible debt, which means (on paper) you're
lending the company money, and the debt converts to stock at the next
sufficiently big funding round. [1]
There are sometimes minor tactical advantages to using one or the other. The
paperwork for convertible debt is simpler. But really it doesn't matter much
which you use. Don't spend much time worrying about the details of deal terms,
especially when you first start angel investing. That's not how you win at
this game. When you hear people talking about a successful angel investor,
they're not saying "He got a 4x liquidation preference." They're saying "He
invested in Google."
That's how you win: by investing in the right startups. That is so much more
important than anything else that I worry I'm misleading you by even talking
about other things.
**Mechanics**
Angel investors often syndicate deals, which means they join together to
invest on the same terms. In a syndicate there is usually a "lead" investor
who negotiates the terms with the startup. But not always: sometimes the
startup cobbles together a syndicate of investors who approach them
independently, and the startup's lawyer supplies the paperwork.
The easiest way to get started in angel investing is to find a friend who
already does it, and try to get included in his syndicates. Then all you have
to do is write checks.
Don't feel like you have to join a syndicate, though. It's not that hard to do
it yourself. You can just use the standard [series
AA](http://ycombinator.com/seriesaa.html) documents Wilson Sonsini and Y
Combinator published online. You should of course have your lawyer review
everything. Both you and the startup should have lawyers. But the lawyers
don't have to create the agreement from scratch. [2]
When you negotiate terms with a startup, there are two numbers you care about:
how much money you're putting in, and the valuation of the company. The
valuation determines how much stock you get. If you put $50,000 into a company
at a pre-money valuation of $1 million, then the post-money valuation is $1.05
million, and you get .05/1.05, or 4.76% of the company's stock.
If the company raises more money later, the new investor will take a chunk of
the company away from all the existing shareholders just as you did. If in the
next round they sell 10% of the company to a new investor, your 4.76% will be
reduced to 4.28%.
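As a sanity check, here's that ownership arithmetic as a few lines of Python,
using the same numbers as the example above:

```python
# The ownership math from the example above: $50k at a $1M pre-money
# valuation, then a later round that sells 10% of the company.

investment = 50_000
pre_money = 1_000_000
post_money = pre_money + investment    # $1.05M

stake = investment / post_money        # 0.0476 -> 4.76% of the company

# Selling 10% to a new investor dilutes every existing holder by 10%:
stake_after = stake * (1 - 0.10)       # 0.0429 -> about 4.28%
```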
That's ok. Dilution is normal. What saves you from being mistreated in future
rounds, usually, is that you're in the same boat as the founders. They can't
dilute you without diluting themselves just as much. And they won't dilute
themselves unless they end up [net ahead](equity.html). So in theory, each
further round of investment leaves you with a smaller share of an even more
valuable company, till after several more rounds you end up with .5% of the
company at the point where it IPOs, and you are very happy because your
$50,000 has become $5 million. [3]
The agreement by which you invest should have provisions that let you
contribute to future rounds to maintain your percentage. So it's your choice
whether you get diluted. [4] If the company does really well, you eventually
will, because eventually the valuations will get so high it's not worth it for
you.
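The pro-rata math behind that choice is simple: if you buy your current
percentage of each new round, your stake stays exactly constant. A sketch,
with made-up numbers:

```python
# Pro-rata follow-on, illustrated: you own 4.76%, and the company
# raises $1M at a $10M post-money valuation. All figures hypothetical.

stake = 0.0476
round_size = 1_000_000
post_money = 10_000_000

sold = round_size / post_money       # 10% of the company is sold
if_you_sit_out = stake * (1 - sold)  # diluted to ~4.28%

your_pro_rata = stake * round_size   # $47,600 of the new round
# Buying that much gives you `stake` of the new shares too:
if_you_invest = stake * (1 - sold) + stake * sold  # exactly 4.76% again
```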
How much does an angel invest? That varies enormously, from $10,000 to
hundreds of thousands or in rare cases even millions. The upper bound is
obviously the total amount the founders want to raise. The lower bound is
5-10% of the total or $10,000, whichever is greater. A typical angel round
these days might be $150,000 raised from 5 people.
Valuations don't vary as much. For angel rounds it's rare to see a valuation
lower than half a million or higher than 4 or 5 million. 4 million is starting
to be VC territory.
How do you decide what valuation to offer? If you're part of a round led by
someone else, that problem is solved for you. But what if you're investing by
yourself? There's no real answer. There is no rational way to value an early
stage startup. The valuation reflects nothing more than the strength of the
company's bargaining position. If they really want you, either because they
desperately need money, or you're someone who can help them a lot, they'll let
you invest at a low valuation. If they don't need you, it will be higher. So
guess. The startup may not have any more idea what the number should be than
you do. [5]
Ultimately it doesn't matter much. When angels make a lot of money from a
deal, it's not because they invested at a valuation of $1.5 million instead of
$3 million. It's because the company was really successful.
I can't emphasize that too much. Don't get hung up on mechanics or deal terms.
What you should spend your time thinking about is whether the company is good.
(Similarly, founders also should not get hung up on deal terms, but should
spend their time thinking about how to make the company good.)
There's a second less obvious component of an angel investment: how much
you're expected to help the startup. Like the amount you invest, this can vary
a lot. You don't have to do anything if you don't want to; you could simply be
a source of money. Or you can become a de facto employee of the company. Just
make sure that you and the startup agree in advance about roughly how much
you'll do for them.
Really hot companies sometimes have high standards for angels. The ones
everyone wants to invest in practically audition investors, and only take
money from people who are famous and/or will work hard for them. But don't
feel like you have to put in a lot of time or you won't get to invest in any
good startups. There is a surprising lack of correlation between how hot a
deal a startup is and how well it ends up doing. Lots of hot startups will end
up failing, and lots of startups no one likes will end up succeeding. And the
latter are so desperate for money that they'll take it from anyone at a low
valuation. [6]
**Picking Winners**
It would be nice to be able to pick those out, wouldn't it? The part of angel
investing that has most effect on your returns, picking the right companies,
is also the hardest. So you should practically ignore (or more precisely,
archive, in the Gmail sense) everything I've told you so far. You may need to
refer to it at some point, but it is not the central issue.
The central issue is picking the right startups. What "Make something people
want" is for startups, "Pick the right startups" is for investors. Combined
they yield "Pick the startups that will make something people want."
How do you do that? It's not as simple as picking startups that are already
making something wildly popular. By then it's too late for angels. VCs will
already be onto them. As an angel, you have to pick startups before they've
got a hit—either because they've made something great but users don't realize
it yet, like Google early on, or because they're still an iteration or two
away from the big hit, like Paypal when they were making software for
transferring money between PDAs.
To be a good angel investor, you have to be a good judge of potential. That's
what it comes down to. VCs can be fast followers. Most of them don't try to
predict what will win. They just try to notice quickly when something already
is winning. But angels have to be able to predict. [7]
One interesting consequence of this fact is that there are a lot of people out
there who have never even made an angel investment and yet are already better
angel investors than they realize. Someone who doesn't know the first thing
about the mechanics of venture funding but knows what a successful startup
founder looks like is actually far ahead of someone who knows termsheets
inside out, but thinks ["hacker"](gba.html) means someone who breaks into
computers. If you can recognize good startup founders by empathizing with
them—if you both resonate at the same frequency—then you may already be a
better startup picker than the median professional VC. [8]
Paul Buchheit, for example, started angel investing about a year after me, and
he was pretty much immediately as good as me at picking startups. My extra
year of experience was rounding error compared to our ability to empathize
with founders.
What makes a good founder? If there were a word that meant the opposite of
hapless, that would be the one. Bad founders seem hapless. They may be smart,
or not, but somehow events overwhelm them and they get discouraged and give
up. Good founders make things happen the way they want. Which is not to say
they force things to happen in a predefined way. Good founders have a healthy
respect for reality. But they are relentlessly resourceful. That's the closest
I can get to the opposite of hapless. You want to fund people who are
relentlessly resourceful.
Notice we started out talking about things, and now we're talking about
people. There is an ongoing debate among investors over which is more
important: the people, or the idea—or more precisely, the market. Some, like
Ron Conway,
say it's the people—that the idea will change, but the people are the
foundation of the company. Whereas Marc Andreessen says he'd back ok founders
in a hot market over great founders in a bad one. [9]
These two positions are not so far apart as they seem, because good people
find good markets. Bill Gates would probably have ended up pretty rich even if
IBM hadn't happened to drop the PC standard in his lap.
I've thought a lot about the disagreement between the investors who prefer to
bet on people and those who prefer to bet on markets. It's kind of surprising
that it even exists. You'd expect opinions to have converged more.
But I think I've figured out what's going on. The three most prominent people
I know who favor markets are Marc, Jawed Karim, and Joe Kraus. And all three
of them, in their own startups, basically flew into a thermal: they hit a
market growing so fast that it was all they could do to keep up with it. That
kind of experience is hard to ignore. Plus I think they underestimate
themselves: they think back to how easy it felt to ride that huge thermal
upward, and they think "anyone could have done it." But that isn't true; they
are not ordinary people.
So as an angel investor I think you want to go with Ron Conway and bet on
people. Thermals happen, yes, but no one can predict them—not even the
founders, and certainly not you as an investor. And only good people can ride
the thermals if they hit them anyway.
**Deal Flow**
Of course the question of how to choose startups presumes you have startups to
choose between. How do you find them? This is yet another problem that gets
solved for you by syndicates. If you tag along on a friend's investments, you
don't have to find startups.
The problem is not finding startups, exactly, but finding a stream of
reasonably high quality ones. The traditional way to do this is through
contacts. If you're friends with a lot of investors and founders, they'll send
deals your way. The Valley basically runs on referrals. And once you start to
become known as a reliable, useful investor, people will refer lots of deals to
you. I certainly will.
There's also a newer way to find startups, which is to come to events like Y
Combinator's Demo Day, where a batch of newly created startups presents to
investors all at once. We have two Demo Days a year, one in March and one in
August. These are basically mass referrals.
But events like Demo Day only account for a fraction of matches between
startups and investors. The personal referral is still the most common route.
So if you want to hear about new startups, the best way to do it is to get
lots of referrals.
The best way to get lots of referrals is to invest in startups. No matter how
smart and nice you seem, insiders will be reluctant to send you referrals
until you've proven yourself by doing a couple investments. Some smart, nice
guys turn out to be flaky, high-maintenance investors. But once you prove
yourself as a good investor, the deal flow, as they call it, will increase
rapidly in both quality and quantity. At the extreme, for someone like Ron
Conway, it is basically identical with the deal flow of the whole Valley.
So if you want to invest seriously, the way to get started is to bootstrap
yourself off your existing connections, be a good investor in the startups you
meet that way, and eventually you'll start a chain reaction. Good investors
are rare, even in Silicon Valley. There probably aren't more than a couple
hundred serious angels in the whole Valley, and yet they're probably the
single most important ingredient in making the Valley what it is. Angels are
the limiting reagent in startup formation.
If there are only a couple hundred serious angels in the Valley, then by
deciding to become one you could single-handedly make the pipeline for
startups in Silicon Valley significantly wider. That is kind of mind-blowing.
**Being Good**
How do you be a good angel investor? The first thing you need is to be
decisive. When we talk to founders about good and bad investors, one of the
ways we describe the good ones is to say "he writes checks." That doesn't mean
the investor says yes to everyone. Far from it. It means he makes up his mind
quickly, and follows through. You may be thinking, how hard could that be?
You'll see when you try it. It follows from the nature of angel investing that
the decisions are hard. You have to guess early, at the stage when the most
promising ideas still seem counterintuitive, because if they were obviously
good, VCs would already have funded them.
Suppose it's 1998. You come across a startup founded by a couple grad
students. They say they're going to work on Internet search. There are already
a bunch of big public companies doing search. How can these grad students
possibly compete with them? And does search even matter anyway? All the search
engines are trying to get people to start calling them "portals" instead. Why
would you want to invest in a startup run by a couple of nobodies who are
trying to compete with large, aggressive companies in an area they themselves
have declared passé? And yet the grad students seem pretty smart. What do you
do?
There's a hack for being decisive when you're inexperienced: ratchet down the
size of your investment till it's an amount you wouldn't care too much about
losing. For every rich person (you probably shouldn't try angel investing
unless you think of yourself as rich) there's some amount that would be
painless, though annoying, to lose. Till you feel comfortable investing, don't
invest more than that per startup.
For example, if you have $5 million in investable assets, it would probably be
painless (though annoying) to lose $15,000. That's less than .3% of your net
worth. So start by making 3 or 4 $15,000 investments. Nothing will teach you
about angel investing like experience. Treat the first few as an educational
expense. $60,000 is less than a lot of graduate programs. Plus you get equity.
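In code, the sizing rule is a one-liner. The 0.3% threshold below is just the
figure implied by the $15,000-on-$5-million example, not a recommendation:

```python
# The sizing hack above: cap each check at a fraction of investable
# assets small enough that losing it is painless, though annoying.

def painless_check_size(investable_assets, painless_fraction=0.003):
    return investable_assets * painless_fraction

painless_check_size(5_000_000)  # 15000.0
```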
What's really uncool is to be strategically indecisive: to string founders
along while trying to gather more information about the startup's trajectory.
[10] There's always a temptation to do that, because you just have so little
to go on, but you have to consciously resist it. In the long term it's to your
advantage to be good.
The other component of being a good angel investor is simply to be a good
person. Angel investing is not a business where you make money by screwing
people over. Startups create wealth, and creating wealth is not a zero sum
game. No one has to lose for you to win. In fact, if you mistreat the founders
you invest in, they'll just get demoralized and the company will do worse.
Plus your referrals will dry up. So I recommend being good.
The most successful angel investors I know are all basically good people. Once
they invest in a company, all they want to do is help it. And they'll help
people they haven't invested in too. When they do favors they don't seem to
keep track of them. It's too much overhead. They just try to help everyone,
and assume good things will flow back to them somehow. Empirically that seems
to work.
**Notes**
[1] Convertible debt can be either capped at a particular valuation, or can be
done at a discount to whatever the valuation turns out to be when it converts.
E.g. convertible debt at a discount of 30% means when it converts you get
stock as if you'd invested at a 30% lower valuation. That can be useful in
cases where you can't or don't want to figure out what the valuation should
be. You leave it to the next investor. On the other hand, a lot of investors
want to know exactly what they're getting, so they will only do convertible
debt with a cap.
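A simplified sketch of how the discount in note [1] works. Real notes also
accrue interest, usually have caps, and convert via per-share mechanics; the
dollar figures here are made up:

```python
# Convertible debt at a 30% discount, simplified: you convert as if
# you had invested at a valuation 30% lower than the next round's.
# Ignores interest, caps, and share mechanics; figures hypothetical.

investment = 100_000
discount = 0.30
next_round_valuation = 3_000_000

effective_valuation = next_round_valuation * (1 - discount)  # $2.1M
stake = investment / (effective_valuation + investment)      # ~4.5%
```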
[2] The expensive part of creating an agreement from scratch is not writing
the agreement, but bickering at several hundred dollars an hour over the
details. That's why the series AA paperwork aims at a middle ground. You can
just start from the compromise you'd have reached after lots of back and
forth.
When you fund a startup, both your lawyers should be specialists in startups.
Do not use ordinary corporate lawyers for this. Their inexperience makes them
overbuild: they'll create huge, overcomplicated agreements, and spend hours
arguing over irrelevant things.
In the Valley, the top startup law firms are Wilson Sonsini, Orrick, Fenwick &
West, Gunderson Dettmer, and Cooley Godward. In Boston the best are Goodwin
Procter, Wilmer Hale, and Foley Hoag.
[3] Your mileage may vary.
[4] These anti-dilution provisions also protect you against tricks like a
later investor trying to steal the company by doing another round that values
the company at $1. If you have a competent startup lawyer handle the deal for
you, you should be protected against such tricks initially. But it could
become a problem later. If a big VC firm wants to invest in the startup after
you, they may try to make you take out your anti-dilution protections. And if
they do the startup will be pressuring you to agree. They'll tell you that if
you don't, you're going to kill their deal with the VC. I recommend you solve
this problem by having a gentlemen's agreement with the founders: agree with
them in advance that you're not going to give up your anti-dilution
protections. Then it's up to them to tell VCs early on.
The reason you don't want to give them up is the following scenario. The VCs
recapitalize the company, meaning they give it additional funding at a pre-
money valuation of zero. This wipes out the existing shareholders, including
both you and the founders. They then grant the founders lots of options,
because they need them to stay around, but you get nothing.
Obviously this is not a nice thing to do. It doesn't happen often. Brand-name
VCs wouldn't recapitalize a company just to steal a few percent from an angel.
But there's a continuum here. A less upstanding, lower-tier VC might be
tempted to do it to steal a big chunk of stock.
I'm not saying you should always absolutely refuse to give up your anti-
dilution protections. Everything is a negotiation. If you're part of a
powerful syndicate, you might be able to give up legal protections and rely on
social ones. If you invest in a deal led by a big angel like Ron Conway, for
example, you're pretty well protected against being mistreated, because any VC
would think twice before crossing him. This kind of protection is one of the
reasons angels like to invest in syndicates.
[5] Don't invest so much, or at such a low valuation, that you end up with an
excessively large share of a startup, unless you're sure your money will be
the last they ever need. Later stage investors won't invest in a company if
the founders don't have enough equity left to motivate them. I talked to a VC
recently who said he'd met with a company he really liked, but he turned them
down because investors already owned more than half of it. Those investors
probably thought they'd been pretty clever by getting such a large chunk of
this desirable company, but in fact they were shooting themselves in the foot.
[6] At any given time I know of at least 3 or 4 YC alumni who I believe will
be big successes but who are running on vapor, financially, because investors
don't yet get what they're doing. (And no, unfortunately, I can't tell you who
they are. I can't refer a startup to an investor I don't know.)
[7] There are some VCs who can predict instead of reacting. Not surprisingly,
these are the most successful ones.
[8] It's somewhat sneaky of me to put it this way, because the median VC loses
money. That's one of the most surprising things I've learned about VC while
working on Y Combinator. Only a fraction of VCs even have positive returns.
The rest exist to satisfy demand among fund managers for venture capital as an
asset class. Learning this explained a lot about some of the VCs I encountered
when we were working on Viaweb.
[9] VCs also generally say they prefer great markets to great people. But what
they're really saying is they want both. They're so selective that they only
even consider great people. So when they say they care above all about big
markets, they mean that's how they choose between great people.
[10] Founders rightly dislike the sort of investor who says he's interested in
investing but doesn't want to lead. There are circumstances where this is an
acceptable excuse, but more often than not what it means is "No, but if you
turn out to be a hot deal, I want to be able to claim retroactively I said
yes."
If you like a startup enough to invest in it, then invest in it. Just use the
standard [series AA](http://ycombinator.com/seriesaa.html) terms and write
them a check.
**Thanks** to Sam Altman, Paul Buchheit, Jessica Livingston, Robert Morris,
and Fred Wilson for reading drafts of this.
[Comment](http://news.ycombinator.com/item?id=506671) on this essay.
|
April 2021
When intellectuals talk about the death penalty, they talk about things like
whether it's permissible for the state to take someone's life, whether the
death penalty acts as a deterrent, and whether more death sentences are given
to some groups than others. But in practice the debate about the death penalty
is not about whether it's ok to kill murderers. It's about whether it's ok to
kill innocent people, because at least 4% of people on death row are
[_innocent_](https://www.pnas.org/content/111/20/7230).
When I was a kid I imagined that it was unusual for people to be convicted of
crimes they hadn't committed, and that in murder cases especially this must be
very rare. Far from it. Now, thanks to organizations like the [_Innocence
Project_](https://innocenceproject.org/all-cases), we see a constant stream of
stories about murder convictions being overturned after new evidence emerges.
Sometimes the police and prosecutors were just very sloppy. Sometimes they
were crooked, and knew full well they were convicting an innocent person.
Kenneth Adams and three other men spent 18 years in prison on a murder
conviction. They were exonerated after DNA testing implicated three different
men, two of whom later confessed. The police had been told about the other men
early in the investigation, but never followed up the lead.
Keith Harward spent 33 years in prison on a murder conviction. He was
convicted because "experts" said his teeth matched photos of bite marks on one
victim. He was exonerated after DNA testing showed the murder had been
committed by another man, Jerry Crotty.
Ricky Jackson and two other men spent 39 years in prison after being convicted
of murder on the testimony of a 12 year old boy, who later recanted and said
he'd been coerced by police. Multiple people have confirmed the boy was
elsewhere at the time. The three men were exonerated after the county
prosecutor dropped the charges, saying "The state is conceding the obvious."
Alfred Brown spent 12 years in prison on a murder conviction, including 10
years on death row. He was exonerated after it was discovered that the
assistant district attorney had concealed phone records proving he could not
have committed the crimes.
Glenn Ford spent 29 years on death row after having been convicted of murder.
He was exonerated after new evidence proved he was not even at the scene when
the murder occurred. The attorneys assigned to represent him had never tried a
jury case before.
Cameron Willingham was actually executed in 2004 by lethal injection. The
"expert" who testified that he deliberately set fire to his house has since
been discredited. A re-examination of the case ordered by the state of Texas
in 2009 concluded that "a finding of arson could not be sustained."
[_Rich Glossip_](https://saverichardglossip.com/facts) has spent 20 years on
death row after being convicted of murder on the testimony of the actual
killer, who escaped with a life sentence in return for implicating him. In
2015 he came within minutes of execution before it emerged that Oklahoma had
been planning to kill him with an illegal combination of drugs. They still
plan to go ahead with the execution, perhaps as soon as this summer, despite
[_new evidence_](https://www.usnews.com/news/best-
states/oklahoma/articles/2020-10-14/attorney-for-oklahoma-death-row-inmate-
claims-new-evidence) exonerating him.
I could go on. There are hundreds of similar cases. In Florida alone, 29 death
row prisoners have been exonerated so far.
Far from being rare, wrongful murder convictions are [_very
common_](https://deathpenaltyinfo.org/policy-issues/innocence/description-of-
innocence-cases). Police are under pressure to solve a crime that has gotten a
lot of attention. When they find a suspect, they want to believe he's guilty,
and ignore or even destroy evidence suggesting otherwise. District attorneys
want to be seen as effective and tough on crime, and in order to win
convictions are willing to manipulate witnesses and withhold evidence. Court-
appointed defense attorneys are overworked and often incompetent. There's a
ready supply of criminals willing to give false testimony in return for a
lighter sentence, suggestible witnesses who can be made to say whatever police
want, and bogus "experts" eager to claim that science proves the defendant is
guilty. And juries want to believe them, since otherwise some terrible crime
remains unsolved.
This circus of incompetence and dishonesty is the real issue with the death
penalty. We don't even reach the point where theoretical questions about the
moral justification or effectiveness of capital punishment start to matter,
because so many of the people sentenced to death are actually innocent.
Whatever it means in theory, in practice capital punishment means killing
innocent people.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Don Knight for reading
drafts of this.
|
September 2009
Publishers of all types, from news to music, are unhappy that consumers won't
pay for content anymore. At least, that's how they see it.
In fact consumers never really were paying for content, and publishers weren't
really selling it either. If the content was what they were selling, why has
the price of books or music or movies always depended mostly on the format?
Why didn't better content cost more? [1]
A copy of _Time_ costs $5 for 58 pages, or 8.6 cents a page. _The Economist_
costs $7 for 86 pages, or 8.1 cents a page. Better journalism is actually
slightly cheaper.
Almost every form of publishing has been organized as if the medium was what
they were selling, and the content was irrelevant. Book publishers, for
example, set prices based on the cost of producing and distributing books.
They treat the words printed in the book the same way a textile manufacturer
treats the patterns printed on its fabrics.
Economically, the print media are in the business of marking up paper. We can
all imagine an old-style editor getting a scoop and saying "this will sell a
lot of papers!" Cross out that final S and you're describing their business
model. The reason they make less money now is that people don't need as much
paper.
A few months ago I ran into a friend in a cafe. I had a copy of the _New York
Times_ , which I still occasionally buy on weekends. As I was leaving I
offered it to him, as I've done countless times before in the same situation.
But this time something new happened. I felt that sheepish feeling you get
when you offer someone something worthless. "Do you, er, want a printout of
yesterday's news?" I asked. (He didn't.)
Now that the medium is evaporating, publishers have nothing left to sell. Some
seem to think they're going to sell content—that they were always in the
content business, really. But they weren't, and it's unclear whether anyone
could be.
**Selling**
There have always been people in the business of selling information, but that
has historically been a distinct business from publishing. And the business of
selling information to consumers has always been a marginal one. When I was a
kid there were people who used to sell newsletters containing stock tips,
printed on colored paper that made them hard for the copiers of the day to
reproduce. That is a different world, both culturally and economically, from
the one publishers currently inhabit.
People will pay for information they think they can make money from. That's
why they paid for those stock tip newsletters, and why companies pay now for
Bloomberg terminals and Economist Intelligence Unit reports. But will people
pay for information otherwise? History offers little encouragement.
If audiences were willing to pay more for better content, why wasn't anyone
already selling it to them? There was no reason you couldn't have done that in
the era of physical media. So were the print media and the music labels simply
overlooking this opportunity? Or is it, rather, nonexistent?
What about iTunes? Doesn't that show people will pay for content? Well, not
really. iTunes is more of a tollbooth than a store. Apple controls the default
path onto the iPod. They offer a convenient list of songs, and whenever you
choose one they ding your credit card for a small amount, just below the
threshold of attention. Basically, iTunes makes money by taxing people, not
selling them stuff. You can only do that if you own the channel, and even then
you don't make much from it, because a toll has to be ignorable to work. Once
a toll becomes painful, people start to find ways around it, and that's pretty
easy with digital content.
The situation is much the same with digital books. Whoever controls the device
sets the terms. It's in their interest for content to be as cheap as possible,
and since they own the channel, there's a lot they can do to drive prices
down. Prices will fall even further once writers realize they don't need
publishers. Getting a book printed and distributed is a daunting prospect for
a writer, but most can upload a file.
Is software a counterexample? People pay a lot for desktop software, and
that's just information. True, but I don't think publishers can learn much
from software. Software companies can charge a lot because (a) many of the
customers are businesses, who get in
[trouble](http://www.bsa.org/country/News%20and%20Events/News%20Archives/en/2009/en-08312009-mueller.aspx?sc_lang=en)
if they use pirated versions, and (b) though in form merely information,
software is treated by both maker and purchaser as a different type of thing
from a song or an article. A Photoshop user needs Photoshop in a way that no
one needs a particular song or article.
That's why there's a separate word, "content," for information that's not
software. Software is a different business. Software and content blur together
in some of the most lightweight software, like casual games. But those are
usually free. To make money the way software companies do, publishers would
have to become software companies, and being publishers gives them no
particular head start in that domain. [2]
The most promising countertrend is the premium cable channel. People still pay
for those. But broadcasting isn't publishing: you're not selling a copy of
something. That's one reason the movie business hasn't seen their revenues
decline the way the news and music businesses have. They only have one foot in
publishing.
To the extent the movie business can avoid becoming publishers, they may avoid
publishing's problems. But there are limits to how well they'll be able to do
that. Once publishing—giving people copies—becomes the most natural way of
distributing your content, it probably doesn't work to stick to old forms of
distribution just because you make more that way. If free copies of your
content are available online, then you're competing with publishing's form of
distribution, and that's just as bad as being a publisher.
Apparently some people in the music business hope to retroactively convert it
away from publishing, by getting listeners to pay for subscriptions. It seems
unlikely that will work if they're just streaming the same files you can get
as mp3s.
**Next**
What happens to publishing if you can't sell content? You have two choices:
give it away and make money from it indirectly, or find ways to embody it in
things people will pay for.
The first is probably the future of most current media. [Give music
away](http://thesixtyone.com) and make money from concerts and t-shirts.
Publish articles for free and make money from one of a dozen permutations of
advertising. Both publishers and investors are down on advertising at the
moment, but it has more potential than they realize.
I'm not claiming that potential will be realized by the existing players. The
[optimal](http://ycombinator.com/rfs1.html) ways to make money from the
written word probably require different words written by different people.
It's harder to say what will happen to movies. They could evolve into ads. Or
they could return to their roots and make going to the theater a treat. If
they made the experience good enough, audiences might start to prefer it to
watching pirated movies at home. [3] Or maybe the movie business will dry up,
and the people working in it will go to work for game developers.
I don't know how big embodying information in physical form will be. It may be
surprisingly large; people overvalue [physical stuff](stuff.html). There
should remain some market for printed books, at least.
I can see the evolution of book publishing in the books on my shelves. Clearly
at some point in the 1960s the big publishing houses started to ask: how
cheaply can we make books before people refuse to buy them? The answer turned
out to be one step short of phonebooks. As long as it isn't floppy, consumers
still perceive it as a book.
That worked as long as buying printed books was the only way to read them. If
printed books are optional, publishers will have to work harder to entice
people to buy them. There should be some market, but it's hard to foresee how
big, because its size will depend not on macro trends like the amount people
read, but on the ingenuity of individual publishers. [4]
Some magazines may thrive by focusing on the magazine as a physical object.
Fashion magazines could be made lush in a way that would be hard to match
digitally, at least for a while. But this is probably not an option for most
magazines.
I don't know exactly what the future will look like, but I'm not too worried
about it. This sort of change tends to create as many good things as it kills.
Indeed, the really interesting question is not what will happen to existing
forms, but what new forms will appear.
The reason I've been writing about existing forms is that I don't _know_ what
new forms will appear. But though I can't predict specific winners, I can
offer a recipe for recognizing them. When you see something that's taking
advantage of new technology to give people something they want that they
couldn't have before, you're probably looking at a winner. And when you see
something that's merely reacting to new technology in an attempt to preserve
some existing source of revenue, you're probably looking at a loser.
**Notes**
[1] I don't like the word "content" and tried for a while to avoid using it,
but I have to admit there's no other word that means the right thing.
"Information" is too general.
Ironically, the main reason I don't like "content" is the thesis of this
essay. The word suggests an undifferentiated slurry, but economically that's
how both publishers and audiences treat it. Content is information you don't
need.
[2] Some types of publishers would be at a disadvantage trying to enter the
software business. Record labels, for example, would probably find it more
natural to expand into casinos than software, because the kind of people who
run them would be more at home at the mafia end of the business spectrum than
the don't-be-evil end.
[3] I never watch movies in theaters anymore. The tipping point for me was the
ads they show first.
[4] Unfortunately, making physically nice books will only be a niche within a
niche. Publishers are more likely to resort to expedients like selling
autographed copies, or editions with the buyer's picture on the cover.
**Thanks** to Michael Arrington, Trevor Blackwell, Steven Levy, Robert Morris,
and Geoff Ralston for reading drafts of this.
|
December 2019
Before I had kids, I was afraid of having kids. Up to that point I felt about
kids the way the young Augustine felt about living virtuously. I'd have been
sad to think I'd never have children. But did I want them now? No.
If I had kids, I'd become a parent, and parents, as I'd known since I was a
kid, were uncool. They were dull and responsible and had no fun. And while
it's not surprising that kids would believe that, to be honest I hadn't seen
much as an adult to change my mind. Whenever I'd noticed parents with kids,
the kids seemed to be terrors, and the parents pathetic harried creatures,
even when they prevailed.
When people had babies, I congratulated them enthusiastically, because that
seemed to be what one did. But I didn't feel it at all. "Better you than me,"
I was thinking.
Now when people have babies I congratulate them enthusiastically and I mean
it. Especially the first one. I feel like they just got the best gift in the
world.
What changed, of course, is that I had kids. Something I dreaded turned out to
be wonderful.
Partly, and I won't deny it, this is because of serious chemical changes that
happened almost instantly when our first child was born. It was like someone
flipped a switch. I suddenly felt protective not just toward our child, but
toward all children. As I was driving my wife and new son home from the
hospital, I approached a crosswalk full of pedestrians, and I found myself
thinking "I have to be really careful of all these people. Every one of them
is someone's child!"
So to some extent you can't trust me when I say having kids is great. To some
extent I'm like a religious cultist telling you that you'll be happy if you
join the cult too — but only because joining the cult will alter your mind in
a way that will make you happy to be a cult member.
But not entirely. There were some things about having kids that I clearly got
wrong before I had them.
For example, there was a huge amount of selection bias in my observations of
parents and children. Some parents may have noticed that I wrote "Whenever I'd
noticed parents with kids." Of course the times I noticed kids were when
things were going wrong. I only noticed them when they made noise. And where
was I when I noticed them? Ordinarily I never went to places with kids, so the
only times I encountered them were in shared bottlenecks like airplanes. Which
is not exactly a representative sample. Flying with a toddler is something
very few parents enjoy.
What I didn't notice, because they tend to be much quieter, were all the great
moments parents had with kids. People don't talk about these much — the magic
is hard to put into words, and all other parents know about them anyway — but
one of the great things about having kids is that there are so many times when
you feel there is nowhere else you'd rather be, and nothing else you'd rather
be doing. You don't have to be doing anything special. You could just be going
somewhere together, or putting them to bed, or pushing them on the swings at
the park. But you wouldn't trade these moments for anything. One doesn't tend
to associate kids with peace, but that's what you feel. You don't need to look
any further than where you are right now.
Before I had kids, I had moments of this kind of peace, but they were rarer.
With kids it can happen several times a day.
My other source of data about kids was my own childhood, and that was
similarly misleading. I was pretty bad, and was always in trouble for
something or other. So it seemed to me that parenthood was essentially law
enforcement. I didn't realize there were good times too.
I remember my mother telling me once when I was about 30 that she'd really
enjoyed having me and my sister. My god, I thought, this woman is a saint. She
not only endured all the pain we subjected her to, but actually enjoyed it?
Now I realize she was simply telling the truth.
She said that one reason she liked having us was that we'd been interesting to
talk to. That took me by surprise when I had kids. You don't just love them.
They become your friends too. They're really interesting. And while I admit
small children are disastrously fond of repetition (anything worth doing once
is worth doing fifty times) it's often genuinely fun to play with them. That
surprised me too. Playing with a 2 year old was fun when I was 2 and
definitely not fun when I was 6. Why would it become fun again later? But it
does.
There are of course times that are pure drudgery. Or worse still, terror.
Having kids is one of those intense types of experience that are hard to
imagine unless you've had them. But it is not, as I implicitly believed before
having kids, simply your DNA heading for the lifeboats.
Some of my worries about having kids were right, though. They definitely make
you less productive. I know having kids makes some people get their act
together, but if your act was already together, you're going to have less time
to do it in. In particular, you're going to have to work to a schedule. Kids
have schedules. I'm not sure if it's because that's how kids are, or because
it's the only way to integrate their lives with adults', but once you have
kids, you tend to have to work on their schedule.
You will have chunks of time to work. But you can't let work spill
promiscuously through your whole life, like I used to before I had kids.
You're going to have to work at the same time every day, whether inspiration
is flowing or not, and there are going to be times when you have to stop, even
if it is.
I've been able to adapt to working this way. Work, like love, finds a way. If
there are only certain times it can happen, it happens at those times. So
while I don't get as much done as before I had kids, I get enough done.
I hate to say this, because being ambitious has always been a part of my
identity, but having kids may make one less ambitious. It hurts to see that
sentence written down. I squirm to avoid it. But if there weren't something
real there, why would I squirm? The fact is, once you have kids, you're
probably going to care more about them than you do about yourself. And
attention is a zero-sum game. Only one idea at a time can be the [_top idea in
your mind_](top.html). Once you have kids, it will often be your kids, and
that means it will less often be some project you're working on.
I have some hacks for sailing close to this wind. For example, when I write
essays, I think about what I'd want my kids to know. That drives me to get
things right. And when I was writing [_Bel_](bel.html), I told my kids that
once I finished it I'd take them to Africa. When you say that sort of thing to
a little kid, they treat it as a promise. Which meant I had to finish or I'd
be taking away their trip to Africa. Maybe if I'm really lucky such tricks
could put me net ahead. But the wind is there, no question.
On the other hand, what kind of wimpy ambition do you have if it won't survive
having kids? Do you have so little to spare?
And while having kids may be warping my present judgement, it hasn't
overwritten my memory. I remember perfectly well what life was like before.
Well enough to miss some things a lot, like the ability to take off for some
other country at a moment's notice. That was so great. Why did I never do
that?
See what I did there? The fact is, most of the freedom I had before kids, I
never used. I paid for it in loneliness, but I never used it.
I had plenty of happy times before I had kids. But if I count up happy
moments, not just potential happiness but actual happy moments, there are more
after kids than before. Now I practically have it on tap, almost any bedtime.
People's experiences as parents vary a lot, and I know I've been lucky. But I
think the worries I had before having kids must be pretty common, and judging
by other parents' faces when they see their kids, so must the happiness that
kids bring.
**Note**
[1] Adults are sophisticated enough to see 2 year olds for the fascinatingly
complex characters they are, whereas to most 6 year olds, 2 year olds are just
defective 6 year olds.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this.
|
February 2008
The fiery reaction to the release of [Arc](arc.html) had an unexpected
consequence: it made me realize I had a design philosophy. The main complaint
of the more articulate critics was that Arc seemed so flimsy. After years of
working on it, all I had to show for myself were a few thousand lines of
macros? Why hadn't I worked on more substantial problems?
As I was mulling over these remarks it struck me how familiar they seemed.
This was exactly the kind of thing people said at first about Viaweb, and Y
Combinator, and most of my essays.
When we launched Viaweb, it seemed laughable to VCs and e-commerce "experts."
We were just a couple guys in an apartment, which did not seem cool in 1995
the way it does now. And the thing we'd built, as far as they could tell,
wasn't even software. Software, to them, equalled big, honking Windows apps.
Since Viaweb was the first web-based app they'd seen, it seemed to be nothing
more than a website. They were even more contemptuous when they discovered
that Viaweb didn't process credit card transactions (we didn't for the whole
first year). Transaction processing seemed to them what e-commerce was all
about. It sounded serious and difficult.
And yet, mysteriously, Viaweb ended up crushing all its competitors.
The initial reaction to [Y Combinator](http://ycombinator.com) was almost
identical. It seemed laughably lightweight. Startup funding meant series A
rounds: millions of dollars given to a small number of startups founded by
people with established credentials after months of serious, businesslike
meetings, on terms described in a document a foot thick. Y Combinator seemed
inconsequential. It's too early to say yet whether Y Combinator will turn out
like Viaweb, but judging from the number of imitations, a lot of people seem
to think we're on to something.
I can't measure whether my essays are successful, except in page views, but
the reaction to them is at least different from when I started. At first the
default reaction of the Slashdot trolls was (translated into articulate
terms): "Who is this guy and what authority does he have to write about these
topics? I haven't read the essay, but there's no way anything so short and
written in such an informal style could have anything useful to say about such
and such topic, when people with degrees in the subject have already written
many thick books about it." Now there's a new generation of trolls on a new
generation of sites, but they have at least started to omit the initial "Who
is this guy?"
Now people are saying the same things about Arc that they said at first about
Viaweb and Y Combinator and most of my essays. Why the pattern? The answer, I
realized, is that my m.o. for all four has been the same.
Here it is: I like to find (a) simple solutions (b) to overlooked problems (c)
that actually need to be solved, and (d) deliver them as informally as
possible, (e) starting with a very crude version 1, then (f) iterating
rapidly.
When I first laid out these principles explicitly, I noticed something
striking: this is practically a recipe for generating a contemptuous initial
reaction. Though simple solutions are better, they don't seem as impressive as
complex ones. Overlooked problems are by definition problems that most people
think don't matter. Delivering solutions in an informal way means that instead
of judging something by the way it's presented, people have to actually
understand it, which is more work. And starting with a crude version 1 means
your initial effort is always small and incomplete.
I'd noticed, of course, that people never seemed to grasp new ideas at first.
I thought it was just because most people were stupid. Now I see there's more
to it than that. Like a contrarian investment fund, someone following this
strategy will almost always be doing things that seem wrong to the average
person.
As with contrarian investment strategies, that's exactly the point. This
technique is successful (in the long term) because it gives you all the
advantages other people forgo by trying to seem legit. If you work on
overlooked problems, you're more likely to discover new things, because you
have less competition. If you deliver solutions informally, you (a) save all
the effort you would have had to expend to make them look impressive, and (b)
avoid the danger of fooling yourself as well as your audience. And if you
release a crude version 1 then iterate, your solution can benefit from the
imagination of nature, which, as Feynman pointed out, is more powerful than
your own.
In the case of Viaweb, the simple solution was to make the software run on the
server. The overlooked problem was to generate web sites automatically; in
1995, online stores were all made by hand by human designers, but we knew this
wouldn't scale. The part that actually mattered was graphic design, not
transaction processing. The informal delivery mechanism was me, showing up in
jeans and a t-shirt at some retailer's office. And the crude version 1 was, if
I remember correctly, less than 10,000 lines of code when we launched.
The power of this technique extends beyond startups and programming languages
and essays. It probably extends to any kind of creative work. Certainly it can
be used in painting: this is exactly what Cezanne and Klee did.
At Y Combinator we bet money on it, in the sense that we encourage the
startups we fund to work this way. There are always new ideas right under your
nose. So look for simple things that other people have overlooked—things
people will later claim were "obvious"—especially when they've been led astray
by obsolete conventions, or by trying to do things that are superficially
impressive. Figure out what the real problem is, and make sure you solve that.
Don't worry about trying to look corporate; the product is what wins in the
long term. And launch as soon as you can, so you start learning from users
what you should have been making.
[Reddit](http://reddit.com) is a classic example of this approach. When Reddit
first launched, it seemed like there was nothing to it. To the graphically
unsophisticated its deliberately minimal design seemed like no design at all.
But Reddit solved the real problem, which was to tell people what was new and
otherwise stay out of the way. As a result it became massively successful. Now
that conventional ideas have caught up with it, it seems obvious. People look
at Reddit and think the founders were lucky. Like all such things, it was
harder than it looked. The Reddits pushed so hard against the current that
they reversed it; now it looks like they're merely floating downstream.
So when you look at something like Reddit and think "I wish I could think of
an idea like that," remember: ideas like that are all around you. But you
ignore them because they look wrong.
|
August 2020
Some politicians are proposing to introduce wealth taxes in addition to income
and capital gains taxes. Let's try modeling the effects of various levels of
wealth tax to see what they would mean in practice for a startup founder.
Suppose you start a successful startup in your twenties, and then live for
another 60 years. How much of your stock will a wealth tax consume?
If the wealth tax applies to all your assets, it's easy to calculate its
effect. A wealth tax of 1% means you get to keep 99% of your stock each year.
After 60 years the proportion of stock you'll have left will be .99^60, or
.547. So a straight 1% wealth tax means the government will over the course of
your life take 45% of your stock.
(Losing shares does not, obviously, mean becoming _net_ poorer unless the
value per share is increasing by less than the wealth tax rate.)
Here's how much stock the government would take over 60 years at various
levels of wealth tax:
| wealth tax | government takes |
|---|---|
| 0.1% | 6% |
| 0.5% | 26% |
| 1.0% | 45% |
| 2.0% | 70% |
| 3.0% | 84% |
| 4.0% | 91% |
| 5.0% | 95% |
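Since there's no threshold yet, this table follows directly from the formula
above: after 60 years you keep (1 - rate)^60 of your stock. A minimal sketch
in Python, using only numbers from the essay, reproduces it:

```python
# Flat wealth tax with no threshold: after 60 years you keep
# (1 - rate)**60 of your stock, and the government gets the rest.
for rate in (0.001, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05):
    print(f"{rate:.1%}  {1 - (1 - rate) ** 60:.0%}")
```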
A wealth tax will usually have a threshold at which it starts. How much
difference would a high threshold make? To model that, we need to make some
assumptions about the initial value of your stock and the growth rate.
Suppose your stock is initially worth $2 million, and the company's trajectory
is as follows: the value of your stock grows 3x for 2 years, then 2x for 2
years, then 50% for 2 years, after which you just get a typical public company
growth rate, which we'll call 8%. [1] Suppose the wealth tax threshold is $50
million. How much stock does the government take now?

| wealth tax | government takes |
|---|---|
| 0.1% | 5% |
| 0.5% | 23% |
| 1.0% | 41% |
| 2.0% | 65% |
| 3.0% | 79% |
| 4.0% | 88% |
| 5.0% | 93% |
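With a threshold there's no closed form, so here's a minimal simulation under
the assumptions above: a $2 million initial stake, the 3x/2x/50%/8% growth
schedule, a $50 million threshold, and the tax paid by surrendering stock. The
essay doesn't say whether the tax is assessed before or after each year's
growth; assessing it after growth is my assumption, so the output may differ
from the table by a percentage point here and there.

```python
def government_take(rate, years=60, initial=2e6, threshold=50e6):
    # Per-share growth: 3x for 2 years, 2x for 2 years, 50% for 2
    # years, then a typical public-company growth rate of 8% a year.
    growth = [3, 3, 2, 2, 1.5, 1.5] + [1.08] * (years - 6)
    per_share = 1.0
    shares = initial  # scale shares so each is worth $1 at the start
    for g in growth:
        per_share *= g                           # this year's growth
        holding = shares * per_share
        taxable = max(0.0, holding - threshold)  # only wealth above the threshold is taxed
        shares -= rate * taxable / per_share     # tax paid by surrendering shares
    return 1 - shares / initial                  # fraction of your stock taken

for rate in (0.001, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05):
    print(f"{rate:.1%}  {government_take(rate):.0%}")
```

Setting `threshold=0` makes the yearly loss exactly `rate` regardless of the
growth schedule, which reproduces the first table.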
It may at first seem surprising that such apparently small tax rates produce
such dramatic effects. A 2% wealth tax with a $50 million threshold takes
about two thirds of a successful founder's stock.
The reason wealth taxes have such dramatic effects is that they're applied
over and over to the same money. Income tax happens every year, but only to
that year's income. Whereas if you live for 60 years after acquiring some
asset, a wealth tax will tax that same asset 60 times. A wealth tax compounds.
**Note**
[1] In practice, eventually some of this 8% would come in the form of
dividends, which are taxed as income at issue, so this model actually
represents the most optimistic case for the founder.
|
June 2021
A few days ago, on the way home from school, my nine year old son told me he
couldn't wait to get home to write more of the story he was working on. This
made me as happy as anything I've heard him say — not just because he was
excited about his story, but because he'd discovered this way of working.
Working on a project of your own is as different from ordinary work as skating
is from walking. It's more fun, but also much more productive.
What proportion of great work has been done by people who were skating in this
sense? If not all of it, certainly a lot.
There is something special about working on a project of your own. I wouldn't
say exactly that you're happier. A better word would be excited, or engaged.
You're happy when things are going well, but often they aren't. When I'm
writing an essay, most of the time I'm worried and puzzled: worried that the
essay will turn out badly, and puzzled because I'm groping for some idea that
I can't see clearly enough. Will I be able to pin it down with words? In the
end I usually can, if I take long enough, but I'm never sure; the first few
attempts often fail.
You have moments of happiness when things work out, but they don't last long,
because then you're on to the next problem. So why do it at all? Because to
the kind of people who like working this way, nothing else feels as right. You
feel as if you're an animal in its natural habitat, doing what you were meant
to do — not always happy, maybe, but awake and alive.
Many kids experience the excitement of working on projects of their own. The
hard part is making this converge with the work you do as an adult. And our
customs make it harder. We treat "playing" and "hobbies" as qualitatively
different from "work". It's not clear to a kid building a treehouse that
there's a direct (though long) route from that to architecture or engineering.
And instead of pointing out the route, we conceal it, by implicitly treating
the stuff kids do as different from real work. [1]
Instead of telling kids that their treehouses could be on the path to the work
they do as adults, we tell them the path goes through school. And
unfortunately schoolwork tends to be very different from working on projects
of one's own. It's usually neither a project, nor one's own. So as school gets
more serious, working on projects of one's own is something that survives, if
at all, as a thin thread off to the side.
It's a bit sad to think of all the high school kids turning their backs on
building treehouses and sitting in class dutifully learning about Darwin or
Newton to pass some exam, when the work that made Darwin and Newton famous was
actually closer in spirit to building treehouses than studying for exams.
If I had to choose between my kids getting good grades and working on
ambitious projects of their own, I'd pick the projects. And not because I'm an
indulgent parent, but because I've been on the other end and I know which has
more predictive value. When I was picking startups for Y Combinator, I didn't
care about applicants' grades. But if they'd worked on projects of their own,
I wanted to hear all about those. [2]
It may be inevitable that school is the way it is. I'm not saying we have to
redesign it (though I'm not saying we don't), just that we should understand
what it does to our attitudes to work — that it steers us toward the dutiful
plodding kind of work, often using competition as bait, and away from skating.
There are occasionally times when schoolwork becomes a project of one's own.
Whenever I had to write a paper, that would become a project of my own —
except in English classes, ironically, because the things one has to write in
English classes are so [_bogus_](essay.html). And when I got to college and
started taking CS classes, the programs I had to write became projects of my
own. Whenever I was writing or programming, I was usually skating, and that
has been true ever since.
So where exactly is the edge of projects of one's own? That's an interesting
question, partly because the answer is so complicated, and partly because
there's so much at stake. There turn out to be two senses in which work can be
one's own: 1) that you're doing it voluntarily, rather than merely because
someone told you to, and 2) that you're doing it by yourself.
The edge of the former is quite sharp. People who care a lot about their work
are usually very sensitive to the difference between pulling, and being
pushed, and work tends to fall into one category or the other. But the test
isn't simply whether you're told to do something. You can choose to do
something you're told to do. Indeed, you can own it far more thoroughly than
the person who told you to do it.
For example, math homework is for most people something they're told to do.
But for my father, who was a mathematician, it wasn't. Most of us think of the
problems in a math book as a way to test or develop our knowledge of the
material explained in each section. But to my father the problems were the
part that mattered, and the text was merely a sort of annotation. Whenever he
got a new math book it was to him like being given a puzzle: here was a new
set of problems to solve, and he'd immediately set about solving all of them.
The other sense of a project being one's own — working on it by oneself — has
a much softer edge. It shades gradually into collaboration. And interestingly,
it shades into collaboration in two different ways. One way to collaborate is
to share a single project. For example, when two mathematicians collaborate on
a proof that takes shape in the course of a conversation between them. The
other way is when multiple people work on separate projects of their own that
fit together like a jigsaw puzzle. For example, when one person writes the
text of a book and another does the graphic design. [3]
These two paths into collaboration can of course be combined. But under the
right conditions, the excitement of working on a project of one's own can be
preserved for quite a while before disintegrating into the turbulent flow of
work in a large organization. Indeed, the history of successful organizations
is partly the history of techniques for preserving that excitement. [4]
The team that made the original Macintosh were a great example of this
phenomenon. People like Burrell Smith and Andy Hertzfeld and Bill Atkinson and
Susan Kare were not just following orders. They were not tennis balls hit by
Steve Jobs, but rockets let loose by Steve Jobs. There was a lot of
collaboration between them, but they all seem to have individually felt the
excitement of working on a project of one's own.
In Andy Hertzfeld's book on the Macintosh, he describes how they'd come back
into the office after dinner and work late into the night. People who've never
experienced the thrill of working on a project they're excited about can't
distinguish this kind of working long hours from the kind that happens in
sweatshops and boiler rooms, but they're at opposite ends of the spectrum.
That's why it's a mistake to insist dogmatically on "work/life balance."
Indeed, the mere expression "work/life" embodies a mistake: it assumes work
and life are distinct. For those to whom the word "work" automatically implies
the dutiful plodding kind, they are. But for the skaters, the relationship
between work and life would be better represented by a dash than a slash. I
wouldn't want to work on anything that I didn't want to take over my life.
Of course, it's easier to achieve this level of motivation when you're making
something like the Macintosh. It's easy for something new to feel like a
project of your own. That's one of the reasons for the tendency programmers
have to rewrite things that don't need rewriting, and to write their own
versions of things that already exist. This sometimes alarms managers, and
measured by total number of characters typed, it's rarely the optimal
solution. But it's not always driven simply by arrogance or cluelessness.
Writing code from scratch is also much more rewarding — so much more rewarding
that a good programmer can end up net ahead, despite the shocking waste of
characters. Indeed, it may be one of the advantages of capitalism that it
encourages such rewriting. A company that needs software to do something can't
use the software already written to do it at another company, and thus has to
write their own, which often turns out better. [5]
The natural alignment between skating and solving new problems is one of the
reasons the payoffs from startups are so high. Not only is the market price of
unsolved problems higher, you also get a discount on productivity when you
work on them. In fact, you get a double increase in productivity: when you're
doing a clean-sheet design, it's easier to recruit skaters, and they get to
spend all their time skating.
Steve Jobs knew a thing or two about skaters from having watched Steve
Wozniak. If you can find the right people, you only have to tell them what to
do at the highest level. They'll handle the details. Indeed, they insist on
it. For a project to feel like your own, you must have sufficient autonomy.
You can't be working to order, or [_slowed down_](artistsship.html) by
bureaucracy.
One way to ensure autonomy is not to have a boss at all. There are two ways to
do that: to be the boss yourself, and to work on projects outside of work.
Though they're at opposite ends of the scale financially, startups and open
source projects have a lot in common, including the fact that they're often
run by skaters. And indeed, there's a wormhole from one end of the scale to
the other: one of the best ways to discover [_startup
ideas_](startupideas.html) is to work on a project just for fun.
If your projects are the kind that make money, it's easy to work on them. It's
harder when they're not. And the hardest part, usually, is morale. That's
where adults have it harder than kids. Kids just plunge in and build their
treehouse without worrying about whether they're wasting their time, or how it
compares to other treehouses. And frankly we could learn a lot from kids here.
The high standards most grownups have for "real" work do not always serve us
well.
The most important phase in a project of one's own is at the beginning: when
you go from thinking it might be cool to do x to actually doing x. And at that
point high standards are not merely useless but positively harmful. There are
a few people who start too many new projects, but far more, I suspect, who are
deterred by fear of failure from starting projects that would have succeeded
if they had.
But if we couldn't benefit as kids from the knowledge that our treehouses were
on the path to grownup projects, we can at least benefit as grownups from
knowing that our projects are on a path that stretches back to treehouses.
Remember that careless confidence you had as a kid when starting something
new? That would be a powerful thing to recapture.
If it's harder as adults to retain that kind of confidence, we at least tend
to be more aware of what we're doing. Kids bounce, or are herded, from one
kind of work to the next, barely realizing what's happening to them. Whereas
we know more about different types of work and have more control over which we
do. Ideally we can have the best of both worlds: to be deliberate in choosing
to work on projects of our own, and carelessly confident in starting new ones.
**Notes**
[1] "Hobby" is a curious word. Now it means work that isn't _real_ work — work
that one is not to be judged by — but originally it just meant an obsession in
a fairly general sense (even a political opinion, for example) that one
metaphorically rode as a child rides a hobby-horse. It's hard to say if its
recent, narrower meaning is a change for the better or the worse. For sure
there are lots of false positives — lots of projects that end up being
important but are dismissed initially as mere hobbies. But on the other hand,
the concept provides valuable cover for projects in the early, ugly duckling
phase.
[2] Tiger parents, as parents so often do, are fighting the last war. Grades
mattered more in the old days when the route to success was to acquire
[_credentials_](credentials.html) while ascending some predefined ladder. But
it's just as well that their tactics are focused on grades. How awful it would
be if they invaded the territory of projects, and thereby gave their kids a
distaste for this kind of work by forcing them to do it. Grades are already a
grim, fake world, and aren't harmed much by parental interference, but working
on one's own projects is a more delicate, private thing that could be damaged
very easily.
[3] The complicated, gradual edge between working on one's own projects and
collaborating with others is one reason there is so much disagreement about
the idea of the "lone genius." In practice people collaborate (or not) in all
kinds of different ways, but the idea of the lone genius is definitely not a
myth. There's a core of truth to it that goes with a certain way of working.
[4] Collaboration is powerful too. The optimal organization would combine
collaboration and ownership in such a way as to do the least damage to each.
Interestingly, companies and university departments approach this ideal from
opposite directions: companies insist on collaboration, and occasionally also
manage both to recruit skaters and allow them to skate, and university
departments insist on the ability to do independent research (which is by
custom treated as skating, whether it is or not), and the people they hire
collaborate as much as they choose.
[5] If a company could design its software in such a way that the best newly
arrived programmers always got a clean sheet, it could have a kind of eternal
youth. That might not be impossible. If you had a software backbone defining a
game with sufficiently clear rules, individual programmers could write their
own players.
**Thanks** to Trevor Blackwell, Paul Buchheit, Andy Hertzfeld, Jessica
Livingston, and Peter Norvig for reading drafts of this.
|
March 2006, rev August 2009
Yesterday one of the founders we funded asked me why we started [Y
Combinator](http://ycombinator.com). Or more precisely, he asked if we'd
started YC mainly for fun.
Kind of, but not quite. It is enormously fun to be able to work with Rtm and
Trevor again. I missed that after we sold Viaweb, and for all the years after
I always had a background process running, looking for something we could do
together. There is definitely an aspect of a band reunion to Y Combinator.
Every couple days I slip and call it "Viaweb."
Viaweb we started very explicitly to make money. I was sick of living from one
freelance project to the next, and decided to just work as hard as I could
till I'd made enough to solve the problem once and for all. Viaweb was
sometimes fun, but it wasn't designed for fun, and mostly it wasn't. I'd be
surprised if any startup is. All startups are mostly schleps.
The real reason we started Y Combinator is neither selfish nor virtuous. We
didn't start it mainly to make money; we have no idea what our average returns
might be, and won't know for years. Nor did we start YC mainly to help out
young would-be founders, though we do like the idea, and comfort ourselves
occasionally with the thought that if all our investments tank, we will thus
have been doing something unselfish. (It's oddly nondeterministic.)
The real reason we started Y Combinator is one probably only a
[hacker](gba.html) would understand. We did it because it seems such a great
hack. There are thousands of smart people who could start companies and don't,
and with a relatively small amount of force applied at just the right place,
we can spring on the world a stream of new startups that might otherwise not
have existed.
In a way this is virtuous, because I think startups are a good thing. But
really what motivates us is the completely amoral desire that would motivate
any hacker who looked at some complex device and realized that with a tiny
tweak he could make it run more efficiently. In this case, the device is the
world's economy, which fortunately happens to be open source.
|
April 2007
A few days ago I suddenly realized Microsoft was dead. I was talking to a
young startup founder about how Google was different from Yahoo. I said that
Yahoo had been warped from the start by their fear of Microsoft. That was why
they'd positioned themselves as a "media company" instead of a technology
company. Then I looked at his face and realized he didn't understand. It was
as if I'd told him how much girls liked Barry Manilow in the mid 80s. Barry
who?
Microsoft? He didn't say anything, but I could tell he didn't quite believe
anyone would be frightened of them.
Microsoft cast a shadow over the software world for almost 20 years starting
in the late 80s. I can remember when it was IBM before them. I mostly ignored
this shadow. I never used Microsoft software, so it only affected me
indirectly—for example, in the spam I got from botnets. And because I wasn't
paying attention, I didn't notice when the shadow disappeared.
But it's gone now. I can sense that. No one is even afraid of Microsoft
anymore. They still make a lot of money—so does IBM, for that matter. But
they're not dangerous.
When did Microsoft die, and of what? I know they seemed dangerous as late as
2001, because I wrote an [essay](road.html) then about how they were less
dangerous than they seemed. I'd guess they were dead by 2005. I know when we
started Y Combinator we didn't worry about Microsoft as competition for the
startups we funded. In fact, we've never even invited them to the demo days we
organize for startups to present to investors. We invite Yahoo and Google and
some other Internet companies, but we've never bothered to invite Microsoft.
Nor has anyone there ever even sent us an email. They're in a different world.
What killed them? Four things, I think, all of them occurring simultaneously
in the mid 2000s.
The most obvious is Google. There can only be one big man in town, and they're
clearly it. Google is the most dangerous company now by far, in both the good
and bad senses of the word. Microsoft can at best [limp](http://live.com)
along afterward.
When did Google take the lead? There will be a tendency to push it back to
their IPO in August 2004, but they weren't setting the terms of the debate
then. I'd say they took the lead in 2005. Gmail was one of the things that
put them over the edge. Gmail showed they could do more than search.
Gmail also showed how much you could do with web-based software, if you took
advantage of what later came to be called "Ajax." And that was the second
cause of Microsoft's death: everyone can see the desktop is over. It now seems
inevitable that applications will live on the web—not just email, but
everything, right up to [Photoshop](http://snipshot.com). Even Microsoft sees
that now.
Ironically, Microsoft unintentionally helped create Ajax. The x in Ajax is
from the XMLHttpRequest object, which lets the browser communicate with the
server in the background while displaying a page. (Originally the only way to
communicate with the server was to ask for a new page.) XMLHttpRequest was
created by Microsoft in the late 90s because they needed it for Outlook. What
they didn't realize was that it would be useful to a lot of other people
too—in fact, to anyone who wanted to make web apps work like desktop ones.
The other critical component of Ajax is Javascript, the programming language
that runs in the browser. Microsoft saw the danger of Javascript and tried to
keep it broken for as long as they could. [1] But eventually the open source
world won, by producing Javascript libraries that grew over the brokenness of
Explorer the way a tree grows over barbed wire.
The third cause of Microsoft's death was broadband Internet. Anyone who cares
can have fast Internet access now. And the bigger the pipe to the server, the
less you need the desktop.
The last nail in the coffin came, of all places, from Apple. Thanks to OS X,
Apple has come back from the dead in a way that is extremely rare in
technology. [2] Their victory is so complete that I'm now surprised when I
come across a computer running Windows. Nearly all the people we fund at Y
Combinator use Apple laptops. It was the same in the audience at [startup
school](http://www.bosstalks.com/StartupSchool2007/all_macs_and_all_writing.jpg).
All the computer people use Macs or Linux now. Windows is for grandmas, like
Macs used to be in the 90s. So not only does the desktop no longer matter, no
one who cares about computers uses Microsoft's anyway.
And of course Apple has Microsoft on the run in music too, with TV and phones
on the way.
I'm glad Microsoft is dead. They were like Nero or Commodus—evil in the way
only inherited power can make you. Because remember, the Microsoft monopoly
didn't begin with Microsoft. They got it from IBM. The software business was
overhung by a monopoly from about the mid-1950s to about 2005. For practically
its whole existence, that is. One of the reasons "Web 2.0" has such an air of
euphoria about it is the feeling, conscious or not, that this era of monopoly
may finally be over.
Of course, as a hacker I can't help thinking about how something broken could
be fixed. Is there some way Microsoft could come back? In principle, yes. To
see how, envision two things: (a) the amount of cash Microsoft now has on
hand, and (b) Larry and Sergey making the rounds of all the search engines ten
years ago trying to sell the idea for Google for a million dollars, and being
turned down by everyone.
The surprising fact is, brilliant hackers—dangerously brilliant hackers—can be
had very cheaply, by the standards of a company as rich as Microsoft. They
can't [hire](hiring.html) smart people anymore, but they could buy as many as
they wanted for only an order of magnitude more. So if they wanted to be a
contender again, this is how they could do it:
1. Buy all the good "Web 2.0" startups. They could get substantially all of them for less than they'd have to pay for Facebook.
2. Put them all in a building in Silicon Valley, surrounded by lead shielding to protect them from any contact with Redmond.
I feel safe suggesting this, because they'd never do it. Microsoft's biggest
weakness is that they still don't realize how much they suck. They still think
they can write software in house. Maybe they can, by the standards of the
desktop world. But that world ended a few years ago.
I already know what the reaction to this essay will be. Half the readers will
say that Microsoft is still an enormously profitable company, and that I
should be more careful about drawing conclusions based on what a few people
think in our insular little "Web 2.0" bubble. The other half, the younger
half, will complain that this is old news.
**See also:** [Microsoft is Dead: the Cliffs Notes](cliffsnotes.html)
**Notes**
[1] It doesn't take a conscious effort to make software incompatible. All you
have to do is not work too hard at fixing bugs—which, if you're a big company,
you produce in copious quantities. The situation is analogous to the writing
of "literary theorists." Most don't try to be obscure; they just don't make an
effort to be clear. It wouldn't pay.
[2] In part because Steve Jobs got pushed out by John Sculley in a way that's
rare among technology companies. If Apple's board hadn't made that blunder,
they wouldn't have had to bounce back.
* * *
December 2019
I've seen the same pattern in many different fields: even though lots of
people have worked hard in the field, only a small fraction of the space of
possibilities has been explored, because they've all worked on similar things.
Even the smartest, most imaginative people are surprisingly conservative when
deciding what to work on. People who would never dream of being fashionable in
any other way get sucked into working on fashionable problems.
If you want to try working on unfashionable problems, one of the best places
to look is in fields that people think have already been fully explored:
essays, Lisp, venture funding — you may notice a pattern here. If you can find
a new approach into a big but apparently played out field, the value of
whatever you discover will be [_multiplied_](sun.html) by its enormous surface
area.
The best protection against getting drawn into working on the same things as
everyone else may be to [_genuinely love_](genius.html) what you're doing.
Then you'll continue to work on it even if you make the same mistake as other
people and think that it's too marginal to matter.
* * *
December 2020
As I was deciding what to write about next, I was surprised to find that two
separate essays I'd been planning to write were actually the same.
The first is about how to ace your Y Combinator interview. There has been so
much nonsense written about this topic that I've been meaning for years to
write something telling founders the truth.
The second is about something politicians sometimes say — that the only way to
become a billionaire is by exploiting people — and why this is mistaken.
Keep reading, and you'll learn both simultaneously.
I know the politicians are mistaken because it was my job to predict which
people will become billionaires. I think I can truthfully say that I know as
much about how to do this as anyone. If the key to becoming a billionaire —
the defining feature of billionaires — was to exploit people, then I, as a
professional billionaire scout, would surely realize this and look for people
who would be good at it, just as an NFL scout looks for speed in wide
receivers.
But aptitude for exploiting people is not what Y Combinator looks for at all.
In fact, it's the opposite of what they look for. I'll tell you what they do
look for, by explaining how to convince Y Combinator to fund you, and you can
see for yourself.
What YC looks for, above all, is founders who understand some group of users
and can make what they want. This is so important that it's YC's motto: "Make
something people want."
A big company can to some extent force unsuitable products on unwilling
customers, but a startup doesn't have the power to do that. A startup must
sing for its supper, by making things that genuinely delight its customers.
Otherwise it will never get off the ground.
Here's where things get difficult, both for you as a founder and for the YC
partners trying to decide whether to fund you. In a market economy, it's hard
to make something people want that they don't already have. That's the great
thing about market economies. If other people both knew about this need and
were able to satisfy it, they already would be, and there would be no room for
your startup.
Which means the conversation during your YC interview will have to be about
something new: either a new need, or a new way to satisfy one. And not just
new, but uncertain. If it were certain that the need existed and that you
could satisfy it, that certainty would be reflected in large and rapidly
growing revenues, and you wouldn't be seeking seed funding.
So the YC partners have to guess both whether you've discovered a real need,
and whether you'll be able to satisfy it. That's what they are, at least in
this part of their job: professional guessers. They have 1001 heuristics for
doing this, and I'm not going to tell you all of them, but I'm happy to tell
you the most important ones, because these can't be faked; the only way to
"hack" them would be to do what you should be doing anyway as a founder.
The first thing the partners will try to figure out, usually, is whether what
you're making will ever be something a lot of people want. It doesn't have to
be something a lot of people want now. The product and the market will both
evolve, and will influence each other's evolution. But in the end there has to
be something with a huge market. That's what the partners will be trying to
figure out: is there a path to a huge market? [1]
Sometimes it's obvious there will be a huge market. If
[_Boom_](https://boomsupersonic.com/) manages to ship an airliner at all,
international airlines will have to buy it. But usually it's not obvious.
Usually the path to a huge market is by growing a small market. This idea is
important enough that it's worth coining a phrase for, so let's call one of
these small but growable markets a "larval market."
The perfect example of a larval market might be Apple's market when they were
founded in 1976. In 1976, not many people wanted their own computer. But more
and more started to want one, till now every 10-year-old on the planet wants a
computer (but calls it a "phone").
The ideal combination is the group of founders who are [_"living in the
future"_](startupideas.html) in the sense of being at the leading edge of some
kind of change, and who are building something they themselves want. Most
super-successful startups are of this type. Steve Wozniak wanted a computer.
Mark Zuckerberg wanted to engage online with his college friends. Larry and
Sergey wanted to find things on the web. All these founders were building
things they and their peers wanted, and the fact that they were at the leading
edge of change meant that more people would want these things in the future.
But although the ideal larval market is oneself and one's peers, that's not
the only kind. A larval market might also be regional, for example. You build
something to serve one location, and then expand to others.
The crucial feature of the initial market is that it exist. That may seem like
an obvious point, but the lack of it is the biggest flaw in most startup
ideas. There have to be some people who want what you're building right now,
and want it so urgently that they're willing to use it, bugs and all, even
though you're a small company they've never heard of. There don't have to be
many, but there have to be some. As long as you have some users, there are
straightforward ways to get more: build new features they want, seek out more
people like them, get them to refer you to their friends, and so on. But these
techniques all require some initial seed group of users.
So this is one thing the YC partners will almost certainly dig into during
your interview. Who are your first users going to be, and how do you know they
want this? If I had to decide whether to fund startups based on a single
question, it would be "How do you know people want this?"
The most convincing answer is "Because we and our friends want it." It's even
better when this is followed by the news that you've already built a
prototype, and even though it's very crude, your friends are using it, and
it's spreading by word of mouth. If you can say that and you're not lying, the
partners will switch from default no to default yes. Meaning you're in unless
there's some other disqualifying flaw.
That is a hard standard to meet, though. Airbnb didn't meet it. They had the
first part. They had made something they themselves wanted. But it wasn't
spreading. So don't feel bad if you don't hit this gold standard of
convincingness. If Airbnb didn't hit it, it must be too high.
In practice, the YC partners will be satisfied if they feel that you have a
deep understanding of your users' needs. And the Airbnbs did have that. They
were able to tell us all about what motivated hosts and guests. They knew from
first-hand experience, because they'd been the first hosts. We couldn't ask
them a question they didn't know the answer to. We ourselves were not very
excited about the idea as users, but we knew this didn't prove anything,
because there were lots of successful startups we hadn't been excited about as
users. We were able to say to ourselves "They seem to know what they're
talking about. Maybe they're onto something. It's not growing yet, but maybe
they can figure out how to make it grow during YC." Which they did, about
three weeks into the batch.
The best thing you can do in a YC interview is to teach the partners about
your users. So if you want to prepare for your interview, one of the best ways
to do it is to go talk to your users and find out exactly what they're
thinking. Which is what you should be doing anyway.
This may sound strangely credulous, but the YC partners want to rely on the
founders to tell them about the market. Think about how VCs typically judge
the potential market for an idea. They're not ordinarily domain experts
themselves, so they forward the idea to someone who is, and ask for their
opinion. YC doesn't have time to do this, but if the YC partners can convince
themselves that the founders both (a) know what they're talking about and (b)
aren't lying, they don't need outside domain experts. They can use the
founders themselves as domain experts when evaluating their own idea.
This is why YC interviews aren't pitches. To give as many founders as possible
a chance to get funded, we made interviews as short as we could: 10 minutes.
That is not enough time for the partners to figure out, through the indirect
evidence in a pitch, whether you know what you're talking about and aren't
lying. They need to dig in and ask you questions. There's not enough time for
sequential access. They need random access. [2]
The worst advice I ever heard about how to succeed in a YC interview is that
you should take control of the interview and make sure to deliver the message
you want to. In other words, turn the interview into a pitch. ⟨elaborate
expletive⟩. It is so annoying when people try to do that. You ask them a
question, and instead of answering it, they deliver some obviously
prefabricated blob of pitch. It eats up 10 minutes really fast.
There is no one who can give you accurate advice about what to do in a YC
interview except a current or former YC partner. People who've merely been
interviewed, even successfully, have no idea of this, but interviews take all
sorts of different forms depending on what the partners want to know about
most. Sometimes they're all about the founders, other times they're all about
the idea. Sometimes some very narrow aspect of the idea. Founders sometimes
walk away from interviews complaining that they didn't get to explain their
idea completely. True, but they explained enough.
Since a YC interview consists of questions, the way to do it well is to answer
them well. Part of that is answering them candidly. The partners don't expect
you to know everything. But if you don't know the answer to a question, don't
try to bullshit your way out of it. The partners, like most experienced
investors, are professional bullshit detectors, and you are (hopefully) an
amateur bullshitter. And if you try to bullshit them and fail, they may not
even tell you that you failed. So it's better to be honest than to try to sell
them. If you don't know the answer to a question, say you don't, and tell them
how you'd go about finding it, or tell them the answer to some related
question.
If you're asked, for example, what could go wrong, the worst possible answer
is "nothing." Instead of convincing them that your idea is bullet-proof, this
will convince them that you're a fool or a liar. Far better to go into
gruesome detail. That's what experts do when you ask what could go wrong. The
partners know that your idea is risky. That's what a good bet looks like at
this stage: a tiny probability of a huge outcome.
Ditto if they ask about competitors. Competitors are rarely what kills
startups. Poor execution does. But you should know who your competitors are,
and tell the YC partners candidly what your relative strengths and weaknesses
are. Because the YC partners know that competitors don't kill startups, they
won't hold competitors against you too much. They will, however, hold it
against you if you seem either to be unaware of competitors, or to be
minimizing the threat they pose. They may not be sure whether you're clueless
or lying, but they don't need to be.
The partners don't expect your idea to be perfect. This is seed investing. At
this stage, all they can expect are promising hypotheses. But they do expect
you to be thoughtful and honest. So if trying to make your idea seem perfect
causes you to come off as glib or clueless, you've sacrificed something you
needed for something you didn't.
If the partners are sufficiently convinced that there's a path to a big
market, the next question is whether you'll be able to find it. That in turn
depends on three things: the general qualities of the founders, their specific
expertise in this domain, and the relationship between them. How determined
are the founders? Are they good at building things? Are they resilient enough
to keep going when things go wrong? How strong is their friendship?
Though the Airbnbs only did ok in the idea department, they did spectacularly
well in this department. The story of how they'd funded themselves by making
Obama- and McCain-themed breakfast cereal was the single most important factor
in our decision to fund them. They didn't realize it at the time, but what
seemed to them an irrelevant story was in fact fabulously good evidence of
their qualities as founders. It showed they were resourceful and determined,
and could work together.
It wasn't just the cereal story that showed that, though. The whole interview
showed that they cared. They weren't doing this just for the money, or because
startups were cool. The reason they were working so hard on this company was
because it was their project. They had discovered an interesting new idea, and
they just couldn't let it go.
Mundane as it sounds, that's the most powerful motivator of all, not just in
startups, but in most ambitious undertakings: to be [_genuinely
interested_](genius.html) in what you're building. This is what really drives
billionaires, or at least the ones who become billionaires from starting
companies. The company is their project.
One thing few people realize about billionaires is that all of them could have
stopped sooner. They could have gotten acquired, or found someone else to run
the company. Many founders do. The ones who become really rich are the ones
who keep working. And what makes them keep working is not just money. What
keeps them working is the same thing that keeps anyone else working when they
could stop if they wanted to: that there's nothing else they'd rather do.
That, not exploiting people, is the defining quality of people who become
billionaires from starting companies. So that's what YC looks for in founders:
authenticity. People's motives for starting startups are usually mixed.
They're usually doing it from some combination of the desire to make money,
the desire to seem cool, genuine interest in the problem, and unwillingness to
work for someone else. The last two are more powerful motivators than the
first two. It's ok for founders to want to make money or to seem cool. Most
do. But if the founders seem like they're doing it _just_ to make money or
_just_ to seem cool, they're not likely to succeed on a big scale. The
founders who are doing it for the money will take the first sufficiently large
acquisition offer, and the ones who are doing it to seem cool will rapidly
discover that there are much less painful ways of seeming cool. [3]
Y Combinator certainly sees founders whose m.o. is to exploit people. YC is a
magnet for them, because they want the YC brand. But when the YC partners
detect someone like that, they reject them. If bad people made good founders,
the YC partners would face a moral dilemma. Fortunately they don't, because
bad people make bad founders. This exploitative type of founder is not going
to succeed on a large scale, and in fact probably won't even succeed on a
small one, because they're always going to be taking shortcuts. They see YC
itself as a shortcut.
Their exploitation usually begins with their own cofounders, which is
disastrous, since the cofounders' relationship is the foundation of the
company. Then it moves on to the users, which is also disastrous, because the
sort of early adopters a successful startup wants as its initial users are the
hardest to fool. The best this kind of founder can hope for is to keep the
edifice of deception tottering along until some acquirer can be tricked into
buying it. But that kind of acquisition is never very big. [4]
If professional billionaire scouts know that exploiting people is not the
skill to look for, why do some politicians think this is the defining quality
of billionaires?
I think they start from the feeling that it's wrong that one person could have
so much more money than another. It's understandable where that feeling comes
from. It's in our DNA, and even in the DNA of other species.
If they limited themselves to saying that it made them feel bad when one
person had so much more money than other people, who would disagree? It makes
me feel bad too, and I think people who make a lot of money have a moral
obligation to use it for the common good. The mistake they make is to jump
from feeling bad that some people are much richer than others to the
conclusion that there's no legitimate way to make a very large amount of
money. Now we're getting into statements that are not only falsifiable, but
false.
There are certainly some people who become rich by doing bad things. But there
are also plenty of people who behave badly and don't make that much from it.
There is no correlation — in fact, probably an inverse correlation — between
how badly you behave and how much money you make.
The greatest danger of this nonsense may not even be that it sends policy
astray, but that it misleads ambitious people. Can you imagine a better way to
destroy social mobility than by telling poor kids that the way to get rich is
by exploiting people, while the rich kids know, from having watched the
preceding generation do it, how it's really done?
I'll tell you how it's really done, so you can at least tell your own kids the
truth. It's all about users. The most reliable way to become a billionaire is
to start a company that [_grows fast_](growth.html), and the way to grow fast
is to make what users want. Newly started startups have no choice but to
delight users, or they'll never even get rolling. But this never stops being
the lodestar, and bigger companies take their eye off it at their peril. Stop
delighting users, and eventually someone else will.
Users are what the partners want to know about in YC interviews, and what I
want to know about when I talk to founders that we funded ten years ago and
who are billionaires now. What do users want? What new things could you build
for them? Founders who've become billionaires are always eager to talk about
that topic. That's how they became billionaires.
**Notes**
[1] The YC partners have so much practice doing this that they sometimes see
paths that the founders themselves haven't seen yet. The partners don't try to
seem skeptical, as buyers in transactions often do to increase their leverage.
Although the founders feel their job is to convince the partners of the
potential of their idea, these roles are not infrequently reversed, and the
founders leave the interview feeling their idea has more potential than they
realized.
[2] In practice, 7 minutes would be enough. You rarely change your mind at
minute 8. But 10 minutes is socially convenient.
[3] I myself took the first sufficiently large acquisition offer in my first
startup, so I don't blame founders for doing this. There's nothing wrong with
starting a startup to make money. You need to make money somehow, and for some
people startups are the most efficient way to do it. I'm just saying that
these are not the startups that get really big.
[4] Not these days, anyway. There were some big ones during the Internet
Bubble, and indeed some big IPOs.
**Thanks** to Trevor Blackwell, Jessica Livingston, Robert Morris, Geoff
Ralston, and Harj Taggar for reading drafts of this.
|
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
October 2005
_(This essay is derived from a talk at the 2005 [Startup
School](http://startupschool.org).)_
How do you get good ideas for [startups](start.html)? That's probably the
number one question people ask me.
I'd like to reply with another question: why do people think it's hard to come
up with ideas for startups?
That might seem a stupid thing to ask. Why do they _think_ it's hard? If
people can't do it, then it _is_ hard, at least for them. Right?
Well, maybe not. What people usually say is not that they can't think of
ideas, but that they don't have any. That's not quite the same thing. It could
be the reason they don't have any is that they haven't tried to generate them.
I think this is often the case. I think people believe that coming up with
ideas for startups is very hard-- that it _must_ be very hard-- and so they
don't try to do it. They assume ideas are like miracles: they either pop into
your head or they don't.
I also have a theory about why people think this. They overvalue ideas. They
think creating a startup is just a matter of implementing some fabulous
initial idea. And since a successful startup is worth millions of dollars, a
good idea is therefore a million dollar idea.
If coming up with an idea for a startup equals coming up with a million dollar
idea, then of course it's going to seem hard. Too hard to bother trying. Our
instincts tell us something so valuable would not be just lying around for
anyone to discover.
Actually, startup ideas are not million dollar ideas, and here's an experiment
you can try to prove it: just try to sell one. Nothing evolves faster than
markets. The fact that there's no market for startup ideas suggests there's no
demand. Which means, in the narrow sense of the word, that startup ideas are
worthless.
**Questions**
The fact is, most startups end up nothing like the initial idea. It would be
closer to the truth to say the main value of your initial idea is that, in the
process of discovering it's broken, you'll come up with your real idea.
The initial idea is just a starting point-- not a blueprint, but a question.
It might help if they were expressed that way. Instead of saying that your
idea is to make a collaborative, web-based spreadsheet, say: could one make a
collaborative, web-based spreadsheet? A few grammatical tweaks, and a woefully
incomplete idea becomes a promising question to explore.
There's a real difference, because an assertion provokes objections in a way a
question doesn't. If you say: I'm going to build a web-based spreadsheet, then
critics-- the most dangerous of which are in your own head-- will immediately
reply that you'd be competing with Microsoft, that you couldn't give people
the kind of UI they expect, that users wouldn't want to have their data on
your servers, and so on.
A question doesn't seem so challenging. It becomes: let's try making a web-
based spreadsheet and see how far we get. And everyone knows that if you tried
this you'd be able to make _something_ useful. Maybe what you'd end up with
wouldn't even be a spreadsheet. Maybe it would be some kind of new
spreadsheet-like collaboration tool that doesn't even have a name yet. You
wouldn't have thought of something like that except by implementing your way
toward it.
Treating a startup idea as a question changes what you're looking for. If an
idea is a blueprint, it has to be right. But if it's a question, it can be
wrong, so long as it's wrong in a way that leads to more ideas.
One valuable way for an idea to be wrong is to be only a partial solution.
When someone's working on a problem that seems too big, I always ask: is there
some way to bite off some subset of the problem, then gradually expand from
there? That will generally work unless you get trapped on a local maximum,
like 1980s-style AI, or C.
**Upwind**
So far, we've reduced the problem from thinking of a million dollar idea to
thinking of a mistaken question. That doesn't seem so hard, does it?
To generate such questions you need two things: to be familiar with promising
new technologies, and to have the right kind of friends. New technologies are
the ingredients startup ideas are made of, and conversations with friends are
the kitchen they're cooked in.
Universities have both, and that's why so many startups grow out of them.
They're filled with new technologies, because they're trying to produce
research, and only things that are new count as research. And they're full of
exactly the right kind of people to have ideas with: the other students, who
will be not only smart but elastic-minded to a fault.
The opposite extreme would be a well-paying but boring job at a big company.
Big companies are biased against new technologies, and the people you'd meet
there would be wrong too.
In an [essay](hs.html) I wrote for high school students, I said a good rule of
thumb was to stay upwind-- to work on things that maximize your future
options. The principle applies for adults too, though perhaps it has to be
modified to: stay upwind for as long as you can, then cash in the potential
energy you've accumulated when you need to pay for kids.
I don't think people consciously realize this, but one reason downwind jobs
like churning out Java for a bank pay so well is precisely that they are
downwind. The market price for that kind of work is higher because it gives
you fewer options for the future. A job that lets you work on exciting new
stuff will tend to pay less, because part of the compensation is in the form
of the new skills you'll learn.
Grad school is the other end of the spectrum from a coding job at a big
company: the pay's low but you spend most of your time working on new stuff.
And of course, it's called "school," which makes that clear to everyone,
though in fact all jobs are some percentage school.
The right environment for having startup ideas need not be a university per
se. It just has to be a situation with a large percentage of school.
It's obvious why you want exposure to new technology, but why do you need
other people? Can't you just think of new ideas yourself? The empirical answer
is: no. Even Einstein needed people to bounce ideas off. Ideas get developed
in the process of explaining them to the right kind of person. You need that
resistance, just as a carver needs the resistance of the wood.
This is one reason Y Combinator has a rule against investing in startups with
only one founder. Practically every successful company has at least two. And
because startup founders work under great pressure, it's critical they be
friends.
I didn't realize it till I was writing this, but that may help explain why
there are so few female startup founders. I read on the Internet (so it must
be true) that only 1.7% of VC-backed startups are founded by women. The
percentage of female hackers is small, but not that small. So why the
discrepancy?
When you realize that successful startups tend to have multiple founders who
were already friends, a possible explanation emerges. People's best friends
are likely to be of the same sex, and if one group is a minority in some
population, _pairs_ of them will be a minority squared. [1]
**Doodling**
What these groups of co-founders do together is more complicated than just
sitting down and trying to think of ideas. I suspect the most productive setup
is a kind of together-alone-together sandwich. Together you talk about some
hard problem, probably getting nowhere. Then, the next morning, one of you has
an idea in the shower about how to solve it. He runs eagerly to tell the
others, and together they work out the kinks.
What happens in that shower? It seems to me that ideas just pop into my head.
But can we say more than that?
Taking a shower is like a form of meditation. You're alert, but there's
nothing to distract you. It's in a situation like this, where your mind is
free to roam, that it bumps into new ideas.
What happens when your mind wanders? It may be like doodling. Most people have
characteristic ways of doodling. This habit is unconscious, but not random: I
found my doodles changed after I started studying painting. I started to make
the kind of gestures I'd make if I were drawing from life. They were atoms of
drawing, but arranged randomly. [2]
Perhaps letting your mind wander is like doodling with ideas. You have certain
mental gestures you've learned in your work, and when you're not paying
attention, you keep making these same gestures, but somewhat randomly. In
effect, you call the same functions on random arguments. That's what a
metaphor is: a function applied to an argument of the wrong type.
Conveniently, as I was writing this, my mind wandered: would it be useful to
have metaphors in a programming language? I don't know; I don't have time to
think about this. But it's convenient because this is an example of what I
mean by habits of mind. I spend a lot of time thinking about language design,
and my habit of always asking "would x be useful in a programming language"
just got invoked.
If new ideas arise like doodles, this would explain why you have to work at
something for a while before you have any. It's not just that you can't judge
ideas till you're an expert in a field. You won't even generate ideas, because
you won't have any habits of mind to invoke.
Of course the habits of mind you invoke on some field don't have to be derived
from working in that field. In fact, it's often better if they're not. You're
not just looking for good ideas, but for good _new_ ideas, and you have a
better chance of generating those if you combine stuff from distant fields. As
hackers, one of our habits of mind is to ask, could one open-source x? For
example, what if you made an open-source operating system? A fine idea, but
not very novel. Whereas if you ask, could you make an open-source play? you
might be onto something.
Are some kinds of work better sources of habits of mind than others? I suspect
harder fields may be better sources, because to attack hard problems you need
powerful solvents. I find math is a good source of metaphors-- good enough
that it's worth studying just for that. Related fields are also good sources,
especially when they're related in unexpected ways. Everyone knows computer
science and electrical engineering are related, but precisely because everyone
knows it, importing ideas from one to the other doesn't yield great profits.
It's like importing something from Wisconsin to Michigan. Whereas (I claim)
hacking and [painting](hp.html) are also related, in the sense that hackers
and painters are both [makers](taste.html), and this source of new ideas is
practically virgin territory.
**Problems**
In theory you could stick together ideas at random and see what you came up
with. What if you built a peer-to-peer dating site? Would it be useful to have
an automatic book? Could you turn theorems into a commodity? When you assemble
ideas at random like this, they may not be just stupid, but semantically ill-
formed. What would it even mean to make theorems a commodity? You got me. I
didn't think of that idea, just its name.
You might come up with something useful this way, but I never have. It's like
knowing a fabulous sculpture is hidden inside a block of marble, and all you
have to do is remove the marble that isn't part of it. It's an encouraging
thought, because it reminds you there is an answer, but it's not much use in
practice because the search space is too big.
I find that to have good ideas I need to be working on some problem. You can't
start with randomness. You have to start with a problem, then let your mind
wander just far enough for new ideas to form.
In a way, it's harder to see problems than their solutions. Most people prefer
to remain in denial about problems. It's obvious why: problems are irritating.
They're problems! Imagine if people in 1700 saw their lives the way we'd see
them. It would have been unbearable. This denial is such a powerful force
that, even when presented with possible solutions, people often prefer to
believe they wouldn't work.
I saw this phenomenon when I worked on spam filters. In 2002, most people
preferred to ignore spam, and most of those who didn't preferred to believe
the heuristic filters then available were the best you could do.
I found spam intolerable, and I felt it had to be possible to recognize it
statistically. And it turns out that was all you needed to solve the problem.
The algorithm I used was ridiculously simple. Anyone who'd really tried to
solve the problem would have found it. It was just that no one had really
tried to solve the problem. [3]
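The simple algorithm alluded to was a form of naive Bayesian filtering. Here is a minimal sketch of the idea, with invented names rather than the original code:

```typescript
// Naive Bayesian spam scoring: estimate P(spam | token) per token from
// labeled corpora, then combine the most telling tokens. A sketch only.
type Counts = Map<string, number>;

function tokenize(doc: string): string[] {
  return doc.toLowerCase().split(/[^a-z0-9'$-]+/).filter(t => t.length > 0);
}

function count(corpus: string[]): Counts {
  const counts: Counts = new Map();
  for (const doc of corpus)
    for (const t of tokenize(doc))
      counts.set(t, (counts.get(t) ?? 0) + 1);
  return counts;
}

// P(spam | token), clamped so no single token is ever decisive.
function tokenProb(t: string, spam: Counts, ham: Counts,
                   nSpam: number, nHam: number): number {
  const b = spam.get(t) ?? 0;
  const g = 2 * (ham.get(t) ?? 0); // weight ham double: false positives hurt more
  if (b + g < 1) return 0.4;       // unseen tokens lean slightly innocent
  const p = (b / nSpam) / (b / nSpam + g / nHam);
  return Math.min(0.99, Math.max(0.01, p));
}

// Combine the 15 most "interesting" tokens naive-Bayes style.
function spamScore(msg: string, spam: Counts, ham: Counts,
                   nSpam: number, nHam: number): number {
  const probs = Array.from(new Set(tokenize(msg)))
    .map(t => tokenProb(t, spam, ham, nSpam, nHam))
    .sort((a, b) => Math.abs(b - 0.5) - Math.abs(a - 0.5))
    .slice(0, 15);
  const p = probs.reduce((acc, x) => acc * x, 1);
  const q = probs.reduce((acc, x) => acc * (1 - x), 1);
  return p / (p + q); // above ~0.9, treat as spam
}
```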
Let me repeat that recipe: finding the problem intolerable and feeling it must
be possible to solve it. Simple as it seems, that's the recipe for a lot of
startup ideas.
**Wealth**
So far most of what I've said applies to ideas in general. What's special
about startup ideas? Startup ideas are ideas for companies, and companies have
to make money. And the way to make money is to make something people want.
Wealth is what people want. I don't mean that as some kind of philosophical
statement; I mean it as a tautology.
So an idea for a startup is an idea for something people want. Wouldn't any
good idea be something people want? Unfortunately not. I think new theorems
are a fine thing to create, but there is no great demand for them. Whereas
there appears to be great demand for celebrity gossip magazines. Wealth is
defined democratically. Good ideas and valuable ideas are not quite the same
thing; the difference is individual tastes.
But valuable ideas are very close to good ideas, especially in technology. I
think they're so close that you can get away with working as if the goal were
to discover good ideas, so long as, in the final stage, you stop and ask: will
people actually pay for this? Only a few ideas are likely to make it that far
and then get shot down; RPN calculators might be one example.
One way to make something people want is to look at stuff people use now
that's broken. Dating sites are a prime example. They have millions of users,
so they must be promising something people want. And yet they work horribly.
Just ask anyone who uses them. It's as if they used the worse-is-better
approach but stopped after the first stage and handed the thing over to
marketers.
Of course, the most obvious breakage in the average computer user's life is
Windows itself. But this is a special case: you can't defeat a monopoly by a
frontal attack. Windows can and will be overthrown, but not by giving people a
better desktop OS. The way to kill it is to redefine the problem as a superset
of the current one. The problem is not, what operating system should people
use on desktop computers? but how should people use applications? There are
answers to that question that don't even involve desktop computers.
Everyone thinks Google is going to solve this problem, but it is a very subtle
one, so subtle that a company as big as Google might well get it wrong. I
think the odds are better than 50-50 that the Windows killer-- or more
accurately, Windows transcender-- will come from some little startup.
Another classic way to make something people want is to take a luxury and make
it into a commodity. People must want something if they pay a lot for it. And
it is a very rare product that can't be made dramatically cheaper if you try.
This was Henry Ford's plan. He made cars, which had been a luxury item, into a
commodity. But the idea is much older than Henry Ford. Water mills transformed
mechanical power from a luxury into a commodity, and they were used in the
Roman empire. Arguably pastoralism transformed a luxury into a commodity.
When you make something cheaper you can sell more of them. But if you make
something dramatically cheaper you often get qualitative changes, because
people start to use it in different ways. For example, once computers get so
cheap that most people can have one of their own, you can use them as
communication devices.
Often to make something dramatically cheaper you have to redefine the problem.
The Model T didn't have all the features previous cars did. It only came in
black, for example. But it solved the problem people cared most about, which
was getting from place to place.
One of the most useful mental habits I know I learned from Michael Rabin: that
the best way to solve a problem is often to redefine it. A lot of people use
this technique without being consciously aware of it, but Rabin was
spectacularly explicit. You need a big prime number? Those are pretty
expensive. How about if I give you a big number that only has a 10 to the
minus 100 chance of not being prime? Would that do? Well, probably; I mean,
that's probably smaller than the chance that I'm imagining all this anyway.
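Rabin's redefinition is now known as the Miller-Rabin primality test. A minimal sketch (the randomness here is deliberately coarse, which is fine for illustration):

```typescript
// Miller-Rabin: if n is composite, each random base exposes it with
// probability > 3/4, so k rounds leave error < 4^-k.

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// 167 rounds give error < 4^-167 < 10^-100: Rabin's "big number that
// only has a 10 to the minus 100 chance of not being prime."
function isProbablyPrime(n: bigint, rounds = 167): boolean {
  if (n < 2n) return false;
  for (const p of [2n, 3n, 5n, 7n]) if (n % p === 0n) return n === p;
  // Write n - 1 as d * 2^r with d odd.
  let d = n - 1n, r = 0;
  while ((d & 1n) === 0n) { d >>= 1n; r++; }
  for (let i = 0; i < rounds; i++) {
    const a = 2n + BigInt(Math.floor(Math.random() * 1e9)) % (n - 3n); // coarse
    let x = modPow(a, d, n);
    if (x === 1n || x === n - 1n) continue;
    let composite = true;
    for (let j = 0; j < r - 1; j++) {
      x = (x * x) % n;
      if (x === n - 1n) { composite = false; break; }
    }
    if (composite) return false; // base a proves n composite
  }
  return true;
}
```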
Redefining the problem is a particularly juicy heuristic when you have
competitors, because it's so hard for rigid-minded people to follow. You can
work in plain sight and they don't realize the danger. Don't worry about us.
We're just working on search. Do one thing and do it well, that's our motto.
Making things cheaper is actually a subset of a more general technique: making
things easier. For a long time it was most of making things easier, but now
that the things we build are so complicated, there's another rapidly growing
subset: making things easier to _use_.
This is an area where there's great room for improvement. What you want to be
able to say about technology is: it just works. How often do you say that now?
Simplicity takes effort-- genius, even. The average programmer seems to
produce UI designs that are almost willfully bad. I was trying to use the
stove at my mother's house a couple weeks ago. It was a new one, and instead
of physical knobs it had buttons and an LED display. I tried pressing some
buttons I thought would cause it to get hot, and you know what it said? "Err."
Not even "Error." "Err." You can't just say "Err" to the user of a _stove_.
You should design the UI so that errors are impossible. And the boneheads who
designed this stove even had an example of such a UI to work from: the old
one. You turn one knob to set the temperature and another to set the timer.
What was wrong with that? It just worked.
It seems that, for the average engineer, more options just means more rope to
hang yourself. So if you want to start a startup, you can take almost any
existing technology produced by a big company, and assume you could build
something way easier to use.
**Design for Exit**
Success for a startup approximately equals getting bought. You need some kind
of exit strategy, because you can't get the smartest people to work for you
without giving them options likely to be worth something. Which means you
either have to get bought or go public, and the number of startups that go
public is very small.
If success probably means getting bought, should you make that a conscious
goal? The old answer was no: you were supposed to pretend that you wanted to
create a giant, public company, and act surprised when someone made you an
offer. Really, you want to buy us? Well, I suppose we'd consider it, for the
right price.
I think things are changing. If 98% of the time success means getting bought,
why not be open about it? If 98% of the time you're doing product development
on spec for some big company, why not think of that as your task? One
advantage of this approach is that it gives you another source of ideas: look
at big companies, think what they [should](http://kiko.com) be doing, and do
it yourself. Even if they already know it, you'll probably be done faster.
Just be sure to make something multiple acquirers will want. Don't fix
Windows, because the only potential acquirer is Microsoft, and when there's
only one acquirer, they don't have to hurry. They can take their time and copy
you instead of buying you. If you want to get market price, work on something
where there's competition.
If an increasing number of startups are created to do product development on
spec, it will be a natural counterweight to monopolies. Once some type of
technology is captured by a monopoly, it will only evolve at big company rates
instead of startup rates, whereas alternatives will evolve with especial
speed. A free market interprets monopoly as damage and routes around it.
**The Woz Route**
The most productive way to generate startup ideas is also the most unlikely-
sounding: by accident. If you look at how famous startups got started, a lot
of them weren't initially supposed to be startups. Lotus began with a program
Mitch Kapor wrote for a friend. Apple got started because Steve Wozniak wanted
to build microcomputers, and his employer, Hewlett-Packard, wouldn't let him
do it at work. Yahoo began as David Filo's personal collection of links.
This is not the only way to start startups. You can sit down and consciously
come up with an idea for a company; we did. But measured in total market cap,
the build-stuff-for-yourself model might be more fruitful. It certainly has to
be the most fun way to come up with startup ideas. And since a startup ought
to have multiple founders who were already friends before they decided to
start a company, the rather surprising conclusion is that the best way to
generate startup ideas is to do what hackers do for fun: cook up amusing hacks
with your friends.
It seems like it violates some kind of conservation law, but there it is: the
best way to get a "million dollar idea" is just to do what hackers enjoy doing
anyway.
**Notes**
[1] This phenomenon may account for a number of discrepancies currently blamed
on various forbidden isms. Never attribute to malice what can be explained by
math.
[2] A lot of classic abstract expressionism is doodling of this type: artists
trained to paint from life using the same gestures but without using them to
represent anything. This explains why such paintings are (slightly) more
interesting than random marks would be.
[3] Bill Yerazunis had solved the problem, but he got there by another path.
He made a general-purpose file classifier so good that it also worked for
spam.
* * *
October 2021
If you asked people what was special about Einstein, most would say that he
was really smart. Even the ones who tried to give you a more sophisticated-
sounding answer would probably think this first. Till a few years ago I would
have given the same answer myself. But that wasn't what was special about
Einstein. What was special about him was that he had important new ideas.
Being very smart was a necessary precondition for having those ideas, but the
two are not identical.
It may seem a hair-splitting distinction to point out that intelligence and
its consequences are not identical, but it isn't. There's a big gap between
them. Anyone who's spent time around universities and research labs knows how
big. There are a lot of genuinely smart people who don't achieve very much.
I grew up thinking that being smart was the thing most to be desired. Perhaps
you did too. But I bet it's not what you really want. Imagine you had a choice
between being really smart but discovering nothing new, and being less smart
but discovering lots of new ideas. Surely you'd take the latter. I would. The
choice makes me uncomfortable, but when you see the two options laid out
explicitly like that, it's obvious which is better.
The reason the choice makes me uncomfortable is that being smart still feels
like the thing that matters, even though I know intellectually that it isn't.
I spent so many years thinking it was. The circumstances of childhood are a
perfect storm for fostering this illusion. Intelligence is much easier to
measure than the value of new ideas, and you're constantly being judged by it.
Whereas even the kids who will ultimately discover new things aren't usually
discovering them yet. For kids that way inclined, intelligence is the only
game in town.
There are more subtle reasons too, which persist long into adulthood.
Intelligence wins in conversation, and thus becomes the basis of the dominance
hierarchy. [1] Plus having new ideas is such a new thing historically, and
even now done by so few people, that society hasn't yet assimilated the fact
that this is the actual destination, and intelligence merely a means to an
end. [2]
Why do so many smart people fail to discover anything new? Viewed from that
direction, the question seems a rather depressing one. But there's another way
to look at it that's not just more optimistic, but more interesting as well.
Clearly intelligence is not the only ingredient in having new ideas. What are
the other ingredients? Are they things we could cultivate?
Because the trouble with intelligence, they say, is that it's mostly inborn.
The evidence for this seems fairly convincing, especially considering that
most of us don't want it to be true, and the evidence thus has to face a stiff
headwind. But I'm not going to get into that question here, because it's the
other ingredients in new ideas that I care about, and it's clear that many of
them can be cultivated.
That means the truth is excitingly different from the story I got as a kid. If
intelligence is what matters, and also mostly inborn, the natural consequence
is a sort of _Brave New World_ fatalism. The best you can do is figure out
what sort of work you have an "aptitude" for, so that whatever intelligence
you were born with will at least be put to the best use, and then work as hard
as you can at it. Whereas if intelligence isn't what matters, but only one of
several ingredients in what does, and many of those aren't inborn, things get
more interesting. You have a lot more control, but the problem of how to
arrange your life becomes that much more complicated.
So what are the other ingredients in having new ideas? The fact that I can
even ask this question proves the point I raised earlier — that society hasn't
assimilated the fact that it's this and not intelligence that matters.
Otherwise we'd all know the answers to such a fundamental question. [3]
I'm not going to try to provide a complete catalogue of the other ingredients
here. This is the first time I've posed the question to myself this way, and I
think it may take a while to answer. But I wrote recently about one of the
most important: an obsessive [_interest_](genius.html) in a particular topic.
And this can definitely be cultivated.
Another quality you need in order to discover new ideas is [_independent-
mindedness_](think.html). I wouldn't want to claim that this is distinct from
intelligence — I'd be reluctant to call someone smart who wasn't independent-
minded — but though largely inborn, this quality seems to be something that
can be cultivated to some extent.
There are general techniques for having new ideas — for example, for working
on your own [_projects_](own.html) and for overcoming the obstacles you face
with [_early_](early.html) work — and these can all be learned. Some of them
can be learned by societies. And there are also collections of techniques for
generating specific types of new ideas, like [startup
ideas](startupideas.html) and [essay topics](essay.html).
And of course there are a lot of fairly mundane ingredients in discovering new
ideas, like [_working hard_](hwh.html), getting enough sleep, avoiding certain
kinds of stress, having the right colleagues, and finding tricks for working
on what you want even when it's not what you're supposed to be working on.
Anything that prevents people from doing great work has an inverse that helps
them to. And this class of ingredients is not as boring as it might seem at
first. For example, having new ideas is generally associated with youth. But
perhaps it's not youth per se that yields new ideas, but specific things that
come with youth, like good health and lack of responsibilities. Investigating
this might lead to strategies that will help people of any age to have better
ideas.
One of the most surprising ingredients in having new ideas is writing ability.
There's a class of new ideas that are best discovered by writing essays and
books. And that "by" is deliberate: you don't think of the ideas first, and
then merely write them down. There is a kind of thinking that one does by
writing, and if you're clumsy at writing, or don't enjoy doing it, that will
get in your way if you try to do this kind of thinking. [4]
I predict the gap between intelligence and new ideas will turn out to be an
interesting place. If we think of this gap merely as a measure of unrealized
potential, it becomes a sort of wasteland that we try to hurry through with
our eyes averted. But if we flip the question, and start inquiring into the
other ingredients in new ideas that it implies must exist, we can mine this
gap for discoveries about discovery.
**Notes**
[1] What wins in conversation depends on who with. It ranges from mere
aggressiveness at the bottom, through quick-wittedness in the middle, to
something closer to actual intelligence at the top, though probably always
with some component of quick-wittedness.
[2] Just as intelligence isn't the only ingredient in having new ideas, having
new ideas isn't the only thing intelligence is useful for. It's also useful,
for example, in diagnosing problems and figuring out how to fix them. Both
overlap with having new ideas, but both have an end that doesn't.
Those ways of using intelligence are much more common than having new ideas.
And in such cases intelligence is even harder to distinguish from its
consequences.
[3] Some would attribute the difference between intelligence and having new
ideas to "creativity," but this doesn't seem a very useful term. As well as
being pretty vague, it's shifted half a frame sideways from what we care
about: it's neither separable from intelligence, nor responsible for all the
difference between intelligence and having new ideas.
[4] Curiously enough, this essay is an example. It started out as an essay
about writing ability. But when I came to the distinction between intelligence
and having new ideas, that seemed so much more important that I turned the
original essay inside out, making that the topic and my original topic one of
the points in it. As in many other fields, that level of reworking is easier
to contemplate once you've had a lot of practice.
**Thanks** to Trevor Blackwell, Patrick Collison, Jessica Livingston, Robert
Morris, Michael Nielsen, and Lisa Randall for reading drafts of this.
* * *
March 2011
Yesterday Fred Wilson published a remarkable
[post](http://avc.com/2011/03/airbnb) about missing
[Airbnb](http://airbnb.com). VCs miss good startups all the time, but it's
extraordinarily rare for one to talk about it publicly till long afterward. So
that post is further evidence of what a rare bird Fred is. He's probably the
nicest VC I know.
Reading Fred's post made me go back and look at the emails I exchanged with
him at the time, trying to convince him to invest in Airbnb. It was quite
interesting to read. You can see Fred's mind at work as he circles the deal.
Fred and the Airbnb founders have generously agreed to let me publish this
email exchange (with one sentence redacted about something that's
strategically important to Airbnb and not an important part of the
conversation). It's an interesting illustration of an element of the startup
ecosystem that few except the participants ever see: investors trying to
convince one another to invest in their portfolio companies. Hundreds if not
thousands of conversations of this type are happening now, but if one has ever
been published, I haven't seen it. The Airbnbs themselves never even saw these
emails at the time.
We do a lot of this behind the scenes stuff at YC, because we invest in such a
large number of companies, and we invest so early that investors sometimes
need a lot of convincing to see their merits. I don't always try as hard as
this though. Fred must have found me quite annoying.
* * *
from: Paul Graham
to: Fred Wilson, AirBedAndBreakfast Founders
date: Fri, Jan 23, 2009 at 11:42 AM
subject: meet the airbeds
One of the startups from the batch that just started, AirbedAndBreakfast,
is in NYC right now meeting their users. (NYC is their biggest
market.) I'd recommend meeting them if your schedule allows.
I'd been thinking to myself that though these guys were going to
do really well, I should introduce them to angels, because VCs would
never go for it. But then I thought maybe I should give you more
credit. You'll certainly like meeting them. Be sure to ask about
how they funded themselves with breakfast cereal.
There's no reason this couldn't be as big as Ebay. And this team
is the right one to do it.
--pg
from: Brian Chesky
to: Paul Graham
cc: Nathan Blecharczyk, Joe Gebbia
date: Fri, Jan 23, 2009 at 11:40 AM
subject: Re: meet the airbeds
PG,
Thanks for the intro!
Brian
from: Paul Graham
to: Brian Chesky
cc: Nathan Blecharczyk, Joe Gebbia
date: Fri, Jan 23, 2009 at 12:38 PM
subject: Re: meet the airbeds
It's a longshot, at this stage, but if there was any VC who'd get
you guys, it would be Fred. He is the least suburban-golf-playing
VC I know.
He likes to observe startups for a while before acting, so don't
be bummed if he seems ambivalent.
--pg
from: Fred Wilson
to: Paul Graham,
date: Sun, Jan 25, 2009 at 5:28 PM
subject: Re: meet the airbeds
Thanks Paul
We are having a bit of a debate inside our partnership about the
airbed concept. We'll finish that debate tomorrow in our weekly
meeting and get back to you with our thoughts
Thanks
Fred
from: Paul Graham
to: Fred Wilson
date: Sun, Jan 25, 2009 at 10:48 PM
subject: Re: meet the airbeds
I'd recommend having the debate after meeting them instead of before.
We had big doubts about this idea, but they vanished on meeting the
guys.
from: Fred Wilson
to: Paul Graham
date: Mon, Jan 26, 2009 at 11:08 AM
subject: RE: meet the airbeds
We are still very suspect of this idea but will take a meeting as
you suggest
Thanks
fred
from: Fred Wilson
to: Paul Graham, AirBedAndBreakfast Founders
date: Mon, Jan 26, 2009 at 11:09 AM
subject: RE: meet the airbeds
Airbed team -
Are you still in NYC?
We'd like to meet if you are
Thanks
fred
from: Paul Graham
to: Fred Wilson
date: Mon, Jan 26, 2009 at 1:42 PM
subject: Re: meet the airbeds
Ideas can morph. Practically every really big startup could say,
five years later, "believe it or not, we started out doing ___."
It just seemed a very good sign to me that these guys were actually
on the ground in NYC hunting down (and understanding) their users.
On top of several previous good signs.
--pg
from: Fred Wilson
to: Paul Graham
date: Sun, Feb 1, 2009 at 7:15 AM
subject: Re: meet the airbeds
It's interesting
Our two junior team members were enthusiastic
The three "old guys" didn't get it
from: Paul Graham
to: Fred Wilson
date: Mon, Feb 9, 2009 at 5:58 PM
subject: airbnb
The Airbeds just won the first poll among all the YC startups in
their batch by a landslide. In the past this has not been a 100%
indicator of success (if only anything were) but much better than
random.
--pg
from: Fred Wilson
to: Paul Graham
date: Fri, Feb 13, 2009 at 5:29 PM
subject: Re: airbnb
I met them today
They have an interesting business
I'm just not sure how big it's going to be
fred
from: Paul Graham
to: Fred Wilson
date: Sat, Feb 14, 2009 at 9:50 AM
subject: Re: airbnb
Did they explain the long-term goal of being the market in accommodation
the way eBay is in stuff? That seems like it would be huge. Hotels
now are like airlines in the 1970s before they figured out how to
increase their load factors.
from: Fred Wilson
to: Paul Graham
date: Tue, Feb 17, 2009 at 2:05 PM
subject: Re: airbnb
They did but I am not sure I buy that
ABNB reminds me of Etsy in that it facilitates real commerce in a
marketplace model directly between two people
So I think it can scale all the way to the bed and breakfast market
But I am not sure they can take on the hotel market
I could be wrong
But even so, if you include short term room rental, second home
rental, bed and breakfast, and other similar classes of accommodations,
you get to a pretty big opportunity
fred
from: Paul Graham
to: Fred Wilson
date: Wed, Feb 18, 2009 at 12:21 AM
subject: Re: airbnb
So invest in them! They're very capital efficient. They would
make an investor's money go a long way.
It's also counter-cyclical. They just arrived back from NYC, and
when I asked them what was the most significant thing they'd observed,
it was how many of their users actually needed to do these rentals
to pay their rents.
--pg
from: Fred Wilson
to: Paul Graham
date: Wed, Feb 18, 2009 at 2:21 AM
subject: Re: airbnb
There's a lot to like
I've done a few things, like intro it to my friends at Foundry who
were investors in Service Metrics and understand this model
I am also talking to my friend Mark Pincus who had an idea like
this a few years ago.
So we are working on it
Thanks for the lead
Fred
from: Paul Graham
to: Fred Wilson
date: Fri, Feb 20, 2009 at 10:00 PM
subject: airbnb already spreading to pros
I know you're skeptical they'll ever get hotels, but there's a
continuum between private sofas and hotel rooms, and they just moved
one step further along it.
[link to an airbnb user]
This is after only a few months. I bet you they will get hotels
eventually. It will start with small ones. Just wait till all the
10-room pensiones in Rome discover this site. And once it spreads
to hotels, where is the point (in size of chain) at which it stops?
Once something becomes a big marketplace, you ignore it at your
peril.
--pg
from: Fred Wilson
to: Paul Graham
date: Sat, Feb 21, 2009 at 4:26 AM
subject: Re: airbnb already spreading to pros
That's true. It's also true that there are quite a few marketplaces
out there that serve this same market
If you look at many of the people who list at ABNB, they list
elsewhere too
I am not negative on this one, I am interested, but we are still
in the gathering data phase.
fred
* * *
July 2020
"Few people are capable of expressing with equanimity opinions which differ
from the prejudices of their social environment. Most people are even
incapable of forming such opinions."
— Einstein
There has been a lot of talk about privilege lately. Although the concept is
overused, there is something to it, and in particular to the idea that
privilege makes you blind — that you can't see things that are visible to
someone whose life is very different from yours.
But one of the most pervasive examples of this kind of blindness is one that I
haven't seen mentioned explicitly. I'm going to call it _orthodox privilege_:
The more conventional-minded someone is, the more it seems to them that it's
safe for everyone to express their opinions.
It's safe for _them_ to express their opinions, because the source of their
opinions is whatever it's currently acceptable to believe. So it seems to them
that it must be safe for everyone. They literally can't imagine a true
statement that would get you in trouble.
And yet at every point in history, there [_were_](say.html) true things that
would get you in trouble to say. Is ours the first where this isn't so? What
an amazing coincidence that would be.
Surely it should at least be the default assumption that our time is not
unique, and that there are true things you can't say now, just as there have
always been. You would think. But even in the face of such overwhelming
historical evidence, most people will go with their gut on this one.
In the most extreme cases, people suffering from orthodox privilege will not
only deny that there's anything true that you can't say, but will accuse you
of heresy merely for saying there is. Though if there's more than one heresy
current in your time, these accusations will be weirdly non-deterministic: you
must either be an xist or a yist.
Frustrating as it is to deal with these people, it's important to realize that
they're in earnest. They're not pretending they think it's impossible for an
idea to be both unorthodox and true. The world really looks that way to them.
Indeed, this is a uniquely tenacious form of privilege. People can overcome
the blindness induced by most forms of privilege by learning more about
whatever they're not. But they can't overcome orthodox privilege just by
learning more. They'd have to become more independent-minded. If that happens
at all, it doesn't happen on the time scale of one conversation.
It may be possible to convince some people that orthodox privilege must exist
even though they can't sense it, just as one can with, say, dark matter. There
may be some who could be convinced, for example, that it's very unlikely that
this is the first point in history at which there's nothing true you can't
say, even if they can't imagine specific examples.
But in general I don't think it will work to say "check your privilege" about
this type of privilege, because those in its demographic don't realize they're
in it. It doesn't seem to conventional-minded people that they're
conventional-minded. It just seems to them that they're right. Indeed, they
tend to be particularly sure of it.
Perhaps the solution is to appeal to politeness. If someone says they can hear
a high-pitched noise that you can't, it's only polite to take them at their
word, instead of demanding evidence that's impossible to produce, or simply
denying that they hear anything. Imagine how rude that would seem. Similarly,
if someone says they can think of things that are true but that cannot be
said, it's only polite to take them at their word, even if you can't think of
any yourself.
**Thanks** to Sam Altman, Trevor Blackwell, Patrick Collison, Antonio Garcia-
Martinez, Jessica Livingston, Robert Morris, Michael Nielsen, Geoff Ralston,
Max Roser, and Harj Taggar for reading drafts of this.
* * *
October 2020
One of the biggest things holding people back from doing great work is the
fear of making something lame. And this fear is not an irrational one. Many
great projects go through a stage early on where they don't seem very
impressive, even to their creators. You have to push through this stage to
reach the great work that lies beyond. But many people don't. Most people
don't even reach the stage of making something they're embarrassed by, let
alone continue past it. They're too frightened even to start.
Imagine if we could turn off the fear of making something lame. Imagine how
much more we'd do.
Is there any hope of turning it off? I think so. I think the habits at work
here are not very deeply rooted.
Making new things is itself a new thing for us as a species. It has always
happened, but till the last few centuries it happened so slowly as to be
invisible to individual humans. And since we didn't need customs for dealing
with new ideas, we didn't develop any.
We just don't have enough experience with early versions of ambitious projects
to know how to respond to them. We judge them as we would judge more finished
work, or less ambitious projects. We don't realize they're a special case.
Or at least, most of us don't. One reason I'm confident we can do better is
that it's already starting to happen. There are already a few places that are
living in the future in this respect. Silicon Valley is one of them: an
unknown person working on a strange-sounding idea won't automatically be
dismissed the way they would back home. In Silicon Valley, people have learned
how dangerous that is.
The right way to deal with new ideas is to treat them as a challenge to your
imagination — not just to have lower standards, but to [_switch
polarity_](altair.html) entirely, from listing the reasons an idea won't work
to trying to think of ways it could. That's what I do when I meet people with
new ideas. I've become quite good at it, but I've had a lot of practice. Being
a partner at Y Combinator means being practically immersed in strange-sounding
ideas proposed by unknown people. Every six months you get thousands of new
ones thrown at you and have to sort through them, knowing that in a world with
a power-law distribution of outcomes, it will be painfully obvious if you miss
the needle in this haystack. Optimism becomes urgent.
But I'm hopeful that, with time, this kind of optimism can become widespread
enough that it becomes a social custom, not just a trick used by a few
specialists. It is after all an extremely lucrative trick, and those tend to
spread quickly.
Of course, inexperience is not the only reason people are too harsh on early
versions of ambitious projects. They also do it to seem clever. And in a field
where the new ideas are risky, like startups, those who dismiss them are in
fact more likely to be right. Just not when their predictions are [_weighted
by outcome_](swan.html).
But there is another more sinister reason people dismiss new ideas. If you try
something ambitious, many of those around you will hope, consciously or
unconsciously, that you'll fail. They worry that if you try something
ambitious and succeed, it will put you above them. In some countries this is
not just an individual failing but part of the national culture.
I wouldn't claim that people in Silicon Valley overcome these impulses because
they're morally better. [1] The reason many hope you'll succeed is that they
hope to rise with you. For investors this incentive is particularly explicit.
They want you to succeed because they hope you'll make them rich in the
process. But many other people you meet can hope to benefit in some way from
your success. At the very least they'll be able to say, when you're famous,
that they've known you since way back.
But even if Silicon Valley's encouraging attitude is rooted in self-interest,
it has over time actually grown into a sort of benevolence. Encouraging
startups has been practiced for so long that it has become a custom. Now it
just seems that that's what one does with startups.
Maybe Silicon Valley is too optimistic. Maybe it's too easily fooled by
impostors. Many less optimistic journalists want to believe that. But the
lists of impostors they cite are suspiciously short, and plagued with
asterisks. [2] If you use revenue as the test, Silicon Valley's optimism seems
better tuned than the rest of the world's. And because it works, it will
spread.
There's a lot more to new ideas than new startup ideas, of course. The fear of
making something lame holds people back in every field. But Silicon Valley
shows how quickly customs can evolve to support new ideas. And that in turn
proves that dismissing new ideas is not so deeply rooted in human nature that
it can't be unlearnt.
___________
Unfortunately, if you want to do new things, you'll face a force more powerful
than other people's skepticism: your own skepticism. You too will judge your
early work too harshly. How do you avoid that?
This is a difficult problem, because you don't want to completely eliminate
your horror of making something lame. That's what steers you toward doing good
work. You just want to turn it off temporarily, the way a painkiller
temporarily turns off pain.
People have already discovered several techniques that work. Hardy mentions
two in _A Mathematician's Apology_:
> Good work is not done by "humble" men. It is one of the first duties of a
> professor, for example, in any subject, to exaggerate a little both the
> importance of his subject and his importance in it.
If you overestimate the importance of what you're working on, that will
compensate for your mistakenly harsh judgment of your initial results. If you
look at something that's 20% of the way to a goal worth 100 and conclude that
it's 10% of the way to a goal worth 200, your estimate of its expected value
is correct even though both components are wrong.
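The arithmetic is worth making explicit. A minimal sketch, using the hypothetical numbers from the paragraph above:

```python
# The numbers from the example above: both estimates are off by a
# factor of two, in opposite directions.
true_value = 0.20 * 100   # actually 20% of the way to a goal worth 100
est_value = 0.10 * 200    # judged as 10% of the way to a goal worth 200
print(true_value, est_value)  # 20.0 20.0: the two errors cancel
```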
It also helps, as Hardy suggests, to be slightly overconfident. I've noticed
in many fields that the most successful people are slightly overconfident. On
the face of it this seems implausible. Surely it would be optimal to have
exactly the right estimate of one's abilities. How could it be an advantage to
be mistaken? Because this error compensates for other sources of error in the
opposite direction: being slightly overconfident armors you against both other
people's skepticism and your own.
Ignorance has a similar effect. It's safe to make the mistake of judging early
work as finished work if you're a sufficiently lax judge of finished work. I
doubt it's possible to cultivate this kind of ignorance, but empirically it's
a real advantage, especially for the young.
Another way to get through the lame phase of ambitious projects is to surround
yourself with the right people — to create an eddy in the social headwind. But
it's not enough to collect people who are always encouraging. You'd learn to
discount that. You need colleagues who can actually tell an ugly duckling from
a baby swan. The people best able to do this are those working on similar
projects of their own, which is why university departments and research labs
work so well. You don't need institutions to collect colleagues. They
naturally coalesce, given the chance. But it's very much worth accelerating
this process by seeking out other people trying to do new things.
Teachers are in effect a special case of colleagues. It's a teacher's job both
to see the promise of early work and to encourage you to continue. But
teachers who are good at this are unfortunately quite rare, so if you have the
opportunity to learn from one, take it. [3]
For some it might work to rely on sheer discipline: to tell yourself that you
just have to press on through the initial crap phase and not get discouraged.
But like a lot of "just tell yourself" advice, this is harder than it sounds.
And it gets still harder as you get older, because your standards rise. The
old do have one compensating advantage though: they've been through this
before.
It can help if you focus less on where you are and more on the rate of change.
You won't worry so much about doing bad work if you can see it improving.
Obviously the faster it improves, the easier this is. So when you start
something new, it's good if you can spend a lot of time on it. That's another
advantage of being young: you tend to have bigger blocks of time.
Another common trick is to start by considering new work to be of a different,
less exacting type. To start a painting saying that it's just a sketch, or a
new piece of software saying that it's just a quick hack. Then you judge your
initial results by a lower standard. Once the project is rolling you can
sneakily convert it to something more. [4]
This will be easier if you use a medium that lets you work fast and doesn't
require too much commitment up front. It's easier to convince yourself that
something is just a sketch when you're drawing in a notebook than when you're
carving stone. Plus you get initial results faster. [5] [6]
It will be easier to try out a risky project if you think of it as a way to
learn and not just as a way to make something. Then even if the project truly
is a failure, you'll still have gained by it. If the problem is sharply enough
defined, failure itself is knowledge: if the theorem you're trying to prove
turns out to be false, or you use a structural member of a certain size and it
fails under stress, you've learned something, even if it isn't what you wanted
to learn. [7]
One motivation that works particularly well for me is curiosity. I like to try
new things just to see how they'll turn out. We started Y Combinator in this
spirit, and it was one of the main things that kept me going while I was working
on [_Bel_](bel.html). Having worked for so long with various dialects of Lisp,
I was very curious to see what its inherent shape was: what you'd end up with
if you followed the axiomatic approach all the way.
But it's a bit strange that you have to play mind games with yourself to avoid
being discouraged by lame-looking early efforts. The thing you're trying to
trick yourself into believing is in fact the truth. A lame-looking early
version of an ambitious project truly is more valuable than it seems. So the
ultimate solution may be to teach yourself that.
One way to do it is to study the histories of people who've done great work.
What were they thinking early on? What was the very first thing they did? It
can sometimes be hard to get an accurate answer to this question, because
people are often embarrassed by their earliest work and make little effort to
publish it. (They too misjudge it.) But when you can get an accurate picture
of the first steps someone made on the path to some great work, they're often
pretty feeble. [8]
Perhaps if you study enough such cases, you can teach yourself to be a better
judge of early work. Then you'll be immune both to other people's skepticism
and your own fear of making something lame. You'll see early work for what it
is.
Curiously enough, the solution to the problem of judging early work too
harshly is to realize that our attitudes toward it are themselves early work.
Holding everything to the same standard is a crude version 1. We're already
evolving better customs, and we can already see signs of how big the payoff
will be.
**Notes**
[1] This assumption may be too conservative. There is some evidence that
historically the Bay Area has attracted a [_different sort of
person_](cities.html) than, say, New York City.
[2] One of their great favorites is Theranos. But the most conspicuous feature
of Theranos's cap table is the absence of Silicon Valley firms. Journalists
were fooled by Theranos, but Silicon Valley investors weren't.
[3] I made two mistakes about teachers when I was younger. I cared more about
professors' research than their reputations as teachers, and I was also wrong
about what it meant to be a good teacher. I thought it simply meant to be good
at explaining things.
[4] Patrick Collison points out that you can go past treating something as a
hack in the sense of a prototype and onward to the sense of the word that
means something closer to a practical joke:
> I think there may be something related to being a hack that can be powerful
> — the idea of making the tenuousness and implausibility _a feature_. "Yes,
> it's a bit ridiculous, right? I'm just trying to see how far such a naive
> approach can get." YC seemed to me to have this characteristic.
[5] Much of the advantage of switching from physical to digital media is not
the software per se but that it lets you start something new with little
upfront commitment.
[6] John Carmack adds:
> The value of a medium without a vast gulf between the early work and the
> final work is exemplified in game mods. The original Quake game was a golden
> age for mods, because everything was very flexible, but so crude due to
> technical limitations, that quick hacks to try out a gameplay idea weren't
> all _that_ far from the official game. Many careers were born from that, but
> as the commercial game quality improved over the years, it became almost a
> full time job to make a successful mod that would be appreciated by the
> community. This was dramatically reversed with Minecraft and later Roblox,
> where the entire esthetic of the experience was so explicitly crude that
> innovative gameplay concepts became the overriding value. These "crude" game
> mods by single authors are now often bigger deals than massive professional
> teams' work.
[7] Lisa Randall suggests that we
> treat new things as experiments. That way there's no such thing as failing,
> since you learn something no matter what. You treat it like an experiment in
> the sense that if it really rules something out, you give up and move on,
> but if there's some way to vary it to make it work better, go ahead and do
> that
[8] Michael Nielsen points out that the internet has made this easier, because
you can see programmers' first commits, musicians' first videos, and so on.
**Thanks** to Trevor Blackwell, John Carmack, Patrick Collison, Jessica
Livingston, Michael Nielsen, and Lisa Randall for reading drafts of this.
* * *
_Note: The strategy described at the end of this essay didn't work. It would
work for a while, and then I'd gradually find myself using the Internet on my
work computer. I'm trying other strategies now, but I think this time I'll
wait till I'm sure they work before writing about them._
May 2008
Procrastination feeds on distractions. Most people find it uncomfortable just
to sit and do nothing; they avoid work by doing something else.
So one way to beat procrastination is to starve it of distractions. But that's
not as straightforward as it sounds, because there are people working hard to
distract you. Distraction is not a static obstacle that you avoid like you
might avoid a rock in the road. Distraction seeks you out.
Chesterfield described dirt as matter out of place. Distraction is, similarly,
something desirable at the wrong time. And technology is continually being
refined to
produce more and more desirable things. Which means that as we learn to avoid
one class of distractions, new ones constantly appear, like drug-resistant
bacteria.
Television, for example, has after 50 years of refinement reached the point
where it's like visual crack. I realized when I was 13 that TV was addictive,
so I stopped watching it. But I read recently that the average American
watches [4 hours](http://www.forbes.com/forbes/2003/0929/076.html) of TV a
day. A quarter of their waking hours.
TV is in decline now, but only because people have found even more addictive
ways of wasting time. And what's especially dangerous is that many happen at
your computer. This is no accident. An ever larger percentage of office
workers sit in front of computers connected to the Internet, and distractions
always evolve toward the procrastinators.
I remember when computers were, for me at least, exclusively for work. I might
occasionally dial up a server to get mail or ftp files, but most of the time I
was offline. All I could do was write and program. Now I feel as if someone
snuck a television onto my desk. Terribly addictive things are just a click
away. Run into an obstacle in what you're working on? Hmm, I wonder what's new
online. Better check.
After years of carefully avoiding classic time sinks like TV, games, and
Usenet, I still managed to fall prey to distraction, because I didn't realize
that it evolves. Something that used to be safe, using the Internet, gradually
became more and more dangerous. Some days I'd wake up, get a cup of tea and
check the news, then check email, then check the news again, then answer a few
emails, then suddenly notice it was almost lunchtime and I hadn't gotten any
real work done. And this started to happen more and more often.
It took me surprisingly long to realize how distracting the Internet had
become, because the problem was intermittent. I ignored it the way you let
yourself ignore a bug that only appears intermittently. When I was in the
middle of a project, distractions weren't really a problem. It was when I'd
finished one project and was deciding what to do next that they always bit me.
Another reason it was hard to notice the danger of this new type of
distraction was that social customs hadn't yet caught up with it. If I'd spent
a whole morning sitting on a sofa watching TV, I'd have noticed very quickly.
That's a known danger sign, like drinking alone. But using the Internet still
looked and felt a lot like work.
Eventually, though, it became clear that the Internet had become so much more
distracting that I had to start treating it differently. Basically, I had to
add a new application to my list of known time sinks: Firefox.
* * *
The problem is a hard one to solve because most people still need the Internet
for some things. If you drink too much, you can solve that problem by stopping
entirely. But you can't solve the problem of overeating by stopping eating. I
couldn't simply avoid the Internet entirely, as I'd done with previous time
sinks.
At first I tried rules. For example, I'd tell myself I was only going to use
the Internet twice a day. But these schemes never worked for long. Eventually
something would come up that required me to use it more than that. And then
I'd gradually slip back into my old ways.
Addictive things have to be treated as if they were sentient adversaries—as if
there were a little man in your head always cooking up the most plausible
arguments for doing whatever you're trying to stop doing. If you leave a path
to it, he'll find it.
The key seems to be visibility. The biggest ingredient in most bad habits is
denial. So you have to make it so that you can't merely _slip_ into doing the
thing you're trying to avoid. It has to set off alarms.
Maybe in the long term the right answer for dealing with Internet distractions
will be [software](http://rescuetime.com) that watches and controls them. But
in the meantime I've found a more drastic solution that definitely works: to
set up a separate computer for using the Internet.
I now leave wifi turned off on my main computer except when I need to transfer
a file or edit a web page, and I have a separate laptop on the other side of
the room that I use to check mail or browse the web. (Irony of ironies, it's
the computer Steve Huffman wrote Reddit on. When Steve and Alexis auctioned
off their old laptops for charity, I bought them for the Y Combinator museum.)
My rule is that I can spend as much time online as I want, as long as I do it
on that computer. And this turns out to be enough. When I have to sit on the
other side of the room to check email or browse the web, I become much more
aware of it. Sufficiently aware, in my case at least, that it's hard to spend
more than about an hour a day online.
And my main computer is now freed for work. If you try this trick, you'll
probably be struck by how different it feels when your computer is
disconnected from the Internet. It was alarming to me how foreign it felt to
sit in front of a computer that could only be used for work, because that
showed how much time I must have been wasting.
_Wow. All I can do at this computer is work. Ok, I better work then._
That's the good part. Your old bad habits now help you to work. You're used to
sitting in front of that computer for hours at a time. But you can't browse
the web or check email now. What are you going to do? You can't just sit
there. So you start working.
* * *
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
September 2013
Most startups that raise money do it more than once. A typical trajectory
might be (1) to get started with a few tens of thousands from something like Y
Combinator or individual angels, then (2) raise a few hundred thousand to a
few million to build the company, and then (3) once the company is clearly
succeeding, raise one or more later rounds to accelerate growth.
Reality can be messier. Some companies raise money twice in phase 2. Others
skip phase 1 and go straight to phase 2. And at Y Combinator we get an
increasing number of companies that have already raised amounts in the
hundreds of thousands. But the three-phase path is at least the one about
which individual startups' paths oscillate.
This essay focuses on phase 2 fundraising. That's the type the startups we
fund are doing on Demo Day, and this essay is the advice we give them.
**Forces**
Fundraising is hard in both senses: hard like lifting a heavy weight, and hard
like solving a puzzle. It's hard like lifting a weight because it's
intrinsically hard to convince people to part with large sums of money. That
problem is irreducible; it should be hard. But much of the other kind of
difficulty can be eliminated. Fundraising only seems a puzzle because it's an
alien world to most founders, and I hope to fix that by supplying a map
through it.
To founders, the behavior of investors is often opaque — partly because their
motivations are obscure, but partly because they deliberately mislead you. And
the misleading ways of investors combine horribly with the wishful thinking of
inexperienced founders. At YC we're always warning founders about this danger,
and investors are probably more circumspect with YC startups than with other
companies they talk to, and even so we witness a constant series of explosions
as these two volatile components combine. [1]
If you're an inexperienced founder, the only way to survive is by imposing
external constraints on yourself. You can't trust your intuitions. I'm going
to give you a set of rules here that will get you through this process if
anything will. At certain moments you'll be tempted to ignore them. So rule
number zero is: these rules exist for a reason. You wouldn't need a rule to
keep you going in one direction if there weren't powerful forces pushing you
in another.
The ultimate source of the forces acting on you is the forces acting on
investors. Investors are pinched between two kinds of fear: fear of investing
in startups that fizzle, and fear of missing out on startups that take off.
The cause of all this fear is the very thing that makes startups such
attractive investments: the successful ones grow very fast. But that fast
growth means investors can't wait around. If you wait till a startup is
obviously a success, it's too late. To get the really high returns, you have
to invest in startups when it's still unclear how they'll do. But that in turn
makes investors nervous they're about to invest in a flop. As indeed they
often are.
What investors would like to do, if they could, is wait. When a startup is
only a few months old, every week that passes gives you significantly more
information about them. But if you wait too long, other investors might take
the deal away from you. And of course the other investors are all subject to
the same forces. So what tends to happen is that they all wait as long as they
can, then when some act the rest have to.
**Don't raise money unless you want it and it wants you.**
Such a high proportion of successful startups raise money that it might seem
fundraising is one of the defining qualities of a startup. Actually it isn't.
[Rapid growth](growth.html) is what makes a company a startup. Most companies
in a position to grow rapidly find that (a) taking outside money helps them
grow faster, and (b) their growth potential makes it easy to attract such
money. It's so common for both (a) and (b) to be true of a successful startup
that practically all do raise outside money. But there may be cases where a
startup either wouldn't want to grow faster, or outside money wouldn't help
them to, and if you're one of them, don't raise money.
The other time not to raise money is when you won't be able to. If you try to
raise money before you can [convince](convince.html) investors, you'll not
only waste your time, but also burn your reputation with those investors.
**Be in fundraising mode or not.**
One of the things that surprises founders most about fundraising is how
distracting it is. When you start fundraising, everything else grinds to a
halt. The problem is not the time fundraising consumes but that it becomes the
[top idea in your mind](top.html). A startup can't endure that level of
distraction for long. An early stage startup grows mostly because the founders
[make](ds.html) it grow, and if the founders look away, growth usually drops
sharply.
Because fundraising is so distracting, a startup should either be in
fundraising mode or not. And when you do decide to raise money, you should
focus your whole attention on it so you can get it done quickly and get back
to work. [2]
You can take money from investors when you're not in fundraising mode. You
just can't expend any attention on it. There are two things that take
attention: convincing investors, and negotiating with them. So when you're not
in fundraising mode, you should take money from investors only if they require
no convincing, and are willing to invest on terms you'll take without
negotiation. For example, if a reputable investor is willing to invest on a
convertible note, using standard paperwork, that is either uncapped or capped
at a good valuation, you can take that without having to think. [3] The terms
will be whatever they turn out to be in your next equity round. And "no
convincing" means just that: zero time spent meeting with investors or
preparing materials for them. If an investor says they're ready to invest, but
they need you to come in for one meeting to meet some of the partners, tell
them no, if you're not in fundraising mode, because that's fundraising. [4]
Tell them politely; tell them you're focusing on the company right now, and
that you'll get back to them when you're fundraising; but do not get sucked
down the slippery slope.
Investors will try to lure you into fundraising when you're not. It's great
for them if they can, because they can thereby get a shot at you before
everyone else. They'll send you emails saying they want to meet to learn more
about you. If you get cold-emailed by an associate at a VC firm, you shouldn't
meet even if you are in fundraising mode. Deals don't happen that way. [5] But
even if you get an email from a partner you should try to delay meeting till
you're in fundraising mode. They may say they just want to meet and chat, but
investors never just want to meet and chat. What if they like you? What if
they start to talk about giving you money? Will you be able to resist having
that conversation? Unless you're experienced enough at fundraising to have a
casual conversation with investors that stays casual, it's safer to tell them
that you'd be happy to later, when you're fundraising, but that right now you
need to focus on the company. [6]
Companies that are successful at raising money in phase 2 sometimes tack on a
few investors after leaving fundraising mode. This is fine; if fundraising
went well, you'll be able to do it without spending time convincing them or
negotiating about terms.
**Get introductions to investors.**
Before you can talk to investors, you have to be introduced to them. If you're
presenting at a Demo Day, you'll be introduced to a whole bunch
simultaneously. But even if you are, you should supplement these with intros
you collect yourself.
Do you have to be introduced? In phase 2, yes. Some investors will let you
email them a business plan, but you can tell from the way their sites are
organized that they don't really want startups to approach them directly.
Intros vary greatly in effectiveness. The best type of intro is from a well-
known investor who has just invested in you. So when you get an investor to
commit, ask them to introduce you to other investors they respect. [7] The
next best type of intro is from a founder of a company they've funded. You can
also get intros from other people in the startup community, like lawyers and
reporters.
There are now sites like AngelList, FundersClub, and WeFunder that can
introduce you to investors. We recommend startups treat them as auxiliary
sources of money. Raise money first from leads you get yourself. Those will on
average be better investors. Plus you'll have an easier time raising money on
these sites once you can say you've already raised some from well-known
investors.
**Hear no till you hear yes.**
Treat investors as saying no till they unequivocally say yes, in the form of a
definite offer with no contingencies.
I mentioned earlier that investors prefer to wait if they can. What's
particularly dangerous for founders is the way they wait. Essentially, they
lead you on. They seem like they're about to invest right up till the moment
they say no. If they even say no. Some of the worst ones never actually do say
no; they just stop replying to your emails. They hope that way to get a free
option on investing. If they decide later that they want to invest — usually
because they've heard you're a hot deal — they can pretend they just got
distracted and then restart the conversation as if they'd been about to. [8]
That's not the worst thing investors will do. Some will use language that
makes it sound as if they're committing, but which doesn't actually commit
them. And wishful-thinking founders are happy to meet them halfway. [9]
Fortunately, the next rule is a tactic for neutralizing this behavior. But to
work it depends on you not being tricked by the no that sounds like yes. It's
so common for founders to be misled/mistaken about this that we designed a
[protocol](http://ycombinator.com/hdp.html) to fix the problem. If you believe
an investor has committed, get them to confirm it. If you and they have
different views of reality, whether the source of the discrepancy is their
sketchiness or your wishful thinking, the prospect of confirming a commitment
in writing will flush it out. And till they confirm, regard them as saying no.
**Do breadth-first search weighted by expected value.**
When you talk to investors your m.o. should be breadth-first search, weighted
by expected value. You should always talk to investors in parallel rather than
serially. You can't afford the time it takes to talk to investors serially,
plus if you only talk to one investor at a time, they don't have the pressure
of other investors to make them act. But you shouldn't pay the same attention
to every investor, because some are more promising prospects than others. The
optimal solution is to talk to all potential investors in parallel, but give
higher priority to the more promising ones. [10]
Expected value = how likely an investor is to say yes, multiplied by how good
it would be if they did. So for example, an eminent investor who would invest
a lot, but will be hard to convince, might have the same expected value as an
obscure angel who won't invest much, but will be easy to convince. Whereas an
obscure angel who will only invest a small amount, and yet needs to meet
multiple times before making up his mind, has very low expected value. Meet
such investors last, if at all. [11]
Doing breadth-first search weighted by expected value will save you from
investors who never explicitly say no but merely drift away, because you'll
drift away from them at the same rate. It protects you from investors who
flake in much the same way that a distributed algorithm protects you from
processors that fail. If some investor isn't returning your emails, or wants
to have lots of meetings but isn't progressing toward making you an offer, you
automatically focus less on them. But you have to be disciplined about
assigning probabilities. You can't let how much you want an investor influence
your estimate of how much they want you.
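For programmers, the rule is concrete enough to write down. Here's a minimal sketch in Python of attention allocation weighted by expected value; the `Investor` type and all the numbers are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Investor:
    name: str
    p_yes: float   # your honest estimate of the chance they say yes
    value: float   # how good a yes would be (check size, reputation, speed)

    @property
    def expected_value(self) -> float:
        return self.p_yes * self.value

# Invented prospects. Note that an eminent, hard-to-convince VC can be
# worth exactly as much attention as an easy-to-convince small angel.
prospects = [
    Investor("eminent VC", p_yes=0.1, value=2_000_000),
    Investor("easy angel", p_yes=0.8, value=250_000),
    Investor("slow angel", p_yes=0.3, value=25_000),
]

# Talk to everyone in parallel, but give attention in order of expected value.
for inv in sorted(prospects, key=lambda i: i.expected_value, reverse=True):
    print(f"{inv.name}: expected value ${inv.expected_value:,.0f}")

# When an investor stops replying, lower p_yes and re-sort; you drift
# away from them at the same rate they drift away from you.
```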
**Know where you stand.**
How do you judge how well you're doing with an investor, when investors
habitually seem more positive than they are? By looking at their actions
rather than their words. Every investor has some track they need to move along
from the first conversation to wiring the money, and you should always know
what that track consists of, where you are on it, and how fast you're moving
forward.
Never leave a meeting with an investor without asking what happens next. What
more do they need in order to decide? Do they need another meeting with you?
To talk about what? And how soon? Do they need to do something internally,
like talk to their partners, or investigate some issue? How long do they
expect it to take? Don't be too pushy, but know where you stand. If investors
are vague or resist answering such questions, assume the worst; investors who
are seriously interested in you will usually be happy to talk about what has
to happen between now and wiring the money, because they're already running
through that in their heads. [12]
If you're experienced at negotiations, you already know how to ask such
questions. [13] If you're not, there's a trick you can use in this situation.
Investors know you're inexperienced at raising money. Inexperience there
doesn't make you unattractive. Being a noob at technology would, if you're
starting a technology startup, but not being a noob at fundraising. Larry and
Sergey were noobs at fundraising. So you can just confess that you're
inexperienced at this and ask how their process works and where you are in it.
[14]
**Get the first commitment.**
The biggest factor in most investors' opinions of you is the opinion of [other
investors](herd.html). Once you start getting investors to commit, it becomes
increasingly easy to get more to. But the other side of this coin is that it's
often hard to get the first commitment.
Getting the first substantial offer can be half the total difficulty of
fundraising. What counts as a substantial offer depends on who it's from and
how much it is. Money from friends and family doesn't usually count, no matter
how much. But if you get $50k from a well known VC firm or angel investor,
that will usually be enough to set things rolling. [15]
**Close committed money.**
It's not a deal till the money's in the bank. I often hear inexperienced
founders say things like "We've raised $800,000," only to discover that zero
of it is in the bank so far. Remember the twin fears that torment investors?
The fear of missing out that makes them jump early, and the fear of jumping
onto a turd that results? This is a market where people are exceptionally
prone to buyer's remorse. And it's also one that furnishes them plenty of
excuses to gratify it. The public markets snap startup investing around like a
whip. If the Chinese economy blows up tomorrow, all bets are off. But there
are lots of surprises for individual startups too, and they tend to be
concentrated around fundraising. Tomorrow a big competitor could appear, or
you could get C&Ded, or your cofounder could quit. [16]
Even a day's delay can bring news that causes an investor to change their
mind. So when someone commits, get the money. Knowing where you stand doesn't
end when they say they'll invest. After they say yes, know what the timetable
is for getting the money, and then babysit that process till it happens.
Institutional investors have people in charge of wiring money, but you may
have to hunt angels down in person to collect a check.
Inexperienced investors are the ones most likely to get buyer's remorse.
Established ones have learned to treat saying yes as like diving off a diving
board, and they also have more brand to preserve. But I've heard of cases of
even top-tier VC firms welching on deals.
**Avoid investors who don't "lead."**
Since getting the first offer is most of the difficulty of fundraising, that
should be part of your calculation of expected value when you start. You have
to estimate not just the probability that an investor will say yes, but the
probability that they'd be the _first_ to say yes, and the latter is not
simply a constant fraction of the former. Some investors are known for
deciding quickly, and those are extra valuable early on.
Conversely, an investor who will only invest once other investors have is
worthless initially. And while most investors are influenced by how interested
other investors are in you, there are some who have an explicit policy of only
investing after other investors have. You can recognize this contemptible
subspecies of investor because they often talk about "leads." They say that
they don't lead, or that they'll invest once you have a lead. Sometimes they
even claim to be willing to lead themselves, by which they mean they won't
invest till you get $x from other investors. (It's great if by "lead" they
mean they'll invest unilaterally, and in addition will help you raise more.
What's lame is when they use the term to mean they won't invest unless you can
raise more elsewhere.) [17]
Where does this term "lead" come from? Up till a few years ago, startups
raising money in phase 2 would usually raise equity rounds in which several
investors invested at the same time using the same paperwork. You'd negotiate
the terms with one "lead" investor, and then all the others would sign the
same documents and all the money change hands at the closing.
Series A rounds still work that way, but things now work differently for most
fundraising prior to the series A. Now there are rarely actual rounds before
the A round, or leads for them. Now startups simply raise money from investors
one at a time till they feel they have enough.
Since there are no longer leads, why do investors use that term? Because it's
a more legitimate-sounding way of saying what they really mean. All they
really mean is that their interest in you is a function of other investors'
interest in you. I.e. the spectral signature of all mediocre investors. But
when phrased in terms of leads, it sounds like there is something structural
and therefore legitimate about their behavior.
When an investor tells you "I want to invest in you, but I don't lead,"
translate that in your mind to "No, except yes if you turn out to be a hot
deal." And since that's the default opinion of any investor about any startup,
they've essentially just told you nothing.
When you first start fundraising, the expected value of an investor who won't
"lead" is zero, so talk to such investors last if at all.
**Have multiple plans.**
Many investors will ask how much you're planning to raise. This question makes
founders feel they should be planning to raise a specific amount. But in fact
you shouldn't. It's a mistake to have fixed plans in an undertaking as
unpredictable as fundraising.
So why do investors ask how much you plan to raise? For much the same reasons
a salesperson in a store will ask "How much were you planning to spend?" if
you walk in looking for a gift for a friend. You probably didn't have a
precise amount in mind; you just want to find something good, and if it's
inexpensive, so much the better. The salesperson asks you this not because
you're supposed to have a plan to spend a specific amount, but so they can
show you only things that cost the most you'll pay.
Similarly, when investors ask how much you plan to raise, it's not because
you're supposed to have a plan. It's to see whether you'd be a suitable
recipient for the size of investment they like to make, and also to judge your
ambition, reasonableness, and how far you are along with fundraising.
If you're a wizard at fundraising, you can say "We plan to raise a $7 million
series A round, and we'll be accepting term sheets next Tuesday." I've known a
handful of founders who could pull that off without having VCs laugh in their
faces. But if you're in the inexperienced but earnest majority, the solution
is analogous to the solution I recommend for [pitching](convince.html) your
startup: do the right thing and then just tell investors what you're doing.
And the right strategy, in fundraising, is to have multiple plans depending on
how much you can raise. Ideally you should be able to tell investors something
like: we can make it to profitability without raising any more money, but if
we raise a few hundred thousand we can hire one or two smart friends, and if
we raise a couple million, we can hire a whole engineering team, etc.
Different plans match different investors. If you're talking to a VC firm that
only does series A rounds (though there are few of those left), it would be a
waste of time talking about any but your most expensive plan. Whereas if
you're talking to an angel who invests $20k at a time and you haven't raised
any money yet, you probably want to focus on your least expensive plan.
If you're so fortunate as to have to think about the upper limit on what you
should raise, a good rule of thumb is to multiply the number of people you
want to hire times $15k times 18 months. In most startups, nearly all the
costs are a function of the number of people, and $15k per month is the
conventional total cost (including benefits and even office space) per person.
$15k per month is high, so don't actually spend that much. But it's ok to use
a high estimate when fundraising to add a margin for error. If you have
additional expenses, like manufacturing, add in those at the end. Assuming you
have none and you think you might hire 20 people, the most you'd want to raise
is 20 x $15k x 18 = $5.4 million. [18]
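The rule of thumb is a single multiplication. A minimal sketch (the function name and the `extra_expenses` parameter are mine; the $15k per month and 18 months are from the rule above):

```python
def max_raise(people: int, cost_per_person_month: int = 15_000,
              months: int = 18, extra_expenses: int = 0) -> int:
    """Rule-of-thumb upper limit on how much to raise."""
    return people * cost_per_person_month * months + extra_expenses

print(max_raise(20))  # 20 x $15k x 18 = 5400000, i.e. $5.4 million
```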
**Underestimate how much you want.**
Though you can focus on different plans when talking to different types of
investors, you should on the whole err on the side of underestimating the
amount you hope to raise.
For example, if you'd like to raise $500k, it's better to say initially that
you're trying to raise $250k. Then when you reach $150k you're more than half
done. That sends two useful signals to investors: that you're doing well, and
that they have to decide quickly because you're running out of room. Whereas
if you'd said you were raising $500k, you'd be less than a third done at
$150k. If fundraising stalled there for an appreciable time, you'd start to
read as a failure.
Saying initially that you're raising $250k doesn't limit you to raising that
much. When you reach your initial target and you still have investor interest,
you can just decide to raise more. Startups do that all the time. In fact,
most startups that are very successful at fundraising end up raising more than
they originally intended.
I'm not saying you should lie, but that you should lower your expectations
initially. There is almost no downside in starting with a low number. It not
only won't cap the amount you raise, but will on the whole tend to increase
it.
A good metaphor here is angle of attack. If you try to fly at too steep an
angle of attack, you just stall. If you say right out of the gate that you
want to raise a $5 million series A round, unless you're in a very strong
position, you not only won't get that but won't get anything. Better to start
at a low angle of attack, build up speed, and then gradually increase the
angle if you want.
**Be profitable if you can.**
You will be in a much stronger position if your collection of plans includes
one for raising zero dollars — i.e. if you can make it to profitability
without raising any additional money. Ideally you want to be able to say to
investors "We'll succeed no matter what, but raising money will help us do it
faster."
There are many analogies between fundraising and dating, and this is one of
the strongest. No one wants you if you seem desperate. And the best way not to
seem desperate is not to _be_ desperate. That's one reason we urge startups
during YC to keep expenses low and to try to make it to [ramen
profitability](ramenprofitable.html) before Demo Day. Though it sounds
slightly paradoxical, if you want to raise money, the best thing you can do is
get yourself to the point where you don't need to.
There are almost two distinct modes of fundraising: one in which founders who
need money knock on doors seeking it, knowing that otherwise the company will
die or at the very least people will have to be fired, and one in which
founders who don't need money take some to grow faster than they could merely
on their own revenues. To emphasize the distinction I'm going to name them:
type A fundraising is when you don't need money, and type B fundraising is
when you do.
Inexperienced founders read about famous startups doing what was type A
fundraising, and decide they should raise money too, since that seems to be
how startups work. Except when they raise money they don't have a clear path
to profitability and are thus doing type B fundraising. And they are then
surprised how difficult and unpleasant it is.
Of course not all startups can make it to ramen profitability in a few months.
And some that don't still manage to have the upper hand over investors, if
they have some other advantage like extraordinary growth numbers or
exceptionally formidable founders. But as time passes it gets increasingly
difficult to fundraise from a position of strength without being profitable.
[19]
**Don't optimize for valuation.**
When you raise money, what should your valuation be? The most important thing
to understand about valuation is that it's not that important.
Founders who raise money at high valuations tend to be unduly proud of it.
Founders are often competitive people, and since valuation is usually the only
visible number attached to a startup, they end up competing to raise money at
the highest valuation. This is stupid, because fundraising is not the test
that matters. The real test is revenue. Fundraising is just a means to that
end. Being proud of how well you did at fundraising is like being proud of
your college grades.
Not only is fundraising not the test that matters, valuation is not even the
thing to optimize about fundraising. The number one thing you want from phase
2 fundraising is to get the money you need, so you can get back to focusing on
the real test, the success of your company. Number two is good investors.
Valuation is at best third.
The empirical evidence shows just how unimportant it is. Dropbox and Airbnb
are the most successful companies we've funded so far, and they raised money
after Y Combinator at premoney valuations of $4 million and $2.6 million
respectively. Prices are so much higher now that if you can raise money at all
you'll probably raise it at higher valuations than Dropbox and Airbnb. So let
that satisfy your competitiveness. You're doing better than Dropbox and
Airbnb! At a test that doesn't matter.
When you start fundraising, your initial valuation (or valuation cap) will be
set by the deal you make with the first investor who commits. You can increase
the price for later investors, if you get a lot of interest, but by default
the valuation you got from the first investor becomes your asking price.
So if you're raising money from multiple investors, as most companies do in
phase 2, you have to be careful to avoid raising the first from an over-eager
investor at a price you won't be able to sustain. You can of course lower your
price if you need to (in which case you should give the same terms to
investors who invested earlier at a higher price), but you may lose a bunch of
leads in the process of realizing you need to do this.
What you can do if you have eager first investors is raise money from them on
an uncapped convertible note with an MFN clause. This is essentially a way of
saying that the valuation cap of the note will be determined by the next
investors you raise money from.
It will be easier to raise money at a lower valuation. It shouldn't be, but it
is. Since phase 2 prices vary at most 10x and the big successes generate
returns of at least 100x, investors should pick startups entirely based on
their estimate of the probability that the company will be a big success and
hardly at all on price. But although it's a mistake for investors to care
about price, a significant number do. A startup that investors seem to like
but won't invest in at a cap of $x will have an easier time at $x/2. [20]
**Yes/no before valuation.**
Some investors want to know what your valuation is before they even talk to
you about investing. If your valuation has already been set by a prior
investment at a specific valuation or cap, you can tell them that number. But
if it isn't set because you haven't closed anyone yet, and they try to push
you to name a price, resist doing so. If this would be the first investor
you've closed, then this could be the tipping point of fundraising. That means
closing this investor is the first priority, and you need to get the
conversation onto that instead of being dragged sideways into a discussion of
price.
Fortunately there is a way to avoid naming a price in this situation. And it
is not just a negotiating trick; it's how you (both) should be operating. Tell
them that valuation is not the most important thing to you and that you
haven't thought much about it, that you are looking for investors you want to
partner with and who want to partner with you, and that you should talk first
about whether they want to invest at all. Then if they decide they do want to
invest, you can figure out a price. But first things first.
Since valuation isn't that important and getting fundraising rolling is, we
usually tell founders to give the first investor who commits as low a price as
they need to. This is a safe technique so long as you combine it with the next
one. [21]
**Beware "valuation sensitive" investors.**
Occasionally you'll encounter investors who describe themselves as "valuation
sensitive." What this means in practice is that they are compulsive
negotiators who will suck up a lot of your time trying to push your price
down. You should therefore never approach such investors first. While you
shouldn't chase high valuations, you also don't want your valuation to be set
artificially low because the first investor who committed happened to be a
compulsive negotiator. Some such investors have value, but the time to
approach them is near the end of fundraising, when you're in a position to say
"this is the price everyone else has paid; take it or leave it" and not mind
if they leave it. This way, you'll not only get market price, but it will also
take less time.
Ideally you know which investors have a reputation for being "valuation
sensitive" and can postpone dealing with them till last, but occasionally one
you didn't know about will pop up early on. The rule of doing breadth first
search weighted by expected value already tells you what to do in this case:
slow down your interactions with them.
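In code form, that rule looks something like the following sketch; the 0.3 slowdown weight is an arbitrary illustration, not a recommendation:

```python
# "Breadth first search weighted by expected value," applied to a
# valuation-sensitive lead: you don't drop them, you just weight them
# down so they get your attention last. The scoring is illustrative.

def prioritize(leads):
    """leads: list of (name, p_invest, likely_amount, valuation_sensitive)."""
    def weight(lead):
        name, p_invest, amount, sensitive = lead
        expected_value = p_invest * amount
        return expected_value * (0.3 if sensitive else 1.0)  # slow, don't stop
    return sorted(leads, key=weight, reverse=True)

for name, *_ in prioritize([("fund A", 0.3, 500_000, False),
                            ("negotiator B", 0.5, 500_000, True)]):
    print(name)  # fund A first, negotiator B later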
There are a handful of investors who will try to invest at a lower valuation
even when your price has already been set. Lowering your price is a backup
plan you resort to when you discover you've let the price get set too high to
close all the money you need. So you'd only want to talk to this sort of
investor if you were about to do that anyway. But since investor meetings have
to be arranged at least a few days in advance and you can't predict when
you'll need to resort to lowering your price, this means in practice that you
should approach this type of investor last if at all.
If you're surprised by a lowball offer, treat it as a backup offer and delay
responding to it. When someone makes an offer in good faith, you have a moral
obligation to respond in a reasonable time. But lowballing you is a dick move
that should be met with the corresponding countermove.
**Accept offers greedily.**
I'm a little leery of using the term "greedily" when writing about fundraising
lest non-programmers misunderstand me, but a greedy algorithm is simply one
that doesn't try to look into the future. A greedy algorithm takes the best of
the options in front of it right now. And that is how startups should approach
fundraising in phases 2 and later. Don't try to look into the future because
(a) the future is unpredictable, and indeed in this business you're often
being deliberately misled about it and (b) your first priority in fundraising
should be to get it finished and get back to work anyway.
If someone makes you an acceptable offer, take it. If you have multiple
incompatible offers, take the best. Don't reject an acceptable offer in the
hope of getting a better one in the future.
These simple rules cover a wide variety of cases. If you're raising money from
many investors, roll them up as they say yes. As you start to feel you've
raised enough, the threshold for acceptable will start to get higher.
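For the programmers, here is the whole rule as a sketch. The quality scores and the exact way the bar rises are illustrative assumptions; the essay's rule is just "no lookahead":

```python
# A sketch of "accept offers greedily": judge each offer as it
# arrives, with no lookahead, and let the bar rise as you approach
# your target.

def fundraise(offers, target, min_quality=1.0):
    raised, closed = 0, []
    for investor, amount, quality in offers:
        if raised >= target:
            break  # enough raised: stop fundraising, get back to work
        threshold = min_quality + raised / target  # bar rises as you fill up
        if quality >= threshold:  # acceptable right now? take it
            closed.append(investor)
            raised += amount
    return closed, raised

print(fundraise([("angel A", 150_000, 2.0),
                 ("fund B", 400_000, 1.5),
                 ("angel C", 300_000, 2.5)], target=700_000))
# -> (['angel A', 'fund B', 'angel C'], 850000)
```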
In practice offers exist for stretches of time, not points. So when you get an
acceptable offer that would be incompatible with others (e.g. an offer to
invest most of the money you need), you can tell the other investors you're
talking to that you have an offer good enough to accept, and give them a few
days to make their own. This could lose you some that might have made an offer
if they had more time. But by definition you don't care; the initial offer was
acceptable.
Some investors will try to prevent others from having time to decide by giving
you an "exploding" offer, meaning one that's only valid for a few days. Offers
from the very best investors explode less frequently and less rapidly — Fred
Wilson never gives exploding offers, for example — because they're confident
you'll pick them. But lower-tier investors sometimes give offers with very
short fuses, because they believe no one who had other options would choose
them. A deadline of three working days is acceptable. You shouldn't need more
than that if you've been talking to investors in parallel. But a deadline any
shorter is a sign you're dealing with a sketchy investor. You can usually call
their bluff, and you may need to. [22]
It might seem that instead of accepting offers greedily, your goal should be
to get the best investors as partners. That is certainly a good goal, but in
phase 2 "get the best investors" only rarely conflicts with "accept offers
greedily," because the best investors don't usually take any longer to decide
than the others. The only case where the two strategies give conflicting
advice is when you have to forgo an offer from an acceptable investor to see
if you'll get an offer from a better one. If you talk to investors in parallel
and push back on exploding offers with excessively short deadlines, that will
almost never happen. But if it does, "get the best investors" is in the
average case bad advice. The best investors are also the most selective,
because they get their pick of all the startups. They reject nearly everyone
they talk to, which means in the average case it's a bad trade to exchange a
definite offer from an acceptable investor for a potential offer from a better
one.
(The situation is different in phase 1. You can't apply to all the incubators
in parallel, because some offset their schedules to prevent this. In phase 1,
"accept offers greedily" and "get the best investors" do conflict, so if you
want to apply to multiple incubators, you should do it in such a way that the
ones you want most decide first.)
Sometimes when you're raising money from multiple investors, a series A will
emerge out of those conversations, and these rules even cover what to do in
that case. When an investor starts to talk to you about a series A, keep
taking smaller investments till they actually give you a termsheet. There's no
practical difficulty. If the smaller investments are on convertible notes,
they'll just convert into the series A round. The series A investor won't like
having all these other random investors as bedfellows, but if it bothers them
so much they should get on with giving you a termsheet. Till they do, you
don't know for sure they will, and the greedy algorithm tells you what to do.
[23]
**Don't sell more than 25% in phase 2.**
If you do well, you will probably raise a series A round eventually. I say
probably because things are changing with series A rounds. Startups may start
to skip them. But only one company we've funded has so far, so tentatively
assume the path to huge passes through an A round. [24]
Which means you should avoid doing things in earlier rounds that will mess up
raising an A round. For example, if you've sold more than about 40% of your
company total, it starts to get harder to raise an A round, because VCs worry
there will not be enough stock left to keep the founders motivated.
Our rule of thumb is not to sell more than 25% in phase 2, on top of whatever
you sold in phase 1, which should be less than 15%. If you're raising money on
uncapped notes, you'll have to guess what the eventual equity round valuation
might be. Guess conservatively.
(Since the goal of this rule is to avoid messing up the series A, there's
obviously an exception if you end up raising a series A in phase 2, as a
handful of startups do.)
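The arithmetic compounds in your favor, incidentally. A quick check, assuming each round simply dilutes everyone pro rata:

```python
# Dilution compounds: selling 25% in phase 2 on top of 15% in phase 1
# means founders keep 0.85 * 0.75 of the company, not 1 - 0.40.
# Assumes simple pro rata dilution each round; no option pool math.

def founders_share(fractions_sold):
    share = 1.0
    for sold in fractions_sold:
        share *= (1 - sold)
    return share

print(founders_share([0.15, 0.25]))  # 0.6375: 36.25% sold, under the ~40% danger line
```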
**Have one person handle fundraising.**
If you have multiple founders, pick one to handle fundraising so the other(s)
can keep working on the company. And since the danger of fundraising is not
the time taken up by the actual meetings but that it becomes the top idea in
your mind, the founder who handles fundraising should make a conscious effort
to insulate the other founder(s) from the details of the process. [25]
(If the founders mistrust one another, this could cause some friction. But if
the founders mistrust one another, you have worse problems to worry about than
how to organize fundraising.)
The founder who handles fundraising should be the CEO, who should in turn be
the most formidable of the founders. Even if the CEO is a programmer and
another founder is a salesperson? Yes. If you happen to be that type of
founding team, you're effectively a single founder when it comes to
fundraising.
It's ok to bring all the founders to meet an investor who will invest a lot,
and who needs this meeting as the final step before deciding. But wait till
that point. Introducing an investor to your cofounder(s) should be like
introducing a girl/boyfriend to your parents — something you do only when
things reach a certain stage of seriousness.
Even if there are still one or more founders focusing on the company during
fundraising, growth will slow. But try to get as much growth as you can,
because fundraising is a segment of time, not a point, and what happens to the
company during that time affects the outcome. If your numbers grow
significantly between two investor meetings, investors will be hot to close,
and if your numbers are flat or down they'll start to get cold feet.
**You'll need an executive summary and (maybe) a deck.**
Traditionally phase 2 fundraising consists of presenting a slide deck in
person to investors. Sequoia describes what such a deck should
[contain](http://www.sequoiacap.com/ideas), and since they're the customer you
can take their word for it.
I say "traditionally" because I'm ambivalent about decks, and (though perhaps
this is wishful thinking) they seem to be on the way out. A lot of the most
successful startups we fund never make decks in phase 2. They just talk to
investors and explain what they plan to do. Fundraising usually takes off fast
for the startups that are most successful at it, and they're thus able to
excuse themselves by saying that they haven't had time to make a deck.
You'll also want an executive summary, which should be no more than a page
long and describe in the most matter of fact language what you plan to do, why
it's a good idea, and what progress you've made so far. The point of the
summary is to remind the investor (who may have met many startups that day)
what you talked about.
Assume that if you give someone a copy of your deck or executive summary, it
will be passed on to whoever you'd least like to have it. But don't refuse on
that account to give copies to investors you meet. You just have to treat such
leaks as a cost of doing business. In practice it's not that high a cost.
Though founders are rightly indignant when their plans get leaked to
competitors, I can't think of a startup whose outcome has been affected by it.
Sometimes an investor will ask you to send them your deck and/or executive
summary before they decide whether to meet with you. I wouldn't do that. It's
a sign they're not really interested.
**Stop fundraising when it stops working.**
When do you stop fundraising? Ideally when you've raised enough. But what if
you haven't raised as much as you'd like? When do you give up?
It's hard to give general advice about this, because there have been cases of
startups that kept trying to raise money even when it seemed hopeless, and
miraculously succeeded. But what I usually tell founders is to stop
fundraising when you start to get a lot of air in the straw. When you're
drinking through a straw, you can tell when you get to the end of the liquid
because you start to get a lot of air in the straw. When your fundraising
options run out, they usually run out in the same way. Don't keep sucking on
the straw if you're just getting air. It's not going to get better.
**Don't get addicted to fundraising.**
Fundraising is a chore for most founders, but some find it more interesting
than working on their startup. The work at an early stage startup often
consists of unglamorous [schleps](schlep.html). Whereas fundraising, when it's
going well, can be quite the opposite. Instead of sitting in your grubby
apartment listening to users complain about bugs in your software, you're
being offered millions of dollars by famous investors over lunch at a nice
restaurant. [26]
The danger of fundraising is particularly acute for people who are good at it.
It's always fun to work on something you're good at. If you're one of these
people, beware. Fundraising is not what will make your company successful.
Listening to users complain about bugs in your software is what will make you
successful. And the big danger of getting addicted to fundraising is not
merely that you'll spend too long on it or raise too much money. It's that
you'll start to think of yourself as being already successful, and lose your
taste for the schleps you need to undertake to actually be successful.
Startups can be destroyed by this.
When I see a startup with young founders that is fabulously successful at
fundraising, I mentally decrease my estimate of the probability that they'll
succeed. The press may be writing about them as if they'd been anointed as the
next Google, but I'm thinking "this is going to end badly."
**Don't raise too much.**
Though only a handful of startups have to worry about this, it is possible to
raise too much. The dangers of raising too much are subtle but insidious. One
is that it will set impossibly high expectations. If you raise an excessive
amount of money, it will be at a high valuation, and the danger of raising
money at too high a valuation is that you won't be able to increase it
sufficiently the next time you raise money.
A company's valuation is expected to rise each time it raises money. If not
it's a sign of a company in trouble, which makes you unattractive to
investors. So if you raise money in phase 2 at a post-money valuation of $30
million, the pre-money valuation of your next round, if you want to raise one,
is going to have to be at least $50 million. And you have to be doing really,
really well to raise money at $50 million.
It's very dangerous to let the competitiveness of your current round set the
performance threshold you have to meet to raise your next one, because the two
are only loosely coupled.
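As a back-of-the-envelope, here is the bar a hot round sets for the next one. The roughly 1.7x markup is inferred from the $30 million and $50 million figures above; it is an illustration, not a rule:

```python
# The performance threshold a high valuation implies for your next round.

def next_round_bar(post_money, expected_markup=50 / 30):
    """Roughly the pre-money valuation your next round must reach to
    avoid looking like a company in trouble."""
    return post_money * expected_markup

print(f"${next_round_bar(30e6) / 1e6:.0f}M pre-money next time")  # -> $50M
```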
But the money itself may be more dangerous than the valuation. The more you
raise, the more you spend, and spending a lot of money can be disastrous for
an early stage startup. Spending a lot makes it harder to become profitable,
and perhaps even worse, it makes you more rigid, because the main way to spend
money is people, and the more people you have, the harder it is to change
directions. So if you do raise a huge amount of money, don't spend it. (You
will find that advice almost impossible to follow, so hot will be the money
burning a hole in your pocket, but I feel obliged at least to try.)
**Be nice.**
Startups raising money occasionally alienate investors by seeming arrogant.
Sometimes because they are arrogant, and sometimes because they're noobs
clumsily attempting to mimic the toughness they've observed in experienced
founders.
It's a mistake to behave arrogantly to investors. While there are certain
situations in which certain investors like certain kinds of arrogance,
investors vary greatly in this respect, and a flick of the whip that will
bring one to heel will make another roar with indignation. The only safe
strategy is never to seem arrogant at all.
That will require some diplomacy if you follow the advice I've given here,
because the advice I've given is essentially how to play hardball back. When
you refuse to meet an investor because you're not in fundraising mode, or slow
down your interactions with an investor who moves too slow, or treat a
contingent offer as the no it actually is and then, by accepting offers
greedily, end up leaving that investor out, you're going to be doing things
investors don't like. So you must cushion the blow with soft words. At YC we
tell startups they can blame us. And now that I've written this, everyone else
can blame me if they want. That plus the inexperience card should work in most
situations: sorry, we think you're great, but PG said startups shouldn't ___,
and since we're new to fundraising, we feel like we have to play it safe.
The danger of behaving arrogantly is greatest when you're doing well. When
everyone wants you, it's hard not to let it go to your head. Especially if
till recently no one wanted you. But restrain yourself. The startup world is a
small place, and startups have lots of ups and downs. This is a domain where
it's more true than usual that pride goeth before a fall. [27]
Be nice when investors reject you as well. The best investors are not wedded
to their initial opinion of you. If they reject you in phase 2 and you end up
doing well, they'll often invest in phase 3. In fact investors who reject you
are some of your warmest leads for future fundraising. Any investor who spent
significant time deciding probably came close to saying yes. Often you have
some internal champion who only needs a little more evidence to convince the
skeptics. So it's wise not merely to be nice to investors who reject you, but
(unless they behaved badly) to treat it as the beginning of a relationship.
**The bar will be higher next time.**
Assume the money you raise in phase 2 will be the last you ever raise. You
must make it to profitability on this money if you can.
Over the past several years, the investment community has evolved from a
strategy of anointing a small number of winners early and then supporting them
for years to a strategy of spraying money at early stage startups and then
ruthlessly culling them at the next stage. This is probably the optimal
strategy for investors. It's too hard to pick winners early on. Better to let
the market do it for you. But it often comes as a surprise to startups how
much harder it is to raise money in phase 3.
When your company is only a couple months old, all it has to be is a promising
experiment that's worth funding to see how it turns out. The next time you
raise money, the experiment has to have worked. You have to be on a trajectory
that leads to going public. And while there are some ideas where the proof
that the experiment worked might consist of e.g. query response times, usually
the proof is profitability. Usually phase 3 fundraising has to be type A
fundraising.
In practice there are two ways startups hose themselves between phases 2 and
3. Some are just too slow to become profitable. They raise enough money to
last for two years. There doesn't seem any particular urgency to be
profitable. So they don't make any effort to make money for a year. But by
that time, not making money has become habitual. When they finally decide to
try, they find they can't.
The other way companies hose themselves is by letting their expenses grow too
fast. Which almost always means hiring too many people. You usually shouldn't
go out and hire 8 people as soon as you raise money at phase 2. Usually you
want to wait till you have growth (and thus usually revenues) to justify them.
A lot of VCs will encourage you to hire aggressively. VCs generally tell you
to spend too much, partly because as money people they err on the side of
solving problems by spending money, and partly because they want you to sell
them more of your company in subsequent rounds. Don't listen to them.
**Don't make things complicated.**
I realize it may seem odd to sum up this huge treatise by saying that my
overall advice is not to make fundraising too complicated, but if you go back
and look at this list you'll see it's basically a simple recipe with a lot of
implications and edge cases. Avoid investors till you decide to raise money,
and then when you do, talk to them all in parallel, prioritized by expected
value, and accept offers greedily. That's fundraising in one sentence. Don't
introduce complicated optimizations, and don't let investors introduce
complications either.
Fundraising is not what will make you successful. It's just a means to an end.
Your primary goal should be to get it over with and get back to what will make
you successful — making things and talking to users — and the path I've
described will for most startups be the surest way to that destination.
Be good, take care of yourselves, and _don't leave the path_.
**Notes**
[1] The worst explosions happen when unpromising-seeming startups encounter
mediocre investors. Good investors don't lead startups on; their reputations
are too valuable. And startups that seem promising can usually get enough
money from good investors that they don't have to talk to mediocre ones. It is
the unpromising-seeming startups that have to resort to raising money from
mediocre investors. And it's particularly damaging when these investors flake,
because unpromising-seeming startups are usually more desperate for money.
(Not all unpromising-seeming startups do badly. Some are merely ugly ducklings
in the sense that they violate current startup fashions.)
[2] One YC founder told me:
> I think in general we've done ok at fundraising, but I managed to screw up
> twice at the exact same thing — trying to focus on building the company and
> fundraising at the same time.
[3] There is one subtle danger you have to watch out for here, which I warn
about later: beware of getting too high a valuation from an eager investor,
lest that set an impossibly high target when raising additional money.
[4] If they really need a meeting, then they're not ready to invest,
regardless of what they say. They're still deciding, which means you're being
asked to come in and convince them. Which is fundraising.
[5] Associates at VC firms regularly cold email startups. Naive founders think
"Wow, a VC is interested in us!" But an associate is not a VC. They have no
decision-making power. And while they may introduce startups they like to
partners at their firm, the partners discriminate against deals that come to
them this way. I don't know of a single VC investment that began with an
associate cold-emailing a startup. If you want to approach a specific firm,
get an intro to a partner from someone they respect.
It's ok to talk to an associate if you get an intro to a VC firm or they see
you at a Demo Day and they begin by having an associate vet you. That's not a
promising lead and should therefore get low priority, but it's not as
completely worthless as a cold email.
Because the title "associate" has gotten a bad reputation, a few VC firms have
started to give their associates the title "partner," which can make things
very confusing. If you're a YC startup you can ask us who's who; otherwise you
may have to do some research online. There may be a special title for actual
partners. If someone speaks for the firm in the press or a blog on the firm's
site, they're probably a real partner. If they're on boards of directors
they're probably a real partner.
There are titles between "associate" and "partner," including "principal" and
"venture partner." The meanings of these titles vary too much to generalize.
[6] For similar reasons, avoid casual conversations with potential acquirers.
They can lead to distractions even more dangerous than fundraising. Don't even
take a meeting with a potential acquirer unless you want to sell your company
right now.
[7] Joshua Reeves specifically suggests asking each investor to intro you to
two more investors.
Don't ask investors who say no for introductions to other investors. That will
in many cases be an anti-recommendation.
[8] This is not always as deliberate as it sounds. A lot of the delays and
disconnects between founders and investors are induced by the customs of the
venture business, which have evolved the way they have because they suit
investors' interests.
[9] One YC founder who read a draft of this essay wrote:
> This is the most important section. I think it might bear stating even more
> clearly. "Investors will deliberately affect more interest than they have to
> preserve optionality. If an investor seems very interested in you, they
> still probably won't invest. The solution for this is to assume the worst —
> that an investor is just feigning interest — until you get a definite
> commitment."
[10] Though you should probably pack investor meetings as closely as you can,
Jeff Byun mentions one reason not to: if you pack investor meetings too
closely, you'll have less time for your pitch to evolve.
Some founders deliberately schedule a handful of lame investors first, to get
the bugs out of their pitch.
[11] There is not an efficient market in this respect. Some of the most
useless investors are also the highest maintenance.
[12] Incidentally, this paragraph is sales 101. If you want to see it in
action, go talk to a car dealer.
[13] I know one very smooth founder who used to end investor meetings with
"So, can I count you in?" delivered as if it were "Can you pass the salt?"
Unless you're very smooth (if you're not sure...), do not do this yourself.
There is nothing more unconvincing, for an investor, than a nerdy founder
trying to deliver the lines meant for a smooth one.
Investors are fine with funding nerds. So if you're a nerd, just try to be a
good nerd, rather than doing a bad imitation of a smooth salesman.
[14] Ian Hogarth suggests a good way to tell how serious potential investors
are: the resources they expend on you after the first meeting. An investor
who's seriously interested will already be working to help you even before
they've committed.
[15] In principle you might have to think about so-called "signalling risk."
If a prestigious VC makes a small seed investment in you, what if they don't
want to invest the next time you raise money? Other investors might assume
that the VC knows you well, since they're an existing investor, and if they
don't want to invest in your next round, that must mean you suck. The reason I
say "in principle" is that in practice signalling hasn't been much of a
problem so far. It rarely arises, and in the few cases where it does, the
startup in question usually is doing badly and is doomed anyway.
If you have the luxury of choosing among seed investors, you can play it safe
by excluding VC firms. But it isn't critical to.
[16] Sometimes a competitor will deliberately threaten you with a lawsuit just
as you start fundraising, because they know you'll have to disclose the threat
to potential investors and they hope this will make it harder for you to raise
money. If this happens it will probably frighten you more than investors.
Experienced investors know about this trick, and know the actual lawsuits
rarely happen. So if you're attacked in this way, be forthright with
investors. They'll be more alarmed if you seem evasive than if you tell them
everything.
[17] A related trick is to claim that they'll only invest contingently on
other investors doing so because otherwise you'd be "undercapitalized." This
is almost always bullshit. They can't estimate your minimum capital needs that
precisely.
[18] You won't hire all those 20 people at once, and you'll probably have some
revenues before 18 months are out. But those too are acceptable or at least
accepted additions to the margin for error.
[19] Type A fundraising is so much better that it might even be worth doing
something different if it gets you there sooner. One YC founder told me that
if he were a first-time founder again he'd "leave ideas that are up-front
capital intensive to founders with established reputations."
[20] I don't know whether this happens because they're innumerate, or because
they believe they have zero ability to predict startup outcomes (in which case
this behavior at least wouldn't be irrational). In either case the
implications are similar.
[21] If you're a YC startup and you have an investor who for some reason
insists that you decide the price, any YC partner can estimate a market price
for you.
[22] You should respond in kind when investors behave upstandingly too. When
an investor makes you a clean offer with no deadline, you have a moral
obligation to respond promptly.
[23] Tell the investors talking to you about an A round about the smaller
investments you raise as you raise them. You owe them such updates on your cap
table, and this is also a good way to pressure them to act. They won't like
you raising other money and may pressure you to stop, but they can't
legitimately ask you to commit to them till they also commit to you. If they
want you to stop raising money, the way to do it is to give you a series A
termsheet with a no-shop clause.
You can relent a little if the potential series A investor has a great
reputation and they're clearly working fast to get you a termsheet,
particularly if a third party like YC is involved to ensure there are no
misunderstandings. But be careful.
[24] The company is Weebly, which made it to profitability on a seed
investment of $650k. They did try to raise a series A in the fall of 2008 but
(no doubt partly because it was the fall of 2008) the terms they were offered
were so bad that they decided to skip raising an A round.
[25] Another advantage of having one founder take fundraising meetings is that
you never have to negotiate in real time, which is something inexperienced
founders should avoid. One YC founder told me:
> Investors are professional negotiators and can negotiate on the spot very
> easily. If only one founder is in the room, you can say "I need to circle
> back with my co-founder" before making any commitments. I used to do this
> all the time.
[26] You'll be lucky if fundraising feels pleasant enough to become addictive.
More often you have to worry about the other extreme — becoming demoralized
when investors reject you. As one (very successful) YC founder wrote after
reading a draft of this:
> It's hard to mentally deal with the sheer scale of rejection in fundraising
> and if you are not in the right mindset you will fail. Users may love you
> but these supposedly smart investors may not understand you at all. At this
> point for me, rejection still rankles but I've come to accept that investors
> are just not super thoughtful for the most part and you need to play the
> game according to certain somewhat depressing rules (many of which you are
> listing) in order to win.
[27] The actual sentence in the King James Bible is "Pride goeth before
destruction, and an haughty spirit before a fall."
**Thanks** to Slava Akhmechet, Sam Altman, Nate Blecharczyk, Adora Cheung,
Bill Clerico, John Collison, Patrick Collison, Parker Conrad, Ron Conway,
Travis Deyle, Jason Freedman, Joe Gebbia, Mattan Griffel, Kevin Hale, Jacob
Heller, Ian Hogarth, Justin Kan, Professor Moriarty, Nikhil Nirmel, David
Petersen, Geoff Ralston, Joshua Reeves, Yuri Sagalov, Emmett Shear, Rajat
Suri, Garry Tan, and Nick Tomarello for reading drafts of this.
August 2005
Thirty years ago, one was supposed to work one's way up the corporate ladder.
That's less the rule now. Our generation wants to get paid up front. Instead
of developing a product for some big company in the expectation of getting job
security in return, we develop the product ourselves, in a startup, and sell
it to the big company. At the very least we want options.
Among other things, this shift has created the appearance of a rapid increase
in economic inequality. But really the two cases are not as different as they
look in economic statistics.
Economic statistics are misleading because they ignore the value of safe jobs.
An easy job from which one can't be fired is worth money; exchanging the two
is one of the commonest forms of corruption. A sinecure is, in effect, an
annuity. Except sinecures don't appear in economic statistics. If they did, it
would be clear that in practice socialist countries have nontrivial
disparities of wealth, because they usually have a class of powerful
bureaucrats who are paid mostly by seniority and can never be fired.
While not a sinecure, a position on the corporate ladder was genuinely
valuable, because big companies tried not to fire people, and promoted from
within based largely on seniority. A position on the corporate ladder had a
value analogous to the "goodwill" that is a very real element in the valuation
of companies. It meant one could expect future high paying jobs.
One of the main causes of the decay of the corporate ladder is the trend for
takeovers that began in the 1980s. Why waste your time climbing a ladder that
might disappear before you reach the top?
And, by no coincidence, the corporate ladder was one of the reasons the early
corporate raiders were so successful. It's not only economic statistics that
ignore the value of safe jobs. Corporate balance sheets do too. One reason it
was profitable to carve up 1980s companies and sell them for parts was that
they hadn't formally acknowledged their implicit debt to employees who had
done good work and expected to be rewarded with high-paying executive jobs
when their time came.
In the movie _Wall Street_ , Gordon Gekko ridicules a company overloaded with
vice presidents. But the company may not be as corrupt as it seems; those VPs'
cushy jobs were probably payment for work done earlier.
I like the new model better. For one thing, it seems a bad plan to treat jobs
as rewards. Plenty of good engineers got made into bad managers that way. And
the old system meant people had to deal with a lot more corporate politics, in
order to protect the work they'd invested in a position on the ladder.
The big disadvantage of the new system is that it involves more
[risk](inequality.html). If you develop ideas in a startup instead of within a
big company, any number of random factors could sink you before you can
finish. But maybe the older generation would laugh at me for saying that the
way we do things is riskier. After all, projects within big companies were
always getting cancelled as a result of arbitrary decisions from higher up. My
father's entire industry (breeder reactors) disappeared that way.
For better or worse, the idea of the corporate ladder is probably gone for
good. The new model seems more liquid, and more efficient. But it is less of a
change, financially, than one might think. Our fathers weren't _that_ stupid.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
October 2007
_(This essay is derived from a keynote at FOWA in October 2007.)_
There's something interesting happening right now. Startups are undergoing the
same transformation that technology does when it becomes cheaper.
It's a pattern we see over and over in technology. Initially there's some
device that's very expensive and made in small quantities. Then someone
discovers how to make them cheaply; many more get built; and as a result they
can be used in new ways.
Computers are a familiar example. When I was a kid, computers were big,
expensive machines built one at a time. Now they're a commodity. Now we can
stick computers in everything.
This pattern is very old. Most of the turning points in economic history are
instances of it. It happened to steel in the 1850s, and to power in the 1780s.
It happened to cloth manufacture in the thirteenth century, generating the
wealth that later brought about the Renaissance. Agriculture itself was an
instance of this pattern.
Now as well as being produced by startups, this pattern is happening _to_
startups. It's so cheap to start web startups that orders of magnitude more
will be started. If the pattern holds true, that should cause dramatic
changes.
**1. Lots of Startups**
So my first prediction about the future of web startups is pretty
straightforward: there will be a lot of them. When starting a startup was
expensive, you had to get the permission of investors to do it. Now the only
threshold is courage.
Even that threshold is getting lower, as people watch others take the plunge
and survive. In the last batch of startups we funded, we had several founders
who said they'd thought of applying before, but weren't sure and got jobs
instead. It was only after hearing reports of friends who'd done it that they
decided to try it themselves.
Starting a startup is hard, but having a 9 to 5 job is hard too, and in some
ways a worse kind of hard. In a startup you have lots of worries, but you
don't have that feeling that your life is flying by like you do in a big
company. Plus in a startup you could make much more money.
As word spreads that startups work, the number may grow to a point that would
now seem surprising.
We now think of it as normal to have a job at a company, but this is the
thinnest of historical veneers. Just two or three lifetimes ago, most people
in what are now called industrialized countries lived by farming. So while it
may seem surprising to propose that large numbers of people will change the
way they make a living, it would be more surprising if they didn't.
**2. Standardization**
When technology makes something dramatically cheaper, standardization always
follows. When you make things in large volumes you tend to standardize
everything that doesn't need to change.
At Y Combinator we still only have four people, so we try to standardize
everything. We could hire employees, but we want to be forced to figure out
how to scale investing.
We often tell startups to release a minimal version one quickly, then let the
needs of the users determine what to do next. In essence, let the market
design the product. We've done the same thing ourselves. We think of the
techniques we're developing for dealing with large numbers of startups as like
software. Sometimes it literally is software, like [Hacker
News](http://news.ycombinator.com) and our application system.
One of the most important things we've been working on standardizing are
investment terms. Till now investment terms have been individually negotiated.
This is a problem for founders, because it makes raising money take longer and
cost more in legal fees. So as well as using the same paperwork for every deal
we do, we've commissioned generic angel paperwork that all the startups we
fund can use for future rounds.
Some investors will still want to cook up their own deal terms. Series A
rounds, where you raise a million dollars or more, will be custom deals for
the foreseeable future. But I think angel rounds will start to be done mostly
with standardized agreements. An angel who wants to insert a bunch of
complicated terms into the agreement is probably not one you want anyway.
**3. New Attitude to Acquisition**
Another thing I see starting to get standardized is acquisitions. As the
volume of startups increases, big companies will start to develop standardized
procedures that make acquisitions little more work than hiring someone.
Google is the leader here, as in so many areas of technology. They buy a lot
of startups — more than most people realize, because they only announce a
fraction of them. And being Google, they're figuring out how to do it
efficiently.
One problem they've solved is how to think about acquisitions. For most
companies, acquisitions still carry some stigma of inadequacy. Companies do
them because they have to, but there's usually some feeling they shouldn't
have to—that their own programmers should be able to build everything they
need.
Google's example should cure the rest of the world of this idea. Google has by
far the best programmers of any public technology company. If they don't have
a problem doing acquisitions, the others should have even less problem.
However many Google does, Microsoft should do ten times as many.
One reason Google doesn't have a problem with acquisitions is that they know
first-hand the quality of the people they can get that way. Larry and Sergey
only started Google after making the rounds of the search engines trying to
sell their idea and finding no takers. They've _been_ the guys coming in to
visit the big company, so they know who might be sitting across that
conference table from them.
**4. Riskier Strategies are Possible**
Risk is always proportionate to reward. The way to get really big returns is
to do things that seem crazy, like starting a new search engine in 1998, or
turning down a billion dollar acquisition offer.
This has traditionally been a problem in venture funding. Founders and
investors have different attitudes to risk. Knowing that risk is on average
proportionate to reward, investors like risky strategies, while founders, who
don't have a big enough sample size to care what's true on average, tend to be
more conservative.
If startups are easy to start, this conflict goes away, because founders can
start them younger, when it's rational to take more risk, and can start more
startups total in their careers. When founders can do lots of startups, they
can start to look at the world in the same portfolio-optimizing way as
investors. And that means the overall amount of wealth created can be greater,
because strategies can be riskier.
**5. Younger, Nerdier Founders**
If startups become a cheap commodity, more people will be able to have them,
just as more people could have computers once microprocessors made them cheap.
And in particular, more younger, more technical founders will be able to
start startups than could before.
Back when it cost a lot to start a startup, you had to convince investors to
let you do it. And that required very different skills from actually doing the
startup. If investors were perfect judges, the two would require exactly the
same skills. But unfortunately most investors are terrible judges. I know
because I see behind the scenes what an enormous amount of work it takes to
raise money, and the amount of selling required in an industry is always
inversely proportional to the judgement of the buyers.
Fortunately, if startups get cheaper to start, there's another way to convince
investors. Instead of going to venture capitalists with a business plan and
trying to convince them to fund it, you can get a product launched on a few
tens of thousands of dollars of seed money from us or your uncle, and approach
them with a working company instead of a plan for one. Then instead of having
to seem smooth and confident, you can just point them to Alexa.
This way of convincing investors is better suited to hackers, who often went
into technology in part because they felt uncomfortable with the amount of
fakeness required in other fields.
**6. Startup Hubs Will Persist**
It might seem that if startups get cheap to start, it will mean the end of
startup hubs like Silicon Valley. If all you need to start a startup is rent
money, you should be able to do it anywhere.
This is kind of true and kind of false. It's true that you can now _start_ a
startup anywhere. But you have to do more with a startup than just start it.
You have to make it succeed. And that is more likely to happen in a startup
hub.
I've thought a lot about this question, and it seems to me the increasing
cheapness of web startups will if anything increase the importance of startup
hubs. The value of startup hubs, like centers for any kind of business, lies
in something very old-fashioned: face to face meetings. No technology in the
immediate future will replace walking down University Ave and running into a
friend who tells you how to fix a bug that's been bothering you all weekend,
or visiting a friend's startup down the street and ending up in a conversation
with one of their investors.
The question of whether to be in a startup hub is like the question of whether
to take outside investment. The question is not whether you _need_ it, but
whether it brings any advantage at all. Because anything that brings an
advantage will give your competitors an advantage over you if they do it and
you don't. So if you hear someone saying "we don't need to be in Silicon
Valley," that use of the word "need" is a sign they're not even thinking about
the question right.
And while startup hubs are as powerful magnets as ever, the increasing
cheapness of starting a startup means the particles they're attracting are
getting lighter. A startup now can be just a pair of 22 year old guys. A
company like that can move much more easily than one with 10 people, half of
whom have kids.
We know because we make people move for Y Combinator, and it doesn't seem to
be a problem. The advantage of being able to work together face to face for
three months outweighs the inconvenience of moving. Ask anyone who's done it.
The mobility of seed-stage startups means that seed funding is a national
business. One of the most common emails we get is from people asking if we can
help them set up a local clone of Y Combinator. But this just wouldn't work.
Seed funding isn't regional, just as big research universities aren't.
Is seed funding not merely national, but international? Interesting question.
There are signs it may be. We've had an ongoing stream of founders from
outside the US, and they tend to do particularly well, because they're all
people who were so determined to succeed that they were willing to move to
another country to do it.
The more mobile startups get, the harder it would be to start new silicon
valleys. If startups are mobile, the best local talent will go to the real
Silicon Valley, and all they'll get at the local one will be the people who
didn't have the energy to move.
This is not a nationalistic idea, incidentally. It's cities that compete, not
countries. Atlanta is just as hosed as Munich.
**7. Better Judgement Needed**
If the number of startups increases dramatically, then the people whose job is
to judge them are going to have to get better at it. I'm thinking particularly
of investors and acquirers. We now get on the order of 1000 applications a
year. What are we going to do if we get 10,000?
That's actually an alarming idea. But we'll figure out some kind of answer.
We'll have to. It will probably involve writing some software, but fortunately
we can do that.
Acquirers will also have to get better at picking winners. They generally do
better than investors, because they pick later, when there's more performance
to measure. But even at the most advanced acquirers, identifying companies to
buy is extremely ad hoc, and completing the acquisition often involves a great
deal of unnecessary friction.
I think acquirers may eventually have chief acquisition officers who will both
identify good acquisitions and make the deals happen. At the moment those two
functions are separate. Promising new startups are often discovered by
developers. If someone powerful enough wants to buy them, the deal is handed
over to corp dev guys to negotiate. It would be better if both were combined
in one group, headed by someone with a technical background and some vision of
what they wanted to accomplish. Maybe in the future big companies will have
both a VP of Engineering responsible for technology developed in-house, and a
CAO responsible for bringing technology in from outside.
At the moment, there is no one within big companies who gets in trouble when
they buy a startup for $200 million that they could have bought earlier for
$20 million. There should start to be someone who gets in trouble for that.
**8. College Will Change**
If the best hackers start their own companies after college instead of getting
jobs, that will change what happens in college. Most of these changes will be
for the better. I think the experience of college is warped in a bad way by
the expectation that afterward you'll be judged by potential employers.
One change will be in the meaning of "after college," which will switch from
when one graduates from college to when one leaves it. If you're starting your
own company, why do you need a degree? We don't encourage people to start
startups during college, but the best founders are certainly capable of it.
Some of the most successful companies we've funded were started by undergrads.
I grew up in a time where college degrees seemed really important, so I'm
alarmed to be saying things like this, but there's nothing magical about a
degree. There's nothing that magically changes after you take that last exam.
The importance of degrees is due solely to the administrative needs of large
organizations. These can certainly affect your life—it's hard to get into grad
school, or to get a work visa in the US, without an undergraduate degree—but
tests like this will matter less and less.
As well as mattering less whether students get degrees, it will also start to
matter less where they go to college. In a startup you're judged by users, and
they don't care where you went to college. So in a world of startups, elite
universities will play less of a role as gatekeepers. In the US it's a
national scandal how easily children of rich parents game college admissions.
But the way this problem ultimately gets solved may not be by reforming the
universities but by going around them. We in the technology world are used to
that sort of solution: you don't beat the incumbents; you redefine the problem
to make them irrelevant.
The greatest value of universities is not the brand name or perhaps even the
classes so much as the people you meet. If it becomes common to start a
startup after college, students may start trying to maximize this. Instead of
focusing on getting internships at companies they want to work for, they may
start to focus on working with other students they want as cofounders.
What students do in their classes will change too. Instead of trying to get
good grades to impress future employers, students will try to learn things.
We're talking about some pretty dramatic changes here.
**9. Lots of Competitors**
If it gets easier to start a startup, it's easier for competitors too. That
doesn't erase the advantage of increased cheapness, however. You're not all
playing a zero-sum game. There's not some fixed number of startups that can
succeed, regardless of how many are started.
In fact, I don't think there's any limit to the number of startups that could
succeed. Startups succeed by creating wealth, which is the satisfaction of
people's desires. And people's desires seem to be effectively infinite, at
least in the short term.
What the increasing number of startups does mean is that you won't be able to
sit on a good idea. Other people have your idea, and they'll be increasingly
likely to do something about it.
**10. Faster Advances**
There's a good side to that, at least for consumers of technology. If people
get right to work implementing ideas instead of sitting on them, technology
will evolve faster.
Some kinds of innovations happen a company at a time, like the punctuated
equilibrium model of evolution. There are some kinds of ideas that are so
threatening that it's hard for big companies even to think of them. Look at
what a hard time Microsoft is having discovering web apps. They're like a
character in a movie that everyone in the audience can see something bad is
about to happen to, but who can't see it himself. The big innovations that
happen a company at a time will obviously happen faster if the rate of new
companies increases.
But in fact there will be a double speed increase. People won't wait as long
to act on new ideas, but also those ideas will increasingly be developed
within startups rather than big companies. Which means technology will evolve
faster per company as well.
Big companies are just not a good place to make things happen fast. I talked
recently to a founder whose startup had been acquired by a big company. He was
a precise sort of guy, so he'd measured their productivity before and after.
He counted lines of code, which can be a dubious measure, but in this case was
meaningful because it was the same group of programmers. He found they were
one thirteenth as productive after the acquisition.
The company that bought them was not a particularly stupid one. I think what
he was measuring was mostly the cost of bigness. I experienced this myself,
and his number sounds about right. There's something about big companies that
just sucks the energy out of you.
Imagine what all that energy could do if it were put to use. There is an
enormous latent capacity in the world's hackers that most people don't even
realize is there. That's the main reason we do Y Combinator: to let loose all
this energy by making it easy for hackers to start their own startups.
**A Series of Tubes**
The process of starting startups is currently like the plumbing in an old
house. The pipes are narrow and twisty, and there are leaks in every joint. In
the future this mess will gradually be replaced by a single, huge pipe. The
water will still have to get from A to B, but it will get there faster and
without the risk of spraying out through some random leak.
This will change a lot of things for the better. In a big, straight pipe like
that, the force of being measured by one's performance will propagate back
through the whole system. Performance is always the ultimate test, but there
are so many kinks in the plumbing now that most people are insulated from it
most of the time. So you end up with a world in which high school students
think they need to get good grades to get into elite colleges, and college
students think they need to get good grades to impress employers, within which
the employees waste most of their time in political battles, and from which
consumers have to buy anyway because there are so few choices. Imagine if that
sequence became a big, straight pipe. Then the effects of being measured by
performance would propagate all the way back to high school, flushing out all
the arbitrary stuff people are measured by now. That is the future of web
startups.
**Thanks** to Brian Oberkirch and Simon Willison for inviting me to speak, and
the crew at Carson Systems for making everything run smoothly.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
May 2005
_(This essay is derived from a talk at the Berkeley CSUA.)_
The three big powers on the Internet now are Yahoo, Google, and Microsoft.
Average age of their founders: 24. So it is pretty well established now that
grad students can start successful companies. And if grad students can do it,
why not undergrads?
Like everything else in technology, the cost of starting a startup has
decreased dramatically. Now it's so low that it has disappeared into the
noise. The main cost of starting a Web-based startup is food and rent. Which
means it doesn't cost much more to start a company than to be a total slacker.
You can probably start a startup on ten thousand dollars of seed funding, if
you're prepared to live on ramen.
The less it costs to start a company, the less you need the permission of
investors to do it. So a lot of people will be able to start companies now who
never could have before.
The most interesting subset may be those in their early twenties. I'm not so
excited about founders who have everything investors want except intelligence,
or everything except energy. The most promising group to be liberated by the
new, lower threshold are those who have everything investors want except
experience.
**Market Rate**
I once claimed that [nerds](nerds.html) were unpopular in secondary school
mainly because they had better things to do than work full-time at being
popular. Some said I was just telling people what they wanted to hear. Well,
I'm now about to do that in a spectacular way: I think undergraduates are
undervalued.
Or more precisely, I think few realize the huge spread in the value of 20 year
olds. Some, it's true, are not very capable. But others are more capable than
all but a handful of 30 year olds. [1]
Till now the problem has always been that it's difficult to pick them out.
Every VC in the world, if they could go back in time, would try to invest in
Microsoft. But which would have then? How many would have understood that this
particular 19 year old was Bill Gates?
It's hard to judge the young because (a) they change rapidly, (b) there is
great variation between them, and (c) they're individually inconsistent. That
last one is a big problem. When you're young, you occasionally say and do
stupid things even when you're smart. So if the algorithm is to filter out
people who say stupid things, as many investors and employers unconsciously
do, you're going to get a lot of false positives.
Most organizations who hire people right out of college are only aware of the
average value of 22 year olds, which is not that high. And so the idea for
most of the twentieth century was that everyone had to begin as a trainee in
some [entry-level](http://slashdot.org/comments.pl?sid=158756&cid=13299057)
job. Organizations realized there was a lot of variation in the incoming
stream, but instead of pursuing this thought they tended to suppress it, in
the belief that it was good for even the most promising kids to start at the
bottom, so they didn't get swelled heads.
The most productive young people will _always_ be undervalued by large
organizations, because the young have no performance to measure yet, and any
error in guessing their ability will tend toward the mean.
What's an especially productive 22 year old to do? One thing you can do is go
over the heads of organizations, directly to the users. Any company that hires
you is, economically, acting as a proxy for the customer. The rate at which
they value you (though they may not consciously realize it) is an attempt to
guess your value to the user. But there's a way to appeal their judgement. If
you want, you can opt to be valued directly by users, by starting your own
company.
The market is a lot more discerning than any employer. And it is completely
non-discriminatory. On the Internet, nobody knows you're a dog. And more to
the point, nobody knows you're 22. All users care about is whether your site
or software gives them what they want. They don't care if the person behind it
is a high school kid.
If you're really productive, why not make employers pay market rate for you?
Why go work as an ordinary employee for a big company, when you could start a
startup and make them buy it to get you?
When most people hear the word "startup," they think of the famous ones that
have gone public. But most startups that succeed do it by getting bought. And
usually the acquirer doesn't just want the technology, but the people who
created it as well.
Often big companies buy startups before they're profitable. Obviously in such
cases they're not after revenues. What they want is the development team and
the software they've built so far. When a startup gets bought for 2 or 3
million six months in, it's really more of a hiring bonus than an acquisition.
I think this sort of thing will happen more and more, and that it will be
better for everyone. It's obviously better for the people who start the
startup, because they get a big chunk of money up front. But I think it will
be better for the acquirers too. The central problem in big companies, and the
main reason they're so much less productive than small companies, is the
difficulty of valuing each person's work. Buying larval startups solves that
problem for them: the acquirer doesn't pay till the developers have proven
themselves. Acquirers are protected on the downside, but still get most of the
upside.
**Product Development**
Buying startups also solves another problem afflicting big companies: they
can't do product development. Big companies are good at extracting the value
from existing products, but bad at creating new ones.
Why? It's worth studying this phenomenon in detail, because this is the raison
d'être of startups.
To start with, most big companies have some kind of turf to protect, and this
tends to warp their development decisions. For example, [Web-based](road.html)
applications are hot now, but within Microsoft there must be a lot of
ambivalence about them, because the very idea of Web-based software threatens
the desktop. So any Web-based application that Microsoft ends up with will
probably, like Hotmail, be something developed outside the company.
Another reason big companies are bad at developing new products is that the
kind of people who do that tend not to have much power in big companies
(unless they happen to be the CEO). Disruptive technologies are developed by
disruptive people. And they either don't work for the big company, or have
been outmaneuvered by yes-men and have comparatively little influence.
Big companies also lose because they usually only build one of each thing.
When you only have one Web browser, you can't do anything really risky with
it. If ten different startups design ten different Web browsers and you take
the best, you'll probably get something better.
The more general version of this problem is that there are too many new ideas
for companies to explore them all. There might be 500 startups right now who
think they're making something Microsoft might buy. Even Microsoft probably
couldn't manage 500 development projects in-house.
Big companies also don't pay people the right way. People developing a new
product at a big company get paid roughly the same whether it succeeds or
fails. People at a startup expect to get rich if the product succeeds, and get
nothing if it fails. [2] So naturally the people at the startup work a lot
harder.
The mere bigness of big companies is an obstacle. In startups, developers are
often forced to talk directly to users, whether they want to or not, because
there is no one else to do sales and support. It's painful doing sales, but
you learn much more from trying to sell people something than reading what
they said in focus groups.
And then of course, big companies are bad at product development because
they're bad at everything. Everything happens slower in big companies than
small ones, and product development is something that has to happen fast,
because you have to go through a lot of iterations to get something good.
**Trend**
I think the trend of big companies buying startups will only accelerate. One
of the biggest remaining obstacles is pride. Most companies, at least
unconsciously, feel they ought to be able to develop stuff in house, and that
buying startups is to some degree an admission of failure. And so, as people
generally do with admissions of failure, they put it off for as long as
possible. That makes the acquisition very expensive when it finally happens.
What companies should do is go out and discover startups when they're young,
before VCs have puffed them up into something that costs hundreds of millions
to acquire. Much of what VCs add, the acquirer doesn't need anyway.
Why don't acquirers try to predict the companies they're going to have to buy
for hundreds of millions, and grab them early for a tenth or a twentieth of
that? Because they can't predict the winners in advance? If they're only
paying a twentieth as much, they only have to predict a twentieth as well.
Surely they can manage that.
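To make that arithmetic concrete, here is a minimal sketch with made-up numbers (the $200M and $10M figures are illustrative assumptions, not from the essay):

```python
# Hypothetical prices: a VC-inflated acquisition costs $200M; the same
# startup grabbed early costs $10M. Buying 20 early-stage companies at
# $10M each costs what one late acquisition would, so only 1 pick in 20
# has to turn into the company you'd otherwise have bought.
late_price = 200_000_000
early_price = 10_000_000
picks = late_price // early_price
print(picks)      # 20
print(1 / picks)  # 0.05 -- the hit rate needed to break even
```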
I think companies that acquire technology will gradually learn to go after
earlier stage startups. They won't necessarily buy them outright. The solution
may be some hybrid of investment and acquisition: for example, to buy a chunk
of the company and get an option to buy the rest later.
When companies buy startups, they're effectively fusing recruiting and product
development. And I think that's more efficient than doing the two separately,
because you always get people who are really committed to what they're working
on.
Plus this method yields teams of developers who already work well together.
Any conflicts between them have been ironed out under the very hot iron of
running a startup. By the time the acquirer gets them, they're finishing one
another's sentences. That's valuable in software, because so many bugs occur
at the boundaries between different people's code.
**Investors**
The increasing cheapness of starting a company doesn't just give hackers more
power relative to employers. It also gives them more power relative to
investors.
The conventional wisdom among VCs is that hackers shouldn't be allowed to run
their own companies. The founders are supposed to accept MBAs as their bosses,
and themselves take on some title like Chief Technical Officer. There may be
cases where this is a good idea. But I think founders will increasingly be
able to push back in the matter of control, because they just don't need the
investors' money as much as they used to.
Startups are a comparatively new phenomenon. Fairchild Semiconductor is
considered the first VC-backed startup, and they were founded in 1957, less
than fifty years ago. Measured on the time scale of social change, what we
have now is pre-beta. So we shouldn't assume the way startups work now is the
way they have to work.
Fairchild needed a lot of money to get started. They had to build actual
factories. What does the first round of venture funding for a Web-based
startup get spent on today? More money can't get software written faster; it
isn't needed for facilities, because those can now be quite cheap; all money
can really buy you is sales and marketing. A sales force is worth something,
I'll admit. But marketing is increasingly irrelevant. On the Internet,
anything genuinely good will spread by word of mouth.
Investors' power comes from money. When startups need less money, investors
have less power over them. So future founders may not have to accept new CEOs
if they don't want them. The VCs will have to be dragged kicking and screaming
down this road, but like many things people have to be dragged kicking and
screaming toward, it may actually be good for them.
Google is a sign of the way things are going. As a condition of funding, their
investors insisted they hire someone old and experienced as CEO. But from what
I've heard the founders didn't just give in and take whoever the VCs wanted.
They delayed for an entire year, and when they did finally take a CEO, they
chose a guy with a PhD in computer science.
It sounds to me as if the founders are still the most powerful people in the
company, and judging by Google's performance, their youth and inexperience
doesn't seem to have hurt them. Indeed, I suspect Google has done better than
they would have if the founders had given the VCs what they wanted, when they
wanted it, and let some MBA take over as soon as they got their first round of
funding.
I'm not claiming the business guys installed by VCs have no value. Certainly
they have. But they don't need to become the founders' bosses, which is what
that title CEO means. I predict that in the future the executives installed by
VCs will increasingly be COOs rather than CEOs. The founders will run
engineering directly, and the rest of the company through the COO.
**The Open Cage**
With both employers and investors, the balance of power is slowly shifting
towards the young. And yet they seem the last to realize it. Only the most
ambitious undergrads even consider starting their own company when they
graduate. Most just want to get a job.
Maybe this is as it should be. Maybe if the idea of starting a startup is
intimidating, you filter out the uncommitted. But I suspect the filter is set
a little too high. I think there are people who could, if they tried, start
successful startups, and who instead let themselves be swept into the intake
ducts of big companies.
Have you ever noticed that when animals are let out of cages, they don't
always realize at first that the door's open? Often they have to be poked with
a stick to get them out. Something similar happened with blogs. People could
have been publishing online in 1995, and yet blogging has only really taken
off in the last couple years. In 1995 we thought only professional writers
were entitled to publish their ideas, and that anyone else who did was a
crank. Now publishing online is becoming so popular that everyone wants to do
it, even print journalists. But blogging has not taken off recently because of
any technical innovation; it just took eight years for everyone to realize the
cage was open.
I think most undergrads don't realize yet that the economic cage is open. A
lot have been told by their parents that the route to success is to get a good
job. This was true when their parents were in college, but it's less true now.
The route to success is to build something valuable, and you don't have to be
working for an existing company to do that. Indeed, you can often do it better
if you're not.
When I talk to undergrads, what surprises me most about them is how
conservative they are. Not politically, of course. I mean they don't seem to
want to take risks. This is a mistake, because the younger you are, the more
risk you can take.
**Risk**
Risk and reward are always proportionate. For example, stocks are riskier than
bonds, and over time tend to have greater returns. So why does anyone invest in
bonds? The catch is that phrase "over time." Stocks will generate greater
returns over thirty years, but they might lose value from year to year. So
what you should invest in depends on how soon you need the money. If you're
young, you should take the riskiest investments you can find.
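The time-horizon point is easy to check with a toy simulation. The return figures below (a 7% mean with 17% volatility for stocks, a flat 3% for bonds) are illustrative assumptions, not market data:

```python
import random

def simulate(years, trials=20_000):
    """Count how often volatile 'stocks' beat steady 'bonds' over a horizon."""
    wins = down_years = 0
    for _ in range(trials):
        stocks = bonds = 1.0
        for _ in range(years):
            r = random.gauss(0.07, 0.17)  # assumed stock year: 7% +/- 17%
            down_years += r < 0
            stocks *= 1 + r
            bonds *= 1.03                 # assumed bond year: steady 3%
        wins += stocks > bonds
    return wins / trials, down_years / (trials * years)

beat, down = simulate(30)
print(f"stocks beat bonds over 30 years in {beat:.0%} of trials")
print(f"yet stocks lose value in about {down:.0%} of individual years")
```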
All this talk about investing may seem very theoretical. Most undergrads
probably have more debts than assets. They may feel they have nothing to
invest. But that's not true: they have their time to invest, and the same rule
about risk applies there. Your early twenties are exactly the time to take
insane career risks.
The reason risk is always proportionate to reward is that market forces make
it so. People will pay extra for stability. So if you choose stability-- by
buying bonds, or by going to work for a big company-- it's going to cost you.
Riskier career moves pay better on average, because there is less demand for
them. Extreme choices like starting a startup are so frightening that most
people won't even try. So you don't end up having as much competition as you
might expect, considering the prizes at stake.
The math is brutal. While perhaps 9 out of 10 startups fail, the one that
succeeds will pay the founders more than 10 times what they would have made in
an ordinary job. [3] That's the sense in which startups pay better "on
average."
Remember that. If you start a startup, you'll probably fail. Most startups
fail. It's the nature of the business. But it's not necessarily a mistake to
try something that has a 90% chance of failing, if you can afford the risk.
Failing at 40, when you have a family to support, could be serious. But if you
fail at 22, so what? If you try to start a startup right out of college and it
tanks, you'll end up at 23 broke and a lot smarter. Which, if you think about
it, is roughly what you hope to get from a graduate program.
Even if your startup does tank, you won't harm your prospects with employers.
To make sure, I asked some friends who work for big companies. I asked managers
at Yahoo, Google, Amazon, Cisco and Microsoft how they'd feel about two
candidates, both 24, with equal ability, one who'd tried to start a startup
that tanked, and another who'd spent the two years since college working as a
developer at a big company. Every one responded that they'd prefer the guy
who'd tried to start his own company. Zod Nazem, who's in charge of
engineering at Yahoo, said:
> I actually put more value on the guy with the failed startup. And you can
> quote me!
So there you have it. Want to get hired by Yahoo? Start your own company.
**The Man is the Customer**
If even big employers think highly of young hackers who start companies, why
don't more do it? Why are undergrads so conservative? I think it's because
they've spent so much time in institutions.
The first twenty years of everyone's life consists of being piped from one
institution to another. You probably didn't have much choice about the
secondary schools you went to. And after high school it was probably
understood that you were supposed to go to college. You may have had a few
different colleges to choose between, but they were probably pretty similar.
So by this point you've been riding on a subway line for twenty years, and the
next stop seems to be a job.
Actually college is where the line ends. Superficially, going to work for a
company may feel like just the next in a series of institutions, but
underneath, everything is different. The end of school is the fulcrum of your
life, the point where you go from net consumer to net producer.
The other big change is that now, you're steering. You can go anywhere you
want. So it may be worth standing back and understanding what's going on,
instead of just doing the default thing.
All through college, and probably long before that, most undergrads have been
thinking about what employers want. But what really matters is what customers
want, because they're the ones who give employers the money to pay you.
So instead of thinking about what employers want, you're probably better off
thinking directly about what users want. To the extent there's any difference
between the two, you can even use that to your advantage if you start a
company of your own. For example, big companies like docile conformists. But
this is merely an artifact of their bigness, not something customers need.
**Grad School**
I didn't consciously realize all this when I was graduating from college--
partly because I went straight to grad school. Grad school can be a pretty
good deal, even if you think of one day starting a startup. You can start one
when you're done, or even pull the ripcord part way through, like the founders
of Yahoo and Google.
Grad school makes a good launch pad for startups, because you're collected
together with a lot of smart people, and you have bigger chunks of time to
work on your own projects than an undergrad or corporate employee would. As
long as you have a fairly tolerant advisor, you can take your time developing
an idea before turning it into a company. David Filo and Jerry Yang started
the Yahoo directory in February 1994 and were getting a million hits a day by
the fall, but they didn't actually drop out of grad school and start a company
till March 1995.
You could also try the startup first, and if it doesn't work, then go to grad
school. When startups tank they usually do it fairly quickly. Within a year
you'll know if you're wasting your time.
If it fails, that is. If it succeeds, you may have to delay grad school a
little longer. But you'll have a much more enjoyable life once there than you
would on a regular grad student stipend.
**Experience**
Another reason people in their early twenties don't start startups is that
they feel they don't have enough experience. Most investors feel the same.
I remember hearing a lot of that word "experience" when I was in college. What
do people really mean by it? Obviously it's not the experience itself that's
valuable, but something it changes in your brain. What's different about your
brain after you have "experience," and can you make that change happen faster?
I now have some data on this, and I can tell you what tends to be missing when
people lack experience. I've said that every [startup](start.html) needs three
things: to start with good people, to make something users want, and not to
spend too much money. It's the middle one you get wrong when you're
inexperienced. There are plenty of undergrads with enough technical skill to
write good software, and undergrads are not especially prone to waste money.
If they get something wrong, it's usually not realizing they have to make
something people [want](bronze.html).
This is not exclusively a failing of the young. It's common for startup
founders of all ages to build things no one wants.
Fortunately, this flaw should be easy to fix. If undergrads were all bad
programmers, the problem would be a lot harder. It can take years to learn how
to program. But I don't think it takes years to learn how to make things
people want. My hypothesis is that all you have to do is smack hackers on the
side of the head and tell them: Wake up. Don't sit here making up a priori
theories about what users need. Go find some users and see what they need.
Most successful startups not only do something very specific, but solve a
problem people already know they have.
The big change that "experience" causes in your brain is learning that you
need to solve people's problems. Once you grasp that, you advance quickly to
the next step, which is figuring out what those problems are. And that takes
some effort, because the way software actually gets used, especially by the
people who pay the most for it, is not at all what you might expect. For
example, the stated purpose of Powerpoint is to present ideas. Its real role
is to overcome people's fear of public speaking. It allows you to give an
impressive-looking talk about nothing, and it causes the audience to sit in a
dark room looking at slides, instead of a bright one looking at you.
This kind of thing is out there for anyone to see. The key is to know to look
for it-- to realize that having an idea for a startup is not like having an
idea for a class project. The goal in a startup is not to write a cool piece
of software. It's to make something people want. And to do that you have to
look at users-- forget about hacking, and just look at users. This can be
quite a mental adjustment, because little if any of the software you write in
school even has users.
A few steps before a Rubik's Cube is solved, it still looks like a mess. I
think there are a lot of undergrads whose brains are in a similar position:
they're only a few steps away from being able to start successful startups, if
they wanted to, but they don't realize it. They have more than enough
technical skill. They just haven't realized yet that the way to create wealth
is to make what users want, and that employers are just proxies for users in
which risk is pooled.
If you're young and smart, you don't need either of those. You don't need
someone else to tell you what users want, because you can figure it out
yourself. And you don't want to pool risk, because the younger you are, the
more risk you should take.
**A Public Service Message**
I'd like to conclude with a joint message from me and your parents. Don't drop
out of college to start a startup. There's no rush. There will be plenty of
time to start companies after you graduate. In fact, it may be just as well to
go work for an existing company for a couple years after you graduate, to
learn how companies work.
And yet, when I think about it, I can't imagine telling Bill Gates at 19 that
he should wait till he graduated to start a company. He'd have told me to get
lost. And could I have honestly claimed that he was harming his future-- that
he was learning less by working at ground zero of the microcomputer revolution
than he would have if he'd been taking classes back at Harvard? No, probably
not.
And yes, while it is probably true that you'll learn some valuable things by
going to work for an existing company for a couple years before starting your
own, you'd learn a thing or two running your own company during that time too.
The advice about going to work for someone else would get an even colder
reception from the 19 year old Bill Gates. So I'm supposed to finish college,
then go work for another company for two years, and then I can start my own? I
have to wait till I'm 23? That's _four years_. That's more than twenty percent
of my life so far. Plus in four years it will be way too late to make money
writing a Basic interpreter for the Altair.
And he'd be right. The Apple II was launched just two years later. In fact, if
Bill had finished college and gone to work for another company as we're
suggesting, he might well have gone to work for Apple. And while that would
probably have been better for all of us, it wouldn't have been better for him.
So while I stand by our responsible advice to finish college and then go work
for a while before starting a startup, I have to admit it's one of those
things the old tell the young, but don't expect them to listen to. We say this
sort of thing mainly so we can claim we warned you. So don't say I didn't warn
you.
**Notes**
[1] The average B-17 pilot in World War II was in his early twenties. (Thanks
to Tad Marko for pointing this out.)
[2] If a company tried to pay employees this way, they'd be called unfair. And
yet when they buy some startups and not others, no one thinks of calling that
unfair.
[3] The 1/10 success rate for startups is a bit of an urban legend. It's
suspiciously neat. My guess is the odds are slightly worse.
**Thanks** to Jessica Livingston for reading drafts of this, to the friends I
promised anonymity to for their opinions about hiring, and to Karen Nguyen and
the Berkeley CSUA for organizing this talk.
September 2007
A few weeks ago I had a thought so heretical that it really surprised me. It
may not matter all that much where you go to college.
For me, as for a lot of middle class kids, getting into a good college was
more or less the meaning of life when I was growing up. What was I? A student.
To do that well meant to get good grades. Why did one have to get good grades?
To get into a good college. And why did one want to do that? There seemed to
be several reasons: you'd learn more, get better jobs, make more money. But it
didn't matter exactly what the benefits would be. College was a bottleneck
through which all your future prospects passed; everything would be better if
you went to a better college.
A few weeks ago I realized that somewhere along the line I had stopped
believing that.
What first set me thinking about this was the new trend of worrying
obsessively about what
[kindergarten](http://nymag.com/nymetro/urban/education/features/15141/) your
kids go to. It seemed to me this couldn't possibly matter. Either it won't
help your kid get into Harvard, or if it does, getting into Harvard won't mean
much anymore. And then I thought: how much does it mean even now?
It turns out I have a lot of data about that. My three partners and I run a
seed stage investment firm called [Y Combinator](http://ycombinator.com). We
invest when the company is just a couple guys and an idea. The idea doesn't
matter much; it will change anyway. Most of our decision is based on the
founders. The average founder is three years out of college. Many have just
graduated; a few are still in school. So we're in much the same position as a
graduate program, or a company hiring people right out of college. Except our
choices are immediately and visibly tested. There are two possible outcomes
for a startup: success or failure—and usually you know within a year which it
will be.
The test applied to a startup is among the purest of real world tests. A
startup succeeds or fails depending almost entirely on the efforts of the
founders. Success is decided by the market: you only succeed if users like
what you've built. And users don't care where you went to college.
As well as having precisely measurable results, we have a lot of them. Instead
of doing a small number of large deals like a traditional venture capital
fund, we do a large number of small ones. We currently fund about 40 companies
a year, selected from about 900 applications representing a total of about
2000 people. [1]
Between the volume of people we judge and the rapid, unequivocal test that's
applied to our choices, Y Combinator has been an unprecedented opportunity for
learning how to pick winners. One of the most surprising things we've learned
is how little it matters where people went to college.
I thought I'd already been cured of caring about that. There's nothing like
going to grad school at Harvard to cure you of any illusions you might have
about the average Harvard undergrad. And yet Y Combinator showed us we were
still overestimating people who'd been to elite colleges. We'd interview
people from MIT or Harvard or Stanford and sometimes find ourselves thinking:
they _must_ be smarter than they seem. It took us a few iterations to learn to
trust our senses.
Practically everyone thinks that someone who went to MIT or Harvard or
Stanford must be smart. Even people who hate you for it believe it.
But when you think about what it means to have gone to an elite college, how
could this be true? We're talking about a decision made by admissions
officers—basically, HR people—based on a cursory examination of a huge pile of
depressingly similar applications submitted by seventeen year olds. And what
do they have to go on? An easily gamed standardized test; a short essay
telling you what the kid thinks you want to hear; an interview with a random
alum; a high school record that's largely an index of obedience. Who would
rely on such a test?
And yet a lot of companies do. A lot of companies are very much influenced by
where applicants went to college. How could they be? I think I know the answer
to that.
There used to be a saying in the corporate world: "No one ever got fired for
buying IBM." You no longer hear this about IBM specifically, but the idea is
very much alive; there is a whole category of "enterprise" software companies
that exist to take advantage of it. People buying technology for large
organizations don't care if they pay a fortune for mediocre software. It's not
their money. They just want to buy from a supplier who seems safe—a company
with an established name, confident salesmen, impressive offices, and software
that conforms to all the current fashions. Not necessarily a company that will
deliver so much as one that, if they do let you down, will still seem to have
been a prudent choice. So companies have evolved to fill that niche.
A recruiter at a big company is in much the same position as someone buying
technology for one. If someone went to Stanford and is not obviously insane,
they're probably a safe bet. And a safe bet is enough. No one ever measures
recruiters by the later performance of people they turn down. [2]
I'm not saying, of course, that elite colleges have evolved to prey upon the
weaknesses of large organizations the way enterprise software companies have.
But they work as if they had. In addition to the power of the brand name,
graduates of elite colleges have two critical qualities that plug right into
the way large organizations work. They're good at doing what they're asked,
since that's what it takes to please the adults who judge you at seventeen.
And having been to an elite college makes them more confident.
Back in the days when people might spend their whole career at one big
company, these qualities must have been very valuable. Graduates of elite
colleges would have been capable, yet amenable to authority. And since
individual performance is so hard to measure in large organizations, their own
confidence would have been the starting point for their reputation.
Things are very different in the new world of startups. We couldn't save
someone from the market's judgement even if we wanted to. And being charming
and confident counts for nothing with users. All users care about is whether
you make something they like. If you don't, you're dead.
Knowing that test is coming makes us work a lot harder to get the right
answers than anyone would if they were merely hiring people. We can't afford
to have any illusions about the predictors of success. And what we've found is
that the variation between schools is so much smaller than the variation
between individuals that it's negligible by comparison. We can learn more
about someone in the first minute of talking to them than by knowing where
they went to school.
It seems obvious when you put it that way. Look at the individual, not where
they went to college. But that's a weaker statement than the idea I began
with, that it doesn't matter much where a given individual goes to college.
Don't you learn things at the best schools that you wouldn't learn at lesser
places?
Apparently not. Obviously you can't prove this in the case of a single
individual, but you can tell from aggregate evidence: you can't, without
asking them, distinguish people who went to one school from those who went to
another three times as far down the _US News_ list. [3] Try it and see.
How can this be? Because how much you learn in college depends a lot more on
you than the college. A determined party animal can get through the best
school without learning anything. And someone with a real thirst for knowledge
will be able to find a few smart people to learn from at a school that isn't
prestigious at all.
The other students are the biggest advantage of going to an elite college; you
learn more from them than the professors. But you should be able to reproduce
this at most colleges if you make a conscious effort to find smart friends. At
most colleges you can find at least a handful of other smart students, and
most people have only a handful of close friends in college anyway. [4] The
odds of finding smart professors are even better. The curve for faculty is a
lot flatter than for students, especially in math and the hard sciences; you
have to go pretty far down the list of colleges before you stop finding smart
professors in the math department.
So it's not surprising that we've found the relative prestige of different
colleges useless in judging individuals. There's a lot of randomness in how
colleges select people, and what they learn there depends much more on them
than the college. Between these two sources of variation, the college someone
went to doesn't mean a lot. It is to some degree a predictor of ability, but
so weak that we regard it mainly as a source of error and try consciously to
ignore it.
I doubt what we've discovered is an anomaly specific to startups. Probably
people have always overestimated the importance of where one goes to college.
We're just finally able to measure it.
The unfortunate thing is not just that people are judged by such a superficial
test, but that so many judge themselves by it. A lot of people, probably the
majority of people in America, have some amount of insecurity about where, or
whether, they went to college. The tragedy of the situation is that by far the
greatest liability of not having gone to the college you'd have liked is your
own feeling that you're thereby lacking something. Colleges are a bit like
exclusive clubs in this respect. There is only one real advantage to being a
member of most exclusive clubs: you know you wouldn't be missing much if you
weren't. When you're excluded, you can only imagine the advantages of being an
insider. But invariably they're larger in your imagination than in real life.
So it is with colleges. Colleges differ, but they're nothing like the stamp of
destiny so many imagine them to be. People aren't what some admissions officer
decides about them at seventeen. They're what they make themselves.
Indeed, the great advantage of not caring where people went to college is not
just that you can stop judging them (and yourself) by superficial measures,
but that you can focus instead on what really matters. What matters is what
you make of yourself. I think that's what we should tell kids. Their job isn't
to get good grades so they can get into a good college, but to learn and do.
And not just because that's more rewarding than worldly success. That will
increasingly _be_ the route to worldly success.
**Notes**
[1] Is what we measure worth measuring? I think so. You can get rich simply by
being energetic and unscrupulous, but getting rich from a technology startup
takes some amount of brains. It is just the kind of work the upper middle
class values; it has about the same intellectual component as being a doctor.
[2] Actually, someone did, once. Mitch Kapor's wife Freada was in charge of HR
at Lotus in the early years. (As he is at pains to point out, they did not
become romantically involved till afterward.) At one point they worried Lotus
was losing its startup edge and turning into a big company. So as an
experiment she sent their recruiters the resumes of the first 40 employees,
with identifying details changed. These were the people who had made Lotus
into the star it was. Not one got an interview.
[3] The _US News_ list? Surely no one trusts that. Even if the statistics they
consider are useful, how do they decide on the relative weights? The reason
the _US News_ list is meaningful is precisely because they are so
intellectually dishonest in that respect. There is no external source they can
use to calibrate the weighting of the statistics they use; if there were, we
could just use that instead. What they must do is adjust the weights till the
top schools are the usual suspects in about the right order. So in effect what
the _US News_ list tells us is what the editors think the top schools are,
which is probably not far from the conventional wisdom on the matter. The
amusing thing is, because some schools work hard to game the system, the
editors will have to keep tweaking their algorithm to get the rankings they
want.
[4] Possible doesn't mean easy, of course. A smart student at a party school
will inevitably be something of an outcast, just as he or she would be in most
[high schools](nerds.html).
**Thanks** to Trevor Blackwell, Sarah Harlin, Jessica Livingston, Jackie
McDonough, Peter Norvig, and Robert Morris for reading drafts of this.
March 2005
All the best [hackers](gba.html) I know are gradually switching to Macs. My
friend Robert said his whole research group at MIT recently bought themselves
Powerbooks. These guys are not the graphic designers and grandmas who were
buying Macs at Apple's low point in the mid 1990s. They're about as hardcore
OS hackers as you can get.
The reason, of course, is OS X. Powerbooks are beautifully designed and run
FreeBSD. What more do you need to know?
I got a Powerbook at the end of last year. When my IBM Thinkpad's hard disk
died soon after, it became my only laptop. And when my friend Trevor showed up
at my house recently, he was carrying a Powerbook [identical](tlbmac.html) to
mine.
For most of us, it's not a switch to Apple, but a return. Hard as this was to
believe in the mid 90s, the Mac was in its time the canonical hacker's
computer.
In the fall of 1983, the professor in one of my college CS classes got up and
announced, like a prophet, that there would soon be a computer with half a
MIPS of processing power that would fit under an airline seat and cost so
little that we could save enough to buy one from a summer job. The whole room
gasped. And when the Mac appeared, it was even better than we'd hoped. It was
small and powerful and cheap, as promised. But it was also something we'd
never considered a computer could be: fabulously well [designed](taste.html).
I had to have one. And I wasn't alone. In the mid to late 1980s, all the
hackers I knew were either writing software for the Mac, or wanted to. Every
futon sofa in Cambridge seemed to have the same fat white book lying open on
it. If you turned it over, it said "Inside Macintosh."
Then came Linux and FreeBSD, and hackers, who follow the most powerful OS
wherever it leads, found themselves switching to Intel boxes. If you cared
about design, you could buy a Thinkpad, which was at least not actively
repellent, if you could get the Intel and Microsoft
[stickers](designedforwindows.html) off the front. [1]
With OS X, the hackers are back. When I walked into the Apple store in
Cambridge, it was like coming home. Much was changed, but there was still that
Apple coolness in the air, that feeling that the show was being run by someone
who really cared, instead of random corporate deal-makers.
So what, the business world may say. Who cares if hackers like Apple again?
How big is the hacker market, after all?
Quite small, but important out of proportion to its size. When it comes to
computers, what hackers are doing now, everyone will be doing in ten years.
Almost all technology, from Unix to bitmapped displays to the Web, became
popular first within CS departments and research labs, and gradually spread to
the rest of the world.
I remember telling my father back in 1986 that there was a new kind of
computer called a Sun that was a serious Unix machine, but so small and cheap
that you could have one of your own to sit in front of, instead of sitting in
front of a VT100 connected to a single central Vax. Maybe, I suggested, he
should buy some stock in this company. I think he really wishes he'd listened.
In 1994 my friend Koling wanted to talk to his girlfriend in Taiwan, and to
save long-distance bills he wrote some software that would convert sound to
data packets that could be sent over the Internet. We weren't sure at the time
whether this was a proper use of the Internet, which was still then a quasi-government entity. What he was doing is now called VoIP, and it is a huge and
rapidly growing business.
If you want to know what ordinary people will be doing with computers in ten
years, just walk around the CS department at a good university. Whatever
they're doing, you'll be doing.
In the matter of "platforms" this tendency is even more pronounced, because
novel software originates with [great hackers](gh.html), and they tend to
write it first for whatever computer they personally use. And software sells
hardware. Many if not most of the initial sales of the Apple II came from
people who bought one to run VisiCalc. And why did Bricklin and Frankston
write VisiCalc for the Apple II? Because they personally liked it. They could
have chosen any machine to make into a star.
If you want to attract hackers to write software that will sell your hardware,
you have to make it something that they themselves use. It's not enough to
make it "open." It has to be open and good.
And open and good is what Macs are again, finally. The intervening years have
created a situation that is, as far as I know, without precedent: Apple is
popular at the low end and the high end, but not in the middle. My seventy
year old mother has a Mac laptop. My friends with PhDs in computer science
have Mac laptops. [2] And yet Apple's overall market share is still small.
Though unprecedented, I predict this situation is also temporary.
So Dad, there's this company called Apple. They make a new kind of computer
that's as well designed as a Bang & Olufsen stereo system, and underneath is
the best Unix machine you can buy. Yes, the price to earnings ratio is kind of
high, but I think a lot of people are going to want these.
**Notes**
[1] These horrible stickers are much like the intrusive ads popular on pre-Google search engines. They say to the customer: you are unimportant. We care
about Intel and Microsoft, not you.
[2] [Y Combinator](http://ycombinator.com) is (we hope) visited mostly by
hackers. The proportions of OSes are: Windows 66.4%, Macintosh 18.8%, Linux
11.4%, and FreeBSD 1.5%. The Mac number is a big change from what it would
have been five years ago.
December 2019
The most damaging thing you learned in school wasn't something you learned in
any specific class. It was learning to get good grades.
When I was in college, a particularly earnest philosophy grad student once
told me that he never cared what grade he got in a class, only what he learned
in it. This stuck in my mind because it was the only time I ever heard anyone
say such a thing.
For me, as for most students, the measurement of what I was learning
completely dominated actual learning in college. I was fairly earnest; I was
genuinely interested in most of the classes I took, and I worked hard. And yet
I worked by far the hardest when I was studying for a test.
In theory, tests are merely what their name implies: tests of what you've
learned in the class. In theory you shouldn't have to prepare for a test in a
class any more than you have to prepare for a blood test. In theory you learn
from taking the class, from going to the lectures and doing the reading and/or
assignments, and the test that comes afterward merely measures how well you
learned.
In practice, as almost everyone reading this will know, things are so
different that hearing this explanation of how classes and tests are meant to
work is like hearing the etymology of a word whose meaning has changed
completely. In practice, the phrase "studying for a test" was almost
redundant, because that was when one really studied. The difference between
diligent and slack students was that the former studied hard for tests and the
latter didn't. No one was pulling all-nighters two weeks into the semester.
Even though I was a diligent student, almost all the work I did in school was
aimed at getting a good grade on something.
To many people, it would seem strange that the preceding sentence has a
"though" in it. Aren't I merely stating a tautology? Isn't that what a
diligent student is, a straight-A student? That's how deeply the conflation of
learning with grades has infused our culture.
Is it so bad if learning is conflated with grades? Yes, it is bad. And it
wasn't till decades after college, when I was running Y Combinator, that I
realized how bad it is.
I knew of course when I was a student that studying for a test is far from
identical with actual learning. At the very least, you don't retain knowledge
you cram into your head the night before an exam. But the problem is worse
than that. The real problem is that most tests don't come close to measuring
what they're supposed to.
If tests truly were tests of learning, things wouldn't be so bad. Getting good
grades and learning would converge, just a little late. The problem is that
nearly all tests given to students are terribly hackable. Most people who've
gotten good grades know this, and know it so well they've ceased even to
question it. You'll see when you realize how naive it sounds to act otherwise.
Suppose you're taking a class on medieval history and the final exam is coming
up. The final exam is supposed to be a test of your knowledge of medieval
history, right? So if you have a couple days between now and the exam, surely
the best way to spend the time, if you want to do well on the exam, is to read
the best books you can find about medieval history. Then you'll know a lot
about it, and do well on the exam.
No, no, no, experienced students are saying to themselves. If you merely read
good books on medieval history, most of the stuff you learned wouldn't be on
the test. It's not good books you want to read, but the lecture notes and
assigned reading in this class. And even most of that you can ignore, because
you only have to worry about the sort of thing that could turn up as a test
question. You're looking for sharply-defined chunks of information. If one of
the assigned readings has an interesting digression on some subtle point, you
can safely ignore that, because it's not the sort of thing that could be
turned into a test question. But if the professor tells you that there were
three underlying causes of the Schism of 1378, or three main consequences of
the Black Death, you'd better know them. And whether they were in fact the
causes or consequences is beside the point. For the purposes of this class
they are.
At a university there are often copies of old exams floating around, and these
narrow still further what you have to learn. As well as learning what kind of
questions this professor asks, you'll often get actual exam questions. Many
professors re-use them. After teaching a class for 10 years, it would be hard
not to, at least inadvertently.
In some classes, your professor will have had some sort of political axe to
grind, and if so you'll have to grind it too. The need for this varies. In
classes in math or the hard sciences or engineering it's rarely necessary, but
at the other end of the spectrum there are classes where you couldn't get a
good grade without it.
Getting a good grade in a class on x is so different from learning a lot about
x that you have to choose one or the other, and you can't blame students if
they choose grades. Everyone judges them by their grades — graduate programs,
employers, scholarships, even their own parents.
I liked learning, and I really enjoyed some of the papers and programs I wrote
in college. But did I ever, after turning in a paper in some class, sit down
and write another just for fun? Of course not. I had things due in other
classes. If it ever came to a choice of learning or grades, I chose grades. I
hadn't come to college to do badly.
Anyone who cares about getting good grades has to play this game, or they'll
be surpassed by those who do. And at elite universities, that means nearly
everyone, since someone who didn't care about getting good grades probably
wouldn't be there in the first place. The result is that students compete to
maximize the difference between learning and getting good grades.
Why are tests so bad? More precisely, why are they so hackable? Any
experienced programmer could answer that. How hackable is software whose
author hasn't paid any attention to preventing it from being hacked? Usually
it's as porous as a colander.
Hackable is the default for any test imposed by an authority. The reason the
tests you're given are so consistently bad — so consistently far from
measuring what they're supposed to measure — is simply that the people
creating them haven't made much effort to prevent them from being hacked.
But you can't blame teachers if their tests are hackable. Their job is to
teach, not to create unhackable tests. The real problem is grades, or more
precisely, that grades have been overloaded. If grades were merely a way for
teachers to tell students what they were doing right and wrong, like a coach
giving advice to an athlete, students wouldn't be tempted to hack tests. But
unfortunately after a certain age grades become more than advice. After a
certain age, whenever you're being taught, you're usually also being judged.
I've used college tests as an example, but those are actually the least
hackable. All the tests most students take their whole lives are at least as
bad, including, most spectacularly of all, the test that gets them into
college. If getting into college were merely a matter of having the quality of
one's mind measured by admissions officers the way scientists measure the mass
of an object, we could tell teenage kids "learn a lot" and leave it at that.
You can tell how bad college admissions are, as a test, from how unlike high
school that sounds. In practice, the freakishly specific nature of the stuff
ambitious kids have to do in high school is directly proportionate to the
hackability of college admissions. The classes you don't care about that are
mostly memorization, the random "extracurricular activities" you have to
participate in to show you're "well-rounded," the standardized tests as
artificial as chess, the "essay" you have to write that's presumably meant to
hit some very specific target, but you're not told what.
As well as being bad in what it does to kids, this test is also bad in the
sense of being very hackable. So hackable that whole industries have grown up
to hack it. This is the explicit purpose of test-prep companies and admissions
counsellors, but it's also a significant part of the function of private
schools.
Why is this particular test so hackable? I think because of what it's
measuring. Although the popular story is that the way to get into a good
college is to be really smart, admissions officers at elite colleges neither
are, nor claim to be, looking only for that. What are they looking for?
They're looking for people who are not simply smart, but admirable in some
more general sense. And how is this more general admirableness measured? The
admissions officers feel it. In other words, they accept who they like.
So what college admissions is a test of is whether you suit the taste of some
group of people. Well, of course a test like that is going to be hackable. And
because it's both very hackable and there's (thought to be) a lot at stake,
it's hacked like nothing else. That's why it distorts your life so much for so
long.
It's no wonder high school students often feel alienated. The shape of their
lives is completely artificial.
But wasting your time is not the worst thing the educational system does to
you. The worst thing it does is to train you that the way to win is by hacking
bad tests. This is a much subtler problem that I didn't recognize until I saw
it happening to other people.
When I started advising startup founders at Y Combinator, especially young
ones, I was puzzled by the way they always seemed to make things
overcomplicated. How, they would ask, do you raise money? What's the trick for
making venture capitalists want to invest in you? The best way to make VCs
want to invest in you, I would explain, is to actually be a good investment.
Even if you could trick VCs into investing in a bad startup, you'd be tricking
yourselves too. You're investing time in the same company you're asking them
to invest money in. If it's not a good investment, why are you even doing it?
Oh, they'd say, and then after a pause to digest this revelation, they'd ask:
What makes a startup a good investment?
So I would explain that what makes a startup promising, not just in the eyes
of investors but in fact, is [_growth_](growth.html). Ideally in revenue, but
failing that in usage. What they needed to do was get lots of users.
How does one get lots of users? They had all kinds of ideas about that. They
needed to do a big launch that would get them "exposure." They needed
influential people to talk about them. They even knew they needed to launch on
a Tuesday, because that's when one gets the most attention.
No, I would explain, that is not how to get lots of users. The way you get
lots of users is to make the product really great. Then people will not only
use it but recommend it to their friends, so your growth will be exponential
once you [_get it started_](ds.html).
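A toy compounding model shows why word-of-mouth growth beats any one-time launch spike. The 6% weekly rate and the 5,000-user launch are illustrative assumptions; the essay itself gives no figures:

```python
users = 100
weekly_growth = 0.06      # assumed: each week's users bring in 6% more
for week in range(52):
    users *= 1 + weekly_growth
print(round(users))       # ~2,070 -- a year of compounding, from 100 users

# versus a hypothetical launch that adds 5,000 users but no ongoing growth
print(100 + 5000)         # 5,100 users forever, overtaken within two years
```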
At this point I've told the founders something you'd think would be completely
obvious: that they should make a good company by making a good product. And
yet their reaction would be something like the reaction many physicists must
have had when they first heard about the theory of relativity: a mixture of
astonishment at its apparent genius, combined with a suspicion that anything
so weird couldn't possibly be right. Ok, they would say, dutifully. And could
you introduce us to such-and-such influential person? And remember, we want to
launch on Tuesday.
It would sometimes take founders years to grasp these simple lessons. And not
because they were lazy or stupid. They just seemed blind to what was right in
front of them.
Why, I would ask myself, do they always make things so complicated? And then
one day I realized this was not a rhetorical question.
Why did founders tie themselves in knots doing the wrong things when the
answer was right in front of them? Because that was what they'd been trained
to do. Their education had taught them that the way to win was to hack the
test. And without even telling them they were being trained to do this. The
younger ones, the recent graduates, had never faced a non-artificial test.
They thought this was just how the world worked: that the first thing you did,
when facing any kind of challenge, was to figure out what the trick was for
hacking the test. That's why the conversation would always start with how to
raise money, because that read as the test. It came at the end of YC. It had
numbers attached to it, and higher numbers seemed to be better. It must be the
test.
There are certainly big chunks of the world where the way to win is to hack
the test. This phenomenon isn't limited to schools. And some people, either
due to ideology or ignorance, claim that this is true of startups too. But it
isn't. In fact, one of the most striking things about startups is the degree
to which you win by simply doing good work. There are edge cases, as there are
in anything, but in general you win by getting users, and what users care
about is whether the product does what they want.
Why did it take me so long to understand why founders made startups
overcomplicated? Because I hadn't realized explicitly that schools train us to
win by hacking bad tests. And not just them, but me! I'd been trained to hack
bad tests too, and hadn't realized it till decades later.
I had lived as if I realized it, but without knowing why. For example, I had
avoided working for big companies. But if you'd asked why, I'd have said it
was because they were bogus, or bureaucratic. Or just yuck. I never understood
how much of my dislike of big companies was due to the fact that you win by
hacking bad tests.
Similarly, the fact that the tests were unhackable was a lot of what attracted
me to startups. But again, I hadn't realized that explicitly.
I had in effect achieved by successive approximations something that may have
a closed-form solution. I had gradually undone my training in hacking bad
tests without knowing I was doing it. Could someone coming out of school
banish this demon just by knowing its name, and saying begone? It seems worth
trying.
Merely talking explicitly about this phenomenon is likely to make things
better, because much of its power comes from the fact that we take it for
granted. After you've noticed it, it seems the elephant in the room, but it's
a pretty well camouflaged elephant. The phenomenon is so old, and so
pervasive. And it's simply the result of neglect. No one meant things to be
this way. This is just what happens when you combine learning with grades,
competition, and the naive assumption of unhackability.
It was mind-blowing to realize that two of the things I'd puzzled about the
most -- the bogusness of high school, and the difficulty of getting founders to
see the obvious -- both had the same cause. It's rare for such a big block to
slide into place so late.
Usually when that happens it has implications in a lot of different areas, and
this case seems no exception. For example, it suggests both that education
could be done better, and how you might fix it. But it also suggests a
potential answer to the question all big companies seem to have: how can we be
more like a startup? I'm not going to chase down all the implications now.
What I want to focus on here is what it means for individuals.
To start with, it means that most ambitious kids graduating from college have
something they may want to unlearn. But it also changes how you look at the
world. Instead of looking at all the different kinds of work people do and
thinking of them vaguely as more or less appealing, you can now ask a very
specific question that will sort them in an interesting way: to what extent do
you win at this kind of work by hacking bad tests?
It would help if there was a way to recognize bad tests quickly. Is there a
pattern here? It turns out there is.
Tests can be divided into two kinds: those that are imposed by authorities,
and those that aren't. Tests that aren't imposed by authorities are inherently
unhackable, in the sense that no one is claiming they're tests of anything
more than they actually test. A football match, for example, is simply a test
of who wins, not which team is better. You can tell that from the fact that
commentators sometimes say afterward that the better team lost. Whereas tests
imposed by authorities are usually proxies for something else. A test in a
class is supposed to measure not just how well you did on that particular
test, but how much you learned in the class. While tests that aren't imposed
by authorities are inherently unhackable, those imposed by authorities have to
be made unhackable. Usually they aren't. So as a first approximation, bad
tests are roughly equivalent to tests imposed by authorities.
You might actually like to win by hacking bad tests. Presumably some people
do. But I bet most people who find themselves doing this kind of work don't
like it. They just take it for granted that this is how the world works,
unless you want to drop out and be some kind of hippie artisan.
I suspect many people implicitly assume that working in a field with bad tests
is the price of making lots of money. But that, I can tell you, is false. It
used to be true. In the mid-twentieth century, when the economy was [_composed
of oligopolies_](re.html), the only way to the top was by playing their game.
But it's not true now. There are now ways to get rich by doing good work, and
that's part of the reason people are so much more excited about getting rich
than they used to be. When I was a kid, you could either become an engineer
and make cool things, or make lots of money by becoming an "executive." Now
you can make lots of money by making cool things.
Hacking bad tests is becoming less important as the link between work and
authority erodes. The erosion of that link is one of the most important trends
happening now, and we see its effects in almost every kind of work people do.
Startups are one of the most visible examples, but we see much the same thing
in writing. Writers no longer have to submit to publishers and editors to
reach readers; now they can go direct.
The more I think about this question, the more optimistic I get. This seems
one of those situations where we don't realize how much something was holding
us back until it's eliminated. And I can foresee the whole bogus edifice
crumbling. Imagine what happens as more and more people start to ask
themselves if they want to win by hacking bad tests, and decide that they
don't. The kinds of work where you win by hacking bad tests will be starved of
talent, and the kinds where you win by doing good work will see an influx of
the most ambitious people. And as hacking bad tests shrinks in importance,
education will evolve to stop training us to do it. Imagine what the world
could look like if that happened.
This is not just a lesson for individuals to unlearn, but one for society to
unlearn, and we'll be amazed at the energy that's liberated when we do.
**Notes**
[1] If using tests only to measure learning sounds impossibly utopian, that is
already the way things work at Lambda School. Lambda School doesn't have
grades. You either graduate or you don't. The only purpose of tests is to
decide at each stage of the curriculum whether you can continue to the next.
So in effect the whole school is pass/fail.
[2] If the final exam consisted of a long conversation with the professor, you
could prepare for it by reading good books on medieval history. A lot of the
hackability of tests in schools is due to the fact that the same test has to
be given to large numbers of students.
[3] Learning is the naive algorithm for getting good grades.
[4] [_Hacking_](gba.html) has multiple senses. There's a narrow sense in which
it means to compromise something. That's the sense in which one hacks a bad
test. But there's another, more general sense, meaning to find a surprising
solution to a problem, often by thinking differently about it. Hacking in this
sense is a wonderful thing. And indeed, some of the hacks people use on bad
tests are impressively ingenious; the problem is not so much the hacking as
that, because the tests are hackable, they don't test what they're meant to.
[5] The people who pick startups at Y Combinator are similar to admissions
officers, except that instead of being arbitrary, their acceptance criteria
are trained by a very tight feedback loop. If you accept a bad startup or
reject a good one, you will usually know it within a year or two at the
latest, and often within a month.
[6] I'm sure admissions officers are tired of reading applications from kids
who seem to have no personality beyond being willing to seem however they're
supposed to seem to get accepted. What they don't realize is that they are, in
a sense, looking in a mirror. The lack of authenticity in the applicants is a
reflection of the arbitrariness of the application process. A dictator might
just as well complain about the lack of authenticity in the people around him.
[7] By good work, I don't mean morally good, but good in the sense in which a
good craftsman does good work.
[8] There are borderline cases where it's hard to say which category a test
falls in. For example, is raising venture capital like college admissions, or
is it like selling to a customer?
[9] Note that a good test is merely one that's unhackable. Good here doesn't
mean morally good, but good in the sense of working well. The difference
between fields with bad tests and good ones is not that the former are bad and
the latter are good, but that the former are bogus and the latter aren't. But
those two measures are not unrelated. As Tara Ploughman said, the path from
good to evil goes through bogus.
[10] People who think the recent increase in [_economic
inequality_](ineq.html) is due to changes in tax policy seem very naive to
anyone with experience in startups. Different people are getting rich now than
used to, and they're getting much richer than mere tax savings could make
them.
[11] Note to tiger parents: you may think you're training your kids to win,
but if you're training them to win by hacking bad tests, you are, as parents
so often do, training them to fight the last war.
**Thanks** to Austen Allred, Trevor Blackwell, Patrick Collison, Jessica
Livingston, Robert Morris, and Harj Taggar for reading drafts of this.
* * *
August 2011
I realized recently that we may be able to solve part of the patent problem
without waiting for the government.
I've never been 100% sure whether patents help or hinder technological
progress. When I was a kid I thought they helped. I thought they protected
inventors from having their ideas stolen by big companies. Maybe that was
truer in the past, when more things were physical. But regardless of whether
patents are in general a good thing, there do seem to be bad ways of using
them. And since bad uses of patents seem to be increasing, there is an
increasing call for patent reform.
The problem with patent reform is that it has to go through the government.
That tends to be slow. But recently I realized we can also attack the problem
downstream. As well as pinching off the stream of patents at the point where
they're issued, we may in some cases be able to pinch it off at the point
where they're used.
One way of using patents that clearly does not encourage innovation is when
established companies with bad products use patents to suppress small
competitors with good products. This is the type of abuse we may be able to
decrease without having to go through the government.
The way to do it is to get the companies that are above pulling this sort of
trick to pledge publicly not to. Then the ones that won't make such a pledge
will be very conspicuous. Potential employees won't want to work for them. And
investors, too, will be able to see that they're the sort of company that
competes by litigation rather than by making good products.
Here's the pledge:
> No first use of software patents against companies with less than 25 people.
I've deliberately traded precision for brevity. The patent pledge is not
legally binding. It's like Google's "Don't be evil." They don't define what
evil is, but by publicly saying that, they're saying they're willing to be
held to a standard that, say, Altria is not. And though constraining, "Don't
be evil" has been good for Google. Technology companies win by attracting the
most productive people, and the most productive people are attracted to
employers who hold themselves to a higher standard than the law requires. [1]
The patent pledge is in effect a narrower but open source "Don't be evil." I
encourage every technology company to adopt it. If you want to help fix
patents, encourage your employer to.
Already most technology companies wouldn't sink to using patents on startups.
You don't see Google or Facebook suing startups for patent infringement. They
don't need to. So for the better technology companies, the patent pledge
requires no change in behavior. They're just promising to do what they'd do
anyway. And when all the companies that won't use patents on startups have
said so, the holdouts will be very conspicuous.
The patent pledge doesn't fix every problem with patents. It won't stop patent
trolls, for example; they're already pariahs. But the problem the patent
pledge does fix may be more serious than the problem of patent trolls. Patent
trolls are just parasites. A clumsy parasite may occasionally kill the host,
but that's not its goal. Whereas companies that sue startups for patent
infringement generally do it with the explicit goal of keeping their product off
the market.
Companies that use patents on startups are attacking innovation at the root.
Now there's something any individual can do about this problem, without
waiting for the government: ask companies where they stand.
[Patent Pledge Site](http://thepatentpledge.org)
**Notes:**
[1] Because the pledge is deliberately vague, we're going to need common sense
when interpreting it. And even more vice versa: the pledge is vague in order to
make people use common sense when interpreting it.
So for example I've deliberately avoided saying whether the 25 people have to
be employees, or whether contractors count too. If a company has to split
hairs that fine about whether a suit would violate the patent pledge, it's
probably still a dick move.
* * *
April 2003
_(This essay is derived from a keynote talk at PyCon 2003.)_
It's hard to predict what life will be like in a hundred years. There are only
a few things we can say with certainty. We know that everyone will drive
flying cars, that zoning laws will be relaxed to allow buildings hundreds of
stories tall, that it will be dark most of the time, and that women will all
be trained in the martial arts. Here I want to zoom in on one detail of this
picture. What kind of programming language will they use to write the software
controlling those flying cars?
This is worth thinking about not so much because we'll actually get to use
these languages as because, if we're lucky, we'll use languages on the path
from this point to that.
I think that, like species, languages will form evolutionary trees, with dead-
ends branching off all over. We can see this happening already. Cobol, for all
its sometime popularity, does not seem to have any intellectual descendants.
It is an evolutionary dead-end-- a Neanderthal language.
I predict a similar fate for Java. People sometimes send me mail saying, "How
can you say that Java won't turn out to be a successful language? It's already
a successful language." And I admit that it is, if you measure success by
shelf space taken up by books on it (particularly individual books on it), or
by the number of undergrads who believe they have to learn it to get a job.
When I say Java won't turn out to be a successful language, I mean something
more specific: that Java will turn out to be an evolutionary dead-end, like
Cobol.
This is just a guess. I may be wrong. My point here is not to dis Java, but to
raise the issue of evolutionary trees and get people asking, where on the tree
is language X? The reason to ask this question isn't just so that our ghosts
can say, in a hundred years, I told you so. It's because staying close to the
main branches is a useful heuristic for finding languages that will be good to
program in now.
At any given time, you're probably happiest on the main branches of an
evolutionary tree. Even when there were still plenty of Neanderthals, it must
have sucked to be one. The Cro-Magnons would have been constantly coming over
and beating you up and stealing your food.
The reason I want to know what languages will be like in a hundred years is so
that I know what branch of the tree to bet on now.
The evolution of languages differs from the evolution of species because
branches can converge. The Fortran branch, for example, seems to be merging
with the descendants of Algol. In theory this is possible for species too, but
it's not likely to have happened to any species bigger than a cell.
Convergence is more likely for languages partly because the space of
possibilities is smaller, and partly because mutations are not random.
Language designers deliberately incorporate ideas from other languages.
It's especially useful for language designers to think about where the
evolution of programming languages is likely to lead, because they can steer
accordingly. In that case, "stay on a main branch" becomes more than a way to
choose a good language. It becomes a heuristic for making the right decisions
about language design.
Any programming language can be divided into two parts: some set of
fundamental operators that play the role of axioms, and the rest of the
language, which could in principle be written in terms of these fundamental
operators.
I think the fundamental operators are the most important factor in a
language's long term survival. The rest you can change. It's like the rule
that in buying a house you should consider location first of all. Everything
else you can fix later, but you can't fix the location.
I think it's important not just that the axioms be well chosen, but that there
be few of them. Mathematicians have always felt this way about axioms-- the
fewer, the better-- and I think they're onto something.
At the very least, it has to be a useful exercise to look closely at the core
of a language to see if there are any axioms that could be weeded out. I've
found in my long career as a slob that cruft breeds cruft, and I've seen this
happen in software as well as under beds and in the corners of rooms.
I have a hunch that the main branches of the evolutionary tree pass through
the languages that have the smallest, cleanest cores. The more of a language
you can write in itself, the better.
Of course, I'm making a big assumption in even asking what programming
languages will be like in a hundred years. Will we even be writing programs in
a hundred years? Won't we just tell computers what we want them to do?
There hasn't been a lot of progress in that department so far. My guess is
that a hundred years from now people will still tell computers what to do
using programs we would recognize as such. There may be tasks that we solve
now by writing programs and that in a hundred years won't require writing
programs at all, but I think there will still be a good deal of programming
of the type that we do today.
It may seem presumptuous to think anyone can predict what any technology will
look like in a hundred years. But remember that we already have almost fifty
years of history behind us. Looking forward a hundred years is a graspable
idea when we consider how slowly languages have evolved in the past fifty.
Languages evolve slowly because they're not really technologies. Languages are
notation. A program is a formal description of the problem you want a computer
to solve for you. So the rate of evolution in programming languages is more
like the rate of evolution in mathematical notation than, say, transportation
or communications. Mathematical notation does evolve, but not with the giant
leaps you see in technology.
Whatever computers are made of in a hundred years, it seems safe to predict
they will be much faster than they are now. If Moore's Law continues to put
out, they will be 74 quintillion (73,786,976,294,838,206,464) times faster.
That's kind of hard to imagine. And indeed, the most likely prediction in the
speed department may be that Moore's Law will stop working. Anything that is
supposed to double every eighteen months seems likely to run up against some
kind of fundamental limit eventually. But I have no trouble believing that
computers will be very much faster. Even if they only end up being a paltry
million times faster, that should change the ground rules for programming
languages substantially. Among other things, there will be more room for what
would now be considered slow languages, meaning languages that don't yield
very efficient code.
And yet some applications will still demand speed. Some of the problems we
want to solve with computers are created by computers; for example, the rate
at which you have to process video images depends on the rate at which another
computer can generate them. And there is another class of problems which
inherently have an unlimited capacity to soak up cycles: image rendering,
cryptography, simulations.
If some applications can be increasingly inefficient while others continue to
demand all the speed the hardware can deliver, faster computers will mean that
languages have to cover an ever wider range of efficiencies. We've seen this
happening already. Current implementations of some popular new languages are
shockingly wasteful by the standards of previous decades.
This isn't just something that happens with programming languages. It's a
general historical trend. As technologies improve, each generation can do
things that the previous generation would have considered wasteful. People
thirty years ago would be astonished at how casually we make long distance
phone calls. People a hundred years ago would be even more astonished that a
package would one day travel from Boston to New York via Memphis.
I can already tell you what's going to happen to all those extra cycles that
faster hardware is going to give us in the next hundred years. They're nearly
all going to be wasted.
I learned to program when computer power was scarce. I can remember taking all
the spaces out of my Basic programs so they would fit into the memory of a 4K
TRS-80. The thought of all this stupendously inefficient software burning up
cycles doing the same thing over and over seems kind of gross to me. But I
think my intuitions here are wrong. I'm like someone who grew up poor, and
can't bear to spend money even for something important, like going to the
doctor.
Some kinds of waste really are disgusting. SUVs, for example, would arguably
be gross even if they ran on a fuel which would never run out and generated no
pollution. SUVs are gross because they're the solution to a gross problem.
(How to make minivans look more masculine.) But not all waste is bad. Now that
we have the infrastructure to support it, counting the minutes of your long-
distance calls starts to seem niggling. If you have the resources, it's more
elegant to think of all phone calls as one kind of thing, no matter where the
other person is.
There's good waste, and bad waste. I'm interested in good waste-- the kind
where, by spending more, we can get simpler designs. How will we take
advantage of the opportunities to waste cycles that we'll get from new, faster
hardware?
The desire for speed is so deeply engrained in us, with our puny computers,
that it will take a conscious effort to overcome it. In language design, we
should be consciously seeking out situations where we can trade efficiency for
even the smallest increase in convenience.
Most data structures exist because of speed. For example, many languages today
have both strings and lists. Semantically, strings are more or less a subset
of lists in which the elements are characters. So why do you need a separate
data type? You don't, really. Strings only exist for efficiency. But it's lame
to clutter up the semantics of the language with hacks to make programs run
faster. Having strings in a language seems to be a case of premature
optimization.
If we think of the core of a language as a set of axioms, surely it's gross to
have additional axioms that add no expressive power, simply for the sake of
efficiency. Efficiency is important, but I don't think that's the right way to
get it.
The right way to solve that problem, I think, is to separate the meaning of a
program from the implementation details. Instead of having both lists and
strings, have just lists, with some way to give the compiler optimization
advice that will allow it to lay out strings as contiguous bytes if necessary.
Since speed doesn't matter in most of a program, you won't ordinarily need to
bother with this sort of micromanagement. This will be more and more true as
computers get faster.
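To make that concrete, here is a minimal sketch in Python of what it might mean for strings to be just lists in the core semantics, with representation left to optimization advice. Nothing here is a real compiler feature; `advise_contiguous` is an invented stand-in for whatever form such advice might actually take.

```python
# Sketch only: strings as plain lists of characters in the core semantics.
def string(s):
    """A 'string' in the core language: just a list of characters."""
    return list(s)

def upcase(chars):
    """String operations are ordinary list operations."""
    return [c.upper() for c in chars]

# Hypothetical optimization advice: "lay this list out as contiguous bytes."
# In this sketch it changes nothing about meaning, which is the point.
def advise_contiguous(chars):
    return chars  # a real compiler might choose a packed representation here

greeting = string("hello")
print(upcase(greeting))                         # ['H', 'E', 'L', 'L', 'O']
print(advise_contiguous(greeting) == greeting)  # True: advice preserves meaning
```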
Saying less about implementation should also make programs more flexible.
Specifications change while a program is being written, and this is not only
inevitable, but desirable.
The word "essay" comes from the French verb "essayer", which means "to try".
An essay, in the original sense, is something you write to try to figure
something out. This happens in software too. I think some of the best programs
were essays, in the sense that the authors didn't know when they started
exactly what they were trying to write.
Lisp hackers already know about the value of being flexible with data
structures. We tend to write the first version of a program so that it does
everything with lists. These initial versions can be so shockingly inefficient
that it takes a conscious effort not to think about what they're doing, just
as, for me at least, eating a steak requires a conscious effort not to think
where it came from.
What programmers in a hundred years will be looking for, most of all, is a
language where you can throw together an unbelievably inefficient version 1 of
a program with the least possible effort. At least, that's how we'd describe
it in present-day terms. What they'll say is that they want a language that's
easy to program in.
Inefficient software isn't gross. What's gross is a language that makes
programmers do needless work. Wasting programmer time is the true
inefficiency, not wasting machine time. This will become ever more clear as
computers get faster.
I think getting rid of strings is already something we could bear to think
about. We did it in [Arc](arc.html), and it seems to be a win; some operations
that would be awkward to describe as regular expressions can be described
easily as recursive functions.
How far will this flattening of data structures go? I can think of
possibilities that shock even me, with my conscientiously broadened mind. Will
we get rid of arrays, for example? After all, they're just a subset of hash
tables where the keys are vectors of integers. Will we replace hash tables
themselves with lists?
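As a sketch of the first of these, here is an "array" built on nothing but a hash table keyed by tuples of integers. This is Python, and purely illustrative; no language I know of actually defines its arrays this way.

```python
# Illustrative only: an array as a hash table whose keys are vectors of ints.
class Array:
    def __init__(self, default=None):
        self.cells = {}
        self.default = default

    def _key(self, index):
        # Accept a single int or a tuple of ints, i.e. a vector of integers.
        return index if isinstance(index, tuple) else (index,)

    def __getitem__(self, index):
        return self.cells.get(self._key(index), self.default)

    def __setitem__(self, index, value):
        self.cells[self._key(index)] = value

a = Array(default=0)
a[3] = "x"
a[1, 2] = "y"                 # a two-dimensional 'array', same machinery
print(a[3], a[1, 2], a[99])   # x y 0
```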
There are more shocking prospects even than that. The Lisp that McCarthy
described in 1960, for example, didn't have numbers. Logically, you don't need
to have a separate notion of numbers, because you can represent them as lists:
the integer n could be represented as a list of n elements. You can do math
this way. It's just unbearably inefficient.
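For the curious, the arithmetic really does work. Here is a toy Python version of the idea; the representation is mine, not McCarthy's, and it exists only to show how simple, and how gross, the scheme is.

```python
# Numbers as lists: the integer n is a list of n elements. Toy code only.
def num(n):
    return [()] * n            # n arbitrary placeholder elements

def add(a, b):
    return a + b               # addition is just concatenation

def mul(a, b):
    return [e for _ in a for e in b]   # one copy of b per element of a

def value(a):
    return len(a)              # translate back only so we can print

print(value(add(num(2), num(3))))   # 5
print(value(mul(num(4), num(6))))   # 24, stored as a 24-element list
```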
No one actually proposed implementing numbers as lists in practice. In fact,
McCarthy's 1960 paper was not, at the time, intended to be implemented at all.
It was a [theoretical exercise](rootsoflisp.html), an attempt to create a more
elegant alternative to the Turing Machine. When someone did, unexpectedly,
take this paper and translate it into a working Lisp interpreter, numbers
certainly weren't represented as lists; they were represented in binary, as in
every other language.
Could a programming language go so far as to get rid of numbers as a
fundamental data type? I ask this not so much as a serious question as a
way to play chicken with the future. It's like the hypothetical case of an
irresistible force meeting an immovable object-- here, an unimaginably
inefficient implementation meeting unimaginably great resources. I don't see
why not. The future is pretty long. If there's something we can do to decrease
the number of axioms in the core language, that would seem to be the side to
bet on as t approaches infinity. If the idea still seems unbearable in a
hundred years, maybe it won't in a thousand.
Just to be clear about this, I'm not proposing that all numerical calculations
would actually be carried out using lists. I'm proposing that the core
language, prior to any additional notations about implementation, be defined
this way. In practice any program that wanted to do any amount of math would
probably represent numbers in binary, but this would be an optimization, not
part of the core language semantics.
Another way to burn up cycles is to have many layers of software between the
application and the hardware. This too is a trend we see happening already:
many recent languages are compiled into byte code. Bill Woods once told me
that, as a rule of thumb, each layer of interpretation costs a factor of 10 in
speed. This extra cost buys you flexibility.
The very first version of Arc was an extreme case of this sort of multi-level
slowness, with corresponding benefits. It was a classic "metacircular"
interpreter written on top of Common Lisp, with a definite family resemblance
to the eval function defined in McCarthy's original Lisp paper. The whole
thing was only a couple hundred lines of code, so it was very easy to
understand and change. The Common Lisp we used, CLisp, itself runs on top of a
byte code interpreter. So here we had two levels of interpretation, one of
them (the top one) shockingly inefficient, and the language was usable. Barely
usable, I admit, but usable.
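To give the flavor of that kind of layering without reproducing Arc itself, here is a deliberately tiny evaluator in Python for expressions written as nested lists. It is my toy, not Arc, but it has the same shape: programs run through this loop, which runs on Python, which runs on something else, and each layer buys understandability at the price of speed.

```python
# A toy interpreter layered on top of Python. Not Arc; same general shape.
import operator

ENV = {"+": operator.add, "*": operator.mul, "<": operator.lt}

def ev(x, env=ENV):
    if isinstance(x, str):                 # a symbol: look it up
        return env[x]
    if not isinstance(x, list):            # a literal, e.g. a number
        return x
    if x[0] == "if":                       # (if test then else)
        _, test, then, alt = x
        return ev(then, env) if ev(test, env) else ev(alt, env)
    f, *args = [ev(e, env) for e in x]     # function application
    return f(*args)

# (if (< 1 2) (+ 3 4) 0)  =>  7
print(ev(["if", ["<", 1, 2], ["+", 3, 4], 0]))
```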
Writing software as multiple layers is a powerful technique even within
applications. Bottom-up programming means writing a program as a series of
layers, each of which serves as a language for the one above. This approach
tends to yield smaller, more flexible programs. It's also the best route to
that holy grail, reusability. A language is by definition reusable. The more
of your application you can push down into a language for writing that type of
application, the more of your software will be reusable.
Somehow the idea of reusability got attached to object-oriented programming in
the 1980s, and no amount of evidence to the contrary seems to be able to shake
it free. But although some object-oriented software is reusable, what makes it
reusable is its bottom-upness, not its object-orientedness. Consider
libraries: they're reusable because they're language, whether they're written
in an object-oriented style or not.
I don't predict the demise of object-oriented programming, by the way. Though
I don't think it has much to offer good programmers, except in certain
specialized domains, it is irresistible to large organizations. Object-
oriented programming offers a sustainable way to write spaghetti code. It lets
you accrete programs as a series of patches. Large organizations always tend
to develop software this way, and I expect this to be as true in a hundred
years as it is today.
As long as we're talking about the future, we had better talk about parallel
computation, because that's where this idea seems to live. That is, no matter
when you're talking, parallel computation seems to be something that is going
to happen in the future.
Will the future ever catch up with it? People have been talking about parallel
computation as something imminent for at least 20 years, and it hasn't
affected programming practice much so far. Or hasn't it? Already chip
designers have to think about it, and so must people trying to write systems
software on multi-cpu computers.
The real question is, how far up the ladder of abstraction will parallelism
go? In a hundred years will it affect even application programmers? Or will it
be something that compiler writers think about, but which is usually invisible
in the source code of applications?
One thing that does seem likely is that most opportunities for parallelism
will be wasted. This is a special case of my more general prediction that most
of the extra computer power we're given will go to waste. I expect that, as
with the stupendous speed of the underlying hardware, parallelism will be
something that is available if you ask for it explicitly, but ordinarily not
used. This implies that the kind of parallelism we have in a hundred years
will not, except in special applications, be massive parallelism. I expect for
ordinary programmers it will be more like being able to fork off processes
that all end up running in parallel.
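Here is a sketch of what asking for it explicitly might look like, using Python's standard library as a stand-in for whatever the actual mechanism turns out to be. Version 1 is the plain loop; the pool is an optimization bolted on later, and the program means the same thing either way.

```python
# Parallelism as an explicit, late request. The computation is made up.
from concurrent.futures import ProcessPoolExecutor

def simulate(seed):
    total = 0
    for i in range(1_000_000):     # stand-in for expensive, independent work
        total += (seed * i) % 7
    return total

if __name__ == "__main__":
    inputs = range(8)
    serial = [simulate(s) for s in inputs]        # version 1: no parallelism
    with ProcessPoolExecutor() as pool:           # later: fork off processes
        parallel = list(pool.map(simulate, inputs))
    print(serial == parallel)                     # True: same meaning, less time
```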
And this will, like asking for specific implementations of data structures, be
something that you do fairly late in the life of a program, when you try to
optimize it. Version 1s will ordinarily ignore any advantages to be got from
parallel computation, just as they will ignore advantages to be got from
specific representations of data.
Except in special kinds of applications, parallelism won't pervade the
programs that are written in a hundred years. It would be premature
optimization if it did.
How many programming languages will there be in a hundred years? There seem to
be a huge number of new programming languages lately. Part of the reason is
that faster hardware has allowed programmers to make different tradeoffs
between speed and convenience, depending on the application. If this is a real
trend, the hardware we'll have in a hundred years should only increase it.
And yet there may be only a few widely-used languages in a hundred years. Part
of the reason I say this is optimism: it seems that, if you did a really good
job, you could make a language that was ideal for writing a slow version 1,
and yet with the right optimization advice to the compiler, would also yield
very fast code when necessary. So, since I'm optimistic, I'm going to predict
that despite the huge gap they'll have between acceptable and maximal
efficiency, programmers in a hundred years will have languages that can span
most of it.
As this gap widens, profilers will become increasingly important. Little
attention is paid to profiling now. Many people still seem to believe that the
way to get fast applications is to write compilers that generate fast code. As
the gap between acceptable and maximal performance widens, it will become
increasingly clear that the way to get fast applications is to have a good
guide from one to the other.
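The guide already exists in embryo. Python, for example, ships a profiler in its standard library, and pointing it at a slow function takes a few lines; the slow function below is invented for the demonstration.

```python
# Using a stock profiler as the guide from acceptable to maximal performance.
import cProfile

def slow_unique(items):
    out = []
    for x in items:          # quadratic; the profiler's report points here
        if x not in out:
            out.append(x)
    return out

# The report shows where the time actually goes, which is the guide.
cProfile.run("slow_unique(list(range(5000)) * 2)")
```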
When I say there may only be a few languages, I'm not including domain-
specific "little languages". I think such embedded languages are a great idea,
and I expect them to proliferate. But I expect them to be written as thin
enough skins that users can see the general-purpose language underneath.
Who will design the languages of the future? One of the most exciting trends
in the last ten years has been the rise of open-source languages like Perl,
Python, and Ruby. Language design is being taken over by hackers. The results
so far are messy, but encouraging. There are some stunningly novel ideas in
Perl, for example. Many are stunningly bad, but that's always true of
ambitious efforts. At its current rate of mutation, God knows what Perl might
evolve into in a hundred years.
It's not true that those who can't do, teach (some of the best hackers I know
are professors), but it is true that there are a lot of things that those who
teach can't do. [Research](desres.html) imposes constraining caste
restrictions. In any academic field there are topics that are ok to work on
and others that aren't. Unfortunately the distinction between acceptable and
forbidden topics is usually based on how intellectual the work sounds when
described in research papers, rather than how important it is for getting good
results. The extreme case is probably literature; people studying literature
rarely say anything that would be of the slightest use to those producing it.
Though the situation is better in the sciences, the overlap between the kind
of work you're allowed to do and the kind of work that yields good languages
is distressingly small. (Olin Shivers has grumbled eloquently about this.) For
example, types seem to be an inexhaustible source of research papers, despite
the fact that static typing seems to preclude true macros-- without which, in
my opinion, no language is worth using.
The trend is not merely toward languages being developed as open-source
projects rather than "research", but toward languages being designed by the
application programmers who need to use them, rather than by compiler writers.
This seems a good trend and I expect it to continue.
Unlike physics in a hundred years, which is almost necessarily impossible to
predict, I think it may be possible in principle to design a language now that
would appeal to users in a hundred years.
One way to design a language is to just write down the program you'd like to
be able to write, regardless of whether there is a compiler that can translate
it or hardware that can run it. When you do this you can assume unlimited
resources. It seems like we ought to be able to imagine unlimited resources as
well today as in a hundred years.
What program would one like to write? Whatever is least work. Except not
quite: whatever _would be_ least work if your ideas about programming weren't
already influenced by the languages you're currently used to. Such influence
can be so pervasive that it takes a great effort to overcome it. You'd think
it would be obvious to creatures as lazy as us how to express a program with
the least effort. In fact, our ideas about what's possible tend to be so
[limited](avg.html) by whatever language we think in that easier formulations
of programs seem very surprising. They're something you have to discover, not
something you naturally sink into.
One helpful trick here is to use the [length](power.html) of the program as an
approximation for how much work it is to write. Not the length in characters,
of course, but the length in distinct syntactic elements-- basically, the size
of the parse tree. It may not be quite true that the shortest program is the
least work to write, but it's close enough that you're better off aiming for
the solid target of brevity than the fuzzy, nearby one of least work. Then the
algorithm for language design becomes: look at a program and ask, is there any
way to write this that's shorter?
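You can even mechanize a crude version of this metric today. Here is a sketch that uses Python's own parser to count syntactic elements, approximated as parse-tree nodes; the two versions of the program are made up for the comparison.

```python
# Program length measured in parse-tree nodes, not characters. Sketch only.
import ast

def tree_size(source):
    return sum(1 for _ in ast.walk(ast.parse(source)))

long_version = """
result = []
for x in items:
    result.append(x * x)
"""
short_version = "result = [x * x for x in items]"

print(tree_size(long_version), tree_size(short_version))
# The comprehension wins on this measure, as it should.
```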
In practice, writing programs in an imaginary hundred-year language will work
to varying degrees depending on how close you are to the core. Sort routines
you can write now. But it would be hard to predict now what kinds of libraries
might be needed in a hundred years. Presumably many libraries will be for
domains that don't even exist yet. If SETI@home works, for example, we'll need
libraries for communicating with aliens. Unless of course they are
sufficiently advanced that they already communicate in XML.
At the other extreme, I think you might be able to design the core language
today. In fact, some might argue that it was already mostly designed in 1958.
If the hundred year language were available today, would we want to program in
it? One way to answer this question is to look back. If present-day
programming languages had been available in 1960, would anyone have wanted to
use them?
In some ways, the answer is no. Languages today assume infrastructure that
didn't exist in 1960. For example, a language in which indentation is
significant, like Python, would not work very well on printer terminals. But
putting such problems aside-- assuming, for example, that programs were all
just written on paper-- would programmers of the 1960s have liked writing
programs in the languages we use now?
I think so. Some of the less imaginative ones, who had artifacts of early
languages built into their ideas of what a program was, might have had
trouble. (How can you manipulate data without doing pointer arithmetic? How
can you implement flow charts without gotos?) But I think the smartest
programmers would have had no trouble making the most of present-day
languages, if they'd had them.
If we had the hundred-year language now, it would at least make a great
pseudocode. What about using it to write software? Since the hundred-year
language will need to generate fast code for some applications, presumably it
could generate code efficient enough to run acceptably well on our hardware.
We might have to give more optimization advice than users in a hundred years,
but it still might be a net win.
Now we have two ideas that, if you combine them, suggest interesting
possibilities: (1) the hundred-year language could, in principle, be designed
today, and (2) such a language, if it existed, might be good to program in
today. When you see these ideas laid out like that, it's hard not to think,
why not try writing the hundred-year language now?
When you're working on language design, I think it is good to have such a
target and to keep it consciously in mind. When you learn to drive, one of the
principles they teach you is to align the car not by lining up the hood with
the stripes painted on the road, but by aiming at some point in the distance.
Even if all you care about is what happens in the next ten feet, this is the
right answer. I think we can and should do the same thing with programming
languages.
**Notes**
I believe Lisp Machine Lisp was the first language to embody the principle
that declarations (except those of dynamic variables) were merely optimization
advice, and would not change the meaning of a correct program. Common Lisp
seems to have been the first to state this explicitly.
**Thanks** to Trevor Blackwell, Robert Morris, and Dan Giffin for reading
drafts of this, and to Guido van Rossum, Jeremy Hylton, and the rest of the
Python crew for inviting me to speak at PyCon.
* * *
1993
_(This essay is from the introduction to_[On Lisp](onlisp.html) _.)_
It's a long-standing principle of programming style that the functional
elements of a program should not be too large. If some component of a program
grows beyond the stage where it's readily comprehensible, it becomes a mass of
complexity which conceals errors as easily as a big city conceals fugitives.
Such software will be hard to read, hard to test, and hard to debug.
In accordance with this principle, a large program must be divided into
pieces, and the larger the program, the more it must be divided. How do you
divide a program? The traditional approach is called _top-down design:_ you
say "the purpose of the program is to do these seven things, so I divide it
into seven major subroutines. The first subroutine has to do these four
things, so it in turn will have four of its own subroutines," and so on. This
process continues until the whole program has the right level of granularity--
each part large enough to do something substantial, but small enough to be
understood as a single unit.
Experienced Lisp programmers divide up their programs differently. As well as
top-down design, they follow a principle which could be called _bottom-up
design_ -- changing the language to suit the problem. In Lisp, you don't just
write your program down toward the language, you also build the language up
toward your program. As you're writing a program you may think "I wish Lisp
had such-and-such an operator." So you go and write it. Afterward you realize
that using the new operator would simplify the design of another part of the
program, and so on. Language and program evolve together. Like the border
between two warring states, the boundary between language and program is drawn
and redrawn, until eventually it comes to rest along the mountains and rivers,
the natural frontiers of your problem. In the end your program will look as if
the language had been designed for it. And when language and program fit one
another well, you end up with code which is clear, small, and efficient.
It's worth emphasizing that bottom-up design doesn't mean just writing the
same program in a different order. When you work bottom-up, you usually end up
with a different program. Instead of a single, monolithic program, you will
get a larger language with more abstract operators, and a smaller program
written in it. Instead of a lintel, you'll get an arch.
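The same move can be made, less powerfully, in other languages. Here is a sketch in Python rather than Lisp: first write the operator you wish the language had, then watch the programs written in it shrink. (Python happens to have `max(key=...)` built in; the point is the shape of the process, not this particular utility.)

```python
# Bottom-up design in miniature: build the language up toward the program.
def most(score, xs):
    """The operator we wished for: the element of xs maximizing score(x)."""
    best, best_score = None, None
    for x in xs:
        s = score(x)
        if best_score is None or s > best_score:
            best, best_score = x, s
    return best

# With the utility in hand, what were separate loops become one-liners.
words = ["arch", "lintel", "keystone"]
print(most(len, words))                      # longest word
print(most(lambda w: w.count("e"), words))   # word with the most e's
```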
In typical code, once you abstract out the parts which are merely bookkeeping,
what's left is much shorter; the higher you build up the language, the less
distance you will have to travel from the top down to it. This brings several
advantages:
1. By making the language do more of the work, bottom-up design yields programs which are smaller and more agile. A shorter program doesn't have to be divided into so many components, and fewer components means programs which are easier to read or modify. Fewer components also means fewer connections between components, and thus less chance for errors there. As industrial designers strive to reduce the number of moving parts in a machine, experienced Lisp programmers use bottom-up design to reduce the size and complexity of their programs.
2. Bottom-up design promotes code re-use. When you write two or more programs, many of the utilities you wrote for the first program will also be useful in the succeeding ones. Once you've acquired a large substrate of utilities, writing a new program can take only a fraction of the effort it would require if you had to start with raw Lisp.
3. Bottom-up design makes programs easier to read. An instance of this type of abstraction asks the reader to understand a general-purpose operator; an instance of functional abstraction asks the reader to understand a special-purpose subroutine. [1]
4. Because it causes you always to be on the lookout for patterns in your code, working bottom-up helps to clarify your ideas about the design of your program. If two distant components of a program are similar in form, you'll be led to notice the similarity and perhaps to redesign the program in a simpler way.
Bottom-up design is possible to a certain degree in languages other than Lisp.
Whenever you see library functions, bottom-up design is happening. However,
Lisp gives you much broader powers in this department, and augmenting the
language plays a proportionately larger role in Lisp style-- so much so that
Lisp is not just a different language, but a whole different way of
programming.
It's true that this style of development is better suited to programs which
can be written by small groups. However, at the same time, it extends the
limits of what can be done by a small group. In _The Mythical Man-Month_ ,
Frederick Brooks proposed that the productivity of a group of programmers does
not grow linearly with its size. As the size of the group increases, the
productivity of individual programmers goes down. The experience of Lisp
programming suggests a more cheerful way to phrase this law: as the size of
the group decreases, the productivity of individual programmers goes up. A
small group wins, relatively speaking, simply because it's smaller. When a
small group also takes advantage of the techniques that Lisp makes possible,
it can [win outright](avg.html).
* * *
[1] "But no one can read the program without understanding all your new
utilities." To see why such statements are usually mistaken, see Section 4.8.
* * *
April 2008
_(This essay is derived from a talk at the 2008 Startup School.)_
About a month after we started Y Combinator we came up with the phrase that
became our motto: Make something people want. We've learned a lot since then,
but if I were choosing now that's still the one I'd pick.
Another thing we tell founders is not to worry too much about the business
model, at least at first. Not because making money is unimportant, but because
it's so much easier than building something great.
A couple weeks ago I realized that if you put those two ideas together, you
get something surprising. Make something people want. Don't worry too much
about making money. What you've got is a description of a charity.
When you get an unexpected result like this, it could either be a bug or a new
discovery. Either businesses aren't supposed to be like charities, and we've
proven by reductio ad absurdum that one or both of the principles we began
with is false. Or we have a new idea.
I suspect it's the latter, because as soon as this thought occurred to me, a
whole bunch of other things fell into place.
**Examples**
For example, Craigslist. It's not a charity, but they run it like one. And
they're astoundingly successful. When you scan down the list of most popular
web sites, the number of employees at Craigslist looks like a misprint. Their
revenues aren't as high as they could be, but most startups would be happy to
trade places with them.
In Patrick O'Brian's novels, his captains always try to get upwind of their
opponents. If you're upwind, you decide when and if to engage the other ship.
Craigslist is effectively upwind of enormous revenues. They'd face some
challenges if they wanted to make more, but not the sort you face when you're
tacking upwind, trying to force a crappy product on ambivalent users by
spending ten times as much on sales as on development. [1]
I'm not saying startups should aim to end up like Craigslist. They're a
product of unusual circumstances. But they're a good model for the early
phases.
Google looked a lot like a charity in the beginning. They didn't have ads for
over a year. At year 1, Google was indistinguishable from a nonprofit. If a
nonprofit or government organization had started a project to index the web,
Google at year 1 is the limit of what they'd have produced.
Back when I was working on spam filters I thought it would be a good idea to
have a web-based email service with good spam filtering. I wasn't thinking of
it as a company. I just wanted to keep people from getting spammed. But as I
thought more about this project, I realized it would probably have to be a
company. It would cost something to run, and it would be a pain to fund with
grants and donations.
That was a surprising realization. Companies often claim to be benevolent, but
it was surprising to realize there were purely benevolent projects that had to
be embodied as companies to work.
I didn't want to start another company, so I didn't do it. But if someone had,
they'd probably be quite rich now. There was a window of about two years when
spam was increasing rapidly but all the big email services had terrible
filters. If someone had launched a new, spam-free mail service, users would
have flocked to it.
Notice the pattern here? From either direction we get to the same spot. If you
start from successful startups, you find they often behaved like nonprofits.
And if you start from ideas for nonprofits, you find they'd often make good
startups.
**Power**
How wide is this territory? Would all good nonprofits be good companies?
Possibly not. What makes Google so valuable is that their users have money. If
you make people with money love you, you can probably get some of it. But
could you also base a successful startup on behaving like a nonprofit to
people who don't have money? Could you, for example, grow a successful startup
out of curing an unfashionable but deadly disease like malaria?
I'm not sure, but I suspect that if you pushed this idea, you'd be surprised
how far it would go. For example, people who apply to Y Combinator don't
generally have much money, and yet we can profit by helping them, because with
our help they could make money. Maybe the situation is similar with malaria.
Maybe an organization that helped lift its weight off a country could benefit
from the resulting growth.
I'm not proposing this is a serious idea. I don't know anything about malaria.
But I've been kicking ideas around long enough to know when I come across a
powerful one.
One way to guess how far an idea extends is to ask yourself at what point
you'd bet against it. The thought of betting against benevolence is alarming
in the same way as saying that something is technically impossible. You're
just asking to be made a fool of, because these are such powerful forces. [2]
For example, initially I thought maybe this principle only applied to Internet
startups. Obviously it worked for Google, but what about Microsoft? Surely
Microsoft isn't benevolent? But when I think back to the beginning, they were.
Compared to IBM they were like Robin Hood. When IBM introduced the PC, they
thought they were going to make money selling hardware at high prices. But by
gaining control of the PC standard, Microsoft opened up the market to any
manufacturer. Hardware prices plummeted, and lots of people got to have
computers who couldn't otherwise have afforded them. It's the sort of thing
you'd expect Google to do.
Microsoft isn't so benevolent now. Now when one thinks of what Microsoft does
to users, all the verbs that come to mind begin with F. [3] And yet it doesn't
seem to pay. Their stock price has been flat for years. Back when they were
Robin Hood, their stock price rose like Google's. Could there be a connection?
You can see how there would be. When you're small, you can't bully customers,
so you have to charm them. Whereas when you're big you can maltreat them at
will, and you tend to, because it's easier than satisfying them. You grow big
by being nice, but you can stay big by being mean.
You get away with it till the underlying conditions change, and then all your
victims escape. So "Don't be evil" may be the most valuable thing Paul
Buchheit made for Google, because it may turn out to be an elixir of corporate
youth. I'm sure they find it constraining, but think how valuable it will be
if it saves them from lapsing into the fatal laziness that afflicted Microsoft
and IBM.
The curious thing is, this elixir is freely available to any other company.
Anyone can adopt "Don't be evil." The catch is that people will hold you to
it. So I don't think you're going to see record labels or tobacco companies
using this discovery.
**Morale**
There's a lot of external evidence that benevolence works. But how does it
work? One advantage of investing in a large number of startups is that you get
a lot of data about how they work. From what we've seen, being good seems to
help startups in three ways: it improves their morale, it makes other people
want to help them, and above all, it helps them be decisive.
Morale is tremendously important to a startup—so important that morale alone
is almost enough to determine success. Startups are often described as
emotional roller-coasters. One minute you're going to take over the world, and
the next you're doomed. The problem with feeling you're doomed is not just
that it makes you unhappy, but that it makes you _stop working_. So the
downhills of the roller-coaster are more of a self-fulfilling prophecy than
the uphills. If feeling you're going to succeed makes you work harder, that
probably improves your chances of succeeding, but if feeling you're going to
fail makes you stop working, that practically guarantees you'll fail.
Here's where benevolence comes in. If you feel you're really helping people,
you'll keep working even when it seems like your startup is doomed. Most of us
have some amount of natural benevolence. The mere fact that someone needs you
makes you want to help them. So if you start the kind of startup where users
come back each day, you've basically built yourself a giant tamagotchi. You've
made something you need to take care of.
Blogger is a famous example of a startup that went through really low lows and
survived. At one point they ran out of money and everyone left. Evan Williams
came in to work the next day, and there was no one but him. What kept him
going? Partly that users needed him. He was hosting thousands of people's
blogs. He couldn't just let the site die.
There are many advantages of launching quickly, but the most important may be
that once you have users, the tamagotchi effect kicks in. Once you have users
to take care of, you're forced to figure out what will make them happy, and
that's actually very valuable information.
The added confidence that comes from trying to help people can also help you
with investors. One of the founders of [Chatterous](http://chatterous.com)
told me recently that he and his cofounder had decided that this service was
something the world needed, so they were going to keep working on it no matter
what, even if they had to move back to Canada and live in their parents'
basements.
Once they realized this, they stopped caring so much what investors thought
about them. They still met with them, but they weren't going to die if they
didn't get their money. And you know what? The investors got a lot more
interested. They could sense that the Chatterouses were going to do this
startup with or without them.
If you're really committed and your startup is cheap to run, you become very
hard to kill. And practically all startups, even the most successful, come
close to death at some point. So if doing good for people gives you a sense of
mission that makes you harder to kill, that alone more than compensates for
whatever you lose by not choosing a more selfish project.
**Help**
Another advantage of being good is that it makes other people want to help
you. This too seems to be an inborn trait in humans.
One of the startups we've funded, [Octopart](http://octopart.com), is
currently locked in a classic battle of good versus evil. They're a search
site for industrial components. A lot of people need to search for components,
and before Octopart there was no good way to do it. That, it turned out, was
no coincidence.
Octopart built the right way to search for components. Users like it and
they've been growing rapidly. And yet for most of Octopart's life, the biggest
distributor, Digi-Key, has been trying to force them to take their prices off the
site. Octopart is sending them customers for free, and yet Digi-Key is trying
to make that traffic stop. Why? Because their current business model depends
on overcharging people who have incomplete information about prices. They
don't want search to work.
The Octoparts are the nicest guys in the world. They dropped out of the PhD
program in physics at Berkeley to do this. They just wanted to fix a problem
they encountered in their research. Imagine how much time you could save the
world's engineers if they could do searches online. So when I hear that a big,
evil company is trying to stop them in order to keep search broken, it makes
me really want to help them. It makes me spend more time on the Octoparts than
I do with most of the other startups we've funded. It just made me spend
several minutes telling you how great they are. Why? Because they're good guys
and they're trying to help the world.
If you're benevolent, people will rally around you: investors, customers,
other companies, and potential employees. In the long term the most important
may be the potential employees. I think everyone knows now that [good
hackers](gh.html) are much better than mediocre ones. If you can attract the
best hackers to work for you, as Google has, you have a big advantage. And the
very best hackers tend to be idealistic. They're not desperate for a job. They
can work wherever they want. So most want to work on things that will make the
world better.
**Compass**
But the most important advantage of being good is that it acts as a compass.
One of the hardest parts of doing a startup is that you have so many choices.
There are just two or three of you, and a thousand things you could do. How do
you decide?
Here's the answer: Do whatever's best for your users. You can hold onto this
like a rope in a hurricane, and it will save you if anything can. Follow it
and it will take you through everything you need to do.
It's even the answer to questions that seem unrelated, like how to convince
investors to give you money. If you're a good salesman, you could try to just
talk them into it. But the more reliable route is to convince them through
your users: if you make something users love enough to tell their friends, you
grow exponentially, and that will convince any investor.
Being good is a particularly useful strategy for making decisions in complex
situations because it's stateless. It's like telling the truth. The trouble
with lying is that you have to remember everything you've said in the past to
make sure you don't contradict yourself. If you tell the truth you don't have
to remember anything, and that's a really useful property in domains where
things happen fast.
For example, Y Combinator has now invested in 80 startups, 57 of which are
still alive. (The rest have died or merged or been acquired.) When you're
trying to advise 57 startups, it turns out you have to have a stateless
algorithm. You can't have ulterior motives when you have 57 things going on at
once, because you can't remember them. So our rule is just to do whatever's
best for the founders. Not because we're particularly benevolent, but because
it's the only algorithm that works on that scale.
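To make the statelessness concrete, here is a toy sketch in Python. It is my
illustration of the idea, not anything YC actually runs: the rule is a pure
function of the options in front of you, so there is no stored history to
consult or contradict.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    benefit_to_founders: float  # hypothetical score, for illustration only

def advise(options: list[Option]) -> Option:
    # Stateless rule: the decision depends only on the current options.
    # Nothing is remembered between calls, so advising 57 startups costs
    # no more bookkeeping than advising one.
    return max(options, key=lambda o: o.benefit_to_founders)

choice = advise([
    Option("push for a quick acquisition", 0.2),
    Option("help them make something users want", 0.9),
])
print(choice.name)  # -> help them make something users want
```

A rule with ulterior motives would instead need a record of what was said to
whom, and that record is exactly what becomes unmanageable at scale.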
When you write something telling people to be good, you seem to be claiming to
be good yourself. So I want to say explicitly that I am not a particularly
good person. When I was a kid I was firmly in the camp of bad. The way adults
used the word good, it seemed to be synonymous with quiet, so I grew up very
suspicious of it.
You know how there are some people whose names come up in conversation and
everyone says "He's _such_ a great guy?" People never say that about me. The
best I get is "he means well." I am not claiming to be good. At best I speak
good as a second language.
So I'm not suggesting you be good in the usual sanctimonious way. I'm
suggesting it because it works. It will work not just as a statement of
"values," but as a guide to strategy, and even a design spec for software.
Don't just not be evil. Be good.
**Notes**
[1] Fifty years ago it would have seemed shocking for a public company not to
pay dividends. Now many tech companies don't. The markets seem to have figured
out how to value potential dividends. Maybe that isn't the last step in this
evolution. Maybe markets will eventually get comfortable with potential
earnings. (VCs already are, and at least some of them consistently make
money.)
I realize this sounds like the stuff one used to hear about the "new economy"
during the Bubble. Believe me, I was not drinking that kool-aid at the time.
But I'm convinced there were some [good ideas](bubble.html) buried in Bubble
thinking. For example, it's ok to focus on growth instead of profits—but only
if the growth is genuine. You can't be buying users; that's a pyramid scheme.
But a company with rapid, genuine growth is valuable, and eventually markets
learn how to value valuable things.
[2] The idea of starting a company with benevolent aims is currently
undervalued, because the kind of people who currently make that their explicit
goal don't usually do a very good job.
It's one of the standard career paths of trustafarians to start some vaguely
benevolent business. The problem with most of them is that they either have a
bogus political agenda or are feebly executed. The trustafarians' ancestors
didn't get rich by preserving their traditional culture; maybe people in
Bolivia don't want to either. And starting an organic farm, though it's at
least straightforwardly benevolent, doesn't help people on the scale that
Google does.
Most explicitly benevolent projects don't hold themselves sufficiently
accountable. They act as if having good intentions were enough to guarantee
good effects.
[3] Users dislike their new operating system so much that they're starting
petitions to save the old one. And the old one was nothing special. The
hackers within Microsoft must know in their hearts that if the company really
cared about users they'd just advise them to switch to OSX.
**Thanks** to Trevor Blackwell, Paul Buchheit, Jessica Livingston, and Robert
Morris for reading drafts of this.
* * *
January 2015
My father is a mathematician. For most of my childhood he worked for
Westinghouse, modelling nuclear reactors.
He was one of those lucky people who know early on what they want to do. When
you talk to him about his childhood, there's a clear watershed at about age
12, when he "got interested in maths."
He grew up in the small Welsh seacoast town of
[Pwllheli](https://goo.gl/maps/rkzUm). As we retraced his walk to school on
Google Street View, he said that it had been nice growing up in the country.
"Didn't it get boring when you got to be about 15?" I asked.
"No," he said, "by then I was interested in maths."
In another conversation he told me that what he really liked was solving
problems. To me the exercises at the end of each chapter in a math textbook
represent work, or at best a way to reinforce what you learned in that
chapter. To him the problems were the reward. The text of each chapter was
just some advice about solving them. He said that as soon as he got a new
textbook he'd immediately work out all the problems — to the slight annoyance
of his teacher, since the class was supposed to work through the book
gradually.
Few people know so early or so certainly what they want to work on. But
talking to my father reminded me of a heuristic the rest of us can use. If
something that seems like work to other people doesn't seem like work to you,
that's something you're well suited for. For example, a lot of programmers I
know, including me, actually like debugging. It's not something people tend to
volunteer; one likes it the way one likes popping zits. But you may have to
like debugging to like programming, considering the degree to which
programming consists of it.
The stranger your tastes seem to other people, the stronger evidence they
probably are of what you should do. When I was in college I used to write
papers for my friends. It was quite interesting to write a paper for a class I
wasn't taking. Plus they were always so relieved.
It seemed curious that the same task could be painful to one person and
pleasant to another, but I didn't realize at the time what this imbalance
implied, because I wasn't looking for it. I didn't realize how hard it can be
to decide what you should work on, and that you sometimes have to [figure it
out](love.html) from subtle clues, like a detective solving a case in a
mystery novel. So I bet it would help a lot of people to ask themselves about
this explicitly. What seems like work to other people that doesn't seem like
work to you?
**Thanks** to Sam Altman, Trevor Blackwell, Jessica Livingston, Robert Morris,
and my father for reading drafts of this.
* * *
December 2010
I was thinking recently how inconvenient it was not to have a general term for
iPhones, iPads, and the corresponding things running Android. The closest to a
general term seems to be "mobile devices," but that (a) applies to any mobile
phone, and (b) doesn't really capture what's distinctive about the iPad.
After a few seconds it struck me that what we'll end up calling these things
is tablets. The only reason we even consider calling them "mobile devices" is
that the iPhone preceded the iPad. If the iPad had come first, we wouldn't
think of the iPhone as a phone; we'd think of it as a tablet small enough to
hold up to your ear.
The iPhone isn't so much a phone as a replacement for a phone. That's an
important distinction, because it's an early instance of what will become a
common pattern. Many if not most of the special-purpose objects around us are
going to be replaced by apps running on tablets.
This is already clear in cases like GPSes, music players, and cameras. But I
think it will surprise people how many things are going to get replaced. We
funded one startup that's [replacing keys](http://lockitron.com/). The fact
that you can change font sizes easily means the iPad effectively replaces
reading glasses. I wouldn't be surprised if by playing some clever tricks with
the accelerometer you could even replace the bathroom scale.
The advantages of doing things in software on a single device are so great
that everything that can get turned into software will. So for the next couple
years, a good [recipe for startups](http://ycombinator.com/rfs8.html) will be
to look around you for things that people haven't realized yet can be made
unnecessary by a tablet app.
In 1938 Buckminster Fuller coined the term
[ephemeralization](http://en.wikipedia.org/wiki/Ephemeralization) to describe
the increasing tendency of physical machinery to be replaced by what we would
now call software. The reason tablets are going to take over the world is not
(just) that Steve Jobs and Co are industrial design wizards, but because they
have this force behind them. The iPhone and the iPad have effectively drilled
a hole that will allow ephemeralization to flow into a lot of new areas. No
one who has studied the history of technology would want to underestimate the
power of that force.
I worry about the power Apple could have with this force behind them. I don't
want to see another era of client monoculture like the Microsoft one in the
80s and 90s. But if ephemeralization is one of the main forces driving the
spread of tablets, that suggests a way to compete with Apple: be a better
platform for it.
It has turned out to be a great thing that Apple tablets have accelerometers
in them. Developers have used the accelerometer in ways Apple could never have
imagined. That's the nature of platforms. The more versatile the tool, the
less you can predict how people will use it. So tablet makers should be
thinking: what else can we put in there? Not merely hardware, but software
too. What else can we give developers access to? Give hackers an inch and
they'll take you a mile.
**Thanks** to Sam Altman, Paul Buchheit, Jessica Livingston, and Robert Morris
for reading drafts of this.
* * *
April 2006, rev August 2009
Plato quotes Socrates as saying "the unexamined life is not worth living."
Part of what he meant was that the proper role of humans is to think, just as
the proper role of anteaters is to poke their noses into anthills.
A lot of ancient philosophy had the quality — and I don't mean this in an
insulting way — of the kind of conversations freshmen have late at night in
common rooms:
> What is our purpose? Well, we humans are as conspicuously different from
> other animals as the anteater. In our case the distinguishing feature is the
> ability to reason. So obviously that is what we should be doing, and a human
> who doesn't is doing a bad job of being human — is no better than an animal.
Now we'd give a different answer. At least, someone Socrates's age would. We'd
ask why we even suppose we have a "purpose" in life. We may be better adapted
for some things than others; we may be happier doing things we're adapted for;
but why assume purpose?
The history of ideas is a history of gradually discarding the assumption that
it's all about us. No, it turns out, the earth is not the center of the
universe — not even the center of the solar system. No, it turns out, humans
are not created by God in his own image; they're just one species among many,
descended not merely from apes, but from microorganisms. Even the concept of
"me" turns out to be fuzzy around the edges if you examine it closely.
The idea that we're the center of things is difficult to discard. So difficult
that there's probably room to discard more. Richard Dawkins made another step
in that direction only in the last several decades, with the idea of the
[selfish gene](http://en.wikipedia.org/wiki/The_Selfish_Gene). No, it turns
out, we're not even the protagonists: we're just the latest model vehicle our
genes have constructed to travel around in. And having kids is our genes
heading for the lifeboats. Reading that book snapped my brain out of its
previous way of thinking the way Darwin's must have when it first appeared.
(Few people can experience now what Darwin's contemporaries did when _The
Origin of Species_ was first published, because everyone now is raised either
to take evolution for granted, or to regard it as a heresy. No one encounters
the idea of natural selection for the first time as an adult.)
So if you want to discover things that have been overlooked till now, one
really good place to look is in our blind spot: in our natural, naive belief
that it's all about us. And expect to encounter ferocious opposition if you
do.
Conversely, if you have to choose between two theories, prefer the one that
doesn't center on you.
This principle isn't only for big ideas. It works in everyday life, too. For
example, suppose you're saving a piece of cake in the fridge, and you come
home one day to find your housemate has eaten it. Two possible theories:
> a) Your housemate did it deliberately to upset you. He _knew_ you were
> saving that piece of cake.
>
> b) Your housemate was hungry.
I say pick b. No one knows who said "never attribute to malice what can be
explained by incompetence," but it is a powerful idea. Its more general
version is our answer to the Greeks:
> Don't see purpose where there isn't.
Or better still, the positive version:
> See randomness.
* * *
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
July 2009
Now that the term "ramen profitable" has become widespread, I ought to explain
precisely what the idea entails.
Ramen profitable means a startup makes just enough to pay the founders' living
expenses. This is a different form of profitability than startups have
traditionally aimed for. Traditional profitability means a big bet is finally
paying off, whereas the main importance of ramen profitability is that it buys
you time. [1]
In the past, a startup would usually become profitable only after raising and
spending quite a lot of money. A company making computer hardware might not
become profitable for 5 years, during which they spent $50 million. But when
they did they might have revenues of $50 million a year. This kind of
profitability means the startup has succeeded.
Ramen profitability is the other extreme: a startup that becomes profitable
after 2 months, even though its revenues are only $3000 a month, because the
only employees are a couple 25 year old founders who can live on practically
nothing. Revenues of $3000 a month do not mean the company has succeeded. But
it does share something with the one that's profitable in the traditional way:
it doesn't need to raise money to survive.
Ramen profitability is an unfamiliar idea to most people because it only
recently became feasible. It's still not feasible for a lot of startups; it
would not be for most biotech startups, for example; but it is for many
software startups because they're now so cheap. For many, the only real cost
is the founders' living expenses.
The main significance of this type of profitability is that you're no longer
at the mercy of investors. If you're still losing money, then eventually
you'll either have to raise more or shut down. Once you're ramen profitable
this painful choice goes away. You can still raise money, but you don't have
to do it now.
* * *
The most obvious advantage of not needing money is that you can get better
terms. If investors know you need money, they'll sometimes take advantage of
you. Some may even deliberately stall, because they know that as you run out
of money you'll become increasingly pliable.
But there are also three less obvious advantages of ramen profitability. One
is that it makes you more attractive to investors. If you're already
profitable, on however small a scale, it shows that (a) you can get at least
someone to pay you, (b) you're serious about building things people want, and
(c) you're disciplined enough to keep expenses low.
This is reassuring to investors, because you've addressed three of their
biggest worries. It's common for them to fund companies that have smart
founders and a big market, and yet still fail. When these companies fail, it's
usually because (a) people wouldn't pay for what they made, e.g. because it
was too hard to sell to them, or the market wasn't ready yet, (b) the founders
solved the wrong problem, instead of paying attention to what users needed, or
(c) the company spent too much and burned through their funding before they
started to make money. If you're ramen profitable, you're already avoiding
these mistakes.
Another advantage of ramen profitability is that it's good for morale. A
company tends to feel rather theoretical when you first start it. It's legally
a company, but you feel like you're lying when you call it one. When people
start to pay you significant amounts, the company starts to feel real. And
your own living expenses are the milestone you feel most, because at that
point the future flips state. Now survival is the default, instead of dying.
A morale boost on that scale is very valuable in a startup, because the moral
weight of running a startup is what makes it hard. Startups are still very
rare. Why don't more people do it? The financial risk? Plenty of 25 year olds
save nothing anyway. The long hours? Plenty of people work just as long hours
in regular jobs. What keeps people from starting startups is the fear of
having so much responsibility. And this is not an irrational fear: it really
is hard to bear. Anything that takes some of that weight off you will greatly
increase your chances of surviving.
A startup that reaches ramen profitability may be more likely to succeed than
not. Which is pretty exciting, considering the bimodal distribution of
outcomes in startups: you either fail or make a lot of money.
The fourth advantage of ramen profitability is the least obvious but may be
the most important. If you don't need to raise money, you don't have to
interrupt working on the company to do it.
[Raising money](fundraising.html) is terribly distracting. You're lucky if
your productivity is a third of what it was before. And it can last for
months.
I didn't understand (or rather, remember) precisely why raising money was so
distracting till earlier this year. I'd noticed that startups we funded would
usually grind to a halt when they switched to raising money, but I didn't
remember exactly why till YC raised money itself. We had a comparatively easy
time of it; the first people I asked said yes; but it took months to work out
the details, and during that time I got hardly any real work done. Why?
Because I thought about it all the time.
At any given time there tends to be one problem that's the most urgent for a
startup. This is what you think about as you fall asleep at night and when you
take a shower in the morning. And when you start raising money, that becomes
the problem you think about. You only take one shower in the morning, and if
you're thinking about investors during it, then you're not thinking about the
product.
Whereas if you can choose when you raise money, you can pick a time when
you're not in the middle of something else, and you can probably also insist
that the round close fast. You may even be able to avoid having the round
occupy your thoughts, if you don't care whether it closes.
* * *
Ramen profitable means no more than the definition implies. It does not, for
example, imply that you're "bootstrapping" the startup—that you're never going
to take money from investors. Empirically that doesn't seem to work very well.
Few startups succeed without taking investment. Maybe as startups get cheaper
it will become more common. On the other hand, the money is there, waiting to
be invested. If startups need it less, they'll be able to get it on better
terms, which will make them more inclined to take it. That will tend to
produce an equilibrium. [2]
Another thing ramen profitability doesn't imply is Joe Kraus's idea that you
should put your [business
model](http://www.brendonwilson.com/blog/2006/04/30/joe-kraus-confessions-of-a-startup-addict/)
in beta when you put your product in beta. He believes you
should get people to pay you from the beginning. I think that's too
constraining. Facebook didn't, and they've done better than most startups.
Making money right away was not only unnecessary for them, but probably would
have been harmful. I do think Joe's rule could be useful for many startups,
though. When founders seem unfocused, I sometimes suggest they try to get
customers to pay them for something, in the hope that this constraint will
prod them into action.
The difference between Joe's idea and ramen profitability is that a ramen
profitable company doesn't have to be making money the way it ultimately will.
It just has to be making money. The most famous example is Google, which
initially made money by licensing search to sites like Yahoo.
Is there a downside to ramen profitability? Probably the biggest danger is
that it might turn you into a consulting firm. Startups have to be product
companies, in the sense of making a single thing that everyone uses. The
defining quality of startups is that they grow fast, and consulting just can't
scale the way a product can. [3] But it's pretty easy to make $3000 a month
consulting; in fact, that would be a low rate for contract programming. So
there could be a temptation to slide into consulting, and telling yourselves
you're a ramen profitable startup, when in fact you're not a startup at all.
It's ok to do a little consulting-type work at first. Startups usually have to
do something weird at first. But remember that ramen profitability is not the
destination. A startup's destination is to grow really big; ramen
profitability is a trick for [not dying](die.html) en route.
**Notes**
[1] The "ramen" in "ramen profitable" refers to instant ramen, which is just
about the cheapest food available.
Please do not take the term literally. Living on instant ramen would be very
unhealthy. Rice and beans are a better source of food. Start by investing in a
rice cooker, if you don't have one.
Rice and Beans for 2n
olive oil or butter
n yellow onions
other fresh vegetables; experiment
3n cloves garlic
n 12-oz cans white, kidney, or black beans
n cubes Knorr beef or vegetable bouillon
n teaspoons freshly ground black pepper
3n teaspoons ground cumin
n cups dry rice, preferably brown
Put rice in rice cooker. Add water as specified on rice package. (Default: 2
cups water per cup of rice.) Turn on rice cooker and forget about it.
Chop onions and other vegetables and fry in oil, over fairly low heat, till
onions are glassy. Put in chopped garlic, pepper, cumin, and a little more
fat, and stir. Keep heat low. Cook another 2 or 3 minutes, then add beans
(don't drain the beans), and stir. Throw in the bouillon cube(s), cover, and
cook on lowish heat for at least 10 minutes more. Stir vigilantly to avoid
sticking.
If you want to save money, buy beans in giant cans from discount stores.
Spices are also much cheaper when bought in bulk. If there's an Indian grocery
store near you, they'll have big bags of cumin for the same price as the
little jars in supermarkets.
[2] There's a good chance that a shift in power from investors to founders
would actually increase the size of the venture business. I think investors
currently err too far on the side of being harsh to founders. If they were
forced to stop, the whole venture business would work better, and you might
see something like the increase in trade you always see when restrictive laws
are removed.
Investors are one of the biggest sources of pain for founders; if they stopped
causing so much pain, it would be better to be a founder; and if it were
better to be a founder, more people would do it.
[3] It's conceivable that a startup could grow big by transforming consulting
into a form that would scale. But if they did that they'd really be a product
company.
**Thanks** to Jessica Livingston for reading drafts of this.
* * *
April 2020
I recently saw a [_video_](https://www.youtube.com/watch?v=NAh4uS4f78o) of TV
journalists and politicians confidently saying that the coronavirus would be
no worse than the flu. What struck me about it was not just how mistaken they
seemed, but how daring. How could they feel safe saying such things?
The answer, I realized, is that they didn't think they could get caught. They
didn't realize there was any danger in making false predictions. These people
constantly make false predictions, and get away with it, because the things
they make predictions about either have mushy enough outcomes that they can
bluster their way out of trouble, or happen so far in the future that few
remember what they said.
An epidemic is different. It falsifies your predictions rapidly and
unequivocally.
But epidemics are rare enough that these people clearly didn't realize this
was even a possibility. Instead they just continued to use their ordinary
m.o., which, as the epidemic has made clear, is to talk confidently about
things they don't understand.
An event like this is thus a uniquely powerful way of taking people's measure.
As Warren Buffett said, "It's only when the tide goes out that you learn who's
been swimming naked." And the tide has just gone out like never before.
Now that we've seen the results, let's remember what we saw, because this is
the most accurate test of credibility we're ever likely to have. I hope.
* * *
January 2007
_(Foreword to Jessica Livingston's [Founders at
Work](http://www.amazon.com/gp/product/1590597141).)_
Apparently sprinters reach their highest speed right out of the blocks, and
spend the rest of the race slowing down. The winners slow down the least. It's
that way with most startups too. The earliest phase is usually the most
productive. That's when they have the really big ideas. Imagine what Apple was
like when 100% of its employees were either Steve Jobs or Steve Wozniak.
The striking thing about this phase is that it's completely different from
most people's idea of what business is like. If you looked in people's heads
(or stock photo collections) for images representing "business," you'd get
images of people dressed up in suits, groups sitting around conference tables
looking serious, Powerpoint presentations, people producing thick reports for
one another to read. Early stage startups are the exact opposite of this. And
yet they're probably the most productive part of the whole economy.
Why the disconnect? I think there's a general principle at work here: the less
energy people expend on performance, the more they expend on appearances to
compensate. More often than not the energy they expend on seeming impressive
makes their actual performance worse. A few years ago I read an article in
which a car magazine modified the "sports" model of some production car to get
the fastest possible standing quarter mile. You know how they did it? They cut
off all the crap the manufacturer had bolted onto the car to make it _look_
fast.
Business is broken the same way that car was. The effort that goes into
looking productive is not merely wasted, but actually makes organizations less
productive. Suits, for example. Suits do not help people to think better. I
bet most executives at big companies do their best thinking when they wake up
on Sunday morning and go downstairs in their bathrobe to make a cup of coffee.
That's when you have ideas. Just imagine what a company would be like if
people could think that well at work. People do in startups, at least some of
the time. (Half the time you're in a panic because your servers are on fire,
but the other half you're thinking as deeply as most people only get to
sitting alone on a Sunday morning.)
Ditto for most of the other differences between startups and what passes for
productivity in big companies. And yet conventional ideas of professionalism
have such an iron grip on our minds that even startup founders are affected by
them. In our startup, when outsiders came to visit we tried hard to seem
"professional." We'd clean up our offices, wear better clothes, try to arrange
that a lot of people were there during conventional office hours. In fact,
programming didn't get done by well-dressed people at clean desks during
office hours. It got done by badly dressed people (I was notorious for
programming wearing just a towel) in offices strewn with junk at 2 in the
morning. But no visitor would understand that. Not even investors, who are
supposed to be able to recognize real productivity when they see it. Even we
were affected by the conventional wisdom. We thought of ourselves as
impostors, succeeding despite being totally unprofessional. It was as if we'd
created a Formula 1 car but felt sheepish because it didn't look like a car
was supposed to look.
In the car world, there are at least some people who know that a high
performance car looks like a Formula 1 racecar, not a sedan with giant rims
and a fake spoiler bolted to the trunk. Why not in business? Probably because
startups are so small. The really dramatic growth happens when a startup only
has three or four people, so only three or four people see that, whereas tens
of thousands see business as it's practiced by Boeing or Philip Morris.
This book can help fix that problem, by showing everyone what, till now, only
a handful of people got to see: what happens in the first year of a startup. This
is what real productivity looks like. This is the Formula 1 racecar. It looks
weird, but it goes fast.
Of course, big companies won't be able to do everything these startups do. In
big companies there's always going to be more politics, and less scope for
individual decisions. But seeing what startups are really like will at least
show other organizations what to aim for. The time may soon be coming when
instead of startups trying to seem more corporate, corporations will try to
seem more like startups. That would be a good thing.
[Japanese
Translation](http://www.aoky.net/articles/paul_graham/foundersatwork.htm)
* * *
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
May 2004
_(This essay was originally published in [Hackers &
Painters](http://www.amazon.com/gp/product/0596006624/104-0572701-7443937).)_
If you wanted to get rich, how would you do it? I think your best bet would be
to start or join a startup. That's been a reliable way to get rich for
hundreds of years. The word "startup" dates from the 1960s, but what happens
in one is very similar to the venture-backed trading voyages of the Middle
Ages.
Startups usually involve technology, so much so that the phrase "high-tech
startup" is almost redundant. A startup is a small company that takes on a
hard technical problem.
Lots of people get rich knowing nothing more than that. You don't have to know
physics to be a good pitcher. But I think it could give you an edge to
understand the underlying principles. Why do startups have to be small? Will a
startup inevitably stop being a startup as it grows larger? And why do they so
often work on developing new technology? Why are there so many startups
selling new drugs or computer software, and none selling corn oil or laundry
detergent?
**The Proposition**
Economically, you can think of a startup as a way to compress your whole
working life into a few years. Instead of working at a low intensity for forty
years, you work as hard as you possibly can for four. This pays especially
well in technology, where you earn a premium for working fast.
Here is a brief sketch of the economic proposition. If you're a good hacker in
your mid twenties, you can get a job paying about $80,000 per year. So on
average such a hacker must be able to do at least $80,000 worth of work per
year for the company just to break even. You could probably work twice as many
hours as a corporate employee, and if you focus you can probably get three
times as much done in an hour. [1] You should get another multiple of two, at
least, by eliminating the drag of the pointy-haired middle manager who would
be your boss in a big company. Then there is one more multiple: how much
smarter are you than your job description expects you to be? Suppose another
multiple of three. Combine all these multipliers, and I'm claiming you could
be 36 times more productive than you're expected to be in a random corporate
job. [2] If a fairly good hacker is worth $80,000 a year at a big company,
then a smart hacker working very hard without any corporate bullshit to slow
him down should be able to do work worth about $3 million a year.
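For concreteness, here is the same back-of-the-envelope arithmetic as a few
lines of Python. The individual multipliers are the essay's illustrative
guesses, not measured values:

```python
# Sketch of the economic proposition above, using the essay's own numbers.
base_salary = 80_000   # what a good hacker in his mid twenties can earn

hours = 2              # can work twice as many hours as a corporate employee
focus = 3              # gets three times as much done per hour
no_manager = 2         # no pointy-haired middle manager creating drag
smarter = 3            # smarter than the job description expects

multiplier = hours * focus * no_manager * smarter
print(multiplier)                # 36
print(base_salary * multiplier)  # 2880000, i.e. roughly $3 million a year
```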
Like all back-of-the-envelope calculations, this one has a lot of wiggle room.
I wouldn't try to defend the actual numbers. But I stand by the structure of
the calculation. I'm not claiming the multiplier is precisely 36, but it is
certainly more than 10, and probably rarely as high as 100.
If $3 million a year seems high, remember that we're talking about the limit
case: the case where you not only have zero leisure time but indeed work so
hard that you endanger your health.
Startups are not magic. They don't change the laws of wealth creation. They
just represent a point at the far end of the curve. There is a conservation
law at work here: if you want to make a million dollars, you have to endure a
million dollars' worth of pain. For example, one way to make a million dollars
would be to work for the Post Office your whole life, and save every penny of
your salary. Imagine the stress of working for the Post Office for fifty
years. In a startup you compress all this stress into three or four years. You
do tend to get a certain bulk discount if you buy the economy-size pain, but
you can't evade the fundamental conservation law. If starting a startup were
easy, everyone would do it.
**Millions, not Billions**
If $3 million a year seems high to some people, it will seem low to others.
Three _million?_ How do I get to be a billionaire, like Bill Gates?
So let's get Bill Gates out of the way right now. It's not a good idea to use
famous rich people as examples, because the press only write about the very
richest, and these tend to be outliers. Bill Gates is a smart, determined, and
hardworking man, but you need more than that to make as much money as he has.
You also need to be very lucky.
There is a large random factor in the success of any company. So the guys you
end up reading about in the papers are the ones who are very smart, totally
dedicated, _and_ win the lottery. Certainly Bill is smart and dedicated, but
Microsoft also happens to have been the beneficiary of one of the most
spectacular blunders in the history of business: the licensing deal for DOS.
No doubt Bill did everything he could to steer IBM into making that blunder,
and he has done an excellent job of exploiting it, but if there had been one
person with a brain on IBM's side, Microsoft's future would have been very
different. Microsoft at that stage had little leverage over IBM. They were
effectively a component supplier. If IBM had required an exclusive license, as
they should have, Microsoft would still have signed the deal. It would still
have meant a lot of money for them, and IBM could easily have gotten an
operating system elsewhere.
Instead IBM ended up using all its power in the market to give Microsoft
control of the PC standard. From that point, all Microsoft had to do was
execute. They never had to bet the company on a bold decision. All they had to
do was play hardball with licensees and copy more innovative products
reasonably promptly.
If IBM hadn't made this mistake, Microsoft would still have been a successful
company, but it could not have grown so big so fast. Bill Gates would be rich,
but he'd be somewhere near the bottom of the Forbes 400 with the other guys
his age.
There are a lot of ways to get rich, and this essay is about only one of them.
This essay is about how to make money by creating wealth and getting paid for
it. There are plenty of other ways to get money, including chance,
speculation, marriage, inheritance, theft, extortion, fraud, monopoly, graft,
lobbying, counterfeiting, and prospecting. Most of the greatest fortunes have
probably involved several of these.
The advantage of creating wealth, as a way to get rich, is not just that it's
more legitimate (many of the other methods are now illegal) but that it's more
_straightforward._ You just have to do something people want.
**Money Is Not Wealth**
If you want to create wealth, it will help to understand what it is. Wealth is
not the same thing as money. [3] Wealth is as old as human history. Far older,
in fact; ants have wealth. Money is a comparatively recent invention.
Wealth is the fundamental thing. Wealth is stuff we want: food, clothes,
houses, cars, gadgets, travel to interesting places, and so on. You can have
wealth without having money. If you had a magic machine that could on command
make you a car or cook you dinner or do your laundry, or do anything else you
wanted, you wouldn't need money. Whereas if you were in the middle of
Antarctica, where there is nothing to buy, it wouldn't matter how much money
you had.
Wealth is what you want, not money. But if wealth is the important thing, why
does everyone talk about making money? It is a kind of shorthand: money is a
way of moving wealth, and in practice they are usually interchangeable. But
they are not the same thing, and unless you plan to get rich by
counterfeiting, talking about _making money_ can make it harder to understand
how to make money.
Money is a side effect of specialization. In a specialized society, most of
the things you need, you can't make for yourself. If you want a potato or a
pencil or a place to live, you have to get it from someone else.
How do you get the person who grows the potatoes to give you some? By giving
him something he wants in return. But you can't get very far by trading things
directly with the people who need them. If you make violins, and none of the
local farmers wants one, how will you eat?
The solution societies find, as they get more specialized, is to make the
trade into a two-step process. Instead of trading violins directly for
potatoes, you trade violins for, say, silver, which you can then trade again
for anything else you need. The intermediate stuff-- the _medium of
exchange_-- can be anything that's rare and portable. Historically metals have
been the most common, but recently we've been using a medium of exchange,
called the _dollar_, that doesn't physically exist. It works as a medium of
exchange, however, because its rarity is guaranteed by the U.S. Government.
The advantage of a medium of exchange is that it makes trade work. The
disadvantage is that it tends to obscure what trade really means. People think
that what a business does is make money. But money is just the intermediate
stage-- just a shorthand-- for whatever people want. What most businesses
really do is make wealth. They do something people want. [4]
**The Pie Fallacy**
A surprising number of people retain from childhood the idea that there is a
fixed amount of wealth in the world. There is, in any normal family, a fixed
amount of _money_ at any moment. But that's not the same thing.
When wealth is talked about in this context, it is often described as a pie.
"You can't make the pie larger," say politicians. When you're talking about
the amount of money in one family's bank account, or the amount available to a
government from one year's tax revenue, this is true. If one person gets more,
someone else has to get less.
I can remember believing, as a child, that if a few rich people had all the
money, it left less for everyone else. Many people seem to continue to believe
something like this well into adulthood. This fallacy is usually there in the
background when you hear someone talking about how x percent of the population
have y percent of the wealth. If you plan to start a startup, then whether you
realize it or not, you're planning to disprove the Pie Fallacy.
What leads people astray here is the abstraction of money. Money is not
wealth. It's just something we use to move wealth around. So although there
may be, in certain specific moments (like your family, this month) a fixed
amount of money available to trade with other people for things you want,
there is not a fixed amount of wealth in the world. _You can make more
wealth._ Wealth has been getting created and destroyed (but on balance,
created) for all of human history.
Suppose you own a beat-up old car. Instead of sitting on your butt next
summer, you could spend the time restoring your car to pristine condition. In
doing so you create wealth. The world is-- and you specifically are-- one
pristine old car the richer. And not just in some metaphorical way. If you
sell your car, you'll get more for it.
In restoring your old car you have made yourself richer. You haven't made
anyone else poorer. So there is obviously not a fixed pie. And in fact, when
you look at it this way, you wonder why anyone would think there was. [5]
Kids know, without knowing they know, that they can create wealth. If you need
to give someone a present and don't have any money, you make one. But kids are
so bad at making things that they consider home-made presents to be a
distinct, inferior, sort of thing to store-bought ones-- a mere expression of
the proverbial thought that counts. And indeed, the lumpy ashtrays we made for
our parents did not have much of a resale market.
**Craftsmen**
The people most likely to grasp that wealth can be created are the ones who
are good at making things, the craftsmen. Their hand-made objects become
store-bought ones. But with the rise of industrialization there are fewer and
fewer craftsmen. One of the biggest remaining groups is computer programmers.
A programmer can sit down in front of a computer and _create wealth_. A good
piece of software is, in itself, a valuable thing. There is no manufacturing
to confuse the issue. Those characters you type are a complete, finished
product. If someone sat down and wrote a web browser that didn't suck (a fine
idea, by the way), the world would be that much richer. [5b]
Everyone in a company works together to create wealth, in the sense of making
more things people want. Many of the employees (e.g. the people in the
mailroom or the personnel department) work at one remove from the actual
making of stuff. Not the programmers. They literally think the product, one
line at a time. And so it's clearer to programmers that wealth is something
that's made, rather than being distributed, like slices of a pie, by some
imaginary Daddy.
It's also obvious to programmers that there are huge variations in the rate at
which wealth is created. At Viaweb we had one programmer who was a sort of
monster of productivity. I remember watching what he did one long day and
estimating that he had added several hundred thousand dollars to the market
value of the company. A great programmer, on a roll, could create a million
dollars worth of wealth in a couple weeks. A mediocre programmer over the same
period will generate zero or even negative wealth (e.g. by introducing bugs).
This is why so many of the best programmers are libertarians. In our world,
you sink or swim, and there are no excuses. When those far removed from the
creation of wealth-- undergraduates, reporters, politicians-- hear that the
richest 5% of the people have half the total wealth, they tend to think
_injustice!_ An experienced programmer would be more likely to think _is that
all?_ The top 5% of programmers probably write 99% of the good software.
Wealth can be created without being sold. Scientists, till recently at least,
effectively donated the wealth they created. We are all richer for knowing
about penicillin, because we're less likely to die from infections. Wealth is
whatever people want, and not dying is certainly something we want. Hackers
often donate their work by writing open source software that anyone can use
for free. I am much the richer for the operating system FreeBSD, which I'm
running on the computer I'm using now, and so is Yahoo, which runs it on all
their servers.
**What a Job Is**
In industrialized countries, people belong to one institution or another at
least until their twenties. After all those years you get used to the idea of
belonging to a group of people who all get up in the morning, go to some set
of buildings, and do things that they do not, ordinarily, enjoy doing.
Belonging to such a group becomes part of your identity: name, age, role,
institution. If you have to introduce yourself, or someone else describes you,
it will be as something like, John Smith, age 10, a student at such and such
elementary school, or John Smith, age 20, a student at such and such college.
When John Smith finishes school he is expected to get a job. And what getting
a job seems to mean is joining another institution. Superficially it's a lot
like college. You pick the companies you want to work for and apply to join
them. If one likes you, you become a member of this new group. You get up in
the morning and go to a new set of buildings, and do things that you do not,
ordinarily, enjoy doing. There are a few differences: life is not as much fun,
and you get paid, instead of paying, as you did in college. But the
similarities feel greater than the differences. John Smith is now John Smith,
22, a software developer at such and such corporation.
In fact John Smith's life has changed more than he realizes. Socially, a
company looks much like college, but the deeper you go into the underlying
reality, the more different it gets.
What a company does, and has to do if it wants to continue to exist, is earn
money. And the way most companies make money is by creating wealth. Companies
can be so specialized that this similarity is concealed, but it is not only
manufacturing companies that create wealth. A big component of wealth is
location. Remember that magic machine that could make you cars and cook you
dinner and so on? It would not be so useful if it delivered your dinner to a
random location in central Asia. If wealth means what people want, companies
that move things also create wealth. Ditto for many other kinds of companies
that don't make anything physical. Nearly all companies exist to do something
people want.
And that's what you do, as well, when you go to work for a company. But here
there is another layer that tends to obscure the underlying reality. In a
company, the work you do is averaged together with a lot of other people's.
You may not even be aware you're doing something people want. Your
contribution may be indirect. But the company as a whole must be giving people
something they want, or they won't make any money. And if they are paying you
x dollars a year, then on average you must be contributing at least x dollars
a year worth of work, or the company will be spending more than it makes, and
will go out of business.
Someone graduating from college thinks, and is told, that he needs to get a
job, as if the important thing were becoming a member of an institution. A
more direct way to put it would be: you need to start doing something people
want. You don't need to join a company to do that. All a company is is a group
of people working together to do something people want. It's doing something
people want that matters, not joining the group. [6]
For most people the best plan probably is to go to work for some existing
company. But it is a good idea to understand what's happening when you do
this. A job means doing something people want, averaged together with everyone
else in that company.
**Working Harder**
That averaging gets to be a problem. I think the single biggest problem
afflicting large companies is the difficulty of assigning a value to each
person's work. For the most part they punt. In a big company you get paid a
fairly predictable salary for working fairly hard. You're expected not to be
obviously incompetent or lazy, but you're not expected to devote your whole
life to your work.
It turns out, though, that there are economies of scale in how much of your
life you devote to your work. In the right kind of business, someone who
really devoted himself to work could generate ten or even a hundred times as
much wealth as an average employee. A programmer, for example, instead of
chugging along maintaining and updating an existing piece of software, could
write a whole new piece of software, and with it create a new source of
revenue.
Companies are not set up to reward people who want to do this. You can't go to
your boss and say, I'd like to start working ten times as hard, so will you
please pay me ten times as much? For one thing, the official fiction is that
you are already working as hard as you can. But a more serious problem is that
the company has no way of measuring the value of your work.
Salesmen are an exception. It's easy to measure how much revenue they
generate, and they're usually paid a percentage of it. If a salesman wants to
work harder, he can just start doing it, and he will automatically get paid
proportionally more.
There is one other job besides sales where big companies can hire first-rate
people: in the top management jobs. And for the same reason: their performance
can be measured. The top managers are held responsible for the performance of
the entire company. Because an ordinary employee's performance can't usually
be measured, he is not expected to do more than put in a solid effort. Whereas
top management, like salespeople, have to actually come up with the numbers.
The CEO of a company that tanks cannot plead that he put in a solid effort. If
the company does badly, he's done badly.
A company that could pay all its employees so straightforwardly would be
enormously successful. Many employees would work harder if they could get paid
for it. More importantly, such a company would attract people who wanted to
work especially hard. It would crush its competitors.
Unfortunately, companies can't pay everyone like salesmen. Salesmen work
alone. Most employees' work is tangled together. Suppose a company makes some
kind of consumer gadget. The engineers build a reliable gadget with all kinds
of new features; the industrial designers design a beautiful case for it; and
then the marketing people convince everyone that it's something they've got to
have. How do you know how much of the gadget's sales are due to each group's
efforts? Or, for that matter, how much is due to the creators of past gadgets
that gave the company a reputation for quality? There's no way to untangle all
their contributions. Even if you could read the minds of the consumers, you'd
find these factors were all blurred together.
If you want to go faster, it's a problem to have your work tangled together
with a large number of other people's. In a large group, your performance is
not separately measurable-- and the rest of the group slows you down.
**Measurement and Leverage**
To get rich you need to get yourself in a situation with two things,
measurement and leverage. You need to be in a position where your performance
can be measured, or there is no way to get paid more by doing more. And you
have to have leverage, in the sense that the decisions you make have a big
effect.
Measurement alone is not enough. An example of a job with measurement but not
leverage is doing piecework in a sweatshop. Your performance is measured and
you get paid accordingly, but you have no scope for decisions. The only
decision you get to make is how fast you work, and that can probably only
increase your earnings by a factor of two or three.
An example of a job with both measurement and leverage would be lead actor in
a movie. Your performance can be measured in the gross of the movie. And you
have leverage in the sense that your performance can make or break it.
CEOs also have both measurement and leverage. They're measured, in that the
performance of the company is their performance. And they have leverage in
that their decisions set the whole company moving in one direction or another.
I think everyone who gets rich by their own efforts will be found to be in a
situation with measurement and leverage. Everyone I can think of does: CEOs,
movie stars, hedge fund managers, professional athletes. A good hint to the
presence of leverage is the possibility of failure. Upside must be balanced by
downside, so if there is big potential for gain there must also be a
terrifying possibility of loss. CEOs, stars, fund managers, and athletes all
live with the sword hanging over their heads; the moment they start to suck,
they're out. If you're in a job that feels safe, you are not going to get
rich, because if there is no danger there is almost certainly no leverage.
But you don't have to become a CEO or a movie star to be in a situation with
measurement and leverage. All you need to do is be part of a small group
working on a hard problem.
**Smallness = Measurement**
If you can't measure the value of the work done by individual employees, you
can get close. You can measure the value of the work done by small groups.
One level at which you can accurately measure the revenue generated by
employees is at the level of the whole company. When the company is small, you
are thereby fairly close to measuring the contributions of individual
employees. A viable startup might only have ten employees, which puts you
within a factor of ten of measuring individual effort.
Starting or joining a startup is thus as close as most people can get to
saying to your boss, I want to work ten times as hard, so please pay me ten
times as much. There are two differences: you're not saying it to your boss,
but directly to the customers (for whom your boss is only a proxy after all),
and you're not doing it individually, but along with a small group of other
ambitious people.
It will, ordinarily, be a group. Except in a few unusual kinds of work, like
acting or writing books, you can't be a company of one person. And the people
you work with had better be good, because it's their work that yours is going
to be averaged with.
A big company is like a giant galley driven by a thousand rowers. Two things
keep the speed of the galley down. One is that individual rowers don't see any
result from working harder. The other is that, in a group of a thousand
people, the average rower is likely to be pretty average.
If you took ten people at random out of the big galley and put them in a boat
by themselves, they could probably go faster. They would have both carrot and
stick to motivate them. An energetic rower would be encouraged by the thought
that he could have a visible effect on the speed of the boat. And if someone
was lazy, the others would be more likely to notice and complain.
But the real advantage of the ten-man boat shows when you take the ten _best_
rowers out of the big galley and put them in a boat together. They will have
all the extra motivation that comes from being in a small group. But more
importantly, by selecting that small a group you can get the best rowers. Each
one will be in the top 1%. It's a much better deal for them to average their
work together with a small group of their peers than to average it with
everyone.
That's the real point of startups. Ideally, you are getting together with a
group of other people who also want to work a lot harder, and get paid a lot
more, than they would in a big company. And because startups tend to get
founded by self-selecting groups of ambitious people who already know one
another (at least by reputation), the level of measurement is more precise
than you get from smallness alone. A startup is not merely ten people, but ten
people like you.
Steve Jobs once said that the success or failure of a startup depends on the
first ten employees. I agree. If anything, it's more like the first five.
Being small is not, in itself, what makes startups kick butt, but rather that
small groups can be select. You don't want small in the sense of a village,
but small in the sense of an all-star team.
The larger a group, the closer its average member will be to the average for
the population as a whole. So all other things being equal, a very able person
in a big company is probably getting a bad deal, because his performance is
dragged down by the overall lower performance of the others. Of course, all
other things often are not equal: the able person may not care about money, or
may prefer the stability of a large company. But a very able person who does
care about money will ordinarily do better to go off and work with a small
group of peers.
**Technology = Leverage**
Startups offer anyone a way to be in a situation with measurement and
leverage. They allow measurement because they're small, and they offer
leverage because they make money by inventing new technology.
What is technology? It's _technique_. It's the way we all do things. And when
you discover a new way to do things, its value is multiplied by all the people
who use it. It is the proverbial fishing rod, rather than the fish. That's the
difference between a startup and a restaurant or a barber shop. You fry eggs
or cut hair one customer at a time. Whereas if you solve a technical problem
that a lot of people care about, you help everyone who uses your solution.
That's leverage.
If you look at history, it seems that most people who got rich by creating
wealth did it by developing new technology. You just can't fry eggs or cut
hair fast enough. What made the Florentines rich in 1200 was the discovery of
new techniques for making the high-tech product of the time, fine woven cloth.
What made the Dutch rich in 1600 was the discovery of shipbuilding and
navigation techniques that enabled them to dominate the seas of the Far East.
Fortunately there is a natural fit between smallness and solving hard
problems. The leading edge of technology moves fast. Technology that's
valuable today could be worthless in a couple years. Small companies are more
at home in this world, because they don't have layers of bureaucracy to slow
them down. Also, technical advances tend to come from unorthodox approaches,
and small companies are less constrained by convention.
Big companies can develop technology. They just can't do it quickly. Their
size makes them slow and prevents them from rewarding employees for the
extraordinary effort required. So in practice big companies only get to
develop technology in fields where large capital requirements prevent startups
from competing with them, like microprocessors, power plants, or passenger
aircraft. And even in those fields they depend heavily on startups for
components and ideas.
It's obvious that biotech or software startups exist to solve hard technical
problems, but I think it will also be found to be true in businesses that
don't seem to be about technology. McDonald's, for example, grew big by
designing a system, the McDonald's franchise, that could then be reproduced at
will all over the face of the earth. A McDonald's franchise is controlled by
rules so precise that it is practically a piece of software. Write once, run
everywhere. Ditto for Wal-Mart. Sam Walton got rich not by being a retailer,
but by designing a new kind of store.
Use difficulty as a guide not just in selecting the overall aim of your
company, but also at decision points along the way. At Viaweb one of our rules
of thumb was _run upstairs._ Suppose you are a little, nimble guy being chased
by a big, fat bully. You open a door and find yourself in a staircase. Do you
go up or down? I say up. The bully can probably run downstairs as fast as you
can. Going upstairs his bulk will be more of a disadvantage. Running upstairs
is hard for you but even harder for him.
What this meant in practice was that we deliberately sought hard problems. If
there were two features we could add to our software, both equally valuable in
proportion to their difficulty, we'd always take the harder one. Not just
because it was more valuable, but _because it was harder._ We delighted in
forcing bigger, slower competitors to follow us over difficult ground. Like
guerillas, startups prefer the difficult terrain of the mountains, where the
troops of the central government can't follow. I can remember times when we
were just exhausted after wrestling all day with some horrible technical
problem. And I'd be delighted, because something that was hard for us would be
impossible for our competitors.
This is not just a good way to run a startup. It's what a startup is. Venture
capitalists know about this and have a phrase for it: _barriers to entry._ If
you go to a VC with a new idea and ask him to invest in it, one of the first
things he'll ask is, how hard would this be for someone else to develop? That
is, how much difficult ground have you put between yourself and potential
pursuers? [7] And you had better have a convincing explanation of why your
technology would be hard to duplicate. Otherwise as soon as some big company
becomes aware of it, they'll make their own, and with their brand name,
capital, and distribution clout, they'll take away your market overnight.
You'd be like guerillas caught in the open field by regular army forces.
One way to put up barriers to entry is through patents. But patents may not
provide much protection. Competitors commonly find ways to work around a
patent. And if they can't, they may simply violate it and invite you to sue
them. A big company is not afraid to be sued; it's an everyday thing for them.
They'll make sure that suing them is expensive and takes a long time. Ever
heard of Philo Farnsworth? He invented television. The reason you've never
heard of him is that his company was not the one to make money from it. [8]
The company that did was RCA, and Farnsworth's reward for his efforts was a
decade of patent litigation.
Here, as so often, the best defense is a good offense. If you can develop
technology that's simply too hard for competitors to duplicate, you don't need
to rely on other defenses. Start by picking a hard problem, and then at every
decision point, take the harder choice. [9]
**The Catch(es)**
If it were simply a matter of working harder than an ordinary employee and
getting paid proportionately, it would obviously be a good deal to start a
startup. Up to a point it would be more fun. I don't think many people like
the slow pace of big companies, the interminable meetings, the water-cooler
conversations, the clueless middle managers, and so on.
Unfortunately there are a couple catches. One is that you can't choose the
point on the curve that you want to inhabit. You can't decide, for example,
that you'd like to work just two or three times as hard, and get paid that
much more. When you're running a startup, your competitors decide how hard you
work. And they pretty much all make the same decision: as hard as you possibly
can.
The other catch is that the payoff is only on average proportionate to your
productivity. There is, as I said before, a large random multiplier in the
success of any company. So in practice the deal is not that you're 30 times as
productive and get paid 30 times as much. It is that you're 30 times as
productive, and get paid between zero and a thousand times as much. If the
mean is 30x, the median is probably zero. Most startups tank, and not just the
dogfood portals we all heard about during the Internet Bubble. It's common for
a startup to be developing a genuinely good product, take slightly too long to
do it, run out of money, and have to shut down.
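To see how the mean can be 30x while the median is zero, here is a toy
simulation in Common Lisp, with numbers I've made up purely for
illustration: suppose each startup pays off 150x with probability 0.2
and nothing otherwise.

    ;; Toy model (invented numbers): a 20% chance of a 150x payoff
    ;; and an 80% chance of zero.  The mean works out to 30x, but the
    ;; median outcome -- what happens to most founders -- is 0.
    (defun payoff ()
      (if (< (random 1.0) 0.2) 150 0))

    (defun mean (xs)
      (/ (reduce #'+ xs) (length xs)))

    ;; (mean (loop repeat 100000 collect (payoff)))  => roughly 30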
A startup is like a mosquito. A bear can absorb a hit and a crab is armored
against one, but a mosquito is designed for one thing: to score. No energy is
wasted on defense. The defense of mosquitos, as a species, is that there are a
lot of them, but this is little consolation to the individual mosquito.
Startups, like mosquitos, tend to be an all-or-nothing proposition. And you
don't generally know which of the two you're going to get till the last
minute. Viaweb came close to tanking several times. Our trajectory was like a
sine wave. Fortunately we got bought at the top of the cycle, but it was
damned close. While we were visiting Yahoo in California to talk about selling
the company to them, we had to borrow a conference room to reassure an
investor who was about to back out of a new round of funding that we needed to
stay alive.
The all-or-nothing aspect of startups was not something we wanted. Viaweb's
hackers were all extremely risk-averse. If there had been some way just to
work super hard and get paid for it, without having a lottery mixed in, we
would have been delighted. We would have much preferred a 100% chance of $1
million to a 20% chance of $10 million, even though theoretically the second
is worth twice as much. Unfortunately, there is not currently any space in the
business world where you can get the first deal.
The closest you can get is by selling your startup in the early stages, giving
up upside (and risk) for a smaller but guaranteed payoff. We had a chance to
do this, and stupidly, as we then thought, let it slip by. After that we
became comically eager to sell. For the next year or so, if anyone expressed
the slightest curiosity about Viaweb we would try to sell them the company.
But there were no takers, so we had to keep going.
It would have been a bargain to buy us at an early stage, but companies doing
acquisitions are not looking for bargains. A company big enough to acquire
startups will be big enough to be fairly conservative, and within the company
the people in charge of acquisitions will be among the more conservative,
because they are likely to be business school types who joined the company
late. They would rather overpay for a safe choice. So it is easier to sell an
established startup, even at a large premium, than an early-stage one.
**Get Users**
I think it's a good idea to get bought, if you can. Running a business is
different from growing one. It is just as well to let a big company take over
once you reach cruising altitude. It's also financially wiser, because selling
allows you to diversify. What would you think of a financial advisor who put
all his client's assets into one volatile stock?
How do you get bought? Mostly by doing the same things you'd do if you didn't
intend to sell the company. Being profitable, for example. But getting bought
is also an art in its own right, and one that we spent a lot of time trying to
master.
Potential buyers will always delay if they can. The hard part about getting
bought is getting them to act. For most people, the most powerful motivator is
not the hope of gain, but the fear of loss. For potential acquirers, the most
powerful motivator is the prospect that one of their competitors will buy you.
This, as we found, causes CEOs to take red-eyes. The second biggest is the
worry that, if they don't buy you now, you'll continue to grow rapidly and
will cost more to acquire later, or even become a competitor.
In both cases, what it all comes down to is users. You'd think that a company
about to buy you would do a lot of research and decide for themselves how
valuable your technology was. Not at all. What they go by is the number of
users you have.
In effect, acquirers assume the customers know who has the best technology.
And this is not as stupid as it sounds. Users are the only real proof that
you've created wealth. Wealth is what people want, and if people aren't using
your software, maybe it's not just because you're bad at marketing. Maybe it's
because you haven't made what they want.
Venture capitalists have a list of danger signs to watch out for. Near the top
is the company run by techno-weenies who are obsessed with solving interesting
technical problems, instead of making users happy. In a startup, you're not
just trying to solve problems. You're trying to solve problems _that users
care about._
So I think you should make users the test, just as acquirers do. Treat a
startup as an optimization problem in which performance is measured by number
of users. As anyone who has tried to optimize software knows, the key is
measurement. When you try to guess where your program is slow, and what would
make it faster, you almost always guess wrong.
Number of users may not be the perfect test, but it will be very close. It's
what acquirers care about. It's what revenues depend on. It's what makes
competitors unhappy. It's what impresses reporters, and potential new users.
Certainly it's a better test than your a priori notions of what problems are
important to solve, no matter how technically adept you are.
Among other things, treating a startup as an optimization problem will help
you avoid another pitfall that VCs worry about, and rightly-- taking a long
time to develop a product. Now we can recognize this as something hackers
already know to avoid: premature optimization. Get a version 1.0 out there as
soon as you can. Until you have some users to measure, you're optimizing based
on guesses.
The ball you need to keep your eye on here is the underlying principle that
wealth is what people want. If you plan to get rich by creating wealth, you
have to know what people want. So few businesses really pay attention to
making customers happy. How often do you walk into a store, or call a company
on the phone, with a feeling of dread in the back of your mind? When you hear
"your call is important to us, please stay on the line," do you think, oh
good, now everything will be all right?
A restaurant can afford to serve the occasional burnt dinner. But in
technology, you cook one thing and that's what everyone eats. So any
difference between what people want and what you deliver is multiplied. You
please or annoy customers wholesale. The closer you can get to what they want,
the more wealth you generate.
**Wealth and Power**
Making wealth is not the only way to get rich. For most of human history it
has not even been the most common. Until a few centuries ago, the main sources
of wealth were mines, slaves and serfs, land, and cattle, and the only ways to
acquire these rapidly were by inheritance, marriage, conquest, or
confiscation. Naturally wealth had a bad reputation.
Two things changed. The first was the rule of law. For most of the world's
history, if you did somehow accumulate a fortune, the ruler or his henchmen
would find a way to steal it. But in medieval Europe something new happened. A
new class of merchants and manufacturers began to collect in towns. [10]
Together they were able to withstand the local feudal lord. So for the first
time in our history, the bullies stopped stealing the nerds' lunch money. This
was naturally a great incentive, and possibly indeed the main cause of the
second big change, industrialization.
A great deal has been written about the causes of the Industrial Revolution.
But surely a necessary, if not sufficient, condition was that people who made
fortunes be able to enjoy them in peace. [11] One piece of evidence is what
happened to countries that tried to return to the old model, like the Soviet
Union, and to a lesser extent Britain under the labor governments of the 1960s
and early 1970s. Take away the incentive of wealth, and technical innovation
grinds to a halt.
Remember what a startup is, economically: a way of saying, I want to work
faster. Instead of accumulating money slowly by being paid a regular wage for
fifty years, I want to get it over with as soon as possible. So governments
that forbid you to accumulate wealth are in effect decreeing that you work
slowly. They're willing to let you earn $3 million over fifty years, but
they're not willing to let you work so hard that you can do it in two. They
are like the corporate boss that you can't go to and say, I want to work ten
times as hard, so please pay me ten times as much. Except this is not a boss
you can escape by starting your own company.
The problem with working slowly is not just that technical innovation happens
slowly. It's that it tends not to happen at all. It's only when you're
deliberately looking for hard problems, as a way to use speed to the greatest
advantage, that you take on this kind of project. Developing new technology is
a pain in the ass. It is, as Edison said, one percent inspiration and ninety-
nine percent perspiration. Without the incentive of wealth, no one wants to do
it. Engineers will work on sexy projects like fighter planes and moon rockets
for ordinary salaries, but more mundane technologies like light bulbs or
semiconductors have to be developed by entrepreneurs.
Startups are not just something that happened in Silicon Valley in the last
couple decades. Since it became possible to get rich by creating wealth,
everyone who has done it has used essentially the same recipe: measurement and
leverage, where measurement comes from working with a small group, and
leverage from developing new techniques. The recipe was the same in Florence
in 1200 as it is in Santa Clara today.
Understanding this may help to answer an important question: why Europe grew
so powerful. Was it something about the geography of Europe? Was it that
Europeans are somehow racially superior? Was it their religion? The answer (or
at least the proximate cause) may be that the Europeans rode on the crest of a
powerful new idea: allowing those who made a lot of money to keep it.
Once you're allowed to do that, people who want to get rich can do it by
generating wealth instead of stealing it. The resulting technological growth
translates not only into wealth but into military power. The theory that led
to the stealth plane was developed by a Soviet mathematician. But because the
Soviet Union didn't have a computer industry, it remained for them a theory;
they didn't have hardware capable of executing the calculations fast enough to
design an actual airplane.
In that respect the Cold War teaches the same lesson as World War II and, for
that matter, most wars in recent history. Don't let a ruling class of warriors
and politicians squash the entrepreneurs. The same recipe that makes
individuals rich makes countries powerful. Let the nerds keep their lunch
money, and you rule the world.
**Notes**
[1] One valuable thing you tend to get only in startups is
_uninterruptability_. Different kinds of work have different time quanta.
Someone proofreading a manuscript could probably be interrupted every fifteen
minutes with little loss of productivity. But the time quantum for hacking is
very long: it might take an hour just to load a problem into your head. So the
cost of having someone from personnel call you about a form you forgot to fill
out can be huge.
This is why hackers give you such a baleful stare as they turn from their
screen to answer your question. Inside their heads a giant house of cards is
tottering.
The mere possibility of being interrupted deters hackers from starting hard
projects. This is why they tend to work late at night, and why it's next to
impossible to write great software in a cubicle (except late at night).
One great advantage of startups is that they don't yet have any of the people
who interrupt you. There is no personnel department, and thus no form nor
anyone to call you about it.
[2] Faced with the idea that people working for startups might be 20 or 30
times as productive as those working for large companies, executives at large
companies will naturally wonder, how could I get the people working for me to
do that? The answer is simple: pay them to.
Internally most companies are run like Communist states. If you believe in
free markets, why not turn your company into one?
Hypothesis: A company will be maximally profitable when each employee is paid
in proportion to the wealth they generate.
[3] Until recently even governments sometimes didn't grasp the distinction
between money and wealth. Adam Smith (_Wealth of Nations_, v:i) mentions
several that tried to preserve their "wealth" by forbidding the export of gold
or silver. But having more of the medium of exchange would not make a country
richer; if you have more money chasing the same amount of material wealth, the
only result is higher prices.
[4] There are many senses of the word "wealth," not all of them material. I'm
not trying to make a deep philosophical point here about which is the true
kind. I'm writing about one specific, rather technical sense of the word
"wealth." What people will give you money for. This is an interesting sort of
wealth to study, because it is the kind that prevents you from starving. And
what people will give you money for depends on them, not you.
When you're starting a business, it's easy to slide into thinking that
customers want what you do. During the Internet Bubble I talked to a woman
who, because she liked the outdoors, was starting an "outdoor portal." You
know what kind of business you should start if you like the outdoors? One to
recover data from crashed hard disks.
What's the connection? None at all. Which is precisely my point. If you want
to create wealth (in the narrow technical sense of not starving) then you
should be especially skeptical about any plan that centers on things you like
doing. That is where your idea of what's valuable is least likely to coincide
with other people's.
[5] In the average car restoration you probably do make everyone else
microscopically poorer, by doing a small amount of damage to the environment.
While environmental costs should be taken into account, they don't make wealth
a zero-sum game. For example, if you repair a machine that's broken because a
part has come unscrewed, you create wealth with no environmental cost.
[5b] This essay was written before Firefox.
[6] Many people feel confused and depressed in their early twenties. Life
seemed so much more fun in college. Well, of course it was. Don't be fooled by
the surface similarities. You've gone from guest to servant. It's possible to
have fun in this new world. Among other things, you now get to go behind the
doors that say "authorized personnel only." But the change is a shock at
first, and all the worse if you're not consciously aware of it.
[7] When VCs asked us how long it would take another startup to duplicate our
software, we used to reply that they probably wouldn't be able to at all. I
think this made us seem naive, or liars.
[8] Few technologies have one clear inventor. So as a rule, if you know the
"inventor" of something (the telephone, the assembly line, the airplane, the
light bulb, the transistor) it is because their company made money from it,
and the company's PR people worked hard to spread the story. If you don't know
who invented something (the automobile, the television, the computer, the jet
engine, the laser), it's because other companies made all the money.
[9] This is a good plan for life in general. If you have two choices, choose
the harder. If you're trying to decide whether to go out running or sit home
and watch TV, go running. Probably the reason this trick works so well is that
when you have two choices and one is harder, the only reason you're even
considering the other is laziness. You know in the back of your mind what's
the right thing to do, and this trick merely forces you to acknowledge it.
[10] It is probably no accident that the middle class first appeared in
northern Italy and the low countries, where there were no strong central
governments. These two regions were the richest of their time and became the
twin centers from which Renaissance civilization radiated. If they no longer
play that role, it is because other places, like the United States, have been
truer to the principles they discovered.
[11] It may indeed be a sufficient condition. But if so, why didn't the
Industrial Revolution happen earlier? Two possible (and not incompatible)
answers: (a) It did. The Industrial Revolution was one in a series. (b)
Because in medieval towns, monopolies and guild regulations initially slowed
the development of new means of production.
|
December 2006
I grew up believing that taste is just a matter of personal preference. Each
person has things they like, but no one's preferences are any better than
anyone else's. There is no such thing as _good_ taste.
Like a lot of things I grew up believing, this turns out to be false, and I'm
going to try to explain why.
One problem with saying there's no such thing as good taste is that it also
means there's no such thing as good art. If there were good art, then people
who liked it would have better taste than people who didn't. So if you discard
taste, you also have to discard the idea of art being good, and artists being
good at making it.
It was pulling on that thread that unravelled my childhood faith in
relativism. When you're trying to make things, taste becomes a practical
matter. You have to decide what to do next. Would it make the painting better
if I changed that part? If there's no such thing as better, it doesn't matter
what you do. In fact, it doesn't matter if you paint at all. You could just go
out and buy a ready-made blank canvas. If there's no such thing as good, that
would be just as great an achievement as the ceiling of the Sistine Chapel.
Less laborious, certainly, but if you can achieve the same level of
performance with less effort, surely that's more impressive, not less.
Yet that doesn't seem quite right, does it?
**Audience**
I think the key to this puzzle is to remember that art has an audience. Art
has a purpose, which is to interest its audience. Good art (like good
anything) is art that achieves its purpose particularly well. The meaning of
"interest" can vary. Some works of art are meant to shock, and others to
please; some are meant to jump out at you, and others to sit quietly in the
background. But all art has to work on an audience, and—here's the critical
point—members of the audience share things in common.
For example, nearly all humans find human faces engaging. It seems to be wired
into us. Babies can recognize faces practically from birth. In fact, faces
seem to have co-evolved with our interest in them; the face is the body's
billboard. So all other things being equal, a painting with faces in it will
interest people more than one without. [1]
One reason it's easy to believe that taste is merely personal preference is
that, if it isn't, how do you pick out the people with better taste? There are
billions of people, each with their own opinion; on what grounds can you
prefer one to another? [2]
But if audiences have a lot in common, you're not in a position of having to
choose one out of a random set of individual biases, because the set isn't
random. All humans find faces engaging—practically by definition: face
recognition is in our DNA. And so having a notion of good art, in the sense of
art that does its job well, doesn't require you to pick out a few individuals
and label their opinions as correct. No matter who you pick, they'll find
faces engaging.
Of course, space aliens probably wouldn't find human faces engaging. But there
might be other things they shared in common with us. The most likely source of
examples is math. I expect space aliens would agree with us most of the time
about which of two proofs was better. Erdős thought so. He called a maximally
elegant proof one out of God's book, and presumably God's book is universal.
[3]
Once you start talking about audiences, you don't have to argue simply that
there are or aren't standards of taste. Instead tastes are a series of
concentric rings, like ripples in a pond. There are some things that will
appeal to you and your friends, others that will appeal to most people your
age, others that will appeal to most humans, and perhaps others that would
appeal to most sentient beings (whatever that means).
The picture is slightly more complicated than that, because in the middle of
the pond there are overlapping sets of ripples. For example, there might be
things that appealed particularly to men, or to people from a certain culture.
If good art is art that interests its audience, then when you talk about art
being good, you also have to say for what audience. So is it meaningless to
talk about art simply being good or bad? No, because one audience is the set
of all possible humans. I think that's the audience people are implicitly
talking about when they say a work of art is good: they mean it would engage
any human. [4]
And that is a meaningful test, because although, like any everyday concept,
"human" is fuzzy around the edges, there are a lot of things practically all
humans have in common. In addition to our interest in faces, there's something
special about primary colors for nearly all of us, because it's an artifact of
the way our eyes work. Most humans will also find images of 3D objects
engaging, because that also seems to be built into our visual perception. [5]
And beneath that there's edge-finding, which makes images with definite shapes
more engaging than mere blur.
Humans have a lot more in common than this, of course. My goal is not to
compile a complete list, just to show that there's some solid ground here.
People's preferences aren't random. So an artist working on a painting and
trying to decide whether to change some part of it doesn't have to think "Why
bother? I might as well flip a coin." Instead he can ask "What would make the
painting more interesting to people?" And the reason you can't equal
Michelangelo by going out and buying a blank canvas is that the ceiling of the
Sistine Chapel is more interesting to people.
A lot of philosophers have had a hard time believing it was possible for there
to be objective standards for art. It seemed obvious that beauty, for example,
was something that happened in the head of the observer, not something that
was a property of objects. It was thus "subjective" rather than "objective."
But in fact if you narrow the definition of beauty to something that works a
certain way on humans, and you observe how much humans have in common, it
turns out to be a property of objects after all. You don't have to choose
between something being a property of the subject or the object if subjects
all react similarly. Being good art is thus a property of objects as much as,
say, being toxic to humans is: it's good art if it consistently affects humans
in a certain way.
**Error**
So could we figure out what the best art is by taking a vote? After all, if
appealing to humans is the test, we should be able to just ask them, right?
Well, not quite. For products of nature that might work. I'd be willing to eat
the apple the world's population had voted most delicious, and I'd probably be
willing to visit the beach they voted most beautiful, but having to look at
the painting they voted the best would be a crapshoot.
Man-made stuff is different. For one thing, artists, unlike apple trees, often
deliberately try to trick us. Some tricks are quite subtle. For example, any
work of art sets expectations by its level of finish. You don't expect
photographic accuracy in something that looks like a quick sketch. So one
widely used trick, especially among illustrators, is to intentionally make a
painting or drawing look like it was done faster than it was. The average
person looks at it and thinks: how amazingly skillful. It's like saying
something clever in a conversation as if you'd thought of it on the spur of
the moment, when in fact you'd worked it out the day before.
Another much less subtle influence is brand. If you go to see the Mona Lisa,
you'll probably be disappointed, because it's hidden behind a thick glass wall
and surrounded by a frenzied crowd taking pictures of themselves in front of
it. At best you can see it the way you see a friend across the room at a
crowded party. The Louvre might as well replace it with a copy; no one would be
able to tell. And yet the Mona Lisa is a small, dark painting. If you found
people who'd never seen an image of it and sent them to a museum in which it
was hanging among other paintings with a tag labelling it as a portrait by an
unknown fifteenth century artist, most would walk by without giving it a
second look.
For the average person, brand dominates all other factors in the judgement of
art. Seeing a painting they recognize from reproductions is so overwhelming
that their response to it as a painting is drowned out.
And then of course there are the tricks people play on themselves. Most adults
looking at art worry that if they don't like what they're supposed to, they'll
be thought uncultured. This doesn't just affect what they claim to like; they
actually make themselves like things they're supposed to.
That's why you can't just take a vote. Though appeal to people is a meaningful
test, in practice you can't measure it, just as you can't find north using a
compass with a magnet sitting next to it. There are sources of error so
powerful that if you take a vote, all you're measuring is the error.
We can, however, approach our goal from another direction, by using ourselves
as guinea pigs. You're human. If you want to know what the basic human
reaction to a piece of art would be, you can at least approach that by getting
rid of the sources of error in your own judgements.
For example, while anyone's reaction to a famous painting will be warped at
first by its fame, there are ways to decrease its effects. One is to come back
to the painting over and over. After a few days the fame wears off, and you
can start to see it as a painting. Another is to stand close. A painting
familiar from reproductions looks more familiar from ten feet away; close in
you see details that get lost in reproductions, and which you're therefore
seeing for the first time.
There are two main kinds of error that get in the way of seeing a work of art:
biases you bring from your own circumstances, and tricks played by the artist.
Tricks are straightforward to correct for. Merely being aware of them usually
prevents them from working. For example, when I was ten I used to be very
impressed by airbrushed lettering that looked like shiny metal. But once you
study how it's done, you see that it's a pretty cheesy trick—one of the sort
that relies on pushing a few visual buttons really hard to temporarily
overwhelm the viewer. It's like trying to convince someone by shouting at
them.
The way not to be vulnerable to tricks is to explicitly seek out and catalog
them. When you notice a whiff of dishonesty coming from some kind of art, stop
and figure out what's going on. When someone is obviously pandering to an
audience that's easily fooled, whether it's someone making shiny stuff to
impress ten year olds, or someone making conspicuously avant-garde stuff to
impress would-be intellectuals, learn how they do it. Once you've seen enough
examples of specific types of tricks, you start to become a connoisseur of
trickery in general, just as professional magicians are.
What counts as a trick? Roughly, it's something done with contempt for the
audience. For example, the guys designing Ferraris in the 1950s were probably
designing cars that they themselves admired. Whereas I suspect over at General
Motors the marketing people are telling the designers, "Most people who buy
SUVs do it to seem manly, not to drive off-road. So don't worry about the
suspension; just make that sucker as big and tough-looking as you can." [6]
I think with some effort you can make yourself nearly immune to tricks. It's
harder to escape the influence of your own circumstances, but you can at least
move in that direction. The way to do it is to travel widely, in both time and
space. If you go and see all the different kinds of things people like in
other cultures, and learn about all the different things people have liked in
the past, you'll probably find it changes what you like. I doubt you could
ever make yourself into a completely universal person, if only because you can
only travel in one direction in time. But if you find a work of art that would
appeal equally to your friends, to people in Nepal, and to the ancient Greeks,
you're probably onto something.
My main point here is not how to have good taste, but that there can even be
such a thing. And I think I've shown that. There is such a thing as good art.
It's art that interests its human audience, and since humans have a lot in
common, what interests them is not random. Since there's such a thing as good
art, there's also such a thing as good taste, which is the ability to
recognize it.
If we were talking about the taste of apples, I'd agree that taste is just
personal preference. Some people like certain kinds of apples and others like
other kinds, but how can you say that one is right and the other wrong? [7]
The thing is, art isn't apples. Art is man-made. It comes with a lot of
cultural baggage, and in addition the people who make it often try to trick
us. Most people's judgement of art is dominated by these extraneous factors;
they're like someone trying to judge the taste of apples in a dish made of
equal parts apples and jalapeno peppers. All they're tasting is the peppers.
So it turns out you can pick out some people and say that they have better
taste than others: they're the ones who actually taste art like apples.
Or to put it more prosaically, they're the people who (a) are hard to trick,
and (b) don't just like whatever they grew up with. If you could find people
who'd eliminated all such influences on their judgement, you'd probably still
see variation in what they liked. But because humans have so much in common,
you'd also find they agreed on a lot. They'd nearly all prefer the ceiling of
the Sistine Chapel to a blank canvas.
**Making It**
I wrote this essay because I was tired of hearing "taste is subjective" and
wanted to kill it once and for all. Anyone who makes things knows intuitively
that's not true. When you're trying to make art, the temptation to be lazy is
as great as in any other kind of work. Of course it matters to do a good job.
And yet you can see how great a hold "taste is subjective" has even in the art
world by how nervous it makes people to talk about art being good or bad.
Those whose jobs require them to judge art, like curators, mostly resort to
euphemisms like "significant" or "important" or (getting dangerously close)
"realized." [8]
I don't have any illusions that being able to talk about art being good or bad
will cause the people who talk about it to have anything more useful to say.
Indeed, one of the reasons "taste is subjective" found such a receptive
audience is that, historically, the things people have said about good taste
have generally been such nonsense.
It's not for the people who talk about art that I want to free the idea of
good art, but for those who [make](taste.html) it. Right now, ambitious kids
going to art school run smack into a brick wall. They arrive hoping one day to
be as good as the famous artists they've seen in books, and the first thing
they learn is that the concept of good has been retired. Instead everyone is
just supposed to explore their own personal vision. [9]
When I was in art school, we were looking one day at a slide of some great
fifteenth century painting, and one of the students asked "Why don't artists
paint like that now?" The room suddenly got quiet. Though rarely asked out
loud, this question lurks uncomfortably in the back of every art student's
mind. It was as if someone had brought up the topic of lung cancer in a
meeting within Philip Morris.
"Well," the professor replied, "we're interested in different questions now."
He was a pretty nice guy, but at the time I couldn't help wishing I could send
him back to fifteenth century Florence to explain in person to Leonardo & Co.
how we had moved beyond their early, limited concept of art. Just imagine that
conversation.
In fact, one of the reasons artists in fifteenth century Florence made such
great things was that they believed you could make great things. [10] They
were intensely competitive and were always trying to outdo one another, like
mathematicians or physicists today—maybe like anyone who has ever done
anything really well.
The idea that you could make great things was not just a useful illusion. They
were actually right. So the most important consequence of realizing there can
be good art is that it frees artists to try to make it. To the ambitious kids
arriving at art school this year hoping one day to make great things, I say:
don't believe it when they tell you this is a naive and outdated ambition.
There is such a thing as good art, and if you try to make it, there are people
who will notice.
**Notes**
[1] This is not to say, of course, that good paintings must have faces in
them, just that everyone's visual piano has that key on it. There are
situations in which you want to avoid faces, precisely because they attract so
much attention. But you can see how universally faces work by their prevalence
in advertising.
[2] The other reason it's easy to believe is that it makes people feel good.
To a kid, this idea is crack. In every other respect they're constantly being
told that they have a lot to learn. But in this they're perfect. Their opinion
carries the same weight as any adult's. You should probably question anything
you believed as a kid that you'd want to believe this much.
[3] It's conceivable that the elegance of proofs is quantifiable, in the sense
that there may be some formal measure that turns out to coincide with
mathematicians' judgements. Perhaps it would be worth trying to make a formal
language for proofs in which those considered more elegant consistently came
out shorter (perhaps after being macroexpanded or compiled).
[4] Maybe it would be possible to make art that would appeal to space aliens,
but I'm not going to get into that because (a) it's too hard to answer, and
(b) I'm satisfied if I can establish that good art is a meaningful idea for
human audiences.
[5] If early abstract paintings seem more interesting than later ones, it may
be because the first abstract painters were trained to paint from life, and
their hands thus tended to make the kind of gestures you use in representing
physical things. In effect they were saying "scaramara" instead of "uebfgbsb."
[6] It's a bit more complicated, because sometimes artists unconsciously use
tricks by imitating art that does.
[7] I phrased this in terms of the taste of apples because if people can see
the apples, they can be fooled. When I was a kid most apples were a variety
called Red Delicious that had been bred to look appealing in stores, but which
didn't taste very good.
[8] To be fair, curators are in a difficult position. If they're dealing with
recent art, they have to include things in shows that they think are bad.
That's because the test for what gets included in shows is basically the
market price, and for recent art that is largely determined by successful
businessmen and their wives. So it's not always intellectual dishonesty that
makes curators and dealers use neutral-sounding language.
[9] What happens in practice is that everyone gets really good at _talking_
about art. As the art itself gets more random, the effort that would have gone
into the work goes instead into the intellectual sounding theory behind it.
"My work represents an exploration of gender and sexuality in an urban
context," etc. Different people win at that game.
[10] There were several other reasons, including that Florence was then the
richest and most sophisticated city in the world, and that they lived in a
time before photography had (a) killed portraiture as a source of income and
(b) made brand the dominant factor in the sale of art.
Incidentally, I'm not saying that good art = fifteenth century European art.
I'm not saying we should make what they made, but that we should work like
they worked. There are fields now in which many people work with the same
energy and honesty that fifteenth century artists did, but art is not one of
them.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Robert Morris for
reading drafts of this, and to Paul Watson for permission to use the image at
the top.
|
May 2002
"We were after the C++ programmers. We managed to drag a lot of them about
halfway to Lisp."
- Guy Steele, co-author of the Java spec
In the software business there is an ongoing struggle between the pointy-
headed academics, and another equally formidable force, the pointy-haired
bosses. Everyone knows who the pointy-haired boss is, right? I think most
people in the technology world not only recognize this cartoon character, but
know the actual person in their company that he is modelled upon.
The pointy-haired boss miraculously combines two qualities that are common by
themselves, but rarely seen together: (a) he knows nothing whatsoever about
technology, and (b) he has very strong opinions about it.
Suppose, for example, you need to write a piece of software. The pointy-haired
boss has no idea how this software has to work, and can't tell one programming
language from another, and yet he knows what language you should write it in.
Exactly. He thinks you should write it in Java.
Why does he think this? Let's take a look inside the brain of the pointy-
haired boss. What he's thinking is something like this. Java is a standard. I
know it must be, because I read about it in the press all the time. Since it
is a standard, I won't get in trouble for using it. And that also means there
will always be lots of Java programmers, so if the programmers working for me
now quit, as programmers working for me mysteriously always do, I can easily
replace them.
Well, this doesn't sound that unreasonable. But it's all based on one unspoken
assumption, and that assumption turns out to be false. The pointy-haired boss
believes that all programming languages are pretty much equivalent. If that
were true, he would be right on target. If languages are all equivalent, sure,
use whatever language everyone else is using.
But all languages are not equivalent, and I think I can prove this to you
without even getting into the differences between them. If you asked the
pointy-haired boss in 1992 what language software should be written in, he
would have answered with as little hesitation as he does today. Software
should be written in C++. But if languages are all equivalent, why should the
pointy-haired boss's opinion ever change? In fact, why should the developers
of Java have even bothered to create a new language?
Presumably, if you create a new language, it's because you think it's better
in some way than what people already had. And in fact, Gosling makes it clear
in the first Java white paper that Java was designed to fix some problems with
C++. So there you have it: languages are not all equivalent. If you follow the
trail through the pointy-haired boss's brain to Java and then back through
Java's history to its origins, you end up holding an idea that contradicts the
assumption you started with.
So, who's right? James Gosling, or the pointy-haired boss? Not surprisingly,
Gosling is right. Some languages _are_ better, for certain problems, than
others. And you know, that raises some interesting questions. Java was
designed to be better, for certain problems, than C++. What problems? When is
Java better and when is C++? Are there situations where other languages are
better than either of them?
Once you start considering this question, you have opened a real can of worms.
If the pointy-haired boss had to think about the problem in its full
complexity, it would make his brain explode. As long as he considers all
languages equivalent, all he has to do is choose the one that seems to have
the most momentum, and since that is more a question of fashion than
technology, even he can probably get the right answer. But if languages vary,
he suddenly has to solve two simultaneous equations, trying to find an optimal
balance between two things he knows nothing about: the relative suitability of
the twenty or so leading languages for the problem he needs to solve, and the
odds of finding programmers, libraries, etc. for each. If that's what's on the
other side of the door, it is no surprise that the pointy-haired boss doesn't
want to open it.
The disadvantage of believing that all programming languages are equivalent is
that it's not true. But the advantage is that it makes your life a lot
simpler. And I think that's the main reason the idea is so widespread. It is a
_comfortable_ idea.
We know that Java must be pretty good, because it is the cool, new programming
language. Or is it? If you look at the world of programming languages from a
distance, it looks like Java is the latest thing. (From far enough away, all
you can see is the large, flashing billboard paid for by Sun.) But if you look
at this world up close, you find that there are degrees of coolness. Within
the hacker subculture, there is another language called Perl that is
considered a lot cooler than Java. Slashdot, for example, is generated by
Perl. I don't think you would find those guys using Java Server Pages. But
there is another, newer language, called Python, whose users tend to look down
on Perl, and [more](accgen.html) waiting in the wings.
If you look at these languages in order, Java, Perl, Python, you notice an
interesting pattern. At least, you notice this pattern if you are a Lisp
hacker. Each one is progressively more like Lisp. Python copies even features
that many Lisp hackers consider to be mistakes. You could translate simple
Lisp programs into Python line for line. It's 2002, and programming languages
have almost caught up with 1958.
**Catching Up with Math**
What I mean is that Lisp was first discovered by John McCarthy in 1958, and
popular programming languages are only now catching up with the ideas he
developed then.
Now, how could that be true? Isn't computer technology something that changes
very rapidly? I mean, in 1958, computers were refrigerator-sized behemoths
with the processing power of a wristwatch. How could any technology that old
even be relevant, let alone superior to the latest developments?
I'll tell you how. It's because Lisp was not really designed to be a
programming language, at least not in the sense we mean today. What we mean by
a programming language is something we use to tell a computer what to do.
McCarthy did eventually intend to develop a programming language in this
sense, but the Lisp that we actually ended up with was based on something
separate that he did as a [theoretical exercise](rootsoflisp.html)-- an
effort to define a more convenient alternative to the Turing Machine. As
McCarthy said later,
> Another way to show that Lisp was neater than Turing machines was to write a
> universal Lisp function and show that it is briefer and more comprehensible
> than the description of a universal Turing machine. This was the Lisp
> function
> [_eval_](https://sep.turbifycdn.com/ty/cdn/paulgraham/jmc.lisp?t=1688221954&)...,
> which computes the value of a Lisp expression.... Writing _eval_ required
> inventing a notation representing Lisp functions as Lisp data, and such a
> notation was devised for the purposes of the paper with no thought that it
> would be used to express Lisp programs in practice.
What happened next was that, some time in late 1958, Steve Russell, one of
McCarthy's grad students, looked at this definition of _eval_ and realized
that if he translated it into machine language, the result would be a Lisp
interpreter.
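To get a feel for what Russell implemented, here is a drastically
pared-down sketch of the idea, written in present-day Common Lisp
rather than McCarthy's original notation, and handling only numbers,
variables, and function calls:

    ;; A toy version of the idea behind eval: a Lisp function that
    ;; takes a Lisp expression, represented as Lisp data, and computes
    ;; its value.  McCarthy's real definition also handles quote,
    ;; cond, lambda, and so on.
    (defun toy-eval (expr env)
      (cond ((numberp expr) expr)                   ; numbers evaluate to themselves
            ((symbolp expr) (cdr (assoc expr env))) ; look variables up in env
            (t (apply (symbol-function (car expr))  ; otherwise: a function call
                      (mapcar (lambda (e) (toy-eval e env))
                              (cdr expr))))))

    ;; (toy-eval '(+ x (* 2 3)) '((x . 10)))  => 16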
This was a big surprise at the time. Here is what McCarthy said about it later
in an interview:
> Steve Russell said, look, why don't I program this _eval_..., and I said to
> him, ho, ho, you're confusing theory with practice, this _eval_ is intended
> for reading, not for computing. But he went ahead and did it. That is, he
> compiled the _eval_ in my paper into [IBM] 704 machine code, fixing bugs,
> and then advertised this as a Lisp interpreter, which it certainly was. So
> at that point Lisp had essentially the form that it has today....
Suddenly, in a matter of weeks I think, McCarthy found his theoretical
exercise transformed into an actual programming language-- and a more powerful
one than he had intended.
So the short explanation of why this 1950s language is not obsolete is that it
was not technology but math, and math doesn't get stale. The right thing to
compare Lisp to is not 1950s hardware, but, say, the Quicksort algorithm,
which was discovered in 1960 and is still the fastest general-purpose sort.
There is one other language still surviving from the 1950s, Fortran, and it
represents the opposite approach to language design. Lisp was a piece of
theory that unexpectedly got turned into a programming language. Fortran was
developed intentionally as a programming language, but what we would now
consider a very low-level one.
[Fortran I](history.html), the language that was developed in 1956, was a very
different animal from present-day Fortran. Fortran I was pretty much assembly
language with math. In some ways it was less powerful than more recent
assembly languages; there were no subroutines, for example, only branches.
Present-day Fortran is now arguably closer to Lisp than to Fortran I.
Lisp and Fortran were the trunks of two separate evolutionary trees, one
rooted in math and one rooted in machine architecture. These two trees have
been converging ever since. Lisp started out powerful, and over the next
twenty years got fast. So-called mainstream languages started out fast, and
over the next forty years gradually got more powerful, until now the most
advanced of them are fairly close to Lisp. Close, but they are still missing a
few things....
**What Made Lisp Different**
When it was first developed, Lisp embodied nine new ideas. Some of these we
now take for granted, others are only seen in more advanced languages, and two
are still unique to Lisp. The nine ideas are, in order of their adoption by
the mainstream,
1. Conditionals. A conditional is an if-then-else construct. We take these for granted now, but Fortran I didn't have them. It had only a conditional goto closely based on the underlying machine instruction.
2. A function type. In Lisp, functions are a data type just like integers or strings. They have a literal representation, can be stored in variables, can be passed as arguments, and so on.
3. Recursion. Lisp was the first programming language to support it.
4. Dynamic typing. In Lisp, all variables are effectively pointers. Values are what have types, not variables, and assigning or binding variables means copying pointers, not what they point to.
5. Garbage-collection.
6. Programs composed of expressions. Lisp programs are trees of expressions, each of which returns a value. This is in contrast to Fortran and most succeeding languages, which distinguish between expressions and statements.
It was natural to have this distinction in Fortran I because you could not
nest statements. And so while you needed expressions for math to work, there
was no point in making anything else return a value, because there could not
be anything waiting for it.
This limitation went away with the arrival of block-structured languages, but
by then it was too late. The distinction between expressions and statements
was entrenched. It spread from Fortran into Algol and then to both their
descendants.
7. A symbol type. Symbols are effectively pointers to strings stored in a hash table. So you can test equality by comparing a pointer, instead of comparing each character.
8. A notation for code using trees of symbols and constants.
9. The whole language there all the time. There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.
Running code at read-time lets users reprogram Lisp's syntax; running code at
compile-time is the basis of macros; compiling at runtime is the basis of
Lisp's use as an extension language in programs like Emacs; and reading at
runtime enables programs to communicate using s-expressions, an idea recently
reinvented as XML.
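Two of these ideas are easy to show in a few lines of Common Lisp. (The
names here are my own, chosen for illustration.)

    ;; Idea 2: functions are ordinary values.  They can be stored in
    ;; data structures, passed around, and called later.
    (defvar *ops*
      (list (cons 'double (lambda (x) (* 2 x)))
            (cons 'square (lambda (x) (* x x)))))

    (funcall (cdr (assoc 'square *ops*)) 7)   ; => 49

    ;; Idea 7: symbols are interned, so testing equality is a single
    ;; pointer comparison, not a character-by-character one.
    (eq 'square 'square)                      ; => T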
When Lisp first appeared, these ideas were far removed from ordinary
programming practice, which was dictated largely by the hardware available in
the late 1950s. Over time, the default language, embodied in a succession of
popular languages, has gradually evolved toward Lisp. Ideas 1-5 are now
widespread. Number 6 is starting to appear in the mainstream. Python has a
form of 7, though there doesn't seem to be any syntax for it.
As for number 8, this may be the most interesting of the lot. Ideas 8 and 9
only became part of Lisp by accident, because Steve Russell implemented
something McCarthy had never intended to be implemented. And yet these ideas
turn out to be responsible for both Lisp's strange appearance and its most
distinctive features. Lisp looks strange not so much because it has a strange
syntax as because it has no syntax; you express programs directly in the parse
trees that get built behind the scenes when other languages are parsed, and
these trees are made of lists, which are Lisp data structures.
Expressing the language in its own data structures turns out to be a very
powerful feature. Ideas 8 and 9 together mean that you can write programs that
write programs. That may sound like a bizarre idea, but it's an everyday thing
in Lisp. The most common way to do it is with something called a _macro._
The term "macro" does not mean in Lisp what it means in other languages. A
Lisp macro can be anything from an abbreviation to a compiler for a new
language. If you want to really understand Lisp, or just expand your
programming horizons, I would [learn more](onlisp.html) about macros.
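As a minimal sketch of what that means (the example is mine, not from On Lisp): a macro is code that runs at compile-time and returns a piece of code, built as a list, which the compiler then compiles in place of the macro call.

```lisp
;; A macro is code that writes code: the backquoted template below is
;; just a list, constructed at compile-time and compiled in place of
;; each MY-UNLESS call. (MY-UNLESS is a toy name; Common Lisp already
;; has UNLESS built in.)
(defmacro my-unless (test then)
  `(if (not ,test) ,then nil))

(macroexpand-1 '(my-unless (> x 0) (print "not positive")))
;; => (IF (NOT (> X 0)) (PRINT "not positive") NIL)
```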
Macros (in the Lisp sense) are still, as far as I know, unique to Lisp. This
is partly because in order to have macros you probably have to make your
language look as strange as Lisp. It may also be because if you do add that
final increment of power, you can no longer claim to have invented a new
language, but only a new dialect of Lisp.
I mention this mostly as a joke, but it is quite true. If you define a
language that has car, cdr, cons, quote, cond, atom, eq, and a notation for
functions expressed as lists, then you can build all the rest of Lisp out of
it. That is in fact the defining quality of Lisp: it was in order to make this
so that McCarthy gave Lisp the shape it has.
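To give a flavor of this, here is a sketch in the style of McCarthy's original paper (not his exact code): once you have the primitives, the next layer of the language is just short definitions on top of them.

```lisp
;; A sketch of building upward from the primitives. The trailing dots
;; keep these from colliding with the built-in NULL and AND; both are
;; defined using only EQ, QUOTE, and COND.
(defun null. (x)
  (eq x '()))

(defun and. (x y)
  (cond (x (cond (y 't) ('t '())))
        ('t '())))
```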
**Where Languages Matter**
So suppose Lisp does represent a kind of limit that mainstream languages are
approaching asymptotically-- does that mean you should actually use it to
write software? How much do you lose by using a less powerful language? Isn't
it wiser, sometimes, not to be at the very edge of innovation? And isn't
popularity to some extent its own justification? Isn't the pointy-haired boss
right, for example, to want to use a language for which he can easily hire
programmers?
There are, of course, projects where the choice of programming language
doesn't matter much. As a rule, the more demanding the application, the more
leverage you get from using a powerful language. But plenty of projects are
not demanding at all. Most programming probably consists of writing little
glue programs, and for little glue programs you can use any language that
you're already familiar with and that has good libraries for whatever you need
to do. If you just need to feed data from one Windows app to another, sure,
use Visual Basic.
You can write little glue programs in Lisp too (I use it as a desktop
calculator), but the biggest win for languages like Lisp is at the other end
of the spectrum, where you need to write sophisticated programs to solve hard
problems in the face of fierce competition. A good example is the [airline
fare search program](carl.html) that ITA Software licenses to Orbitz. These
guys entered a market already dominated by two big, entrenched competitors,
Travelocity and Expedia, and seem to have just humiliated them
technologically.
The core of ITA's application is a 200,000 line Common Lisp program that
searches many orders of magnitude more possibilities than their competitors,
who apparently are still using mainframe-era programming techniques. (Though
ITA is also in a sense using a mainframe-era programming language.) I have
never seen any of ITA's code, but according to one of their top hackers they
use a lot of macros, and I am not surprised to hear it.
**Centripetal Forces**
I'm not saying there is no cost to using uncommon technologies. The pointy-
haired boss is not completely mistaken to worry about this. But because he
doesn't understand the risks, he tends to magnify them.
I can think of three problems that could arise from using less common
languages. Your programs might not work well with programs written in other
languages. You might have fewer libraries at your disposal. And you might have
trouble hiring programmers.
How much of a problem is each of these? The importance of the first varies
depending on whether you have control over the whole system. If you're writing
software that has to run on a remote user's machine on top of a buggy, closed
operating system (I mention no names), there may be advantages to writing your
application in the same language as the OS. But if you control the whole
system and have the source code of all the parts, as ITA presumably does, you
can use whatever languages you want. If any incompatibility arises, you can
fix it yourself.
In server-based applications you can get away with using the most advanced
technologies, and I think this is the main cause of what Jonathan Erickson
calls the "[programming language
renaissance](http://www.byte.com/documents/s=1821/byt20011214s0003/)." This is
why we even hear about new languages like Perl and Python. We're not hearing
about these languages because people are using them to write Windows apps, but
because people are using them on servers. And as software shifts [off the
desktop](road.html) and onto servers (a future even Microsoft seems resigned
to), there will be less and less pressure to use middle-of-the-road
technologies.
As for libraries, their importance also depends on the application. For less
demanding problems, the availability of libraries can outweigh the intrinsic
power of the language. Where is the breakeven point? Hard to say exactly, but
wherever it is, it is short of anything you'd be likely to call an
application. If a company considers itself to be in the software business, and
they're writing an application that will be one of their products, then it
will probably involve several hackers and take at least six months to write.
In a project of that size, powerful languages probably start to outweigh the
convenience of pre-existing libraries.
The third worry of the pointy-haired boss, the difficulty of hiring
programmers, I think is a red herring. How many hackers do you need to hire,
after all? Surely by now we all know that software is best developed by teams
of less than ten people. And you shouldn't have trouble hiring hackers on that
scale for any language anyone has ever heard of. If you can't find ten Lisp
hackers, then your company is probably based in the wrong city for developing
software.
In fact, choosing a more powerful language probably decreases the size of the
team you need, because (a) if you use a more powerful language you probably
won't need as many hackers, and (b) hackers who work in more advanced
languages are likely to be smarter.
I'm not saying that you won't get a lot of pressure to use what are perceived
as "standard" technologies. At Viaweb (now Yahoo Store), we raised some
eyebrows among VCs and potential acquirers by using Lisp. But we also raised
eyebrows by using generic Intel boxes as servers instead of "industrial strength" servers like Suns, by using a then-obscure open-source Unix variant called FreeBSD instead of a real commercial OS like Windows NT, by ignoring a supposed e-commerce standard called [SET](http://news.com.com/2100-1017-225723.html) that no one now even remembers, and so on.
You can't let the suits make technical decisions for you. Did it alarm some
potential acquirers that we used Lisp? Some, slightly, but if we hadn't used
Lisp, we wouldn't have been able to write the software that made them want to
buy us. What seemed like an anomaly to them was in fact cause and effect.
If you start a startup, don't design your product to please VCs or potential
acquirers. _Design your product to please the users._ If you win the users,
everything else will follow. And if you don't, no one will care how
comfortingly orthodox your technology choices were.
**The Cost of Being Average**
How much do you lose by using a less powerful language? There is actually some
data out there about that.
The most convenient measure of power is probably [code size](power.html). The
point of high-level languages is to give you bigger abstractions-- bigger
bricks, as it were, so you don't need as many to build a wall of a given size.
So the more powerful the language, the shorter the program (not simply in
characters, of course, but in distinct elements).
How does a more powerful language enable you to write shorter programs? One
technique you can use, if the language will let you, is something called
[bottom-up programming](progbot.html). Instead of simply writing your
application in the base language, you build on top of the base language a
language for writing programs like yours, then write your program in it. The
combined code can be much shorter than if you had written your whole program
in the base language-- indeed, this is how most compression algorithms work. A
bottom-up program should be easier to modify as well, because in many cases
the language layer won't have to change at all.
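As a hedged sketch of what a language layer can look like (the macro below is hypothetical, not from ITA's code or any real system): suppose your application keeps writing loops that build up lists of results. You can push that pattern down into the language once, then write the rest of the program one level up.

```lisp
;; A hypothetical language layer: COLLECTING/COLLECT hide the
;; boilerplate of accumulating a result list.
(defmacro collecting (&body body)
  `(let ((result '()))
     (flet ((collect (x) (push x result)))
       ,@body
       (nreverse result))))

;; Application code then reads at the level of the problem:
(collecting
  (dotimes (i 5)
    (collect (* i i))))   ; => (0 1 4 9 16)
```

When the application changes, a layer like `collecting` usually doesn't have to.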
Code size is important, because the time it takes to write a program depends
mostly on its length. If your program would be three times as long in another
language, it will take three times as long to write-- and you can't get around
this by hiring more people, because beyond a certain size new hires are
actually a net lose. Fred Brooks described this phenomenon in his famous book
_The Mythical Man-Month,_ and everything I've seen has tended to confirm what
he said.
So how much shorter are your programs if you write them in Lisp? Most of the
numbers I've heard for Lisp versus C, for example, have been around 7-10x. But
a recent article about ITA in [_New
Architect_](http://www.newarchitectmag.com/documents/s=2286/new1015626014044/)
magazine said that "one line of Lisp can replace 20 lines of C," and since
this article was full of quotes from ITA's president, I assume they got this
number from ITA. If so then we can put some faith in it; ITA's software
includes a lot of C and C++ as well as Lisp, so they are speaking from
experience.
My guess is that these multiples aren't even constant. I think they increase
when you face harder problems and also when you have smarter programmers. A
really good hacker can squeeze more out of better tools.
As one data point on the curve, at any rate, if you were to compete with ITA
and chose to write your software in C, they would be able to develop software
twenty times faster than you. If you spent a year on a new feature, they'd be
able to duplicate it in less than three weeks. Whereas if they spent just
three months developing something new, it would be _five years_ before you had
it too.
And you know what? That's the best-case scenario. When you talk about code-
size ratios, you're implicitly assuming that you can actually write the
program in the weaker language. But in fact there are limits on what
programmers can do. If you're trying to solve a hard problem with a language
that's too low-level, you reach a point where there is just too much to keep
in your head at once.
So when I say it would take ITA's imaginary competitor five years to duplicate
something ITA could write in Lisp in three months, I mean five years if
nothing goes wrong. In fact, the way things work in most companies, any
development project that would take five years is likely never to get finished
at all.
I admit this is an extreme case. ITA's hackers seem to be unusually smart, and
C is a pretty low-level language. But in a competitive market, even a
differential of two or three to one would be enough to guarantee that you'd
always be behind.
**A Recipe**
This is the kind of possibility that the pointy-haired boss doesn't even want
to think about. And so most of them don't. Because, you know, when it comes
down to it, the pointy-haired boss doesn't mind if his company gets their ass
kicked, so long as no one can prove it's his fault. The safest plan for him
personally is to stick close to the center of the herd.
Within large organizations, the phrase used to describe this approach is
"industry best practice." Its purpose is to shield the pointy-haired boss from
responsibility: if he chooses something that is "industry best practice," and
the company loses, he can't be blamed. He didn't choose, the industry did.
I believe this term was originally used to describe accounting methods and so
on. What it means, roughly, is _don't do anything weird._ And in accounting
that's probably a good idea. The terms "cutting-edge" and "accounting" do not
sound good together. But when you import this criterion into decisions about
technology, you start to get the wrong answers.
Technology often _should_ be cutting-edge. In programming languages, as Erann
Gat has pointed out, what "industry best practice" actually gets you is not
the best, but merely the average. When a decision causes you to develop
software at a fraction of the rate of more aggressive competitors, "best
practice" is a misnomer.
So here we have two pieces of information that I think are very valuable. In fact, I know them from my own experience. Number 1, languages vary in power.
Number 2, most managers deliberately ignore this. Between them, these two
facts are literally a recipe for making money. ITA is an example of this
recipe in action. If you want to win in a software business, just take on the
hardest problem you can find, use the most powerful language you can get, and
wait for your competitors' pointy-haired bosses to revert to the mean.
* * *
**Appendix: Power**
As an illustration of what I mean about the relative power of programming
languages, consider the following problem. We want to write a function that
generates accumulators-- a function that takes a number n, and returns a
function that takes another number i and returns n incremented by i.
(That's _incremented by_ , not plus. An accumulator has to accumulate.)
In Common Lisp this would be

```lisp
(defun foo (n)
  (lambda (i) (incf n i)))
```

and in Perl 5,

```perl
sub foo {
  my ($n) = @_;
  sub {$n += shift}
}
```

which has more elements than the Lisp version because you have to extract parameters manually in Perl.
In Smalltalk the code is slightly longer than in Lisp

```smalltalk
foo: n
  |s|
  s := n.
  ^[:i| s := s + i. ]
```

because although in general lexical variables work, you can't do an assignment to a parameter, so you have to create a new variable s.
In Javascript the example is, again, slightly longer, because Javascript retains the distinction between statements and expressions, so you need explicit `return` statements to return values:

```javascript
function foo(n) {
  return function (i) {
    return n += i
  }
}
```

(To be fair, Perl also retains this distinction, but deals with it in typical Perl fashion by letting you omit `return`s.)
If you try to translate the Lisp/Perl/Smalltalk/Javascript code into Python you run into some limitations. Because Python doesn't fully support lexical variables, you have to create a data structure to hold the value of n. And although Python does have a function data type, there is no literal representation for one (unless the body is only a single expression) so you need to create a named function to return. This is what you end up with:

```python
def foo(n):
    s = [n]
    def bar(i):
        s[0] += i
        return s[0]
    return bar
```

Python users might legitimately ask why they can't just write

```python
def foo(n):
    return lambda i: return n += i
```

or even

```python
def foo(n):
    lambda i: n += i
```

and my guess is that they probably will, one day. (But if they don't want to wait for Python to evolve the rest of the way into Lisp, they could always just...)
In OO languages, you can, to a limited extent, simulate a closure (a function
that refers to variables defined in enclosing scopes) by defining a class with
one method and a field to replace each variable from an enclosing scope. This
makes the programmer do the kind of code analysis that would be done by the
compiler in a language with full support for lexical scope, and it won't work
if more than one function refers to the same variable, but it is enough in
simple cases like this.
Python experts seem to agree that this is the preferred way to solve the problem in Python, writing either

```python
def foo(n):
    class acc:
        def __init__(self, s):
            self.s = s
        def inc(self, i):
            self.s += i
            return self.s
    return acc(n).inc
```

or

```python
class foo:
    def __init__(self, n):
        self.n = n
    def __call__(self, i):
        self.n += i
        return self.n
```

I include these because I wouldn't want Python advocates to say I was misrepresenting the language, but both seem to me more complex than the first version. You're doing the same thing, setting up a separate place to hold the accumulator; it's just a field in an object instead of the head of a list. And the use of these special, reserved field names, especially `__call__`, seems a bit of a hack.
In the rivalry between Perl and Python, the claim of the Python hackers seems to be that Python is a more elegant alternative to Perl, but what this case shows is that power is the ultimate elegance: the Perl program is simpler (has fewer elements), even if the syntax is a bit uglier.
How about other languages? In the other languages mentioned in this talk--
Fortran, C, C++, Java, and Visual Basic-- it is not clear whether you can
actually solve this problem. Ken Anderson says that the following code is about as close as you can get in Java:

```java
public interface Inttoint {
  public int call(int i);
}

public static Inttoint foo(final int n) {
  return new Inttoint() {
    int s = n;
    public int call(int i) {
      s = s + i;
      return s;
    }
  };
}
```
This falls short of the spec because it only works for integers. After many
email exchanges with Java hackers, I would say that writing a properly
polymorphic version that behaves like the preceding examples is somewhere
between damned awkward and impossible. If anyone wants to write one I'd be
very curious to see it, but I personally have timed out.
It's not literally true that you can't solve this problem in other languages,
of course. The fact that all these languages are Turing-equivalent means that,
strictly speaking, you can write any program in any of them. So how would you
do it? In the limit case, by writing a Lisp interpreter in the less powerful
language.
That sounds like a joke, but it happens so often to varying degrees in large
programming projects that there is a name for the phenomenon, Greenspun's
Tenth Rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc
> informally-specified bug-ridden slow implementation of half of Common Lisp.
If you try to solve a hard problem, the question is not whether you will use a
powerful enough language, but whether you will (a) use a powerful language,
(b) write a de facto interpreter for one, or (c) yourself become a human
compiler for one. We see this already beginning to happen in the Python
example, where we are in effect simulating the code that a compiler would
generate to implement a lexical variable.
This practice is not only common, but institutionalized. For example, in the
OO world you hear a good deal about "patterns". I wonder if these patterns are
not sometimes evidence of case (c), the human compiler, at work. When I see
patterns in my programs, I consider it a sign of trouble. The shape of a
program should reflect only the problem it needs to solve. Any other
regularity in the code is a sign, to me at least, that I'm using abstractions
that aren't powerful enough-- often that I'm generating by hand the expansions
of some macro that I need to write.
**Notes**
* The IBM 704 CPU was about the size of a refrigerator, but a lot heavier. The CPU weighed 3150 pounds, and the 4K of RAM was in a separate box weighing another 4000 pounds. The Sub-Zero 690, one of the largest household refrigerators, weighs 656 pounds.
* Steve Russell also wrote the first (digital) computer game, Spacewar, in 1962.
* If you want to trick a pointy-haired boss into letting you write software in Lisp, you could try telling him it's XML.
* Here is the accumulator generator in other Lisp dialects. Scheme: `(define (foo n) (lambda (i) (set! n (+ n i)) n))`; Goo: `(df foo (n) (op incf n _))`; Arc: `(def foo (n) [++ n _])`
* Erann Gat's sad tale about "industry best practice" at JPL inspired me to address this generally misapplied phrase.
* Peter Norvig found that 16 of the 23 patterns in _Design Patterns_ were "[invisible or simpler](http://www.norvig.com/design-patterns/)" in Lisp.
* Thanks to the many people who answered my questions about various languages and/or read drafts of this, including Ken Anderson, Trevor Blackwell, Erann Gat, Dan Giffin, Sarah Harlin, Jeremy Hylton, Robert Morris, Peter Norvig, Guy Steele, and Anton van Straaten. They bear no blame for any opinions expressed.
**Related:**
Many people have responded to this talk, so I have set up an additional page
to deal with the issues they have raised: [Re: Revenge of the
Nerds](icadmore.html).
It also set off an extensive and often useful discussion on the
[LL1](http://www.ai.mit.edu/~gregs/ll1-discuss-archive-html/threads.html)
mailing list. See particularly the mail by Anton van Straaten on semantic
compression.
Some of the mail on LL1 led me to try to go deeper into the subject of
language power in [Succinctness is Power](power.html).
A larger set of canonical implementations of the [accumulator generator
benchmark](accgen.html) are collected together on their own page.
[Japanese Translation](http://www.shiro.dreamhost.com/scheme/trans/icad-j.html), [Spanish Translation](http://kapcoweb.com/p/docs/translations/revenge_of_the_nerds/revenge_of_the_nerds-es.html), [Chinese Translation](http://flyingapplet.spaces.live.com/blog/cns!F682AFBD82F7E261!375.entry)
* * *
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
September 2010
The reason startups have been using [more convertible
notes](http://twitter.com/paulg/status/22319113993) in angel rounds is that
they make deals close faster. By making it easier for startups to give
different prices to different investors, they help them break the sort of
deadlock that happens when investors all wait to see who else is going to
invest.
By far the biggest influence on investors' opinions of a startup is the
opinion of other investors. There are very, very few who simply decide for
themselves. Any startup founder can tell you the most common question they
hear from investors is not about the founders or the product, but "who else is
investing?"
That tends to produce deadlocks. Raising an old-fashioned fixed-size equity
round can take weeks, because all the angels sit around waiting for the others
to commit, like competitors in a bicycle sprint who deliberately ride slowly
at the start so they can follow whoever breaks first.
Convertible notes let startups beat such deadlocks by rewarding investors
willing to move first with lower (effective) valuations. Which they deserve
because they're taking more risk. It's much safer to invest in a startup Ron
Conway has already invested in; someone who comes after him should pay a
higher price.
The reason convertible notes allow more flexibility in price is that valuation
caps aren't actual valuations, and notes are cheap and easy to do. So you can
do high-resolution fundraising: if you wanted you could have a separate note
with a different cap for each investor.
That cap need not simply rise monotonically. A startup could also give better
deals to investors they expected to help them most. The point is simply that
different investors, whether because of the help they offer or their
willingness to commit, have different values for startups, and their terms
should reflect that.
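To make the arithmetic concrete, a hypothetical example (the numbers are invented, not from any actual deal): an angel who invests $100k on a note with a $2.5M cap, followed by a series A at a $10M valuation, converts as if the company were worth $2.5M. Roughly speaking, ignoring dilution and pre/post-money details, his $100k buys about 4% where a series A investor's $100k buys about 1%. That 4x difference is the reward for committing first.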
Different terms for different investors is clearly the way of the future.
Markets always evolve toward higher resolution. You may not need to use
convertible notes to do it. With sufficiently lightweight standardized equity
terms (and some changes in investors' and lawyers' expectations about equity
rounds) you might be able to do the same thing with equity instead of debt.
Either would be fine with startups, so long as they can easily change their
valuation.
Deadlocks weren't the only problem with fixed-size equity rounds. Another was
that startups had to decide in advance how much to raise. I think it's a
mistake for a startup to fix upon a specific number. If investors are easily
convinced, the startup should raise more now, and if investors are skeptical,
the startup should take a smaller amount and use that to get the company to
the point where it's more convincing.
It's just not reasonable to expect startups to pick an optimal round size in
advance, because that depends on the reactions of investors, and those are
impossible to predict.
Fixed-size, multi-investor angel rounds are such a bad idea for startups that
one wonders why things were ever done that way. One possibility is that this
custom reflects the way investors like to collude when they can get away with
it. But I think the actual explanation is less sinister. I think angels (and
their lawyers) organized rounds this way in unthinking imitation of VC series
A rounds. In a series A, a fixed-size equity round with a lead makes sense,
because there is usually just one big investor, who is unequivocally the lead.
Fixed-size series A rounds already are high res. But the more investors you
have in a round, the less sense it makes for everyone to get the same price.
The most interesting question here may be what high res fundraising will do to
the world of investors. Bolder investors will now get rewarded with lower
prices. But more important, in a hits-driven business, is that they'll be able
to get into the deals they want. Whereas the "who else is investing?" type of
investors will not only pay higher prices, but may not be able to get into the
best deals at all.
**Thanks** to Immad Akhund, Sam Altman, John Bautista, Pete Koomen, Jessica
Livingston, Dan Siroker, Harj Taggar, and Fred Wilson for reading drafts of
this.
* * *
November 2004, corrected June 2006
Occam's razor says we should prefer the simpler of two explanations. I begin
by reminding readers of this principle because I'm about to propose a theory
that will offend both liberals and conservatives. But Occam's razor means, in
effect, that if you want to disagree with it, you have a hell of a coincidence
to explain.
Theory: In US presidential elections, the more charismatic candidate wins.
People who write about politics, whether on the left or the right, have a
consistent bias: they take politics seriously. When one candidate beats
another they look for political explanations. The country is shifting to the
left, or the right. And that sort of shift can certainly be the result of a
presidential election, which makes it easy to believe it was the cause.
But when I think about why I voted for Clinton over the first George Bush, it
wasn't because I was shifting to the left. Clinton just seemed more dynamic.
He seemed to want the job more. Bush seemed old and tired. I suspect it was
the same for a lot of voters.
Clinton didn't represent any national shift leftward. [1] He was just more
charismatic than George Bush or (God help us) Bob Dole. In 2000 we practically
got a controlled experiment to prove it: Gore had Clinton's policies, but not
his charisma, and he suffered proportionally. [2] Same story in 2004. Kerry
was smarter and more articulate than Bush, but rather a stiff. And Kerry lost.
As I looked further back, I kept finding the same pattern. Pundits said Carter
beat Ford because the country distrusted the Republicans after Watergate. And
yet it also happened that Carter was famous for his big grin and folksy ways,
and Ford for being a boring klutz. Four years later, pundits said the country
had lurched to the right. But Reagan, a former actor, also happened to be even
more charismatic than Carter (whose grin was somewhat less cheery after four
stressful years in office). In 1984 the charisma gap between Reagan and
Mondale was like that between Clinton and Dole, with similar results. The
first George Bush managed to win in 1988, though he would later be vanquished
by one of the most charismatic presidents ever, because in 1988 he was up
against the notoriously uncharismatic Michael Dukakis.
These are the elections I remember personally, but apparently the same pattern
played out in 1964 and 1972. The most recent counterexample appears to be
1968, when Nixon beat the more charismatic Hubert Humphrey. But when you
examine that election, it tends to support the charisma theory more than
contradict it. As Joe McGinniss recounts in his famous book _The Selling of the
President 1968_ , Nixon knew he had less charisma than Humphrey, and thus
simply refused to debate him on TV. He knew he couldn't afford to let the two
of them be seen side by side.
Now a candidate probably couldn't get away with refusing to debate. But in
1968 the custom of televised debates was still evolving. In effect, Nixon won
in 1968 because voters were never allowed to see the real Nixon. All they saw
were carefully scripted campaign spots.
Oddly enough, the most recent true counterexample is probably 1960. Though
this election is usually given as an example of the power of TV, Kennedy
apparently would not have won without fraud by party machines in Illinois and
Texas. But TV was still young in 1960; only 87% of households had it. [3]
Undoubtedly TV helped Kennedy, so historians are correct in regarding this
election as a watershed. TV required a new kind of candidate. There would be
no more Calvin Coolidges.
The charisma theory may also explain why Democrats tend to lose presidential
elections. The core of the Democrats' ideology seems to be a belief in
government. Perhaps this tends to attract people who are earnest, but dull.
Dukakis, Gore, and Kerry were so similar in that respect that they might have
been brothers. Good thing for the Democrats that their screen lets through an
occasional Clinton, even if some scandal results. [4]
One would like to believe elections are won and lost on issues, if only fake
ones like Willie Horton. And yet, if they are, we have a remarkable
coincidence to explain. In every presidential election since TV became
widespread, the apparently more charismatic candidate has won. Surprising,
isn't it, that voters' opinions on the issues have lined up with charisma for
11 elections in a row?
The political commentators who come up with shifts to the left or right in
their morning-after analyses are like the financial reporters stuck writing
stories day after day about the random fluctuations of the stock market. Day
ends, market closes up or down, reporter looks for good or bad news
respectively, and writes that the market was up on news of Intel's earnings,
or down on fears of instability in the Middle East. Suppose we could somehow
feed these reporters false information about market closes, but give them all
the other news intact. Does anyone believe they would notice the anomaly, and
not simply write that stocks were up (or down) on whatever good (or bad) news
there was that day? That they would say, hey, wait a minute, how can stocks be
up with all this unrest in the Middle East?
I'm not saying that issues don't matter to voters. Of course they do. But the
major parties know so well which issues matter how much to how many voters,
and adjust their message so precisely in response, that they tend to split the
difference on the issues, leaving the election to be decided by the one factor
they can't control: charisma.
If the Democrats had been running a candidate as charismatic as Clinton in the
2004 election, he'd have won. And we'd be reading that the election was a
referendum on the war in Iraq, instead of that the Democrats are out of touch
with evangelical Christians in middle America.
During the 1992 election, the Clinton campaign staff had a big sign in their
office saying "It's the economy, stupid." Perhaps it was even simpler than
they thought.
**Postscript**
Opinions seem to be divided about the charisma theory. Some say it's
impossible, others say it's obvious. This seems a good sign. Perhaps it's in
the sweet spot midway between.
As for it being impossible, I reply: here's the data; here's the theory;
theory explains data 100%. To a scientist, at least, that means it deserves
attention, however implausible it seems.
You can't believe voters are so superficial that they just choose the most
charismatic guy? My theory doesn't require that. I'm not proposing that
charisma is the only factor, just that it's the only one _left_ after the
efforts of the two parties cancel one another out.
As for the theory being obvious, as far as I know, no one has proposed it
before. Election forecasters are proud when they can achieve the same results
with much more complicated models.
Finally, to the people who say that the theory is probably true, but rather
depressing: it's not so bad as it seems. The phenomenon is like a pricing
anomaly; once people realize it's there, it will disappear. Once both parties
realize it's a waste of time to nominate uncharismatic candidates, they'll
tend to nominate only the most charismatic ones. And if the candidates are
equally charismatic, charisma will cancel out, and elections will be decided
on issues, as political commentators like to think they are now.
**Notes**
[1] As Clinton himself discovered to his surprise when, in one of his first
acts as president, he tried to shift the military leftward. After a bruising
fight he escaped with a face-saving compromise.
[2] True, Gore won the popular vote. But politicians know the electoral vote
decides the election, so that's what they campaign for. If Bush had been
campaigning for the popular vote he would presumably have got more of it.
(Thanks to judgmentalist for this point.)
[3] Source: Nielsen Media Research. Of the remaining 13%, 11 didn't have TV
because they couldn't afford it. I'd argue that the missing 11% were probably
also the 11% most susceptible to charisma.
[4] One implication of this theory is that parties shouldn't be too quick to
reject candidates with skeletons in their closets. Charismatic candidates will
tend to have more skeletons than squeaky clean dullards, but in practice that
doesn't seem to lose elections. The current Bush, for example, probably did
more drugs in his twenties than any preceding president, and yet managed to
get elected with a base of evangelical Christians. All you have to do is say
you've reformed, and stonewall about the details.
**Thanks** to Trevor Blackwell, Maria Daniels, Jessica Livingston, Jackie
McDonough, and Robert Morris for reading drafts of this, and to Eric Raymond
for pointing out that I was wrong about 1968.
* * *
February 2002
"...Copernicus' aesthetic objections to [equants] provided one essential
motive for his rejection of the Ptolemaic system...."
- Thomas Kuhn, _The Copernican Revolution_
"All of us had been trained by Kelly Johnson and believed fanatically in his
insistence that an airplane that looked beautiful would fly the same way."
- Ben Rich, _Skunk Works_
"Beauty is the first test: there is no permanent place in this world for ugly
mathematics."
- G. H. Hardy, _A Mathematician's Apology_
I was talking recently to a friend who teaches at MIT. His field is hot now
and every year he is inundated by applications from would-be graduate
students. "A lot of them seem smart," he said. "What I can't tell is whether
they have any kind of taste."
Taste. You don't hear that word much now. And yet we still need the underlying
concept, whatever we call it. What my friend meant was that he wanted students
who were not just good technicians, but who could use their technical
knowledge to design beautiful things.
Mathematicians call good work "beautiful," and so, either now or in the past,
have scientists, engineers, musicians, architects, designers, writers, and
painters. Is it just a coincidence that they used the same word, or is there
some overlap in what they meant? If there is an overlap, can we use one
field's discoveries about beauty to help us in another?
For those of us who design things, these are not just theoretical questions.
If there is such a thing as beauty, we need to be able to recognize it. We
need good taste to make good things. Instead of treating beauty as an airy
abstraction, to be either blathered about or avoided depending on how one
feels about airy abstractions, let's try considering it as a practical
question: _how do you make good stuff?_
If you mention taste nowadays, a lot of people will tell you that "taste is
subjective." They believe this because it really feels that way to them. When
they like something, they have no idea why. It could be because it's
beautiful, or because their mother had one, or because they saw a movie star
with one in a magazine, or because they know it's expensive. Their thoughts
are a tangle of unexamined impulses.
Most of us are encouraged, as children, to leave this tangle unexamined. If
you make fun of your little brother for coloring people green in his coloring
book, your mother is likely to tell you something like "you like to do it your
way and he likes to do it his way."
Your mother at this point is not trying to teach you important truths about
aesthetics. She's trying to get the two of you to stop bickering.
Like many of the half-truths adults tell us, this one contradicts other things
they tell us. After dinning into you that taste is merely a matter of personal
preference, they take you to the museum and tell you that you should pay
attention because Leonardo is a great artist.
What goes through the kid's head at this point? What does he think "great
artist" means? After having been told for years that everyone just likes to do
things their own way, he is unlikely to head straight for the conclusion that
a great artist is someone whose work is _better_ than the others'. A far more
likely theory, in his Ptolemaic model of the universe, is that a great artist
is something that's good for you, like broccoli, because someone said so in a
book.
Saying that taste is just personal preference is a good way to prevent
disputes. The trouble is, it's not true. You feel this when you start to
design things.
Whatever job people do, they naturally want to do better. Football players
like to win games. CEOs like to increase earnings. It's a matter of pride, and
a real pleasure, to get better at your job. But if your job is to design
things, and there is no such thing as beauty, then there is _no way to get
better at your job._ If taste is just personal preference, then everyone's is
already perfect: you like whatever you like, and that's it.
As in any job, as you continue to design things, you'll get better at it. Your
tastes will change. And, like anyone who gets better at their job, you'll know
you're getting better. If so, your old tastes were not merely different, but
worse. Poof goes the axiom that taste can't be wrong.
Relativism is fashionable at the moment, and that may hamper you from thinking
about taste, even as yours grows. But if you come out of the closet and admit,
at least to yourself, that there is such a thing as good and bad design, then
you can start to study good design in detail. How has your taste changed? When
you made mistakes, what caused you to make them? What have other people
learned about design?
Once you start to examine the question, it's surprising how much different
fields' ideas of beauty have in common. The same principles of good design
crop up again and again.
**Good design is simple.** You hear this from math to painting. In math it
means that a shorter proof tends to be a better one. Where axioms are
concerned, especially, less is more. It means much the same thing in
programming. For architects and designers it means that beauty should depend
on a few carefully chosen structural elements rather than a profusion of
superficial ornament. (Ornament is not in itself bad, only when it's
camouflage on insipid form.) Similarly, in painting, a still life of a few
carefully observed and solidly modelled objects will tend to be more
interesting than a stretch of flashy but mindlessly repetitive painting of,
say, a lace collar. In writing it means: say what you mean and say it briefly.
It seems strange to have to emphasize simplicity. You'd think simple would be
the default. Ornate is more work. But something seems to come over people when
they try to be creative. Beginning writers adopt a pompous tone that doesn't
sound anything like the way they speak. Designers trying to be artistic resort
to swooshes and curlicues. Painters discover that they're expressionists. It's
all evasion. Underneath the long words or the "expressive" brush strokes,
there is not much going on, and that's frightening.
When you're forced to be simple, you're forced to face the real problem. When
you can't deliver ornament, you have to deliver substance.
**Good design is timeless.** In math, every proof is timeless unless it
contains a mistake. So what does Hardy mean when he says there is no permanent
place for ugly mathematics? He means the same thing Kelly Johnson did: if
something is ugly, it can't be the best solution. There must be a better one,
and eventually someone will discover it.
Aiming at timelessness is a way to make yourself find the best answer: if you
can imagine someone surpassing you, you should do it yourself. Some of the
greatest masters did this so well that they left little room for those who
came after. Every engraver since Durer has had to live in his shadow.
Aiming at timelessness is also a way to evade the grip of fashion. Fashions
almost by definition change with time, so if you can make something that will
still look good far into the future, then its appeal must derive more from
merit and less from fashion.
Strangely enough, if you want to make something that will appeal to future
generations, one way to do it is to try to appeal to past generations. It's
hard to guess what the future will be like, but we can be sure it will be like
the past in caring nothing for present fashions. So if you can make something
that appeals to people today and would also have appealed to people in 1500,
there is a good chance it will appeal to people in 2500.
**Good design solves the right problem.** The typical stove has four burners
arranged in a square, and a dial to control each. How do you arrange the
dials? The simplest answer is to put them in a row. But this is a simple
answer to the wrong question. The dials are for humans to use, and if you put
them in a row, the unlucky human will have to stop and think each time about
which dial matches which burner. Better to arrange the dials in a square like
the burners.
A lot of bad design is industrious, but misguided. In the mid twentieth
century there was a vogue for setting text in sans-serif fonts. These fonts
_are_ closer to the pure, underlying letterforms. But in text that's not the
problem you're trying to solve. For legibility it's more important that
letters be easy to tell apart. It may look Victorian, but a Times Roman
lowercase g is easy to tell from a lowercase y.
Problems can be improved as well as solutions. In software, an intractable
problem can usually be replaced by an equivalent one that's easy to solve.
Physics progressed faster as the problem became predicting observable
behavior, instead of reconciling it with scripture.
**Good design is suggestive.** Jane Austen's novels contain almost no
description; instead of telling you how everything looks, she tells her story
so well that you envision the scene for yourself. Likewise, a painting that
suggests is usually more engaging than one that tells. Everyone makes up their
own story about the Mona Lisa.
In architecture and design, this principle means that a building or object
should let you use it how you want: a good building, for example, will serve
as a backdrop for whatever life people want to lead in it, instead of making
them live as if they were executing a program written by the architect.
In software, it means you should give users a few basic elements that they can
combine as they wish, like Lego. In math it means a proof that becomes the
basis for a lot of new work is preferable to a proof that was difficult, but
doesn't lead to future discoveries; in the sciences generally, citation is
considered a rough indicator of merit.
**Good design is often slightly funny.** This one may not always be true. But
Durer's [engravings](pilate.html) and Saarinen's [womb chair](womb.html) and
the [Pantheon](pantheon.html) and the original [Porsche 911](1974-911s.html)
all seem to me slightly funny. Godel's incompleteness theorem seems like a
practical joke.
I think it's because humor is related to strength. To have a sense of humor is
to be strong: to keep one's sense of humor is to shrug off misfortunes, and to
lose one's sense of humor is to be wounded by them. And so the mark-- or at
least the prerogative-- of strength is not to take oneself too seriously. The
confident will often, like swallows, seem to be making fun of the whole
process slightly, as Hitchcock does in his films or Bruegel in his paintings--
or Shakespeare, for that matter.
Good design may not have to be funny, but it's hard to imagine something that
could be called humorless also being good design.
**Good design is hard.** If you look at the people who've done great work, one
thing they all seem to have in common is that they worked very hard. If you're
not working hard, you're probably wasting your time.
Hard problems call for great efforts. In math, difficult proofs require
ingenious solutions, and those tend to be interesting. Ditto in engineering.
When you have to climb a mountain you toss everything unnecessary out of your
pack. And so an architect who has to build on a difficult site, or a small
budget, will find that he is forced to produce an elegant design. Fashions and
flourishes get knocked aside by the difficult business of solving the problem
at all.
Not every kind of hard is good. There is good pain and bad pain. You want the
kind of pain you get from going running, not the kind you get from stepping on
a nail. A difficult problem could be good for a designer, but a fickle client
or unreliable materials would not be.
In art, the highest place has traditionally been given to paintings of people.
There is something to this tradition, and not just because pictures of faces
get to press buttons in our brains that other pictures don't. We are so good
at looking at faces that we force anyone who draws them to work hard to
satisfy us. If you draw a tree and you change the angle of a branch five
degrees, no one will know. When you change the angle of someone's eye five
degrees, people notice.
When Bauhaus designers adopted Sullivan's "form follows function," what they
meant was, form _should_ follow function. And if function is hard enough, form
is forced to follow it, because there is no effort to spare for error. Wild
animals are beautiful because they have hard lives.
**Good design looks easy.** Like great athletes, great designers make it look
easy. Mostly this is an illusion. The easy, conversational tone of good
writing comes only on the eighth rewrite.
In science and engineering, some of the greatest discoveries seem so simple
that you say to yourself, I could have thought of that. The discoverer is
entitled to reply, why didn't you?
Some Leonardo heads are just a few lines. You look at them and you think, all
you have to do is get eight or ten lines in the right place and you've made
this beautiful portrait. Well, yes, but you have to get them in _exactly_ the
right place. The slightest error will make the whole thing collapse.
Line drawings are in fact the most difficult visual medium, because they
demand near perfection. In math terms, they are a closed-form solution; lesser
artists literally solve the same problems by successive approximation. One of
the reasons kids give up drawing at ten or so is that they decide to start
drawing like grownups, and one of the first things they try is a line drawing
of a face. Smack!
In most fields the appearance of ease seems to come with practice. Perhaps
what practice does is train your unconscious mind to handle tasks that used to
require conscious thought. In some cases you literally train your body. An
expert pianist can play notes faster than the brain can send signals to his
hand. Likewise an artist, after a while, can make visual perception flow in
through his eye and out through his hand as automatically as someone tapping
his foot to a beat.
When people talk about being in "the zone," I think what they mean is that the
spinal cord has the situation under control. Your spinal cord is less
hesitant, and it frees conscious thought for the hard problems.
**Good design uses symmetry.** I think symmetry may just be one way to achieve
simplicity, but it's important enough to be mentioned on its own. Nature uses
it a lot, which is a good sign.
There are two kinds of symmetry, repetition and recursion. Recursion means
repetition in subelements, like the pattern of veins in a leaf.
Symmetry is unfashionable in some fields now, in reaction to excesses in the
past. Architects started consciously making buildings asymmetric in Victorian
times and by the 1920s asymmetry was an explicit premise of modernist
architecture. Even these buildings only tended to be asymmetric about major
axes, though; there were hundreds of minor symmetries.
In writing you find symmetry at every level, from the phrases in a sentence to
the plot of a novel. You find the same in music and art. Mosaics (and some
Cezannes) get extra visual punch by making the whole picture out of the same
atoms. Compositional symmetry yields some of the most memorable paintings,
especially when two halves react to one another, as in the _[Creation of
Adam](symptg.html)_ or _[American Gothic](symptg.html)._
In math and engineering, recursion, especially, is a big win. Inductive proofs
are wonderfully short. In software, a problem that can be solved by recursion
is nearly always best solved that way. The Eiffel Tower looks striking partly
because it is a recursive solution, a tower on a tower.
The danger of symmetry, and repetition especially, is that it can be used as a
substitute for thought.
**Good design resembles nature.** It's not so much that resembling nature is
intrinsically good as that nature has had a long time to work on the problem.
It's a good sign when your answer resembles nature's.
It's not cheating to copy. Few would deny that a story should be like life.
Working from life is a valuable tool in painting too, though its role has
often been misunderstood. The aim is not simply to make a record. The point of
painting from life is that it gives your mind something to chew on: when your
eyes are looking at something, your hand will do more interesting work.
Imitating nature also works in engineering. Boats have long had spines and
ribs like an animal's ribcage. In some cases we may have to wait for better
technology: early aircraft designers were mistaken to design aircraft that
looked like birds, because they didn't have materials or power sources light
enough (the Wrights' engine weighed 152 lbs. and generated only 12 hp.) or
control systems sophisticated enough for machines that flew like birds, but I
could imagine little unmanned reconnaissance planes flying like birds in fifty
years.
Now that we have enough computer power, we can imitate nature's method as well
as its results. Genetic algorithms may let us create things too complex to
design in the ordinary sense.
**Good design is redesign.** It's rare to get things right the first time.
Experts expect to throw away some early work. They plan for plans to change.
It takes confidence to throw work away. You have to be able to think, _there's
more where that came from._ When people first start drawing, for example,
they're often reluctant to redo parts that aren't right; they feel they've
been lucky to get that far, and if they try to redo something, it will turn
out worse. Instead they convince themselves that the drawing is not that bad,
really-- in fact, maybe they meant it to look that way.
Dangerous territory, that; if anything you should cultivate dissatisfaction.
In Leonardo's [drawings](leonardo.html) there are often five or six attempts
to get a line right. The distinctive back of the Porsche 911 only appeared in
the redesign of an awkward [prototype](porsche695.html). In Wright's early
plans for the [Guggenheim](guggen.html), the right half was a ziggurat; he
inverted it to get the present shape.
Mistakes are natural. Instead of treating them as disasters, make them easy to
acknowledge and easy to fix. Leonardo more or less invented the sketch, as a
way to make drawing bear a greater weight of exploration. Open-source software
has fewer bugs because it admits the possibility of bugs.
It helps to have a medium that makes change easy. When oil paint replaced
tempera in the fifteenth century, it helped painters to deal with difficult
subjects like the human figure because, unlike tempera, oil can be blended and
overpainted.
**Good design can copy.** Attitudes to copying often make a round trip. A
novice imitates without knowing it; next he tries consciously to be original;
finally, he decides it's more important to be right than original.
Unknowing imitation is almost a recipe for bad design. If you don't know where
your ideas are coming from, you're probably imitating an imitator. Raphael so
pervaded mid-nineteenth century taste that almost anyone who tried to draw was
imitating him, often at several removes. It was this, more than Raphael's own
work, that bothered the Pre-Raphaelites.
The ambitious are not content to imitate. The second phase in the growth of
taste is a conscious attempt at originality.
I think the greatest masters go on to achieve a kind of selflessness. They
just want to get the right answer, and if part of the right answer has already
been discovered by someone else, that's no reason not to use it. They're
confident enough to take from anyone without feeling that their own vision
will be lost in the process.
**Good design is often strange.** Some of the very best work has an uncanny
quality: [Euler's Formula](http://mathworld.wolfram.com/EulerFormula.html),
Bruegel's _[Hunters in the Snow](hunters.html),_ the [SR-71](sr71.html),
[Lisp](rootsoflisp.html). They're not just beautiful, but strangely beautiful.
I'm not sure why. It may just be my own stupidity. A can-opener must seem
miraculous to a dog. Maybe if I were smart enough it would seem the most
natural thing in the world that e^(iπ) = -1. It is after all necessarily true.
Most of the qualities I've mentioned are things that can be cultivated, but I
don't think it works to cultivate strangeness. The best you can do is not
squash it if it starts to appear. Einstein didn't try to make relativity
strange. He tried to make it true, and the truth turned out to be strange.
At an art school where I once studied, the students wanted most of all to
develop a personal style. But if you just try to make good things, you'll
inevitably do it in a distinctive way, just as each person walks in a
distinctive way. Michelangelo was not trying to paint like Michelangelo. He
was just trying to paint well; he couldn't help painting like Michelangelo.
The only style worth having is the one you can't help. And this is especially
true for strangeness. There is no shortcut to it. The Northwest Passage that
the Mannerists, the Romantics, and two generations of American high school
students have searched for does not seem to exist. The only way to get there
is to go through good and come out the other side.
**Good design happens in chunks.** The inhabitants of fifteenth century
Florence included Brunelleschi, Ghiberti, Donatello, Masaccio, Filippo Lippi,
Fra Angelico, Verrocchio, Botticelli, Leonardo, and Michelangelo. Milan at the
time was as big as Florence. How many fifteenth century Milanese artists can
you name?
Something was happening in Florence in the fifteenth century. And it can't
have been heredity, because it isn't happening now. You have to assume that
whatever inborn ability Leonardo and Michelangelo had, there were people born
in Milan with just as much. What happened to the Milanese Leonardo?
There are roughly a thousand times as many people alive in the US right now as
lived in Florence during the fifteenth century. A thousand Leonardos and a
thousand Michelangelos walk among us. If DNA ruled, we should be greeted daily
by artistic marvels. We aren't, and the reason is that to make Leonardo you
need more than his innate ability. You also need Florence in 1450.
Nothing is more powerful than a community of talented people working on
related problems. Genes count for little by comparison: being a genetic
Leonardo was not enough to compensate for having been born near Milan instead
of Florence. Today we move around more, but great work still comes
disproportionately from a few hotspots: the Bauhaus, the Manhattan Project,
the _New Yorker,_ Lockheed's Skunk Works, Xerox Parc.
At any given time there are a few hot topics and a few groups doing great work
on them, and it's nearly impossible to do good work yourself if you're too far
removed from one of these centers. You can push or pull these trends to some
extent, but you can't break away from them. (Maybe _you_ can, but the Milanese
Leonardo couldn't.)
**Good design is often daring.** At every period of history, people have
believed things that were just ridiculous, and believed them so strongly that
you risked ostracism or even violence by saying otherwise.
If our own time were any different, that would be remarkable. As far as I can
tell it [isn't](say.html).
This problem afflicts not just every era, but in some degree every field. Much
Renaissance art was in its time considered shockingly secular: according to
Vasari, Botticelli repented and gave up painting, and Fra Bartolommeo and
Lorenzo di Credi actually burned some of their work. Einstein's theory of
relativity offended many contemporary physicists, and was not fully accepted
for decades — in France, not until the 1950s.
Today's experimental error is tomorrow's new theory. If you want to discover
great new things, then instead of turning a blind eye to the places where
conventional wisdom and truth don't quite meet, you should pay particular
attention to them.
As a practical matter, I think it's easier to see ugliness than to imagine
beauty. Most of the people who've made beautiful things seem to have done it
by fixing something that they thought ugly. Great work usually seems to happen
because someone sees something and thinks, _I could do better than that._
Giotto saw traditional Byzantine madonnas painted according to a formula that
had satisfied everyone for centuries, and to him they looked wooden and
unnatural. Copernicus was so troubled by a hack that all his contemporaries
could tolerate that he felt there must be a better solution.
Intolerance for ugliness is not in itself enough. You have to understand a
field well before you develop a good nose for what needs fixing. You have to
do your homework. But as you become expert in a field, you'll start to hear
little voices saying, _What a hack! There must be a better way._ Don't ignore
those voices. Cultivate them. The recipe for great work is: very exacting
taste, plus the ability to gratify it.
**Notes**
[Sullivan](https://sep.turbifycdn.com/ty/cdn/paulgraham/sullivan.html?t=1688221954&)
actually said "form ever follows function," but I think the usual misquotation
is closer to what modernist architects meant.
Stephen G. Brush, "Why was Relativity Accepted?" _Phys. Perspect._ 1 (1999) 184-214.
December 2020
To celebrate Airbnb's IPO and to help future founders, I thought it might be
useful to explain what was special about Airbnb.
What was special about the Airbnbs was how earnest they were. They did nothing
half-way, and we could sense this even in the interview. Sometimes after we
interviewed a startup we'd be uncertain what to do, and have to talk it over.
Other times we'd just look at one another and smile. The Airbnbs' interview
was that kind. We didn't even like the idea that much. Nor did users, at that
stage; they had no growth. But the founders seemed so full of energy that it
was impossible not to like them.
That first impression was not misleading. During the batch our nickname for
Brian Chesky was The Tasmanian Devil, because like the [cartoon
character](http://www.youtube.com/watch?v=StG2u5qfFRg&t=2m27s) he seemed a
tornado of energy. All three of them were like that. No one ever worked harder
during YC than the Airbnbs did. When you talked to the Airbnbs, they took
notes. If you suggested an idea to them in office hours, the next time you
talked to them they'd not only have implemented it, but also implemented two
new ideas they had in the process. "They probably have the best attitude of
any startup we've funded," I wrote to Mike Arrington during the batch.
They're still like that. Jessica and I had dinner with Brian in the summer of
2018, just the three of us. By that point the company was ten years old. He
took a page of notes about ideas for new things Airbnb could do.
What we didn't realize when we first met Brian and Joe and Nate was that
Airbnb was on its last legs. After working on the company for a year and
getting no growth, they'd agreed to give it one last shot. They'd try this Y
Combinator thing, and if the company still didn't take off, they'd give up.
Any normal person would have given up already. They'd been funding the company
with credit cards. They had a _binder_ full of credit cards they'd maxed out.
Investors didn't think much of the idea. One investor they met in a cafe
walked out in the middle of meeting with them. They thought he was going to
the bathroom, but he never came back. "He didn't even finish his smoothie,"
Brian said. And now, in late 2008, it was the worst recession in decades. The
stock market was in free fall and wouldn't hit bottom for another four months.
Why hadn't they given up? This is a useful question to ask. People, like
matter, reveal their nature under extreme conditions. One thing that's clear
is that they weren't doing this just for the money. As a money-making scheme,
this was pretty lousy: a year's work and all they had to show for it was a
binder full of maxed-out credit cards. So why were they still working on this
startup? Because of the experience they'd had as the first hosts.
When they first tried renting out airbeds on their floor during a design
convention, all they were hoping for was to make enough money to pay their
rent that month. But something surprising happened: they enjoyed having those
first three guests staying with them. And the guests enjoyed it too. Both they
and the guests had done it because they were in a sense forced to, and yet
they'd all had a great experience. Clearly there was something new here: for
hosts, a new way to make money that had literally been right under their
noses, and for guests, a new way to travel that was in many ways better than
hotels.
That experience was why the Airbnbs didn't give up. They knew they'd
discovered something. They'd seen a glimpse of the future, and they couldn't
let it go.
They knew that once people tried staying in what is now called "an airbnb,"
they would also realize that this was the future. But only if they tried it,
and they weren't. That was the problem during Y Combinator: to get growth
started.
Airbnb's goal during YC was to reach what we call [ramen
profitability](http://paulgraham.com/ramenprofitable.html), which means making
enough money that the company can pay the founders' living expenses, if they
live on ramen noodles. Ramen profitability is not, obviously, the end goal of
any startup, but it's the most important threshold on the way, because this is
the point where you're airborne. This is the point where you no longer need
investors' permission to continue existing. For the Airbnbs, ramen
profitability was $4000 a month: $3500 for rent, and $500 for food. They taped
this goal to the mirror in the bathroom of their apartment.
The way to get growth started in something like Airbnb is to focus on the
hottest subset of the market. If you can get growth started there, it will
spread to the rest. When I asked the Airbnbs where there was most demand, they
knew from searches: New York City. So they focused on New York. They went
there [in person](http://paulgraham.com/ds.html) to visit their hosts and help
them make their listings more attractive. A big part of that was better
pictures. So Joe and Brian rented a professional camera and took pictures of
the hosts' places themselves.
This didn't just make the listings better. It also taught them about their
hosts. When they came back from their first trip to New York, I asked what
they'd noticed about hosts that surprised them, and they said the biggest
surprise was how many of the hosts were in the same position they'd been in:
they needed this money to pay their rent. This was, remember, the worst
recession in decades, and it had hit New York first. It definitely added to
the Airbnbs' sense of mission to feel that people needed them.
In late January 2009, about three weeks into Y Combinator, their efforts
started to show results, and their numbers crept upward. But it was hard to
say for sure whether it was growth or just random fluctuation. By February it
was clear that it was real growth. They made $460 in fees in the first week of
February, $897 in the second, and $1428 in the third. That was it: they were
airborne. Brian sent me an email on February 22 announcing that they were
ramen profitable and giving the last three weeks' numbers.
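(A quick back-of-the-envelope check of those numbers, as a hypothetical Python sketch; the annualization convention and names are mine:)

```python
# Hypothetical check: do the February 2009 weekly fees clear the
# $4000/month ramen-profitability bar quoted earlier?

weekly_fees = [460, 897, 1428]  # first three weeks of February, in dollars

for prev, cur in zip(weekly_fees, weekly_fees[1:]):
    print(f"week-over-week growth: {cur / prev - 1:.0%}")  # 95%, then 59%

# Annualize the latest week, then convert to a monthly run rate.
monthly_run_rate = weekly_fees[-1] * 52 / 12
print(f"monthly run rate: ${monthly_run_rate:,.0f} (goal: $4,000)")
```

The last week alone works out to about $6,200 a month, comfortably past the goal taped to their bathroom mirror.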
"I assume you know what you've now set yourself up for next week," I
responded.
Brian's reply was seven words: "We are not going to slow down."
July 2020
One of the most revealing ways to classify people is by the degree and
aggressiveness of their conformism. Imagine a Cartesian coordinate system
whose horizontal axis runs from conventional-minded on the left to
independent-minded on the right, and whose vertical axis runs from passive at
the bottom to aggressive at the top. The resulting four quadrants define four
types of people. Starting in the upper left and going counter-clockwise:
aggressively conventional-minded, passively conventional-minded, passively
independent-minded, and aggressively independent-minded.
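(Stated literally, a minimal Python sketch of the scheme; the numeric scores are an invented convention, with negative meaning conventional or passive:)

```python
# Minimal sketch of the two-axis classification. Scores in [-1, 1] are an
# assumption for illustration: sign alone decides the quadrant.

def quadrant(independence: float, aggressiveness: float) -> str:
    minded = "independent" if independence > 0 else "conventional"
    mode = "aggressively" if aggressiveness > 0 else "passively"
    return f"{mode} {minded}-minded"

print(quadrant(-0.8, +0.9))  # aggressively conventional-minded: the tattletale
print(quadrant(+0.6, -0.4))  # passively independent-minded: the dreamer
```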
I think that you'll find all four types in most societies, and that which
quadrant people fall into depends more on their own personality than the
beliefs prevalent in their society. [1]
Young children offer some of the best evidence for both points. Anyone who's
been to primary school has seen the four types, and the fact that school rules
are so arbitrary is strong evidence that which quadrant people fall into
depends more on them than the rules.
The kids in the upper left quadrant, the aggressively conventional-minded
ones, are the tattletales. They believe not only that rules must be obeyed,
but that those who disobey them must be punished.
The kids in the lower left quadrant, the passively conventional-minded, are
the sheep. They're careful to obey the rules, but when other kids break them,
their impulse is to worry that those kids will be punished, not to ensure that
they will.
The kids in the lower right quadrant, the passively independent-minded, are
the dreamy ones. They don't care much about rules and probably aren't 100%
sure what the rules even are.
And the kids in the upper right quadrant, the aggressively independent-minded,
are the naughty ones. When they see a rule, their first impulse is to question
it. Merely being told what to do makes them inclined to do the opposite.
When measuring conformism, of course, you have to say with respect to what,
and this changes as kids get older. For younger kids it's the rules set by
adults. But as kids get older, the source of rules becomes their peers. So a
pack of teenagers who all flout school rules in the same way are not
independent-minded; rather the opposite.
In adulthood we can recognize the four types by their distinctive calls, much
as you could recognize four species of birds. The call of the aggressively
conventional-minded is "Crush <outgroup>!" (It's rather alarming to see an
exclamation point after a variable, but that's the whole problem with the
aggressively conventional-minded.) The call of the passively conventional-
minded is "What will the neighbors think?" The call of the passively
independent-minded is "To each his own." And the call of the aggressively
independent-minded is "Eppur si muove."
The four types are not equally common. There are more passive people than
aggressive ones, and far more conventional-minded people than independent-
minded ones. So the passively conventional-minded are the largest group, and
the aggressively independent-minded the smallest.
Since one's quadrant depends more on one's personality than the nature of the
rules, most people would occupy the same quadrant even if they'd grown up in a
quite different society.
Princeton professor Robert George recently wrote:
> I sometimes ask students what their position on slavery would have been had
> they been white and living in the South before abolition. Guess what? They
> all would have been abolitionists! They all would have bravely spoken out
> against slavery, and worked tirelessly against it.
He's too polite to say so, but of course they wouldn't. And indeed, our
default assumption should not merely be that his students would, on average,
have behaved the same way people did at the time, but that the ones who are
aggressively conventional-minded today would have been aggressively
conventional-minded then too. In other words, that they'd not only not have
fought against slavery, but that they'd have been among its staunchest
defenders.
I'm biased, I admit, but it seems to me that aggressively conventional-minded
people are responsible for a disproportionate amount of the trouble in the
world, and that a lot of the customs we've evolved since the Enlightenment
have been designed to protect the rest of us from them. In particular, the
retirement of the concept of heresy and its replacement by the principle of
freely debating all sorts of different ideas, even ones that are currently
considered unacceptable, without any punishment for those who try them out to
see if they work. [2]
Why do the independent-minded need to be protected, though? Because they have
all the new ideas. To be a successful scientist, for example, it's not enough
just to be right. You have to be right when everyone else is wrong.
Conventional-minded people can't do that. For similar reasons, all successful
startup CEOs are not merely independent-minded, but aggressively so. So it's
no coincidence that societies prosper only to the extent that they have
customs for keeping the conventional-minded at bay. [3]
In the last few years, many of us have noticed that the customs protecting
free inquiry have been weakened. Some say we're overreacting — that they
haven't been weakened very much, or that they've been weakened in the service
of a greater good. The latter I'll dispose of immediately. When the
conventional-minded get the upper hand, they always say it's in the service of
a greater good. It just happens to be a different, incompatible greater good
each time.
As for the former worry, that the independent-minded are being oversensitive,
and that free inquiry hasn't been shut down that much, you can't judge that
unless you are yourself independent-minded. You can't know how much of the
space of ideas is being lopped off unless you have them, and only the
independent-minded have the ones at the edges. Precisely because of this, they
tend to be very sensitive to changes in how freely one can explore ideas.
They're the canaries in this coalmine.
The conventional-minded say, as they always do, that they don't want to shut
down the discussion of all ideas, just the bad ones.
You'd think it would be obvious just from that sentence what a dangerous game
they're playing. But I'll spell it out. There are two reasons why we need to
be able to discuss even "bad" ideas.
The first is that any process for deciding which ideas to ban is bound to make
mistakes. All the more so because no one intelligent wants to undertake that
kind of work, so it ends up being done by the stupid. And when a process makes
a lot of mistakes, you need to leave a margin for error. Which in this case
means you need to ban fewer ideas than you'd like to. But that's hard for the
aggressively conventional-minded to do, partly because they enjoy seeing
people punished, as they have since they were children, and partly because
they compete with one another. Enforcers of orthodoxy can't allow a borderline
idea to exist, because that gives other enforcers an opportunity to one-up
them in the moral purity department, and perhaps even to turn enforcer upon
them. So instead of getting the margin for error we need, we get the opposite:
a race to the bottom in which any idea that seems at all bannable ends up
being banned. [4]
The second reason it's dangerous to ban the discussion of ideas is that ideas
are more closely related than they look. Which means if you restrict the
discussion of some topics, it doesn't only affect those topics. The
restrictions propagate back into any topic that yields implications in the
forbidden ones. And that is not an edge case. The best ideas do exactly that:
they have consequences in fields far removed from their origins. Having ideas
in a world where some ideas are banned is like playing soccer on a pitch that
has a minefield in one corner. You don't just play the same game you would
have, but on a different shaped pitch. You play a much more subdued game even
on the ground that's safe.
In the past, the way the independent-minded protected themselves was to
congregate in a handful of places — first in courts, and later in universities —
where they could to some extent make their own rules. Places where people
work with ideas tend to have customs protecting free inquiry, for the same
reason wafer fabs have powerful air filters, or recording studios good sound
insulation. For the last couple centuries at least, when the aggressively
conventional-minded were on the rampage for whatever reason, universities were
the safest places to be.
That may not work this time though, due to the unfortunate fact that the
latest wave of intolerance began in universities. It began in the mid 1980s,
and by 2000 seemed to have died down, but it has recently flared up again with
the arrival of social media. This seems, unfortunately, to have been an own
goal by Silicon Valley. Though the people who run Silicon Valley are almost
all independent-minded, they've handed the aggressively conventional-minded a
tool such as they could only have dreamed of.
On the other hand, perhaps the decline in the spirit of free inquiry within
universities is as much the symptom of the departure of the independent-minded
as the cause. People who would have become professors 50 years ago have other
options now. Now they can become quants or start startups. You have to be
independent-minded to succeed at either of those. If these people had been
professors, they'd have put up a stiffer resistance on behalf of academic
freedom. So perhaps the picture of the independent-minded fleeing declining
universities is too gloomy. Perhaps the universities are declining because so
many have already left. [5]
Though I've spent a lot of time thinking about this situation, I can't predict
how it plays out. Could some universities reverse the current trend and remain
places where the independent-minded want to congregate? Or will the
independent-minded gradually abandon them? I worry a lot about what we might
lose if that happened.
But I'm hopeful long term. The independent-minded are good at protecting
themselves. If existing institutions are compromised, they'll create new ones.
That may require some imagination. But imagination is, after all, their
specialty.
**Notes**
[1] I realize of course that if people's personalities vary in any two ways,
you can use them as axes and call the resulting four quadrants personality
types. So what I'm really claiming is that the axes are orthogonal and that
there's significant variation in both.
[2] The aggressively conventional-minded aren't responsible for all the
trouble in the world. Another big source of trouble is the sort of charismatic
leader who gains power by appealing to them. They become much more dangerous
when such leaders emerge.
[3] I never worried about writing things that offended the conventional-minded
when I was running Y Combinator. If YC were a cookie company, I'd have faced a
difficult moral choice. Conventional-minded people eat cookies too. But they
don't start successful startups. So if I deterred them from applying to YC,
the only effect was to save us work reading applications.
[4] There has been progress in one area: the punishments for talking about
banned ideas are less severe than in the past. There's little danger of being
killed, at least in richer countries. The aggressively conventional-minded are
mostly satisfied with getting people fired.
[5] Many professors are independent-minded — especially in math, the hard
sciences, and engineering, where you have to be to succeed. But students are
more representative of the general population, and thus mostly conventional-
minded. So when professors and students are in conflict, it's not just a
conflict between generations but also between different types of people.
**Thanks** to Sam Altman, Trevor Blackwell, Nicholas Christakis, Patrick
Collison, Sam Gichuru, Jessica Livingston, Patrick McKenzie, Geoff Ralston,
and Harj Taggar for reading drafts of this.
|
January 2017
Because biographies of famous scientists tend to edit out their mistakes, we
underestimate the degree of risk they were willing to take. And because
anything a famous scientist did that wasn't a mistake has probably now become
the conventional wisdom, those choices don't seem risky either.
Biographies of Newton, for example, understandably focus more on physics than
alchemy or theology. The impression we get is that his unerring judgment led
him straight to truths no one else had noticed. How to explain all the time he
spent on alchemy and theology? Well, smart people are often kind of crazy.
But maybe there is a simpler explanation. Maybe the smartness and the
craziness were not as separate as we think. Physics seems to us a promising
thing to work on, and alchemy and theology obvious wastes of time. But that's
because we know how things turned out. In Newton's day the three problems
seemed roughly equally promising. No one knew yet what the payoff would be for
inventing what we now call physics; if they had, more people would have been
working on it. And alchemy and theology were still then in the category Marc
Andreessen would describe as "huge, if true."
Newton made three bets. One of them worked. But they were all risky.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
October 2010
Silicon Valley proper is mostly suburban sprawl. At first glance it doesn't
seem there's anything to see. It's not the sort of place that has conspicuous
monuments. But if you look, there are subtle signs you're in a place that's
different from other places.
**1.[Stanford University](http://maps.google.com/maps?q=stanford+university)**
Stanford is a strange place. Structurally it is to an ordinary university what
suburbia is to a city. It's enormously spread out, and feels surprisingly
empty much of the time. But notice the weather. It's probably perfect. And
notice the beautiful mountains to the west. And though you can't see it,
cosmopolitan San Francisco is 40 minutes to the north. That combination is
much of the reason Silicon Valley grew up around this university and not some
other one.
**2.[University
Ave](http://maps.google.com/maps?q=university+and+ramona+palo+alto)**
A surprising amount of the work of the Valley is done in the cafes on or just
off University Ave in Palo Alto. If you visit on a weekday between 10 and 5,
you'll often see founders pitching investors. In case you can't tell, the
founders are the ones leaning forward eagerly, and the investors are the ones
sitting back with slightly pained expressions.
**3.[The Lucky
Office](http://maps.google.com/maps?q=165+university+ave+palo+alto)**
The office at 165 University Ave was Google's first. Then it was Paypal's.
(Now it's [Wepay](http://wepay.com)'s.) The interesting thing about it is the
location. It's a smart move to put a startup in a place with restaurants and
people walking around instead of in an office park, because then the people
who work there want to stay there, instead of fleeing as soon as conventional
working hours end. They go out for dinner together, talk about ideas, and then
come back and implement them.
It's important to realize that Google's current location in an office park is
not where they started; it's just where they were forced to move when they
needed more space. Facebook was till recently across the street, till they too
had to move because they needed more space.
**4.[Old Palo Alto](http://maps.google.com/maps?q=old+palo+alto)**
Palo Alto was not originally a suburb. For the first 100 years or so of its
existence, it was a college town out in the countryside. Then in the mid 1950s
it was engulfed in a wave of suburbia that raced down the peninsula. But Palo
Alto north of Oregon expressway still feels noticeably different from the area
around it. It's one of the nicest places in the Valley. The buildings are old
(though increasingly they are being torn down and replaced with generic
McMansions) and the trees are tall. But houses are very expensive—around $1000
per square foot. This is post-exit Silicon Valley.
**5.[Sand Hill
Road](http://maps.google.com/maps?q=2900+sand+hill+road+menlo+park)**
It's interesting to see the VCs' offices on the north side of Sand Hill Road
precisely because they're so boringly uniform. The buildings are all more or
less the same, their exteriors express very little, and they are arranged in a
confusing maze. (I've been visiting them for years and I still occasionally
get lost.) It's not a coincidence. These buildings are a pretty accurate
reflection of the VC business.
If you go on a weekday you may see groups of founders there to meet VCs. But
mostly you won't see anyone; bustling is the last word you'd use to describe
the atmos. Visiting Sand Hill Road reminds you that the opposite of "down and
dirty" would be "up and clean."
**6.[Castro
Street](http://maps.google.com/maps?q=castro+and+villa+mountain+view)**
It's a tossup whether Castro Street or University Ave should be considered the
heart of the Valley now. University Ave would have been 10 years ago. But Palo
Alto is getting expensive. Increasingly startups are located in Mountain View,
and Palo Alto is a place they come to meet investors. Palo Alto has a lot of
different cafes, but there is one that clearly dominates in Mountain View:
[Red Rock](http://maps.google.com/places/us/ca/mountain-view/castro-
st/201/-red-rock-coffee).
**7.[Google](http://maps.google.com/maps?q=charleston+road+mountain+view)**
Google spread out from its first building in Mountain View to a lot of the
surrounding ones. But because the buildings were built at different times by
different people, the place doesn't have the sterile, walled-off feel that a
typical large company's headquarters have. It definitely has a flavor of its
own though. You sense there is something afoot. The general atmos is vaguely
utopian; there are lots of Priuses, and people who look like they drive them.
You can't get into Google unless you know someone there. It's very much worth
seeing inside if you can, though. Ditto for Facebook, at the end of California
Ave in Palo Alto, though there is nothing to see outside.
**8.[Skyline Drive](http://maps.google.com/maps?q=skylonda)**
Skyline Drive runs along the crest of the Santa Cruz mountains. On one side is
the Valley, and on the other is the sea—which because it's cold and foggy and
has few harbors, plays surprisingly little role in the lives of people in the
Valley, considering how close it is. Along some parts of Skyline the dominant
trees are huge redwoods, and in others they're live oaks. Redwoods mean those
are the parts where the fog off the coast comes in at night; redwoods condense
rain out of fog. The MROSD manages a collection of [great walking
trails](http://www.openspace.org/) off Skyline.
**9.[280](http://maps.google.com/maps?q=interstate+280+san+mateo)**
Silicon Valley has two highways running the length of it: 101, which is pretty
ugly, and 280, which is one of the more beautiful highways in the world. I
always take 280 when I have a choice. Notice the long narrow lake to the west?
That's the San Andreas Fault. It runs along the base of the hills, then heads
uphill through Portola Valley. One of the MROSD trails runs [right along the
fault](http://www.openspace.org/preserves/pr_los_trancos.asp). A string of
rich neighborhoods runs along the foothills to the west of 280: Woodside,
Portola Valley, Los Altos Hills, Saratoga, Los Gatos.
[SLAC](http://www.flickr.com/photos/38037974@N00/3890299362/) goes right under
280 a little bit south of Sand Hill Road. And a couple miles south of that is
the Valley's equivalent of the "Welcome to Las Vegas" sign: [The
Dish](http://www.flickr.com/photos/paulbarroga/3443486941/).
**Notes**
I skipped the [Computer History Museum](http://www.computerhistory.org/)
because this is a list of where to see the Valley itself, not where to see
artifacts from it. I also skipped San Jose. San Jose calls itself the capital
of Silicon Valley, but when people in the Valley use the phrase "the city,"
they mean San Francisco. San Jose is a dotted line on a map.
**Thanks** to Sam Altman, Paul Buchheit, Patrick Collison, and Jessica
Livingston for reading drafts of this.
March 2008
The web is turning writing into a conversation. Twenty years ago, writers
wrote and readers read. The web lets readers respond, and increasingly they
do—in comment threads, on forums, and in their own blog posts.
Many who respond to something disagree with it. That's to be expected.
Agreeing tends to motivate people less than disagreeing. And when you agree
there's less to say. You could expand on something the author said, but he has
probably already explored the most interesting implications. When you disagree
you're entering territory he may not have explored.
The result is there's a lot more disagreeing going on, especially measured by
the word. That doesn't mean people are getting angrier. The structural change
in the way we communicate is enough to account for it. But though it's not
anger that's driving the increase in disagreement, there's a danger that the
increase in disagreement will make people angrier. Particularly online, where
it's easy to say things you'd never say face to face.
If we're all going to be disagreeing more, we should be careful to do it well.
What does it mean to disagree well? Most readers can tell the difference
between mere name-calling and a carefully reasoned refutation, but I think it
would help to put names on the intermediate stages. So here's an attempt at a
disagreement hierarchy:
**DH0. Name-calling.**
This is the lowest form of disagreement, and probably also the most common.
We've all seen comments like this:
> u r a fag!!!!!!!!!!
But it's important to realize that more articulate name-calling has just as
little weight. A comment like
> The author is a self-important dilettante.
is really nothing more than a pretentious version of "u r a fag."
**DH1. Ad Hominem.**
An ad hominem attack is not quite as weak as mere name-calling. It might
actually carry some weight. For example, if a senator wrote an article saying
senators' salaries should be increased, one could respond:
> Of course he would say that. He's a senator.
This wouldn't refute the author's argument, but it may at least be relevant to
the case. It's still a very weak form of disagreement, though. If there's
something wrong with the senator's argument, you should say what it is; and if
there isn't, what difference does it make that he's a senator?
Saying that an author lacks the authority to write about a topic is a variant
of ad hominem—and a particularly useless sort, because good ideas often come
from outsiders. The question is whether the author is correct or not. If his
lack of authority caused him to make mistakes, point those out. And if it
didn't, it's not a problem.
**DH2. Responding to Tone.**
The next level up we start to see responses to the writing, rather than the
writer. The lowest form of these is to disagree with the author's tone. E.g.
> I can't believe the author dismisses intelligent design in such a cavalier
> fashion.
Though better than attacking the author, this is still a weak form of
disagreement. It matters much more whether the author is wrong or right than
what his tone is. Especially since tone is so hard to judge. Someone who has a
chip on their shoulder about some topic might be offended by a tone that to
other readers seemed neutral.
So if the worst thing you can say about something is to criticize its tone,
you're not saying much. Is the author flippant, but correct? Better that than
grave and wrong. And if the author is incorrect somewhere, say where.
**DH3. Contradiction.**
In this stage we finally get responses to what was said, rather than how or by
whom. The lowest form of response to an argument is simply to state the
opposing case, with little or no supporting evidence.
This is often combined with DH2 statements, as in:
> I can't believe the author dismisses intelligent design in such a cavalier
> fashion. Intelligent design is a legitimate scientific theory.
Contradiction can sometimes have some weight. Sometimes merely seeing the
opposing case stated explicitly is enough to see that it's right. But usually
evidence will help.
**DH4. Counterargument.**
At level 4 we reach the first form of convincing disagreement:
counterargument. Forms up to this point can usually be ignored as proving
nothing. Counterargument might prove something. The problem is, it's hard to
say exactly what.
Counterargument is contradiction plus reasoning and/or evidence. When aimed
squarely at the original argument, it can be convincing. But unfortunately
it's common for counterarguments to be aimed at something slightly different.
More often than not, two people arguing passionately about something are
actually arguing about two different things. Sometimes they even agree with
one another, but are so caught up in their squabble they don't realize it.
There could be a legitimate reason for arguing against something slightly
different from what the original author said: when you feel they missed the
heart of the matter. But when you do that, you should say explicitly you're
doing it.
**DH5. Refutation.**
The most convincing form of disagreement is refutation. It's also the rarest,
because it's the most work. Indeed, the disagreement hierarchy forms a kind of
pyramid, in the sense that the higher you go the fewer instances you find.
To refute someone you probably have to quote them. You have to find a "smoking
gun," a passage in whatever you disagree with that you feel is mistaken, and
then explain why it's mistaken. If you can't find an actual quote to disagree
with, you may be arguing with a straw man.
While refutation generally entails quoting, quoting doesn't necessarily imply
refutation. Some writers quote parts of things they disagree with to give the
appearance of legitimate refutation, then follow with a response as low as DH3
or even DH0.
**DH6. Refuting the Central Point.**
The force of a refutation depends on what you refute. The most powerful form
of disagreement is to refute someone's central point.
Even as high as DH5 we still sometimes see deliberate dishonesty, as when
someone picks out minor points of an argument and refutes those. Sometimes the
spirit in which this is done makes it more of a sophisticated form of ad
hominem than actual refutation. For example, correcting someone's grammar, or
harping on minor mistakes in names or numbers. Unless the opposing argument
actually depends on such things, the only purpose of correcting them is to
discredit one's opponent.
Truly refuting something requires one to refute its central point, or at least
one of them. And that means one has to commit explicitly to what the central
point is. So a truly effective refutation would look like:
> The author's main point seems to be x. As he says:
>
>> <quotation>
>
> But this is wrong for the following reasons...
The quotation you point out as mistaken need not be the actual statement of
the author's main point. It's enough to refute something it depends upon.
**What It Means**
Now we have a way of classifying forms of disagreement. What good is it? One
thing the disagreement hierarchy _doesn't_ give us is a way of picking a
winner. DH levels merely describe the form of a statement, not whether it's
correct. A DH6 response could still be completely mistaken.
But while DH levels don't set a lower bound on the convincingness of a reply,
they do set an upper bound. A DH6 response might be unconvincing, but a DH2 or
lower response is always unconvincing.
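(One way to make that upper-bound idea concrete is to treat the hierarchy as an ordered type; a minimal Python sketch, with names of my own choosing:)

```python
# Sketch: the hierarchy as an ordered enum. A reply's level sets an upper
# bound on how convincing it can be, never a lower one.
from enum import IntEnum

class DH(IntEnum):
    NAME_CALLING = 0
    AD_HOMINEM = 1
    RESPONDING_TO_TONE = 2
    CONTRADICTION = 3
    COUNTERARGUMENT = 4
    REFUTATION = 5
    REFUTING_THE_CENTRAL_POINT = 6

def might_convince(reply: DH) -> bool:
    # DH2 or lower is always unconvincing; DH3 and up merely might be.
    return reply >= DH.CONTRADICTION

print(might_convince(DH.RESPONDING_TO_TONE))          # False
print(might_convince(DH.REFUTING_THE_CENTRAL_POINT))  # True
```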
The most obvious advantage of classifying the forms of disagreement is that it
will help people to evaluate what they read. In particular, it will help them
to see through intellectually dishonest arguments. An eloquent speaker or
writer can give the impression of vanquishing an opponent merely by using
forceful words. In fact that is probably the defining quality of a demagogue.
By giving names to the different forms of disagreement, we give critical
readers a pin for popping such balloons.
Such labels may help writers too. Most intellectual dishonesty is
unintentional. Someone arguing against the tone of something he disagrees with
may believe he's really saying something. Zooming out and seeing his current
position on the disagreement hierarchy may inspire him to try moving up to
counterargument or refutation.
But the greatest benefit of disagreeing well is not just that it will make
conversations better, but that it will make the people who have them happier.
If you study conversations, you find there is a lot more meanness down in DH1
than up in DH6. You don't have to be mean when you have a real point to make.
In fact, you don't want to. If you have something real to say, being mean just
gets in the way.
If moving up the disagreement hierarchy makes people less mean, that will make
most of them happier. Most people don't really enjoy being mean; they do it
because they can't help it.
**Thanks** to Trevor Blackwell and Jessica Livingston for reading drafts of
this.
March 2012
Y Combinator's 7th birthday was March 11. As usual we were so busy we didn't
notice till a few days after. I don't think we've ever managed to remember our
birthday on our birthday.
On March 11 2005, Jessica and I were walking home from dinner in Harvard
Square. Jessica was working at an investment bank at the time, but she didn't
like it much, so she had interviewed for a job as director of marketing at a
Boston VC fund. The VC fund was doing what now seems a comically familiar
thing for a VC fund to do: taking a long time to make up their mind. Meanwhile
I had been telling Jessica all the things they should change about the VC
business — essentially the ideas now underlying Y Combinator: investors should
be making more, smaller investments, they should be funding hackers instead of
suits, they should be willing to fund younger founders, etc.
At the time I had been thinking about doing some angel investing. I had just
given a talk to the undergraduate computer club at Harvard about [how to start
a startup](start.html), and it hit me afterward that although I had always
meant to do angel investing, 7 years had now passed since I got enough money
to do it, and I still hadn't started. I had also been thinking about ways to
work with Robert Morris and Trevor Blackwell again. A few hours before I had
sent them an email trying to figure out what we could do together.
Between Harvard Square and my house the idea gelled. We'd start our own
investment firm and Jessica could work for that instead. As we turned onto
Walker Street we decided to do it. I agreed to put $100k into the new fund and
Jessica agreed to quit her job to work for it. Over the next couple days I
recruited Robert and Trevor, who put in another $50k each. So YC started with
$200k.
Jessica was so happy to be able to quit her job and start her own company that
I took her
[picture](https://web.archive.org/web/20170609055553/http://www.ycombinator.com/yc05.html)
when we got home.
The company wasn't called Y Combinator yet. At first we called it Cambridge
Seed. But that name never saw the light of day, because by the time we
announced it a few days later, we'd changed the name to Y Combinator. We
realized early on that what we were doing could be national in scope and we
didn't want a name that tied us to one place.
Initially we only had part of the idea. We were going to do seed funding with
standardized terms. Before YC, seed funding was very haphazard. You'd get that
first $10k from your friend's rich uncle. The deal terms were often a
disaster; often neither the investor nor the founders nor the lawyer knew what
the documents should look like. Facebook's early history as a Florida LLC
shows how random things could be in those days. We were going to be something
there had not been before: a standard source of seed funding.
We modelled YC on the seed funding we ourselves had taken when we started
Viaweb. We started Viaweb with $10k we got from our friend [Julian
Weber](julian.html), the husband of Idelle Weber, whose painting class I took
as a grad student at Harvard. Julian knew about business, but you would not
describe him as a suit. Among other things he'd been president of the
_National Lampoon_. He was also a lawyer, and got all our paperwork set up
properly. In return for $10k, getting us set up as a company, teaching us what
business was about, and remaining calm in times of crisis, Julian got 10% of
Viaweb. I remember thinking once what a good deal Julian got. And then a
second later I realized that without Julian, Viaweb would never have made it.
So even though it was a good deal for him, it was a good deal for us too.
That's why I knew there was room for something like Y Combinator.
Initially we didn't have what turned out to be the most important idea:
funding startups synchronously, instead of asynchronously as it had always
been done before. Or rather we had the idea, but we didn't realize its
significance. We decided very early that the first thing we'd do would be to
fund a bunch of startups over the coming summer. But we didn't realize
initially that this would be the way we'd do all our investing. The reason we
began by funding a bunch of startups at once was not that we thought it would
be a better way to fund startups, but simply because we wanted to learn how to
be angel investors, and a summer program for undergrads seemed the fastest way
to do it. No one takes summer jobs that seriously. The opportunity cost for a
bunch of undergrads to spend a summer working on startups was low enough that
we wouldn't feel guilty encouraging them to do it.
We knew students would already be making plans for the summer, so we did what
we're always telling startups to do: we launched fast. Here are the initial
[announcement](summerfounder.html) and
[description](https://web.archive.org/web/20170609055553/http://ycombinator.com/old/sfp.html)
of what was at the time called the Summer Founders Program.
We got lucky in that the length and structure of a summer program turns out to
be perfect for what we do. The structure of the YC cycle is still almost
identical to what it was that first summer.
We also got lucky in who the first batch of founders were. We never expected
to make any money from that first batch. We thought of the money we were
investing as a combination of an educational expense and a charitable
donation. But the founders in the first batch turned out to be surprisingly
good. And great people too. We're still friends with a lot of them today.
It's hard for people to realize now how inconsequential YC seemed at the time.
I can't blame people who didn't take us seriously, because we ourselves didn't
take that first summer program seriously in the very beginning. But as the
summer progressed we were increasingly impressed by how well the startups were
doing. Other people started to be impressed too. Jessica and I invented a
term, "the Y Combinator effect," to describe the moment when the realization
hit someone that YC was not totally lame. When people came to YC to speak at
the dinners that first summer, they came in the spirit of someone coming to
address a Boy Scout troop. By the time they left the building they were all
saying some variant of "Wow, these companies might actually succeed."
Now YC is well enough known that people are no longer surprised when the
companies we fund are legit, but it took a while for reputation to catch up
with reality. That's one of the reasons we especially like funding ideas that
might be dismissed as "toys" — because YC itself was dismissed as one
initially.
When we saw how well it worked to fund companies synchronously, we decided
we'd keep doing that. We'd fund two batches of startups a year.
We funded the second batch in Silicon Valley. That was a last minute decision.
In retrospect I think what pushed me over the edge was going to Foo Camp that
fall. The density of startup people in the Bay Area was so much greater than
in Boston, and the weather was so nice. I remembered that from living there in
the 90s. Plus I didn't want someone else to copy us and describe it as the Y
Combinator of Silicon Valley. I wanted YC to be the Y Combinator of Silicon
Valley. So doing the winter batch in California seemed like one of those rare
cases where the self-indulgent choice and the ambitious one were the same.
If we'd had enough time to do what we wanted, Y Combinator would have been in
Berkeley. That was our favorite part of the Bay Area. But we didn't have time
to get a building in Berkeley. We didn't have time to get our own building
anywhere. The only way to get enough space in time was to convince Trevor to
let us take over part of his (as it then seemed) giant building in Mountain
View. Yet again we lucked out, because Mountain View turned out to be the
ideal place to put something like YC. But even then we barely made it. The
first dinner in California, we had to warn all the founders not to touch the
walls, because the paint was still wet.
January 2023
_([Someone](https://twitter.com/stef/status/1617222428727586816) fed my essays into GPT to make something that could answer questions based on them, then asked it where good ideas come from. The answer was ok, but not what I would have said. This is what I would have said.)_
The way to get new ideas is to notice anomalies: what seems strange, or
missing, or broken? You can see anomalies in everyday life (much of standup
comedy is based on this), but the best place to look for them is at the
frontiers of knowledge.
Knowledge grows fractally. From a distance its edges look smooth, but when you
learn enough to get close to one, you'll notice it's full of gaps. These gaps
will seem obvious; it will seem inexplicable that no one has tried x or
wondered about y. In the best case, exploring such gaps yields whole new
fractal buds.
April 2022
One of the most surprising things I've witnessed in my lifetime is the rebirth
of the concept of heresy.
In his excellent biography of Newton, Richard Westfall writes about the moment
when he was elected a fellow of Trinity College:
> Supported comfortably, Newton was free to devote himself wholly to whatever
> he chose. To remain on, he had only to avoid the three unforgivable sins:
> crime, heresy, and marriage. [1]
The first time I read that, in the 1990s, it sounded amusingly medieval. How
strange, to have to avoid committing heresy. But when I reread it 20 years
later it sounded like a description of contemporary employment.
There are an ever-increasing number of opinions you can be fired for. Those
doing the firing don't use the word "heresy" to describe them, but
structurally they're equivalent. Structurally there are two distinctive things
about heresy: (1) that it takes priority over the question of truth or
falsity, and (2) that it outweighs everything else the speaker has done.
For example, when someone calls a statement "x-ist," they're also implicitly
saying that this is the end of the discussion. They do not, having said this,
go on to consider whether the statement is true or not. Using such labels is
the conversational equivalent of signalling an exception. That's one of the
reasons they're used: to end a discussion.
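(Taking the metaphor literally, a toy Python sketch; both checks are invented stand-ins. The point is that raising the exception means the question of truth is never reached:)

```python
# Toy illustration: an exception aborts evaluation before truth is assessed.

def is_labeled_xist(statement: str) -> bool:
    return statement.startswith("!")   # stand-in for the label being applied

def evaluate(statement: str) -> bool:
    if is_labeled_xist(statement):
        raise ValueError("x-ist")      # discussion over; truth never checked
    return len(statement) > 0          # stand-in for assessing truth

try:
    evaluate("!some claim")
except ValueError as label:
    print(f"exception signalled: {label}; the claim itself was never evaluated")
```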
If you find yourself talking to someone who uses these labels a lot, it might
be worthwhile to ask them explicitly if they believe any babies are being
thrown out with the bathwater. Can a statement be x-ist, for whatever value of
x, and also true? If the answer is yes, then they're admitting to banning the
truth. That's obvious enough that I'd guess most would answer no. But if they
answer no, it's easy to show that they're mistaken, and that in practice such
labels are applied to statements regardless of their truth or falsity.
The clearest evidence of this is that whether a statement is considered x-ist
often depends on who said it. Truth doesn't work that way. The same statement
can't be true when one person says it, but x-ist, and therefore false, when
another person does. [2]
The other distinctive thing about heresies, compared to ordinary opinions, is
that the public expression of them outweighs everything else the speaker has
done. In ordinary matters, like knowledge of history, or taste in music,
you're judged by the average of your opinions. A heresy is qualitatively
different. It's like dropping a chunk of uranium onto the scale.
Back in the day (and still, in some places) the punishment for heresy was
death. You could have led a life of exemplary goodness, but if you publicly
doubted, say, the divinity of Christ, you were going to burn. Nowadays, in
civilized countries, heretics only get fired in the metaphorical sense, by
losing their jobs. But the structure of the situation is the same: the heresy
outweighs everything else. You could have spent the last ten years saving
children's lives, but if you express certain opinions, you're automatically
fired.
It's much the same as if you committed a crime. No matter how virtuously
you've lived, if you commit a crime, you must still suffer the penalty of the
law. Having lived a previously blameless life might mitigate the punishment,
but it doesn't affect whether you're guilty or not.
A heresy is an opinion whose expression is treated like a crime — one that
makes some people feel not merely that you're mistaken, but that you should be
punished. Indeed, their desire to see you punished is often stronger than it
would be if you'd committed an actual crime. There are many on the far left
who believe strongly in the reintegration of felons (as I do myself), and yet
seem to feel that anyone guilty of certain heresies should never work again.
There are always some heresies — some opinions you'd be punished for
expressing. But there are a lot more now than there were a few decades ago,
and even those who are happy about this would have to agree that it's so.
Why? Why has this antiquated-sounding religious concept come back in a secular
form? And why now?
You need two ingredients for a wave of intolerance: intolerant people, and an
ideology to guide them. The intolerant people are always there. They exist in
every sufficiently large society. That's why waves of intolerance can arise so
suddenly; all they need is something to set them off.
I've already written an [_essay_](conformism.html) describing the aggressively
conventional-minded. The short version is that people can be classified in two
dimensions according to (1) how independent- or conventional-minded they are,
and (2) how aggressive they are about it. The aggressively conventional-minded
are the enforcers of orthodoxy.
Normally they're only locally visible. They're the grumpy, censorious people
in a group — the ones who are always first to complain when something violates
the current rules of propriety. But occasionally, like a vector field whose
elements become aligned, a large number of aggressively conventional-minded
people unite behind some ideology all at once. Then they become much more of a
problem, because a mob dynamic takes over, where the enthusiasm of each
participant is increased by the enthusiasm of the others.
The most notorious 20th century case may have been the Cultural Revolution.
Though initiated by Mao to undermine his rivals, the Cultural Revolution was
otherwise mostly a grass-roots phenomenon. Mao said in essence: There are
heretics among us. Seek them out and punish them. And that's all the
aggressively conventional-minded ever need to hear. They went at it with the
delight of dogs chasing squirrels.
To unite the conventional-minded, an ideology must have many of the features
of a religion. In particular it must have strict and arbitrary rules that
adherents can demonstrate their
[_purity_](https://www.youtube.com/watch?v=qaHLd8de6nM) by obeying, and its
adherents must believe that anyone who obeys these rules is ipso facto morally
superior to anyone who doesn't. [3]
In the late 1980s a new ideology of this type appeared in US universities. It
had a very strong component of moral purity, and the aggressively
conventional-minded seized upon it with their usual eagerness — all the more
because the relaxation of social norms in the preceding decades meant there
had been less and less to forbid. The resulting wave of intolerance has been
eerily similar in form to the Cultural Revolution, though fortunately much
smaller in magnitude. [4]
I've deliberately avoided mentioning any specific heresies here. Partly
because one of the universal tactics of heretic hunters, now as in the past,
is to accuse those who disapprove of the way in which they suppress ideas of
being heretics themselves. Indeed, this tactic is so consistent that you could
use it as a way of detecting witch hunts in any era.
And that's the second reason I've avoided mentioning any specific heresies. I
want this essay to work in the future, not just now. And unfortunately it
probably will. The aggressively conventional-minded will always be among us,
looking for things to forbid. All they need is an ideology to tell them what.
And it's unlikely the current one will be the last.
There are aggressively conventional-minded people on both the right and the
left. The reason the current wave of intolerance comes from the left is simply
because the new unifying ideology happened to come from the left. The next one
might come from the right. Imagine what that would be like.
Fortunately in western countries the suppression of heresies is nothing like
as bad as it used to be. Though the window of opinions you can express
publicly has narrowed in the last decade, it's still much wider than it was a
few hundred years ago. The problem is the derivative. Up till about 1985 the
window had been growing ever wider. Anyone looking into the future in 1985
would have expected freedom of expression to continue to increase. Instead it
has decreased. [5]
The situation is similar to what's happened with infectious diseases like
measles. Anyone looking into the future in 2010 would have expected the number
of measles cases in the US to continue to decrease. Instead, thanks to anti-
vaxxers, it has increased. The absolute number is still not that high. The
problem is the derivative. [6]
In both cases it's hard to know how much to worry. Is it really dangerous to
society as a whole if a handful of extremists refuse to get their kids
vaccinated, or shout down speakers at universities? The point to start
worrying is presumably when their efforts start to spill over into everyone
else's lives. And in both cases that does seem to be happening.
So it's probably worth spending some amount of effort on pushing back to keep
open the window of free expression. My hope is that this essay will help form
social antibodies not just against current efforts to suppress ideas, but
against the concept of heresy in general. That's the real prize. How do you
disable the concept of heresy? Since the Enlightenment, western societies have
discovered many techniques for doing that, but there are surely more to be
discovered.
Overall I'm optimistic. Though the trend in freedom of expression has been bad
over the last decade, it's been good over the longer term. And there are signs
that the current wave of intolerance is peaking. Independent-minded people I
talk to seem more confident than they did a few years ago. On the other side,
even some of the
[_leaders_](https://www.nytimes.com/2022/03/18/opinion/cancel-culture-free-
speech-poll.html) are starting to wonder if things have gone too far. And
popular culture among the young has already moved on. All we have to do is
keep pushing back, and the wave collapses. And then we'll be net ahead,
because as well as having defeated this wave, we'll also have developed new
tactics for resisting the next one.
**Notes**
[1] Or more accurately, biographies of Newton, since Westfall wrote two: a
long version called _Never at Rest_ , and a shorter one called _The Life of
Isaac Newton_. Both are great. The short version moves faster, but the long
one is full of interesting and often very funny details. This passage is the
same in both.
[2] Another more subtle but equally damning bit of evidence is that claims of
x-ism are never qualified. You never hear anyone say that a statement is
"probably x-ist" or "almost certainly y-ist." If claims of x-ism were actually
claims about truth, you'd expect to see "probably" in front of "x-ist" as
often as you see it in front of "fallacious."
[3] The rules must be strict, but they need not be demanding. So the most
effective type of rules are those about superficial matters, like doctrinal
minutiae, or the precise words adherents must use. Such rules can be made
extremely complicated, and yet don't repel potential converts by requiring
significant sacrifice.
The superficial demands of orthodoxy make it an inexpensive substitute for
virtue. And that in turn is one of the reasons orthodoxy is so attractive to
bad people. You could be a horrible person, and yet as long as you're
orthodox, you're better than everyone who isn't.
[4] Arguably there were two. The first had died down somewhat by 2000, but was
followed by a second in the 2010s, probably caused by social media.
[5] Fortunately most of those trying to suppress ideas today still respect
Enlightenment principles enough to pay lip service to them. They know they're
not supposed to ban ideas per se, so they have to recast the ideas as causing
"harm," which sounds like something that can be banned. The more extreme try
to claim speech itself is violence, or even that silence is. But strange as it
may sound, such gymnastics are a good sign. We'll know we're really in trouble
when they stop bothering to invent pretenses for banning ideas — when, like
the medieval church, they say "Damn right we're banning ideas, and in fact
here's a list of them."
[6] People only have the luxury of ignoring the medical consensus about
vaccines because vaccines have worked so well. If we didn't have any vaccines
at all, the mortality rate would be so high that most current anti-vaxxers
would be begging for them. And the situation with freedom of expression is
similar. It's only because they live in a world created by the Enlightenment
that kids from the suburbs can play at banning ideas.
**Thanks** to Marc Andreessen, Chris Best, Trevor Blackwell, Nicholas
Christakis, Daniel Gackle, Jonathan Haidt, Claire Lehmann, Jessica Livingston,
Greg Lukianoff, Robert Morris, and Garry Tan for reading drafts of this.
December 2020
Jessica and I have certain words that have special significance when we're
talking about startups. The highest compliment we can pay to founders is to
describe them as "earnest." This is not by itself a guarantee of success. You
could be earnest but incapable. But when founders are both formidable (another
of our words) and earnest, they're as close to unstoppable as you get.
Earnestness sounds like a boring, even Victorian virtue. It seems a bit of an
anachronism that people in Silicon Valley would care about it. Why does this
matter so much?
When you call someone earnest, you're making a statement about their motives.
It means both that they're doing something for the right reasons, and that
they're trying as hard as they can. If we imagine motives as vectors, it means
both the direction and the magnitude are right. Though these are of course
related: when people are doing something for the right reasons, they try
harder. [1]
The reason motives matter so much in Silicon Valley is that so many people
there have the wrong ones. Starting a successful startup makes you rich and
famous. So a lot of the people trying to start them are doing it for those
reasons. Instead of what? Instead of interest in the problem for its own sake.
That is the root of earnestness. [2]
It's also the hallmark of a nerd. Indeed, when people describe themselves as
"x nerds," what they mean is that they're interested in x for its own sake,
and not because it's cool to be interested in x, or because of what they can
get from it. They're saying they care so much about x that they're willing to
sacrifice seeming cool for its sake.
A [_genuine interest_](genius.html) in something is a very powerful motivator
— for some people, the most powerful motivator of all. [3] Which is why it's
what Jessica and I look for in founders. But as well as being a source of
strength, it's also a source of vulnerability. Caring constrains you. The
earnest can't easily reply in kind to mocking banter, or put on a cool facade
of nihil admirari. They care too much. They are doomed to be the straight man.
That's a real disadvantage in your [_teenage years_](nerds.html), when mocking
banter and nihil admirari often have the upper hand. But it becomes an
advantage later.
It's a commonplace now that the kids who were nerds in high school become the
cool kids' bosses later on. But people misunderstand why this happens. It's
not just because the nerds are smarter, but also because they're more earnest.
When the problems get harder than the fake ones you're given in high school,
caring about them starts to matter.
Does it always matter? Do the earnest always win? Not always. It probably
doesn't matter much in politics, or in crime, or in certain types of business
that are similar to crime, like gambling, personal injury law, patent
trolling, and so on. Nor does it matter in academic fields at the more
[_bogus_](https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=hermeneutic+dialectics+hegemonic+phenomenology+intersectionality)
end of the spectrum. And though I don't know enough to say for sure, it may
not matter in some kinds of humor: it may be possible to be completely cynical
and still be very funny. [4]
Looking at the list of fields I mentioned, there's an obvious pattern. Except
possibly for humor, these are all types of work I'd avoid like the plague. So
that could be a useful heuristic for deciding which fields to work in: how
much does earnestness matter? Which can in turn presumably be inferred from
the prevalence of nerds at the top.
Along with "nerd," another word that tends to be associated with earnestness
is "naive." The earnest often seem naive. It's not just that they don't have
the motives other people have. They often don't fully grasp that such motives
exist. Or they may know intellectually that they do, but because they don't
feel them, they forget about them. [5]
It works to be slightly naive not just about motives but also, believe it or
not, about the problems you're working on. Naive optimism can compensate for
the bit rot that [_rapid change_](ecw.html) causes in established beliefs. You
plunge into some problem saying "How hard can it be?", and then after solving
it you learn that it was till recently insoluble.
Naivete is an obstacle for anyone who wants to seem sophisticated, and this is
one reason would-be intellectuals find it so difficult to understand Silicon
Valley. It hasn't been safe for such people to use the word "earnest" outside
scare quotes since Oscar Wilde wrote "The Importance of Being Earnest" in
1895. And yet when you zoom in on Silicon Valley, right into [_Jessica
Livingston's brain_](jessica.html), that's what her x-ray vision is seeking
out in founders. Earnestness! Who'd have guessed? Reporters literally can't
believe it when founders making piles of money say that they started their
companies to make the world better. The situation seems made for mockery. How
can these founders be so naive as not to realize how implausible they sound?
Though those asking this question don't realize it, that's not a rhetorical
question.
A lot of founders are faking it, of course, particularly the smaller fry, and
the soon to be smaller fry. But not all of them. There are a significant
number of founders who really are interested in the problem they're solving
mainly for its own sake.
Why shouldn't there be? We have no difficulty believing that people would be
interested in history or math or even old bus tickets for their own sake. Why
can't there be people interested in self-driving cars or social networks for
their own sake? When you look at the question from this side, it seems obvious
there would be. And isn't it likely that having a deep interest in something
would be a source of great energy and resilience? It is in every other field.
The question really is why we have a blind spot about business. And the answer
to that is obvious if you know enough history. For most of history, making
large amounts of money has not been very intellectually interesting. In
preindustrial times it was never far from robbery, and some areas of business
still retain that character, except using lawyers instead of soldiers.
But there are other areas of business where the work is genuinely interesting.
Henry Ford got to spend much of his time working on interesting technical
problems, and for the last several decades the trend in that direction has
been accelerating. It's much easier now to make a lot of money by working on
something you're interested in than it was [_50 years ago_](re.html). And
that, rather than how fast they grow, may be the most important change that
startups represent. Though indeed, the fact that the work is genuinely
interesting is a big part of why it gets done so fast. [6]
Can you imagine a more important change than one in the relationship between
intellectual curiosity and money? These are two of the most powerful forces in
the world, and in my lifetime they've become significantly more aligned. How
could you not be fascinated to watch something like this happening in real
time?
I meant this essay to be about earnestness generally, and now I've gone and
talked about startups again. But I suppose at least it serves as an example of
an x nerd in the wild.
**Notes**
[1] It's interesting how many different ways there are _not_ to be earnest: to
be cleverly cynical, to be superficially brilliant, to be conspicuously
virtuous, to be cool, to be sophisticated, to be orthodox, to be a snob, to
bully, to pander, to be on the make. This pattern suggests that earnestness is
not one end of a continuum, but a target one can fall short of in multiple
dimensions.
Another thing I notice about this list is that it sounds like a list of the
ways people behave on Twitter. Whatever else social media is, it's a vivid
catalogue of ways not to be earnest.
[2] People's motives are as mixed in Silicon Valley as anywhere else. Even the
founders motivated mostly by money tend to be at least somewhat interested in
the problem they're solving, and even the founders most interested in the
problem they're solving also like the idea of getting rich. But there's great
variation in the relative proportions of different founders' motivations.
And when I talk about "wrong" motives, I don't mean morally wrong. There's
nothing morally wrong with starting a startup to make money. I just mean that
those startups don't do as well.
[3] The most powerful motivator for most people is probably family. But there
are some for whom intellectual curiosity comes first. In his (wonderful)
autobiography, Paul Halmos says explicitly that for a mathematician, math must
come before anything else, including family. Which at least implies that it
did for him.
[4] Interestingly, just as the word "nerd" implies earnestness even when used
as a metaphor, the word "politics" implies the opposite. It's not only in
actual politics that earnestness seems to be a handicap, but also in office
politics and academic politics.
[5] It's a bigger social error to seem naive in most European countries than
it is in America, and this may be one of the subtler reasons startups are less
common there. Founder culture is completely at odds with sophisticated
cynicism.
The most earnest part of Europe is Scandinavia, and not surprisingly this is
also the region with the highest number of successful startups per capita.
[6] Much of business is schleps, and probably always will be. But even being a
professor is largely schleps. It would be interesting to collect statistics
about the schlep ratios of different jobs, but I suspect they'd rarely be less
than 30%.
**Thanks** to Trevor Blackwell, Patrick Collison, Suhail Doshi, Jessica
Livingston, Mattias Ljungman, Harj Taggar, and Kyle Vogt for reading drafts of
this.
September 2009
When meeting people you don't know very well, the convention is to seem extra
friendly. You smile and say "pleased to meet you," whether you are or not.
There's nothing dishonest about this. Everyone knows that these little social
lies aren't meant to be taken literally, just as everyone knows that "Can you
pass the salt?" is only grammatically a question.
I'm perfectly willing to smile and say "pleased to meet you" when meeting new
people. But there is another set of customs for being ingratiating in print
that are not so harmless.
The reason there's a convention of being ingratiating in print is that most
essays are written to persuade. And as any politician could tell you, the way
to persuade people is not just to baldly state the facts. You have to add a
spoonful of sugar to make the medicine go down.
For example, a politician announcing the cancellation of a government program
will not merely say "The program is canceled." That would seem offensively
curt. Instead he'll spend most of his time talking about the noble effort made
by the people who worked on it.
The reason these conventions are more dangerous is that they interact with the
ideas. Saying "pleased to meet you" is just something you prepend to a
conversation, but the sort of spin added by politicians is woven through it.
We're starting to move from social lies to real lies.
Here's an example of a paragraph from an essay I wrote about [labor
unions](unions.html). As written, it tends to offend people who like unions.
> People who think the labor movement was the creation of heroic union
> organizers have a problem to explain: why are unions shrinking now? The best
> they can do is fall back on the default explanation of people living in
> fallen civilizations. Our ancestors were giants. The workers of the early
> twentieth century must have had a moral courage that's lacking today.
Now here's the same paragraph rewritten to please instead of offending them:
> Early union organizers made heroic sacrifices to improve conditions for
> workers. But though labor unions are shrinking now, it's not because present
> union leaders are any less courageous. An employer couldn't get away with
> hiring thugs to beat up union leaders today, but if they did, I see no
> reason to believe today's union leaders would shrink from the challenge. So
> I think it would be a mistake to attribute the decline of unions to some
> kind of decline in the people who run them. Early union leaders were heroic,
> certainly, but we should not suppose that if unions have declined, it's
> because present union leaders are somehow inferior. The cause must be
> external. [1]
It makes the same point: that it can't have been the personal qualities of
early union organizers that made unions successful, but must have been some
external factor, or otherwise present-day union leaders would have to be
inferior people. But written this way it seems like a defense of present-day
union organizers rather than an attack on early ones. That makes it more
persuasive to people who like unions, because it seems sympathetic to their
cause.
I believe everything I wrote in the second version. Early union leaders did
make heroic sacrifices. And present union leaders probably would rise to the
occasion if necessary. People tend to; I'm skeptical about the idea of "the
greatest generation." [2]
If I believe everything I said in the second version, why didn't I write it
that way? Why offend people needlessly?
Because I'd rather offend people than pander to them, and if you write about
controversial topics you have to choose one or the other. The degree of
courage of past or present union leaders is beside the point; all that matters
for the argument is that they're the same. But if you want to please people
who are mistaken, you can't simply tell the truth. You're always going to have
to add some sort of padding to protect their misconceptions from bumping
against reality.
Most writers do. Most writers write to persuade, if only out of habit or
politeness. But I don't write to persuade; I write to figure out. I write to
persuade a hypothetical perfectly unbiased reader.
Since the custom is to write to persuade the actual reader, someone who
doesn't will seem arrogant. In fact, worse than arrogant: since readers are
used to essays that try to please someone, an essay that displeases one side
in a dispute reads as an attempt to pander to the other. To a lot of pro-union
readers, the first paragraph sounds like the sort of thing a right-wing radio
talk show host would say to stir up his followers. But it's not. Something
that curtly contradicts one's beliefs can be hard to distinguish from a
partisan attack on them, but though they can end up in the same place they
come from different sources.
Would it be so bad to add a few extra words, to make people feel better? Maybe
not. Maybe I'm excessively attached to conciseness. I write [code](power.html)
the same way I write essays, making pass after pass looking for anything I can
cut. But I have a legitimate reason for doing this. You don't know what the
ideas are until you get them down to the fewest words. [3]
The danger of the second paragraph is not merely that it's longer. It's that
you start to lie to yourself. The ideas start to get mixed together with the
spin you've added to get them past the readers' misconceptions.
I think the goal of an essay should be to discover [surprising](essay.html)
things. That's my goal, at least. And most surprising means most different
from what people currently believe. So writing to persuade and writing to
discover are diametrically opposed. The more your conclusions disagree with
readers' present beliefs, the more effort you'll have to expend on selling
your ideas rather than having them. As you accelerate, this drag increases,
till eventually you reach a point where 100% of your energy is devoted to
overcoming it and you can't go any faster.
It's hard enough to overcome one's own misconceptions without having to think
about how to get the resulting ideas past other people's. I worry that if I
wrote to persuade, I'd start to shy away unconsciously from ideas I knew would
be hard to sell. When I notice something surprising, it's usually very faint
at first. There's nothing more than a slight stirring of discomfort. I don't
want anything to get in the way of noticing it consciously.
**Notes**
[1] I had a strange feeling of being back in high school writing this. To get
a good grade you had to both write the sort of pious crap you were expected
to, but also seem to be writing with conviction. The solution was a kind of
method acting. It was revoltingly familiar to slip back into it.
[2] Exercise for the reader: rephrase that thought to please the same people
the first version would offend.
[3] Come to think of it, there is one way in which I deliberately pander to
readers, because it doesn't change the number of words: I switch person. This
flattering distinction seems so natural to the average reader that they
probably don't notice even when I switch in mid-sentence, though you tend to
notice when it's done as conspicuously as this.
**Thanks** to Jessica Livingston and Robert Morris for reading drafts of this.
**Note:** An earlier version of this essay began by talking about why people
dislike Michael Arrington. I now believe that was mistaken, and that most
people don't dislike him for the same reason I did when I first met him, but
simply because he writes about controversial things.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
August 2010
Two years ago I [wrote](http://www.paulgraham.com/googles.html#next) about
what I called "a huge, unexploited opportunity in startup funding:" the
growing disconnect between VCs, whose current business model requires them to
invest large amounts, and a large class of startups that need less than they
used to. Increasingly, startups want a couple hundred thousand dollars, not a
couple million. [1]
The opportunity is a lot less unexploited now. Investors have poured into this
territory from both directions. VCs are much more likely to make angel-sized
investments than they were a year ago. And meanwhile the past year has seen a
dramatic increase in a new type of investor: the super-angel, who operates
like an angel, but using other people's money, like a VC.
Though a lot of investors are entering this territory, there is still room for
more. The distribution of investors should mirror the distribution of
startups, which has the usual power law dropoff. So there should be a lot more
people investing tens or hundreds of thousands than millions. [2]
In fact, it may be good for angels that there are more people doing angel-
sized deals, because if angel rounds become more legitimate, then startups may
start to opt for angel rounds even when they could, if they wanted, raise
series A rounds from VCs. One reason startups prefer series A rounds is that
they're more prestigious. But if angel investors become more active and better
known, they'll increasingly be able to compete with VCs in brand.
Of course, prestige isn't the main reason to prefer a series A round. A
startup will probably get more attention from investors in a series A round
than an angel round. So if a startup is choosing between an angel round and an
A round from a good VC fund, I usually advise them to take the A round. [3]
But while series A rounds aren't going away, I think VCs should be more
worried about super-angels than vice versa. Despite their name, the super-
angels are really mini VC funds, and they clearly have existing VCs in their
sights.
They would seem to have history on their side. The pattern here seems the same
one we see when startups and established companies enter a new market. Online
video becomes possible, and YouTube plunges right in, while existing media
companies embrace it only half-willingly, driven more by fear than hope, and
aiming more to protect their turf than to do great things for users. Ditto for
PayPal. This pattern is repeated over and over, and it's usually the invaders
who win. In this case the super-angels are the invaders. Angel rounds are
their whole business, as online video was for YouTube. Whereas VCs who make
angel investments mostly do it as a way to generate deal flow for series A
rounds. [4]
On the other hand, startup investing is a very strange business. Nearly all
the returns are concentrated in a few big winners. If the super-angels merely
fail to invest in (and to some extent produce) the big winners, they'll be out
of business, even if they invest in all the others.
**VCs**
Why don't VCs start doing smaller series A rounds? The sticking point is board
seats. In a traditional series A round, the partner whose deal it is takes a
seat on the startup's board. If we assume the average startup runs for 6 years
and a partner can bear to be on 12 boards at once, then a VC fund can do 2
series A deals per partner per year.
It has always seemed to me the solution is to take fewer board seats. You
don't have to be on the board to help a startup. Maybe VCs feel they need the
power that comes with board membership to ensure their money isn't wasted. But
have they tested that theory? Unless they've tried not taking board seats and
found their returns are lower, they're not bracketing the problem.
I'm not saying VCs don't help startups. The good ones help them a lot. What
I'm saying is that the kind of help that matters, you may not have to be a
board member to give. [5]
How will this all play out? Some VCs will probably adapt, by doing more,
smaller deals. I wouldn't be surprised if by streamlining their selection
process and taking fewer board seats, VC funds could do 2 to 3 times as many
series A rounds with no loss of quality.
But other VCs will make no more than superficial changes. VCs are
conservative, and the threat to them isn't mortal. The VC funds that don't
adapt won't be violently displaced. They'll edge gradually into a different
business without realizing it. They'll still do what they will call series A
rounds, but these will increasingly be de facto series B rounds. [6]
In such rounds they won't get the 25 to 40% of the company they do now. You
don't give up as much of the company in later rounds unless something is
seriously wrong. Since the VCs who don't adapt will be investing later, their
returns from winners may be smaller. But investing later should also mean they
have fewer losers. So their ratio of risk to return may be the same or even
better. They'll just have become a different, more conservative, type of
investment.
**Angels**
In the big angel rounds that increasingly compete with series A rounds, the
investors won't take as much equity as VCs do now. And VCs who try to compete
with angels by doing more, smaller deals will probably find they have to take
less equity to do it. Which is good news for founders: they'll get to keep
more of the company.
The deal terms of angel rounds will become less restrictive too—not just less
restrictive than series A terms, but less restrictive than angel terms have
traditionally been.
In the future, angel rounds will less often be for specific amounts or have a
lead investor. In the old days, the standard m.o. for startups was to find one
angel to act as the lead investor. You'd negotiate a round size and valuation
with the lead, who'd supply some but not all of the money. Then the startup
and the lead would cooperate to find the rest.
The future of angel rounds looks more like this: instead of a fixed round
size, startups will do a rolling close, where they take money from investors
one at a time till they feel they have enough. [7] And though there's going to
be one investor who gives them the first check, and his or her help in
recruiting other investors will certainly be welcome, this initial investor
will no longer be the lead in the old sense of managing the round. The startup
will now do that themselves.
There will continue to be lead investors in the sense of investors who take
the lead in _advising_ a startup. They may also make the biggest investment.
But they won't always have to be the one terms are negotiated with, or be the
first money in, as they have in the past. Standardized paperwork will do away
with the need to negotiate anything except the valuation, and that will get
easier too.
If multiple investors have to share a valuation, it will be whatever the
startup can get from the first one to write a check, limited by their guess at
whether this will make later investors balk. But there may not have to be just
one valuation. Startups are increasingly raising money on convertible notes,
and convertible notes have not valuations but at most valuation _caps_ : caps
on what the effective valuation will be when the debt converts to equity (in a
later round, or upon acquisition if that happens first). That's an important
difference because it means a startup could do multiple notes at once with
different caps. This is now starting to happen, and I predict it will become
more common.
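To make the cap mechanics concrete, here's a minimal sketch in Common Lisp
with hypothetical numbers, ignoring discounts, interest, and the finer points
of share mechanics (the function names are mine, not standard terms):

    ;; A capped note converts as if the valuation were the lower of the
    ;; cap and the valuation of the round it converts in.
    (defun effective-valuation (round-valuation cap)
      (min round-valuation cap))

    ;; Rough post-money ownership the note holder ends up with.
    (defun note-ownership (investment round-valuation cap)
      (let ((v (effective-valuation round-valuation cap)))
        (float (/ investment (+ v investment)))))

    ;; Two $100k notes outstanding at once, with different caps,
    ;; converting in a later round at a $10M valuation:
    (note-ownership 100000 10000000 4000000)  ; => ~0.024 (2.4%)
    (note-ownership 100000 10000000 8000000)  ; => ~0.012 (1.2%)

The point of the example: because each note carries its own cap rather than a
shared valuation, the two can coexist in the same startup without any
negotiation between the investors.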
**Sheep**
The reason things are moving this way is that the old way sucked for startups.
Leads could (and did) use a fixed size round as a legitimate-seeming way of
saying what all founders hate to hear: I'll invest if other people will. Most
investors, unable to judge startups for themselves, rely instead on the
opinions of other investors. If everyone wants in, they want in too; if not,
not. Founders hate this because it's a recipe for deadlock, and delay is the
thing a startup can least afford. Most investors know this m.o. is lame, and
few say openly that they're doing it. But the craftier ones achieve the same
result by offering to lead rounds of fixed size and supplying only part of the
money. If the startup can't raise the rest, the lead is out too. How could
they go ahead with the deal? The startup would be underfunded!
In the future, investors will increasingly be unable to offer investment
subject to contingencies like other people investing. Or rather, investors who
do that will get last place in line. Startups will go to them only to fill up
rounds that are mostly subscribed. And since hot startups tend to have rounds
that are oversubscribed, being last in line means they'll probably miss the
hot deals. Hot deals and successful startups are not identical, but there is a
significant correlation. [8] So investors who won't invest unilaterally will
have lower returns.
Investors will probably find they do better when deprived of this crutch
anyway. Chasing hot deals doesn't make investors choose better; it just makes
them feel better about their choices. I've seen feeding frenzies both form and
fall apart many times, and as far as I can tell they're mostly random. [9] If
investors can no longer rely on their herd instincts, they'll have to think
more about each startup before investing. They may be surprised how well this
works.
Deadlock wasn't the only disadvantage of letting a lead investor manage an
angel round. The investors would not infrequently collude to push down the
valuation. And rounds took too long to close, because however motivated the
lead was to get the round closed, he was not a tenth as motivated as the
startup.
Increasingly, startups are taking charge of their own angel rounds. Only a few
do so far, but I think we can already declare the old way dead, because those
few are the best startups. They're the ones in a position to tell investors
how the round is going to work. And if the startups you want to invest in do
things a certain way, what difference does it make what the others do?
**Traction**
In fact, it may be slightly misleading to say that angel rounds will
increasingly take the place of series A rounds. What's really happening is
that startup-controlled rounds are taking the place of investor-controlled
rounds.
This is an instance of a very important meta-trend, one that Y Combinator
itself has been based on from the beginning: founders are becoming
increasingly powerful relative to investors. So if you want to predict what
the future of venture funding will be like, just ask: how would founders like
it to be? One by one, all the things founders dislike about raising money are
going to get eliminated. [10]
Using that heuristic, I'll predict a couple more things. One is that investors
will increasingly be unable to wait for startups to have "traction" before
they put in significant money. It's hard to predict in advance which startups
will succeed. So most investors prefer, if they can, to wait till the startup
is already succeeding, then jump in quickly with an offer. Startups hate this
as well, partly because it tends to create deadlock, and partly because it
seems kind of slimy. If you're a promising startup but don't yet have
significant growth, all the investors are your friends in words, but few are
in actions. They all say they love you, but they all wait to invest. Then when
you start to see growth, they claim they were your friend all along, and are
aghast at the thought you'd be so disloyal as to leave them out of your round.
If founders become more powerful, they'll be able to make investors give them
more money upfront.
(The worst variant of this behavior is the tranched deal, where the investor
makes a small initial investment, with more to follow if the startup does
well. In effect, this structure gives the investor a free option on the next
round, which they'll only take if it's worse for the startup than they could
get in the open market. Tranched deals are an abuse. They're increasingly
rare, and they're going to get rarer.) [11]
Investors don't like trying to predict which startups will succeed, but
increasingly they'll have to. Though the way that happens won't necessarily be
that the behavior of existing investors will change; it may instead be that
they'll be replaced by other investors with different behavior—that investors
who understand startups well enough to take on the hard problem of predicting
their trajectory will tend to displace suits whose skills lie more in raising
money from LPs.
**Speed**
The other thing founders hate most about fundraising is how long it takes. So
as founders become more powerful, rounds should start to close faster.
Fundraising is still terribly distracting for startups. If you're a founder in
the middle of raising a round, the round is the [top idea in your
mind](top.html), which means working on the company isn't. If a round takes 2
months to close, which is reasonably fast by present standards, that means 2
months during which the company is basically treading water. That's the worst
thing a startup could do.
So if investors want to get the best deals, the way to do it will be to close
faster. Investors don't need weeks to make up their minds anyway. We decide
based on about 10 minutes of reading an application plus 10 minutes of
in-person interview, and we only regret about 10% of our decisions. If we can
decide in 20 minutes, surely the next round of investors can decide in a
couple days. [12]
There are a lot of institutionalized delays in startup funding: the multi-week
mating dance with investors; the distinction between termsheets and deals; the
fact that each series A has enormously elaborate, custom paperwork. Both
founders and investors tend to take these for granted. It's the way things
have always been. But ultimately the reason these delays exist is that they're
to the advantage of investors. More time gives investors more information
about a startup's trajectory, and it also tends to make startups more pliable
in negotiations, since they're usually short of money.
These conventions weren't designed to drag out the funding process, but that's
why they're allowed to persist. Slowness is to the advantage of investors, who
have in the past been the ones with the most power. But there is no need for
rounds to take months or even weeks to close, and once founders realize that,
it's going to stop. Not just in angel rounds, but in series A rounds too. The
future is simple deals with standard terms, done quickly.
One minor abuse that will get corrected in the process is option pools. In a
traditional series A round, before the VCs invest they make the company set
aside a block of stock for future hires—usually between 10 and 30% of the
company. The point is to ensure this dilution is borne by the existing
shareholders. The practice isn't dishonest; founders know what's going on. But
it makes deals unnecessarily complicated. In effect the valuation is 2
numbers. There's no need to keep doing this. [13]
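Here's the arithmetic behind "the valuation is 2 numbers," with hypothetical
figures:

    ;; Hypothetical terms: $2M invested at a nominal $8M pre-money,
    ;; with a 20%-of-post-money option pool created before closing.
    ;; The pool dilutes only the existing shareholders, so the effective
    ;; pre-money valuation is lower than the nominal one.
    (let* ((pre 8000000)
           (investment 2000000)
           (post (+ pre investment))   ; $10M post-money
           (pool (* 2/10 post)))       ; $2M of stock set aside for hires
      (- pre pool))                    ; => 6000000, the second number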
The final thing founders want is to be able to sell some of their own stock in
later rounds. This won't be a change, because the practice is now quite
common. A lot of investors hated the idea, but the world hasn't exploded as a
result, so it will happen more, and more openly.
**Surprise**
I've talked here about a bunch of changes that will be forced on investors as
founders become more powerful. Now the good news: investors may actually make
more money as a result.
A couple days ago an interviewer [asked
me](http://techcrunch.tv/watch?id=Q3amZtMTryrpiP80cbUtsV2ah92eZP2m) if
founders having more power would be better or worse for the world. I was
surprised, because I'd never considered that question. Better or worse, it's
happening. But after a second's reflection, the answer seemed obvious.
Founders understand their companies better than investors, and it has to be
better if the people with more knowledge have more power.
One of the mistakes novice pilots make is overcontrolling the aircraft:
applying corrections too vigorously, so the aircraft oscillates about the
desired configuration instead of approaching it asymptotically. It seems
probable that investors have till now on average been overcontrolling their
portfolio companies. In a lot of startups, the biggest source of stress for
the founders is not competitors but investors. Certainly it was for us at
Viaweb. And this is not a new phenomenon: investors were James Watt's biggest
problem too. If having less power prevents investors from overcontrolling
startups, it should be better not just for founders but for investors too.
Investors may end up with less stock per startup, but startups will probably
do better with founders more in control, and there will almost certainly be
more of them. Investors all compete with one another for deals, but they
aren't one another's main competitor. Our main competitor is employers. And so
far that competitor is crushing us. Only a tiny fraction of people who could
start a startup do. Nearly all customers choose the competing product, a job.
Why? Well, let's look at the product we're offering. An unbiased review would
go something like this:
> Starting a startup gives you more freedom and the opportunity to make a lot
> more money than a job, but it's also hard work and at times very stressful.
Much of the stress comes from dealing with investors. If reforming the
investment process removed that stress, we'd make our product much more
attractive. The kind of people who make good startup founders don't mind
dealing with technical problems—they enjoy technical problems—but they hate
the type of problems investors cause.
Investors have no idea that when they maltreat one startup, they're preventing
10 others from happening, but they are. Indirectly, but they are. So when
investors stop trying to squeeze a little more out of their existing deals,
they'll find they're net ahead, because so many more new deals appear.
One of our axioms at Y Combinator is not to think of deal flow as a zero-sum
game. Our main focus is to encourage more startups to happen, not to win a
larger share of the existing stream. We've found this principle very useful,
and we think as it spreads outward it will help later stage investors as well.
"Make something people want" applies to us too.
**Notes**
[1] In this essay I'm talking mainly about software startups. These points
don't apply to types of startups that are still expensive to start, e.g. in
energy or biotech.
Even the cheap kinds of startups will generally raise large amounts at some
point, when they want to hire a lot of people. What has changed is how much
they can get done before that.
[2] It's not the distribution of good startups that has a power law dropoff,
but the distribution of potentially good startups, which is to say, good
deals. There are lots of potential winners, from which a few actual winners
emerge with superlinear certainty.
[3] As I was writing this, I asked some founders who'd taken series A rounds
from top VC funds whether it was worth it, and they unanimously said yes.
The quality of investor is more important than the type of round, though. I'd
take an angel round from good angels over a series A from a mediocre VC.
[4] Founders also worry that taking an angel investment from a VC means
they'll look bad if the VC declines to participate in the next round. The
trend of VC angel investing is so new that it's hard to say how justified this
worry is.
Another danger, pointed out by Mitch Kapor, is that if VCs are only doing
angel deals to generate series A deal flow, then their incentives aren't
aligned with the founders'. The founders want the valuation of the next round
to be high, and the VCs want it to be low. Again, hard to say yet how much of
a problem this will be.
[5] Josh Kopelman pointed out that another way to be on fewer boards at once
is to take board seats for shorter periods.
[6] Google was in this respect as so many others the pattern for the future.
It would be great for VCs if the similarity extended to returns. That's
probably too much to hope for, but the returns may be somewhat higher, as I
explain later.
[7] Doing a rolling close doesn't mean the company is always raising money.
That would be a distraction. The point of a rolling close is to make
fundraising take less time, not more. With a classic fixed-size round, you
don't get any money till all the investors agree, and that often creates a
situation where they all sit waiting for the others to act. A rolling close
usually prevents this.
[8] There are two (non-exclusive) causes of hot deals: the quality of the
company, and domino effects among investors. The former is obviously a better
predictor of success.
[9] Some of the randomness is concealed by the fact that investment is a self
fulfilling prophecy.
[10] The shift in power to founders is exaggerated now because it's a seller's
market. On the next downtick it will seem like I overstated the case. But on
the next uptick after that, founders will seem more powerful than ever.
[11] More generally, it will become less common for the same investor to
invest in successive rounds, except when exercising an option to maintain
their percentage. When the same investor invests in successive rounds, it
often means the startup isn't getting market price. They may not care; they
may prefer to work with an investor they already know; but as the investment
market becomes more efficient, it will become increasingly easy to get market
price if they want it. Which in turn means the investment community will tend
to become more stratified.
[12] The two 10 minuteses have 3 weeks between them so founders can get cheap
plane tickets, but except for that they could be adjacent.
[13] I'm not saying option pools themselves will go away. They're an
administrative convenience. What will go away is investors requiring them.
**Thanks** to Sam Altman, John Bautista, Trevor Blackwell, Paul Buchheit, Jeff
Clavier, Patrick Collison, Ron Conway, Matt Cohler, Chris Dixon, Mitch Kapor,
Josh Kopelman, Pete Koomen, Carolynn Levy, Jessica Livingston, Ariel Poler,
Geoff Ralston, Naval Ravikant, Dan Siroker, Harj Taggar, and Fred Wilson for
reading drafts of this.
March 2006, rev August 2009
A couple days ago I found to my surprise that I'd been granted a
[patent](http://patft.uspto.gov/netacgi/nph-
Parser?Sect1=PTO1&Sect2=HITOFF&d=PALL&p=1&u=%2Fnetahtml%2FPTO%2Fsrchnum.htm&r=1&f=G&l=50&s1=6,631,372.PN.&OS=PN/6,631,372&RS=PN/6,631,372).
It issued in 2003, but no one told me. I wouldn't know about it now except
that a few months ago, while visiting Yahoo, I happened to run into a Big
Cheese I knew from working there in the late nineties. He brought up something
called Revenue Loop, which Viaweb had been working on when they bought us.
The idea is basically that you sort search results not in order of textual
"relevance" (as search engines did then) nor in order of how much advertisers
bid (as Overture did) but in order of the bid times the number of
transactions. Ordinarily you'd do this for shopping searches, though in fact
one of the features of our scheme is that it automatically detects which
searches are shopping searches.
If you just order the results in order of bids, you can make the search
results useless, because the first results could be dominated by lame sites
that had bid the most. But if you order results by bid multiplied by
transactions, far from selling out, you're getting a _better_ measure of
relevance. What could be a better sign that someone was satisfied with a
search result than going to the site and buying something?
And, of course, this algorithm automatically maximizes the revenue of the
search engine.
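Here's a minimal sketch of the ranking rule (my reconstruction of the idea,
not the patented implementation): order results by bid times observed
transactions instead of by bid alone.

    ;; Each result is a plist with :bid (what the advertiser pays) and
    ;; :transactions (purchases attributed to that result).
    (defun revenue-loop-sort (results)
      (sort (copy-list results) #'>   ; sort is destructive, so copy first
            :key (lambda (r) (* (getf r :bid) (getf r :transactions)))))

    ;; A lower bid with real sales outranks a higher bid with none:
    (revenue-loop-sort '((:bid 9 :transactions 0) (:bid 5 :transactions 2)))
    ;; => ((:BID 5 :TRANSACTIONS 2) (:BID 9 :TRANSACTIONS 0))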
Everyone is focused on this type of approach now, but few were in 1998. In
1998 it was all about selling banner ads. We didn't know that, so we were
pretty excited when we figured out what seemed to us the optimal way of doing
shopping searches.
When Yahoo was thinking of buying us, we had a meeting with Jerry Yang in New
York. For him, I now realize, this was supposed to be one of those meetings
when you check out a company you've pretty much decided to buy, just to make
sure they're ok guys. We weren't expected to do more than chat and seem smart
and reasonable. He must have been dismayed when I jumped up to the whiteboard
and launched into a presentation of our exciting new technology.
I was just as dismayed when he didn't seem to care at all about it. At the
time I thought, "boy, is this guy poker-faced. We present to him what has to
be the optimal way of sorting product search results, and he's not even
curious." I didn't realize till much later why he didn't care. In 1998,
advertisers were overpaying enormously for ads on web sites. In 1998, if
advertisers paid the maximum that traffic was worth to them, Yahoo's revenues
would have _decreased._
Things are different now, of course. Now this sort of thing is all the rage.
So when I ran into the Yahoo exec I knew from the old days in the Yahoo
cafeteria a few months ago, the first thing he remembered was not
(fortunately) all the fights I had with him, but Revenue Loop.
"Well," I said, "I think we actually applied for a patent on it. I'm not sure
what happened to the application after I left."
"Really? That would be an important patent."
So someone investigated, and sure enough, that patent application had
continued in the pipeline for several years after, and finally issued in 2003.
The main thing that struck me on reading it, actually, is that lawyers at some
point messed up my nice clear writing. Some clever person with a spell checker
reduced one section to Zen-like incomprehensibility:
> Also, common spelling errors will tend to get fixed. For example, if users
> searching for "compact disc player" end up spending considerable money at
> sites offering compact disc players, then those pages will have a higher
> relevance for that search phrase, even though the phrase "compact disc
> player" is not present on those pages.
(That "compat disc player" wasn't a typo, guys.)
For the fine prose of the original, see the provisional application of
February 1998, back when we were still Viaweb and couldn't afford to pay
lawyers to turn every "a lot of" into "considerable."
**Like to build things?** Try [Hacker News](http://news.ycombinator.com).
August 2002
_(This article describes the spam-filtering techniques used in the spamproof
web-based mail reader we built to exercise[Arc](arc.html). An improved
algorithm is described in [Better Bayesian Filtering](better.html).)_
I think it's possible to stop spam, and that content-based filters are the way
to do it. The Achilles heel of the spammers is their message. They can
circumvent any other barrier you set up. They have so far, at least. But they
have to deliver their message, whatever it is. If we can write software that
recognizes their messages, there is no way they can get around that.
_ _ _
To the recipient, spam is easily recognizable. If you hired someone to read
your mail and discard the spam, they would have little trouble doing it. How
much do we have to do, short of AI, to automate this process?
I think we will be able to solve the problem with fairly simple algorithms. In
fact, I've found that you can filter present-day spam acceptably well using
nothing more than a Bayesian combination of the spam probabilities of
individual words. Using a slightly tweaked (as described below) Bayesian
filter, we now miss less than 5 per 1000 spams, with 0 false positives.
The statistical approach is not usually the first one people try when they
write spam filters. Most hackers' first instinct is to try to write software
that recognizes individual properties of spam. You look at spams and you
think, the gall of these guys to try sending me mail that begins "Dear Friend"
or has a subject line that's all uppercase and ends in eight exclamation
points. I can filter out that stuff with about one line of code.
And so you do, and in the beginning it works. A few simple rules will take a
big bite out of your incoming spam. Merely looking for the word "click" will
catch 79.7% of the emails in my spam corpus, with only 1.2% false positives.
I spent about six months writing software that looked for individual spam
features before I tried the statistical approach. What I found was that
recognizing that last few percent of spams got very hard, and that as I made
the filters stricter I got more false positives.
False positives are innocent emails that get mistakenly identified as spams.
For most users, missing legitimate email is an order of magnitude worse than
receiving spam, so a filter that yields false positives is like an acne cure
that carries a risk of death to the patient.
The more spam a user gets, the less likely he'll be to notice one innocent
mail sitting in his spam folder. And strangely enough, the better your spam
filters get, the more dangerous false positives become, because when the
filters are really good, users will be more likely to ignore everything they
catch.
I don't know why I avoided trying the statistical approach for so long. I
think it was because I got addicted to trying to identify spam features
myself, as if I were playing some kind of competitive game with the spammers.
(Nonhackers don't often realize this, but most hackers are very competitive.)
When I did try statistical analysis, I found immediately that it was much
cleverer than I had been. It discovered, of course, that terms like
"virtumundo" and "teens" were good indicators of spam. But it also discovered
that "per" and "FL" and "ff0000" are good indicators of spam. In fact,
"ff0000" (html for bright red) turns out to be as good an indicator of spam as
any pornographic term.
_ _ _
Here's a sketch of how I do statistical filtering. I start with one corpus of
spam and one of nonspam mail. At the moment each one has about 4000 messages
in it. I scan the entire text, including headers and embedded html and
javascript, of each message in each corpus. I currently consider alphanumeric
characters, dashes, apostrophes, and dollar signs to be part of tokens, and
everything else to be a token separator. (There is probably room for
improvement here.) I ignore tokens that are all digits, and I also ignore html
comments, not even considering them as token separators.
I count the number of times each token (ignoring case, currently) occurs in
each corpus. At this stage I end up with two large hash tables, one for each
corpus, mapping tokens to number of occurrences.
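For concreteness, here's a minimal sketch of the tokenizing and counting steps
as described (my reconstruction in Common Lisp, not the original code; the
all-digits filter and html-comment handling are left out):

    ;; Alphanumerics, dashes, apostrophes, and dollar signs are token
    ;; constituents; everything else is a separator.
    (defun token-char-p (c)
      (or (alphanumericp c) (member c '(#\- #\' #\$))))

    (defun tokenize (text)
      (let ((tokens '()) (start nil))
        (dotimes (i (length text))
          (if (token-char-p (char text i))
              (unless start (setf start i))
              (when start
                (push (string-downcase (subseq text start i)) tokens)
                (setf start nil))))
        (when start   ; flush a token that runs to the end of the text
          (push (string-downcase (subseq text start)) tokens))
        (nreverse tokens)))

    ;; One hash table per corpus, mapping tokens to occurrence counts.
    (defun count-tokens (messages)
      (let ((counts (make-hash-table :test 'equal)))
        (dolist (msg messages counts)
          (dolist (tok (tokenize msg))
            (incf (gethash tok counts 0))))))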
Next I create a third hash table, this time mapping each token to the
probability that an email containing it is a spam, which I calculate as
follows [1]:

    (let ((g (* 2 (or (gethash word good) 0)))
          (b (or (gethash word bad) 0)))
       (unless (< (+ g b) 5)
         (max .01
              (min .99 (float (/ (min 1 (/ b nbad))
                                 (+ (min 1 (/ g ngood))
                                    (min 1 (/ b nbad)))))))))

where word is the token whose probability we're calculating, good and bad are
the hash tables I created in the first step, and ngood and nbad are the number
of nonspam and spam messages respectively.
I explained this as code to show a couple of important details. I want to bias
the probabilities slightly to avoid false positives, and by trial and error
I've found that a good way to do it is to double all the numbers in good. This
helps to distinguish between words that occasionally do occur in legitimate
email and words that almost never do. I only consider words that occur more
than five times in total (actually, because of the doubling, occurring three
times in nonspam mail would be enough). And then there is the question of what
probability to assign to words that occur in one corpus but not the other.
Again by trial and error I chose .01 and .99. There may be room for tuning
here, but as the corpus grows such tuning will happen automatically anyway.
The especially observant will notice that while I consider each corpus to be a
single long stream of text for purposes of counting occurrences, I use the
number of emails in each, rather than their combined length, as the divisor in
calculating spam probabilities. This adds another slight bias to protect
against false positives.
When new mail arrives, it is scanned into tokens, and the most interesting
fifteen tokens, where interesting is measured by how far their spam
probability is from a neutral .5, are used to calculate the probability that
the mail is spam. If probs is a list of the fifteen individual probabilities,
you calculate the [combined](naivebayes.html) probability thus:

    (let ((prod (apply #'* probs)))
      (/ prod (+ prod (apply #'* (mapcar #'(lambda (x) (- 1 x))
                                         probs)))))

One question that arises in practice is what probability to assign to a word
you've never seen, i.e. one that doesn't occur in the hash table of word
probabilities. I've found, again by trial and error, that .4 is a good number
to use. If you've never seen a word before, it is probably fairly innocent;
spam words tend to be all too familiar.
There are examples of this algorithm being applied to actual emails in an
appendix at the end.
I treat mail as spam if the algorithm above gives it a probability of more
than .9 of being spam. But in practice it would not matter much where I put
this threshold, because few probabilities end up in the middle of the range.
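Putting the pieces together, here's a sketch of the whole decision step under
the assumptions above (fifteen most interesting tokens, .9 threshold); again
my reconstruction, not the original code:

    (defun interestingness (p)
      (abs (- p .5)))   ; distance from the neutral probability

    ;; token-probs is the list of per-token spam probabilities for a mail.
    (defun spam-p (token-probs)
      (let* ((sorted (sort (copy-list token-probs) #'> :key #'interestingness))
             (top (subseq sorted 0 (min 15 (length sorted))))
             (prod (apply #'* top)))
        (> (/ prod (+ prod (apply #'* (mapcar (lambda (x) (- 1 x)) top))))
           .9)))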
_ _ _
One great advantage of the statistical approach is that you don't have to read
so many spams. Over the past six months, I've read literally thousands of
spams, and it is really kind of demoralizing. Norbert Wiener said if you
compete with slaves you become a slave, and there is something similarly
degrading about competing with spammers. To recognize individual spam features
you have to try to get into the mind of the spammer, and frankly I want to
spend as little time inside the minds of spammers as possible.
But the real advantage of the Bayesian approach, of course, is that you know
what you're measuring. Feature-recognizing filters like SpamAssassin assign a
spam "score" to email. The Bayesian approach assigns an actual probability.
The problem with a "score" is that no one knows what it means. The user
doesn't know what it means, but worse still, neither does the developer of the
filter. How many _points_ should an email get for having the word "sex" in it?
A probability can of course be mistaken, but there is little ambiguity about
what it means, or how evidence should be combined to calculate it. Based on my
corpus, "sex" indicates a .97 probability of the containing email being a
spam, whereas "sexy" indicates .99 probability. And Bayes' Rule, equally
unambiguous, says that an email containing both words would, in the (unlikely)
absence of any other evidence, have a 99.97% chance of being a spam.
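That 99.97% figure falls straight out of the combination formula above:

    (let ((p1 .97)    ; "sex"
          (p2 .99))   ; "sexy"
      (/ (* p1 p2)
         (+ (* p1 p2) (* (- 1 p1) (- 1 p2)))))
    ;; => .9603 / (.9603 + .0003) = 0.99969, i.e. about 99.97%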
Because it is measuring probabilities, the Bayesian approach considers all the
evidence in the email, both good and bad. Words that occur disproportionately
_rarely_ in spam (like "though" or "tonight" or "apparently") contribute as
much to decreasing the probability as bad words like "unsubscribe" and "opt-
in" do to increasing it. So an otherwise innocent email that happens to
include the word "sex" is not going to get tagged as spam.
Ideally, of course, the probabilities should be calculated individually for
each user. I get a lot of email containing the word "Lisp", and (so far) no
spam that does. So a word like that is effectively a kind of password for
sending mail to me. In my earlier spam-filtering software, the user could set
up a list of such words and mail containing them would automatically get past
the filters. On my list I put words like "Lisp" and also my zipcode, so that
(otherwise rather spammy-sounding) receipts from online orders would get
through. I thought I was being very clever, but I found that the Bayesian
filter did the same thing for me, and moreover discovered a lot of words I
hadn't thought of.
When I said at the start that our filters let through less than 5 spams per
1000 with 0 false positives, I'm talking about filtering my mail based on a
corpus of my mail. But these numbers are not misleading, because that is the
approach I'm advocating: filter each user's mail based on the spam and nonspam
mail he receives. Essentially, each user should have two delete buttons,
ordinary delete and delete-as-spam. Anything deleted as spam goes into the
spam corpus, and everything else goes into the nonspam corpus.
You could start users with a seed filter, but ultimately each user should have
his own per-word probabilities based on the actual mail he receives. This (a)
makes the filters more effective, (b) lets each user decide their own precise
definition of spam, and (c) perhaps best of all makes it hard for spammers to
tune mails to get through the filters. If a lot of the brain of the filter is
in the individual databases, then merely tuning spams to get through the seed
filters won't guarantee anything about how well they'll get through individual
users' varying and much more trained filters.
Content-based spam filtering is often combined with a whitelist, a list of
senders whose mail can be accepted with no filtering. One easy way to build
such a whitelist is to keep a list of every address the user has ever sent
mail to. If a mail reader has a delete-as-spam button then you could also add
the from address of every email the user has deleted as ordinary trash.
I'm an advocate of whitelists, but more as a way to save computation than as a
way to improve filtering. I used to think that whitelists would make filtering
easier, because you'd only have to filter email from people you'd never heard
from, and someone sending you mail for the first time is constrained by
convention in what they can say to you. Someone you already know might send
you an email talking about sex, but someone sending you mail for the first
time would not be likely to. The problem is, people can have more than one
email address, so a new from-address doesn't guarantee that the sender is
writing to you for the first time. It is not unusual for an old friend
(especially if he is a hacker) to suddenly send you an email with a new from-
address, so you can't risk false positives by filtering mail from unknown
addresses especially stringently.
In a sense, though, my filters do themselves embody a kind of whitelist (and
blacklist) because they are based on entire messages, including the headers.
So to that extent they "know" the email addresses of trusted senders and even
the routes by which mail gets from them to me. And they know the same about
spam, including the server names, mailer versions, and protocols.
_ _ _
If I thought that I could keep up current rates of spam filtering, I would
consider this problem solved. But it doesn't mean much to be able to filter
out most present-day spam, because spam evolves. Indeed, most [antispam
techniques](falsepositives.html) so far have been like pesticides that do
nothing more than create a new, resistant strain of bugs.
I'm more hopeful about Bayesian filters, because they evolve with the spam. So
as spammers start using "c0ck" instead of "cock" to evade simple-minded spam
filters based on individual words, Bayesian filters automatically notice.
Indeed, "c0ck" is far more damning evidence than "cock", and Bayesian filters
know precisely how much more.
Still, anyone who proposes a plan for spam filtering has to be able to answer
the question: if the spammers knew exactly what you were doing, how well could
they get past you? For example, I think that if checksum-based spam filtering
becomes a serious obstacle, the spammers will just switch to mad-lib
techniques for generating message bodies.
To beat Bayesian filters, it would not be enough for spammers to make their
emails unique or to stop using individual naughty words. They'd have to make
their mails indistinguishable from your ordinary mail. And this I think would
severely constrain them. Spam is mostly sales pitches, so unless your regular
mail is all sales pitches, spams will inevitably have a different character.
And the spammers would also, of course, have to change (and keep changing)
their whole infrastructure, because otherwise the headers would look as bad to
the Bayesian filters as ever, no matter what they did to the message body. I
don't know enough about the infrastructure that spammers use to know how hard
it would be to make the headers look innocent, but my guess is that it would
be even harder than making the message look innocent.
Assuming they could solve the problem of the headers, the spam of the future
will probably look something like this:

Hey there. Thought you should check out the following:
http://www.27meg.com/foo

because that is about as much sales pitch as content-based filtering will
leave the spammer room to make.
(Indeed, it will be hard even to get this past filters, because if everything
else in the email is neutral, the spam probability will hinge on the url, and
it will take some effort to make that look neutral.)
Spammers range from businesses running so-called opt-in lists who don't even
try to conceal their identities, to guys who hijack mail servers to send out
spams promoting porn sites. If we use filtering to whittle their options down
to mails like the one above, that should pretty much put the spammers on the
"legitimate" end of the spectrum out of business; they feel obliged by various
state laws to include boilerplate about why their spam is not spam, and how to
cancel your "subscription," and that kind of text is easy to recognize.
(I used to think it was naive to believe that stricter laws would decrease
spam. Now I think that while stricter laws may not decrease the amount of spam
that spammers _send,_ they can certainly help filters to decrease the amount
of spam that recipients actually see.)
All along the spectrum, if you restrict the sales pitches spammers can make,
you will inevitably tend to put them out of business. That word _business_ is
an important one to remember. The spammers are businessmen. They send spam
because it works. It works because although the response rate is abominably
low (at best 15 per million, vs 3000 per million for a catalog mailing), the
cost, to them, is practically nothing. The cost is enormous for the
recipients, about 5 man-weeks for each million recipients who spend a second
to delete the spam, but the spammer doesn't have to pay that.
Sending spam does cost the spammer something, though. [2] So the lower we can
get the response rate-- whether by filtering, or by using filters to force
spammers to dilute their pitches-- the fewer businesses will find it worth
their while to send spam.
The reason the spammers use the kinds of [sales
pitches](http://www.milliondollaremails.com) that they do is to increase
response rates. This is possibly even more disgusting than getting inside the
mind of a spammer, but let's take a quick look inside the mind of someone who
_responds_ to a spam. This person is either astonishingly credulous or deeply
in denial about their sexual interests. In either case, repulsive or idiotic
as the spam seems to us, it is exciting to them. The spammers wouldn't say
these things if they didn't sound exciting. And "thought you should check out
the following" is just not going to have nearly the pull with the spam
recipient as the kinds of things that spammers say now. Result: if it can't
contain exciting sales pitches, spam becomes less effective as a marketing
vehicle, and fewer businesses want to use it.
That is the big win in the end. I started writing spam filtering software
because I didn't want to have to look at the stuff anymore. But if we get good
enough at filtering out spam, it will stop working, and the spammers will
actually stop sending it.
_ _ _
Of all the approaches to fighting spam, from software to laws, I believe
Bayesian filtering will be the single most effective. But I also think that
the more different kinds of antispam efforts we undertake, the better, because
any measure that constrains spammers will tend to make filtering easier. And
even within the world of content-based filtering, I think it will be a good
thing if there are many different kinds of software being used simultaneously.
The more different filters there are, the harder it will be for spammers to
tune spams to get through them.
**Appendix: Examples of Filtering**
[Here](https://sep.turbifycdn.com/ty/cdn/paulgraham/spam1.txt?t=1688221954&)
is an example of a spam that arrived while I was writing this article. The
fifteen most interesting words in this spam are:

qvp0045 indira mx-05 intimail $7500 freeyankeedom cdo bluefoxmedia jpg
unsecured platinum 3d0 qves 7c5 7c266675

The words are a mix of stuff from the headers and from the message body,
which is typical of spam. Also typical of spam is that every one of these
words has a spam probability, in my database, of .99. In fact there are more
than fifteen words with probabilities of .99, and these are just the first
fifteen seen.
Unfortunately that makes this email a boring example of the use of Bayes'
Rule. To see an interesting variety of probabilities we have to look at
[this](https://sep.turbifycdn.com/ty/cdn/paulgraham/spam2.txt?t=1688221954&)
actually quite atypical spam.
The fifteen most interesting words in this spam, with their probabilities,
are:

madam 0.99
promotion 0.99
republic 0.99
shortest 0.047225013
mandatory 0.047225013
standardization 0.07347802
sorry 0.08221981
supported 0.09019077
people's 0.09019077
enter 0.9075001
quality 0.8921298
organization 0.12454646
investment 0.8568143
very 0.14758544
valuable 0.82347786

This time the evidence is a mix of good and bad. A word like "shortest" is
almost as much evidence for innocence as a word like "madam" or "promotion"
is for guilt. But still the case for guilt is stronger. If you combine these
numbers according to Bayes' Rule, the resulting probability is .9027.
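Feeding these fifteen numbers to the COMBINE sketch given earlier reproduces
the figure:

    (combine '(0.99 0.99 0.99 0.047225013 0.047225013 0.07347802
               0.08221981 0.09019077 0.09019077 0.9075001 0.8921298
               0.12454646 0.8568143 0.14758544 0.82347786))
    ;; => approximately 0.9027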
"Madam" is obviously from spams beginning "Dear Sir or Madam." They're not
very common, but the word "madam" _never_ occurs in my legitimate email, and
it's all about the ratio.
"Republic" scores high because it often shows up in Nigerian scam emails, and
also occurs once or twice in spams referring to Korea and South Africa. You
might say that it's an accident that it thus helps identify this spam. But
I've found when examining spam probabilities that there are a lot of these
accidents, and they have an uncanny tendency to push things in the right
direction rather than the wrong one. In this case, it is not entirely a
coincidence that the word "Republic" occurs in Nigerian scam emails and this
spam. There is a whole class of dubious business propositions involving less
developed countries, and these in turn are more likely to have names that
specify explicitly (because they aren't) that they are republics.[3]
On the other hand, "enter" is a genuine miss. It occurs mostly in unsubscribe
instructions, but here is used in a completely innocent way. Fortunately the
statistical approach is fairly robust, and can tolerate quite a lot of misses
before the results start to be thrown off.
For comparison,
[here](https://sep.turbifycdn.com/ty/cdn/paulgraham/hostexspam.txt?t=1688221954&)
is an example of that rare bird, a spam that gets through the filters. Why?
Because by sheer chance it happens to be loaded with words that occur in my
actual email:

perl 0.01
python 0.01
tcl 0.01
scripting 0.01
morris 0.01
graham 0.01491078
guarantee 0.9762507
cgi 0.9734398
paul 0.027040077
quite 0.030676773
pop3 0.042199217
various 0.06080265
prices 0.9359873
managed 0.06451222
difficult 0.071706355

There are a couple pieces of good news here. First, this mail probably
wouldn't get through the filters of someone who didn't happen to specialize
in programming languages and have a good friend called Morris. For the
average user, all the top five words here would be neutral and would not
contribute to the spam probability.
Second, I think filtering based on word pairs (see below) might well catch
this one: "cost effective", "setup fee", "money back" -- pretty incriminating
stuff. And of course if they continued to spam me (or a network I was part
of), "Hostex" itself would be recognized as a spam term.
Finally,
[here](https://sep.turbifycdn.com/ty/cdn/paulgraham/legit.txt?t=1688221954&)
is an innocent email. Its fifteen most interesting words are as follows:

continuation 0.01
describe 0.01
continuations 0.01
example 0.033600237
programming 0.05214485
i'm 0.055427782
examples 0.07972858
color 0.9189189
localhost 0.09883721
hi 0.116539136
california 0.84421706
same 0.15981844
spot 0.1654587
us-ascii 0.16804294
what 0.19212411

Most of the words here indicate the mail is an innocent one. There are two
bad smelling words, "color" (spammers love colored fonts) and "California"
(which occurs in testimonials and also in menus in forms), but they are not
enough to outweigh obviously innocent words like "continuation" and
"example".
It's interesting that "describe" rates as so thoroughly innocent. It hasn't
occurred in a single one of my 4000 spams. The data turns out to be full of
such surprises. One of the things you learn when you analyze spam texts is how
narrow a subset of the language spammers operate in. It's that fact, together
with the equally characteristic vocabulary of any individual user's mail, that
makes Bayesian filtering a good bet.
**Appendix: More Ideas**
One idea that I haven't tried yet is to filter based on word pairs, or even
triples, rather than individual words. This should yield a much sharper
estimate of the probability. For example, in my current database, the word
"offers" has a probability of .96. If you based the probabilities on word
pairs, you'd end up with "special offers" and "valuable offers" having
probabilities of .99 and, say, "approach offers" (as in "this approach
offers") having a probability of .1 or less.
The reason I haven't done this is that filtering based on individual words
already works so well. But it does mean that there is room to tighten the
filters if spam gets harder to detect. (Curiously, a filter based on word
pairs would be in effect a Markov-chaining text generator running in reverse.)
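The change to the tokenizer would be small: generate pair tokens alongside
the individual words, and collect statistics for both. A sketch (mine, not
the filter's code):

    (defun word-pairs (words)
      ;; Turn ("this" "approach" "offers") into
      ;; ("this approach" "approach offers").
      (loop for (a b) on words
            while b
            collect (concatenate 'string a " " b)))

Running the corpus counts over these pair tokens as well as the single words
is what would give "special offers" and "approach offers" the separate
probabilities described above.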
Specific spam features (e.g. not seeing the recipient's address in the to:
field) do of course have value in recognizing spam. They can be considered in
this algorithm by treating them as virtual words. I'll probably do this in
future versions, at least for a handful of the most egregious spam indicators.
Feature-recognizing spam filters are right in many details; what they lack is
an overall discipline for combining evidence.
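To treat a feature as a virtual word, you just make up a token for it and
let it flow through the same machinery as everything else. A sketch, with a
token name invented for illustration:

    (defun feature-tokens (to-field my-address)
      ;; Non-textual features become "virtual" words, so they accumulate
      ;; per-feature probabilities like any other token.
      (if (search my-address to-field)
          '()
          '("*missing-to-address*")))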
Recognizing nonspam features may be more important than recognizing spam
features. False positives are such a worry that they demand extraordinary
measures. I will probably in future versions add a second level of testing
designed specifically to avoid false positives. If a mail triggers this second
level of filters it will be accepted even if its spam probability is above the
threshold.
I don't expect this second level of filtering to be Bayesian. It will
inevitably be not only ad hoc, but based on guesses, because the number of
false positives will not tend to be large enough to notice patterns. (It is
just as well, anyway, if a backup system doesn't rely on the same technology
as the primary system.)
Another thing I may try in the future is to focus extra attention on specific
parts of the email. For example, about 95% of current spam includes the url of
a site they want you to visit. (The remaining 5% want you to call a phone
number, reply by email or to a US mail address, or in a few cases to buy a
certain stock.) In such cases the url is practically enough by itself to
determine whether the email is spam.
Domain names differ from the rest of the text in a (non-German) email in that
they often consist of several words stuck together. Though computationally
expensive in the general case, it might be worth trying to decompose them. If
a filter has never seen the token "xxxporn" before it will have an individual
spam probability of .4, whereas "xxx" and "porn" individually have
probabilities (in my corpus) of .9889 and .99 respectively, and a combined
probability of .9998.
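The decomposition itself can be done with simple backtracking over prefixes,
which is where the computational expense comes from. A sketch, with
KNOWN-WORD-P as a toy stand-in for a lookup in the word-probability
database:

    (defun known-word-p (word)
      ;; Toy stand-in for checking the probability database.
      (member word '("xxx" "porn") :test #'string-equal))

    (defun decompose (token)
      ;; Split an unseen token into known subwords, or return NIL.
      (if (known-word-p token)
          (list token)
          (loop for i from 1 below (length token)
                for left = (subseq token 0 i)
                when (known-word-p left)
                  do (let ((rest (decompose (subseq token i))))
                       (when rest (return (cons left rest)))))))

    ;; (decompose "xxxporn") => ("xxx" "porn")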
I expect decomposing domain names to become more important as spammers are
gradually forced to stop using incriminating words in the text of their
messages. (A url with an ip address is of course an extremely incriminating
sign, except in the mail of a few sysadmins.)
It might be a good idea to have a cooperatively maintained list of urls
promoted by spammers. We'd need a trust metric of the type studied by Raph
Levien to prevent malicious or incompetent submissions, but if we had such a
thing it would provide a boost to any filtering software. It would also be a
convenient basis for boycotts.
Another way to test dubious urls would be to send out a crawler to look at the
site before the user looked at the email mentioning it. You could use a
Bayesian filter to rate the site just as you would an email, and whatever was
found on the site could be included in calculating the probability of the
email being a spam. A url that led to a redirect would of course be especially
suspicious.
One cooperative project that I think really would be a good idea would be to
accumulate a giant corpus of spam. A large, clean corpus is the key to making
Bayesian filtering work well. Bayesian filters could actually use the corpus
as input. But such a corpus would be useful for other kinds of filters too,
because it could be used to test them.
Creating such a corpus poses some technical problems. We'd need trust metrics
to prevent malicious or incompetent submissions, of course. We'd also need
ways of erasing personal information (not just to-addresses and ccs, but also
e.g. the arguments to unsubscribe urls, which often encode the to-address)
from mails in the corpus. If anyone wants to take on this project, it would be
a good thing for the world.
**Appendix: Defining Spam**
I think there is a rough consensus on what spam is, but it would be useful to
have an explicit definition. We'll need to do this if we want to establish a
central corpus of spam, or even to compare spam filtering rates meaningfully.
To start with, spam is not unsolicited commercial email. If someone in my
neighborhood heard that I was looking for an old Raleigh three-speed in good
condition, and sent me an email offering to sell me one, I'd be delighted, and
yet this email would be both commercial and unsolicited. The defining feature
of spam (in fact, its _raison d'etre_) is not that it is unsolicited, but that
it is automated.
It is merely incidental, too, that spam is usually commercial. If someone
started sending mass email to support some political cause, for example, it
would be just as much spam as email promoting a porn site.
I propose we define spam as **unsolicited automated email**. This definition
thus includes some email that many legal definitions of spam don't. Legal
definitions of spam, influenced presumably by lobbyists, tend to exclude mail
sent by companies that have an "existing relationship" with the recipient. But
buying something from a company, for example, does not imply that you have
solicited ongoing email from them. If I order something from an online store,
and they then send me a stream of spam, it's still spam.
Companies sending spam often give you a way to "unsubscribe," or ask you to go
to their site and change your "account preferences" if you want to stop
getting spam. This is not enough to stop the mail from being spam. Not opting
out is not the same as opting in. Unless the recipient explicitly checked a
clearly labelled box (whose default was no) asking to receive the email, then
it is spam.
In some business relationships, you do implicitly solicit certain kinds of
mail. When you order online, I think you implicitly solicit a receipt, and
notification when the order ships. I don't mind when Verisign sends me mail
warning that a domain name is about to expire (at least, if they are the
[actual registrar](http://siliconvalley.internet.com/news/article.php/1441651)
for it). But when Verisign sends me email offering a FREE Guide to Building My
E-Commerce Web Site, that's spam.
**Notes:**
[1] The examples in this article are translated into Common Lisp for, believe
it or not, greater accessibility. The application described here is one that
we wrote in order to test a new Lisp dialect called [Arc](arc.html) that is
not yet released.
[2] Currently the lowest rate seems to be about $200 to send a million spams.
That's very cheap, 1/50th of a cent per spam. But filtering out 95% of spam,
for example, would increase the spammers' cost to reach a given audience by a
factor of 20. Few can have margins big enough to absorb that.
[3] As a rule of thumb, the more qualifiers there are before the name of a
country, the more corrupt the rulers. A country called The Socialist People's
Democratic Republic of X is probably the last place in the world you'd want to
live.
**Thanks** to Sarah Harlin for reading drafts of this; Daniel Giffin (who is
also writing the production Arc interpreter) for several good ideas about
filtering and for creating our mail infrastructure; Robert Morris, Trevor
Blackwell and Erann Gat for many discussions about spam; Raph Levien for
advice about trust metrics; and Chip Coldwell and Sam Steingold for advice
about statistics.
You'll find this essay and 14 others in [**_Hackers &
Painters_**](http://www.amazon.com/gp/product/0596006624).
|
August 2005
_(This essay is derived from a talk at Defcon 2005.)_
Suppose you wanted to get rid of economic inequality. There are two ways to do
it: give money to the poor, or take it away from the rich. But they amount to
the same thing, because if you want to give money to the poor, you have to get
it from somewhere. You can't get it from the poor, or they just end up where
they started. You have to get it from the rich.
There is of course a way to make the poor richer without simply shifting money
from the rich. You could help the poor become more productive — for example,
by improving access to education. Instead of taking money from engineers and
giving it to checkout clerks, you could enable people who would have become
checkout clerks to become engineers.
This is an excellent strategy for making the poor richer. But the evidence of
the last 200 years shows that it doesn't reduce economic inequality, because
it makes the rich richer too. If there are more engineers, then there are more
opportunities to hire them and to sell them things. Henry Ford couldn't have
made a fortune building cars in a society in which most people were still
subsistence farmers; he would have had neither workers nor customers.
If you want to reduce economic inequality instead of just improving the
overall standard of living, it's not enough just to raise up the poor. What if
one of your newly minted engineers gets ambitious and goes on to become
another Bill Gates? Economic inequality will be as bad as ever. If you
actually want to compress the gap between rich and poor, you have to push down
on the top as well as pushing up on the bottom.
How do you push down on the top? You could try to decrease the productivity of
the people who make the most money: make the best surgeons operate with their
left hands, force popular actors to overeat, and so on. But this approach is
hard to implement. The only practical solution is to let people do the best
work they can, and then (either by taxation or by limiting what they can
charge) to confiscate whatever you deem to be surplus.
So let's be clear what reducing economic inequality means. It is identical
with taking money from the rich.
When you transform a mathematical expression into another form, you often
notice new things. So it is in this case. Taking money from the rich turns out
to have consequences one might not foresee when one phrases the same idea in
terms of "reducing inequality."
The problem is, risk and reward have to be proportionate. A bet with only a
10% chance of winning has to pay more than one with a 50% chance of winning,
or no one will take it. So if you lop off the top of the possible rewards, you
thereby decrease people's willingness to take risks.
Transposing into our original expression, we get: decreasing economic
inequality means decreasing the risk people are willing to take.
There are whole classes of risks that are no longer worth taking if the
maximum return is decreased. One reason high tax rates are disastrous is that
this class of risks includes starting new companies.
**Investors**
Startups are intrinsically risky. A startup is like a small boat in the open
sea. One big wave and you're sunk. A competing product, a downturn in the
economy, a delay in getting funding or regulatory approval, a patent suit,
changing technical standards, the departure of a key employee, the loss of a
big account — any one of these can destroy you overnight. It seems only about
1 in 10 startups succeeds. [1]
Our startup paid its first round of outside investors 36x. Which meant, with
current US tax rates, that it made sense to invest in us if we had better than
a 1 in 24 chance of succeeding. That sounds about right. That's probably
roughly how we looked when we were a couple of nerds with no business
experience operating out of an apartment.
If that kind of risk doesn't pay, venture investing, as we know it, doesn't
happen.
That might be ok if there were other sources of capital for new companies. Why
not just have the government, or some large almost-government organization
like Fannie Mae, do the venture investing instead of private funds?
I'll tell you why that wouldn't work. Because then you're asking government or
almost-government employees to do the one thing they are least able to do:
take risks.
As anyone who has worked for the government knows, the important thing is not
to make the right choices, but to make choices that can be justified later if
they fail. If there is a safe option, that's the one a bureaucrat will choose.
But that is exactly the wrong way to do venture investing. The nature of the
business means that you want to make terribly risky choices, if the upside
looks good enough.
VCs are currently [paid](venturecapital.html) in a way that makes them focus
on the upside: they get a percentage of the fund's gains. And that helps
overcome their understandable fear of investing in a company run by nerds who
look like (and perhaps are) college students.
If VCs weren't allowed to get rich, they'd behave like bureaucrats. Without
hope of gain, they'd have only fear of loss. And so they'd make the wrong
choices. They'd turn down the nerds in favor of the smooth-talking MBA in a
suit, because that investment would be easier to justify later if it failed.
**Founders**
But even if you could somehow redesign venture funding to work without
allowing VCs to become rich, there's another kind of investor you simply
cannot replace: the startups' founders and early employees.
What they invest is their time and ideas. But these are equivalent to money;
the proof is that investors are willing (if forced) to treat them as
interchangeable, granting the same status to "sweat equity" and the equity
they've purchased with cash.
The fact that you're investing time doesn't change the relationship between
risk and reward. If you're going to invest your time in something with a small
chance of succeeding, you'll only do it if there is a proportionately large
payoff. [2] If large payoffs aren't allowed, you may as well play it safe.
Like many startup founders, I did it to get rich. But not because I wanted to
buy expensive things. What I wanted was security. I wanted to make enough
money that I didn't have to worry about money. If I'd been forbidden to make
enough from a startup to do this, I would have sought security by some other
means: for example, by going to work for a big, stable organization from which
it would be hard to get fired. Instead of busting my ass in a startup, I would
have tried to get a nice, low-stress job at a big research lab, or tenure at a
university.
That's what everyone does in societies where risk isn't rewarded. If you can't
ensure your own security, the next best thing is to make a nest for yourself
in some large organization where your status depends mostly on
[seniority](ladder.html). [3]
Even if we could somehow replace investors, I don't see how we could replace
founders. Investors mainly contribute money, which in principle is the same no
matter what the source. But the founders contribute ideas. You can't replace
those.
Let's rehearse the chain of argument so far. I'm heading for a conclusion to
which many readers will have to be dragged kicking and screaming, so I've
tried to make each link unbreakable. Decreasing economic inequality means
taking money from the rich. Since risk and reward are equivalent, decreasing
potential rewards automatically decreases people's appetite for risk. Startups
are intrinsically risky. Without the prospect of rewards proportionate to the
risk, founders will not invest their time in a startup. Founders are
irreplaceable. So eliminating economic inequality means eliminating startups.
Economic inequality is not just a consequence of startups. It's the engine
that drives them, in the same way a fall of water drives a water mill. People
start startups in the hope of becoming much richer than they were before. And
if your society tries to prevent anyone from being much richer than anyone
else, it will also prevent one person from being much richer at t2 than t1.
**Growth**
This argument applies proportionately. It's not just that if you eliminate
economic inequality, you get no startups. To the extent you reduce economic
inequality, you decrease the number of startups. [4] Increase taxes, and
willingness to take risks decreases in proportion.
And that seems bad for everyone. New technology and new jobs both come
disproportionately from new companies. Indeed, if you don't have startups,
pretty soon you won't have established companies either, just as, if you stop
having kids, pretty soon you won't have any adults.
It sounds benevolent to say we ought to reduce economic inequality. When you
phrase it that way, who can argue with you? _Inequality_ has to be bad, right?
It sounds a good deal less benevolent to say we ought to reduce the rate at
which new companies are founded. And yet the one implies the other.
Indeed, it may be that reducing investors' appetite for risk doesn't merely
kill off larval startups, but kills off the most promising ones especially.
Startups yield faster growth at greater risk than established companies. Does
this trend also hold among startups? That is, are the riskiest startups the
ones that generate most growth if they succeed? I suspect the answer is yes.
And that's a chilling thought, because it means that if you cut investors'
appetite for risk, the most beneficial startups are the first to go.
Not all rich people got that way from startups, of course. What if we let
people get rich by starting startups, but taxed away all other surplus wealth?
Wouldn't that at least decrease inequality?
Less than you might think. If you made it so that people could only get rich
by starting startups, people who wanted to get rich would all start startups.
And that might be a great thing. But I don't think it would have much effect
on the distribution of wealth. People who want to get rich will do whatever
they have to. If startups are the only way to do it, you'll just get far more
people starting startups. (If you write the laws very carefully, that is. More
likely, you'll just get a lot of people doing things that can be made to look
on paper like startups.)
If we're determined to eliminate economic inequality, there is still one way
out: we could say that we're willing to go ahead and do without startups. What
would happen if we did?
At a minimum, we'd have to accept lower rates of technological growth. If you
believe that large, established companies could somehow be made to develop new
technology as fast as startups, the ball is in your court to explain how. (If
you can come up with a remotely plausible story, you can make a fortune
writing business books and consulting for large companies.) [5]
Ok, so we get slower growth. Is that so bad? Well, one reason it's bad in
practice is that other countries might not agree to slow down with us. If
you're content to develop new technologies at a slower rate than the rest of
the world, what happens is that you don't invent anything at all. Anything you
might discover has already been invented elsewhere. And the only thing you can
offer in return is raw materials and cheap labor. Once you sink that low,
other countries can do whatever they like with you: install puppet
governments, siphon off your best workers, use your women as prostitutes, dump
their toxic waste on your territory — all the things we do to poor countries
now. The only defense is to isolate yourself, as communist countries did in
the twentieth century. But the problem then is, you have to become a police
state to enforce it.
**Wealth and Power**
I realize startups are not the main target of those who want to eliminate
economic inequality. What they really dislike is the sort of wealth that
becomes self-perpetuating through an alliance with power. For example,
construction firms that fund politicians' campaigns in return for government
contracts, or rich parents who get their children into good colleges by
sending them to expensive schools designed for that purpose. But if you try to
attack this type of wealth through _economic_ policy, it's hard to hit without
destroying startups as collateral damage.
The problem here is not wealth, but corruption. So why not go after
corruption?
We don't need to prevent people from being rich if we can prevent wealth from
translating into power. And there has been progress on that front. Before he
died of drink in 1925, Commodore Vanderbilt's wastrel grandson Reggie ran down
pedestrians on five separate occasions, killing two of them. By 1969, when Ted
Kennedy drove off the bridge at Chappaquiddick, the limit seemed to be down to
one. Today it may well be zero. But what's changed is not variation in wealth.
What's changed is the ability to translate wealth into power.
How do you break the connection between wealth and power? Demand transparency.
Watch closely how power is exercised, and demand an account of how decisions
are made. Why aren't all police interrogations videotaped? Why did 36% of
Princeton's class of 2007 come from prep schools, when only 1.7% of American
kids attend them? Why did the US really invade Iraq? Why don't government
officials disclose more about their finances, and why only during their term
of office?
A friend of mine who knows a lot about computer security says the single most
important step is to log everything. Back when he was a kid trying to break
into computers, what worried him most was the idea of leaving a trail. He was
more inconvenienced by the need to avoid that than by any obstacle
deliberately put in his path.
Like all illicit connections, the connection between wealth and power
flourishes in secret. Expose all transactions, and you will greatly reduce it.
Log everything. That's a strategy that already seems to be working, and it
doesn't have the side effect of making your whole country poor.
I don't think many people realize there is a connection between economic
inequality and risk. I didn't fully grasp it till recently. I'd known for
years of course that if one didn't score in a startup, the other alternative
was to get a cozy, tenured research job. But I didn't understand the equation
governing my behavior. Likewise, it's obvious empirically that a country that
doesn't let people get rich is headed for disaster, whether it's Diocletian's
Rome or Harold Wilson's Britain. But I did not till recently understand the
role risk played.
If you try to attack wealth, you end up nailing risk as well, and with it
growth. If we want a fairer world, I think we're better off attacking one step
downstream, where wealth turns into power.
**Notes**
[1] Success here is defined from the initial investors' point of view: either
an IPO, or an acquisition for more than the valuation at the last round of
funding. The conventional 1 in 10 success rate is suspiciously neat, but
conversations with VCs suggest it's roughly correct for startups overall. Top
VC firms expect to do better.
[2] I'm not claiming founders sit down and calculate the expected after-tax
return from a startup. They're motivated by examples of other people who did
it. And those examples do reflect after-tax returns.
[3] Conjecture: The variation in wealth in a (non-corrupt) country or
organization will be inversely proportional to the prevalence of systems of
seniority. So if you suppress variation in wealth, seniority will become
correspondingly more important. So far, I know of no counterexamples, though
in very corrupt countries you may get both simultaneously. (Thanks to Daniel
Sobral for pointing this out.)
[4] In a country with a truly feudal economy, you might be able to
redistribute wealth successfully, because there are no startups to kill.
[5] The speed at which startups develop new technology is the other reason they
pay so well. As I explained in ["How to Make Wealth"](wealth.html), what you
do in a startup is compress a lifetime's worth of work into a few years. It
seems as dumb to discourage that as to discourage risk-taking.
**Thanks** to Chris Anderson, Trevor Blackwell, Dan Giffin, Jessica
Livingston, and Evan Williams for reading drafts of this essay, and to Langley
Steinert, Sangam Pant, and Mike Moritz for information about venture
investing.
|
February 2003
When we were in junior high school, my friend Rich and I made a map of the
school lunch tables according to popularity. This was easy to do, because kids
only ate lunch with others of about the same popularity. We graded them from A
to E. A tables were full of football players and cheerleaders and so on. E
tables contained the kids with mild cases of Down's Syndrome, what in the
language of the time we called "retards."
We sat at a D table, as low as you could get without looking physically
different. We were not being especially candid to grade ourselves as D. It
would have taken a deliberate lie to say otherwise. Everyone in the school
knew exactly how popular everyone else was, including us.
My stock gradually rose during high school. Puberty finally arrived; I became
a decent soccer player; I started a scandalous underground newspaper. So I've
seen a good part of the popularity landscape.
I know a lot of people who were nerds in school, and they all tell the same
story: there is a strong correlation between being smart and being a nerd, and
an even stronger inverse correlation between being a nerd and being popular.
Being smart seems to _make_ you unpopular.
Why? To someone in school now, that may seem an odd question to ask. The mere
fact is so overwhelming that it may seem strange to imagine that it could be
any other way. But it could. Being smart doesn't make you an outcast in
elementary school. Nor does it harm you in the real world. Nor, as far as I
can tell, is the problem so bad in most other countries. But in a typical
American secondary school, being smart is likely to make your life difficult.
Why?
The key to this mystery is to rephrase the question slightly. Why don't smart
kids make themselves popular? If they're so smart, why don't they figure out
how popularity works and beat the system, just as they do for standardized
tests?
One argument says that this would be impossible, that the smart kids are
unpopular because the other kids envy them for being smart, and nothing they
could do could make them popular. I wish. If the other kids in junior high
school envied me, they did a great job of concealing it. And in any case, if
being smart were really an enviable quality, the girls would have broken
ranks. The guys that guys envy, girls like.
In the schools I went to, being smart just didn't matter much. Kids didn't
admire it or despise it. All other things being equal, they would have
preferred to be on the smart side of average rather than the dumb side, but
intelligence counted far less than, say, physical appearance, charisma, or
athletic ability.
So if intelligence in itself is not a factor in popularity, why are smart kids
so consistently unpopular? The answer, I think, is that they don't really want
to be popular.
If someone had told me that at the time, I would have laughed at him. Being
unpopular in school makes kids miserable, some of them so miserable that they
commit suicide. Telling me that I didn't want to be popular would have seemed
like telling someone dying of thirst in a desert that he didn't want a glass
of water. Of course I wanted to be popular.
But in fact I didn't, not enough. There was something else I wanted more: to
be smart. Not simply to do well in school, though that counted for something,
but to design beautiful rockets, or to write well, or to understand how to
program computers. In general, to make great things.
At the time I never tried to separate my wants and weigh them against one
another. If I had, I would have seen that being smart was more important. If
someone had offered me the chance to be the most popular kid in school, but
only at the price of being of average intelligence (humor me here), I wouldn't
have taken it.
Much as they suffer from their unpopularity, I don't think many nerds would.
To them the thought of average intelligence is unbearable. But most kids would
take that deal. For half of them, it would be a step up. Even for someone in
the eightieth percentile (assuming, as everyone seemed to then, that
intelligence is a scalar), who wouldn't drop thirty points in exchange for
being loved and admired by everyone?
And that, I think, is the root of the problem. Nerds serve two masters. They
want to be popular, certainly, but they want even more to be smart. And
popularity is not something you can do in your spare time, not in the fiercely
competitive environment of an American secondary school.
Alberti, arguably the archetype of the Renaissance Man, writes that "no art,
however minor, demands less than total dedication if you want to excel in it."
I wonder if anyone in the world works harder at anything than American school
kids work at popularity. Navy SEALs and neurosurgery residents seem slackers
by comparison. They occasionally take vacations; some even have hobbies. An
American teenager may work at being popular every waking hour, 365 days a
year.
I don't mean to suggest they do this consciously. Some of them truly are
little Machiavellis, but what I really mean here is that teenagers are always
on duty as conformists.
For example, teenage kids pay a great deal of attention to clothes. They don't
consciously dress to be popular. They dress to look good. But to who? To the
other kids. Other kids' opinions become their definition of right, not just
for clothes, but for almost everything they do, right down to the way they
walk. And so every effort they make to do things "right" is also, consciously
or not, an effort to be more popular.
Nerds don't realize this. They don't realize that it takes work to be popular.
In general, people outside some very demanding field don't realize the extent
to which success depends on constant (though often unconscious) effort. For
example, most people seem to consider the ability to draw as some kind of
innate quality, like being tall. In fact, most people who "can draw" like
drawing, and have spent many hours doing it; that's why they're good at it.
Likewise, popular isn't just something you are or you aren't, but something
you make yourself.
The main reason nerds are unpopular is that they have other things to think
about. Their attention is drawn to books or the natural world, not fashions
and parties. They're like someone trying to play soccer while balancing a
glass of water on his head. Other players who can focus their whole attention
on the game beat them effortlessly, and wonder why they seem so incapable.
Even if nerds cared as much as other kids about popularity, being popular
would be more work for them. The popular kids learned to be popular, and to
want to be popular, the same way the nerds learned to be smart, and to want to
be smart: from their parents. While the nerds were being trained to get the
right answers, the popular kids were being trained to please.
So far I've been finessing the relationship between smart and nerd, using them
as if they were interchangeable. In fact it's only the context that makes them
so. A nerd is someone who isn't socially adept enough. But "enough" depends on
where you are. In a typical American school, standards for coolness are so
high (or at least, so specific) that you don't have to be especially awkward
to look awkward by comparison.
Few smart kids can spare the attention that popularity requires. Unless they
also happen to be good-looking, natural athletes, or siblings of popular kids,
they'll tend to become nerds. And that's why smart people's lives are worst
between, say, the ages of eleven and seventeen. Life at that age revolves far
more around popularity than before or after.
Before that, kids' lives are dominated by their parents, not by other kids.
Kids do care what their peers think in elementary school, but this isn't their
whole life, as it later becomes.
Around the age of eleven, though, kids seem to start treating their family as
a day job. They create a new world among themselves, and standing in this
world is what matters, not standing in their family. Indeed, being in trouble
in their family can win them points in the world they care about.
The problem is, the world these kids create for themselves is at first a very
crude one. If you leave a bunch of eleven-year-olds to their own devices, what
you get is _Lord of the Flies._ Like a lot of American kids, I read this book
in school. Presumably it was not a coincidence. Presumably someone wanted to
point out to us that we were savages, and that we had made ourselves a cruel
and stupid world. This was too subtle for me. While the book seemed entirely
believable, I didn't get the additional message. I wish they had just told us
outright that we were savages and our world was stupid.
Nerds would find their unpopularity more bearable if it merely caused them to
be ignored. Unfortunately, to be unpopular in school is to be actively
persecuted.
Why? Once again, anyone currently in school might think this a strange
question to ask. How could things be any other way? But they could be. Adults
don't normally persecute nerds. Why do teenage kids do it?
Partly because teenagers are still half children, and many children are just
intrinsically cruel. Some torture nerds for the same reason they pull the legs
off spiders. Before you develop a conscience, torture is amusing.
Another reason kids persecute nerds is to make themselves feel better. When
you tread water, you lift yourself up by pushing water down. Likewise, in any
social hierarchy, people unsure of their own position will try to emphasize it
by maltreating those they think rank below. I've read that this is why poor
whites in the United States are the group most hostile to blacks.
But I think the main reason other kids persecute nerds is that it's part of
the mechanism of popularity. Popularity is only partially about individual
attractiveness. It's much more about alliances. To become more popular, you
need to be constantly doing things that bring you close to other popular
people, and nothing brings people closer than a common enemy.
Like a politician who wants to distract voters from bad times at home, you can
create an enemy if there isn't a real one. By singling out and persecuting a
nerd, a group of kids from higher in the hierarchy create bonds between
themselves. Attacking an outsider makes them all insiders. This is why the
worst cases of bullying happen with groups. Ask any nerd: you get much worse
treatment from a group of kids than from any individual bully, however
sadistic.
If it's any consolation to the nerds, it's nothing personal. The group of kids
who band together to pick on you are doing the same thing, and for the same
reason, as a bunch of guys who get together to go hunting. They don't actually
hate you. They just need something to chase.
Because they're at the bottom of the scale, nerds are a safe target for the
entire school. If I remember correctly, the most popular kids don't persecute
nerds; they don't need to stoop to such things. Most of the persecution comes
from kids lower down, the nervous middle classes.
The trouble is, there are a lot of them. The distribution of popularity is not
a pyramid, but tapers at the bottom like a pear. The least popular group is
quite small. (I believe we were the only D table in our cafeteria map.) So
there are more people who want to pick on nerds than there are nerds.
As well as gaining points by distancing oneself from unpopular kids, one loses
points by being close to them. A woman I know says that in high school she
liked nerds, but was afraid to be seen talking to them because the other girls
would make fun of her. Unpopularity is a communicable disease; kids too nice
to pick on nerds will still ostracize them in self-defense.
It's no wonder, then, that smart kids tend to be unhappy in middle school and
high school. Their other interests leave them little attention to spare for
popularity, and since popularity resembles a zero-sum game, this in turn makes
them targets for the whole school. And the strange thing is, this nightmare
scenario happens without any conscious malice, merely because of the shape of
the situation.
For me the worst stretch was junior high, when kid culture was new and harsh,
and the specialization that would later gradually separate the smarter kids
had barely begun. Nearly everyone I've talked to agrees: the nadir is
somewhere between eleven and fourteen.
In our school it was eighth grade, which was ages twelve and thirteen for me.
There was a brief sensation that year when one of our teachers overheard a
group of girls waiting for the school bus, and was so shocked that the next
day she devoted the whole class to an eloquent plea not to be so cruel to one
another.
It didn't have any noticeable effect. What struck me at the time was that she
was surprised. You mean she doesn't know the kind of things they say to one
another? You mean this isn't normal?
It's important to realize that, no, the adults don't know what the kids are
doing to one another. They know, in the abstract, that kids are monstrously
cruel to one another, just as we know in the abstract that people get tortured
in poorer countries. But, like us, they don't like to dwell on this depressing
fact, and they don't see evidence of specific abuses unless they go looking
for it.
Public school teachers are in much the same position as prison wardens.
Wardens' main concern is to keep the prisoners on the premises. They also need
to keep them fed, and as far as possible prevent them from killing one
another. Beyond that, they want to have as little to do with the prisoners as
possible, so they leave them to create whatever social organization they want.
From what I've read, the society that the prisoners create is warped, savage,
and pervasive, and it is no fun to be at the bottom of it.
In outline, it was the same at the schools I went to. The most important thing
was to stay on the premises. While there, the authorities fed you, prevented
overt violence, and made some effort to teach you something. But beyond that
they didn't want to have too much to do with the kids. Like prison wardens,
the teachers mostly left us to ourselves. And, like prisoners, the culture we
created was barbaric.
Why is the real world more hospitable to nerds? It might seem that the answer
is simply that it's populated by adults, who are too mature to pick on one
another. But I don't think this is true. Adults in prison certainly pick on
one another. And so, apparently, do society wives; in some parts of Manhattan,
life for women sounds like a continuation of high school, with all the same
petty intrigues.
I think the important thing about the real world is not that it's populated by
adults, but that it's very large, and the things you do have real effects.
That's what school, prison, and ladies-who-lunch all lack. The inhabitants of
all those worlds are trapped in little bubbles where nothing they do can have
more than a local effect. Naturally these societies degenerate into savagery.
They have no function for their form to follow.
When the things you do have real effects, it's no longer enough just to be
pleasing. It starts to be important to get the right answers, and that's where
nerds show to advantage. Bill Gates will of course come to mind. Though
notoriously lacking in social skills, he gets the right answers, at least as
measured in revenue.
The other thing that's different about the real world is that it's much
larger. In a large enough pool, even the smallest minorities can achieve a
critical mass if they clump together. Out in the real world, nerds collect in
certain places and form their own societies where intelligence is the most
important thing. Sometimes the current even starts to flow in the other
direction: sometimes, particularly in university math and science departments,
nerds deliberately exaggerate their awkwardness in order to seem smarter. John
Nash so admired Norbert Wiener that he adopted his habit of touching the wall
as he walked down a corridor.
As a thirteen-year-old kid, I didn't have much more experience of the world
than what I saw immediately around me. The warped little world we lived in
was, I thought, _the world._ The world seemed cruel and boring, and I'm not
sure which was worse.
Because I didn't fit into this world, I thought that something must be wrong
with me. I didn't realize that the reason we nerds didn't fit in was that in
some ways we were a step ahead. We were already thinking about the kind of
things that matter in the real world, instead of spending all our time playing
an exacting but mostly pointless game like the others.
We were a bit like an adult would be if he were thrust back into middle
school. He wouldn't know the right clothes to wear, the right music to like,
the right slang to use. He'd seem to the kids a complete alien. The thing is,
he'd know enough not to care what they thought. We had no such confidence.
A lot of people seem to think it's good for smart kids to be thrown together
with "normal" kids at this stage of their lives. Perhaps. But in at least some
cases the reason the nerds don't fit in really is that everyone else is crazy.
I remember sitting in the audience at a "pep rally" at my high school,
watching as the cheerleaders threw an effigy of an opposing player into the
audience to be torn to pieces. I felt like an explorer witnessing some bizarre
tribal ritual.
If I could go back and give my thirteen-year-old self some advice, the main
thing I'd tell him would be to stick his head up and look around. I didn't
really grasp it at the time, but the whole world we lived in was as fake as a
Twinkie. Not just school, but the entire town. Why do people move to suburbia?
To have kids! So no wonder it seemed boring and sterile. The whole place was a
giant nursery, an artificial town created explicitly for the purpose of
breeding children.
Where I grew up, it felt as if there was nowhere to go, and nothing to do.
This was no accident. Suburbs are deliberately designed to exclude the outside
world, because it contains things that could endanger children.
And as for the schools, they were just holding pens within this fake world.
Officially the purpose of schools is to teach kids. In fact their primary
purpose is to keep kids locked up in one place for a big chunk of the day so
adults can get things done. And I have no problem with this: in a specialized
industrial society, it would be a disaster to have kids running around loose.
What bothers me is not that the kids are kept in prisons, but that (a) they
aren't told about it, and (b) the prisons are run mostly by the inmates. Kids
are sent off to spend six years memorizing meaningless facts in a world ruled
by a caste of giants who run after an oblong brown ball, as if this were the
most natural thing in the world. And if they balk at this surreal cocktail,
they're called misfits.
Life in this twisted world is stressful for the kids. And not just for the
nerds. Like any war, it's damaging even to the winners.
Adults can't avoid seeing that teenage kids are tormented. So why don't they
do something about it? Because they blame it on puberty. The reason kids are
so unhappy, adults tell themselves, is that monstrous new chemicals,
_hormones_ , are now coursing through their bloodstream and messing up
everything. There's nothing wrong with the system; it's just inevitable that
kids will be miserable at that age.
This idea is so pervasive that even the kids believe it, which probably
doesn't help. Someone who thinks his feet naturally hurt is not going to stop
to consider the possibility that he is wearing the wrong size shoes.
I'm suspicious of this theory that thirteen-year-old kids are intrinsically
messed up. If it's physiological, it should be universal. Are Mongol nomads
all nihilists at thirteen? I've read a lot of history, and I have not seen a
single reference to this supposedly universal fact before the twentieth
century. Teenage apprentices in the Renaissance seem to have been cheerful and
eager. They got in fights and played tricks on one another of course
(Michelangelo had his nose broken by a bully), but they weren't crazy.
As far as I can tell, the concept of the hormone-crazed teenager is coeval
with suburbia. I don't think this is a coincidence. I think teenagers are
driven crazy by the life they're made to lead. Teenage apprentices in the
Renaissance were working dogs. Teenagers now are neurotic lapdogs. Their
craziness is the craziness of the idle everywhere.
When I was in school, suicide was a constant topic among the smarter kids. No
one I knew did it, but several planned to, and some may have tried. Mostly
this was just a pose. Like other teenagers, we loved the dramatic, and suicide
seemed very dramatic. But partly it was because our lives were at times
genuinely miserable.
Bullying was only part of the problem. Another problem, and possibly an even
worse one, was that we never had anything real to work on. Humans like to
work; in most of the world, your work is your identity. And all the work we
did was [pointless](essay.html), or seemed so at the time.
At best it was practice for real work we might do far in the future, so far
that we didn't even know at the time what we were practicing for. More often
it was just an arbitrary series of hoops to jump through, words without
content designed mainly for testability. (The three main causes of the Civil
War were.... Test: List the three main causes of the Civil War.)
And there was no way to opt out. The adults had agreed among themselves that
this was to be the route to college. The only way to escape this empty life
was to submit to it.
Teenage kids used to have a more active role in society. In pre-industrial
times, they were all apprentices of one sort or another, whether in shops or
on farms or even on warships. They weren't left to create their own societies.
They were junior members of adult societies.
Teenagers seem to have respected adults more then, because the adults were the
visible experts in the skills they were trying to learn. Now most kids have
little idea what their parents do in their distant offices, and see no
connection (indeed, there is precious little) between schoolwork and the work
they'll do as adults.
And if teenagers respected adults more, adults also had more use for
teenagers. After a couple years' training, an apprentice could be a real help.
Even the newest apprentice could be made to carry messages or sweep the
workshop.
Now adults have no immediate use for teenagers. They would be in the way in an
office. So they drop them off at school on their way to work, much as they
might drop the dog off at a kennel if they were going away for the weekend.
What happened? We're up against a hard one here. The cause of this problem is
the same as the cause of so many present ills: specialization. As jobs become
more specialized, we have to train longer for them. Kids in pre-industrial
times started working at about 14 at the latest; kids on farms, where most
people lived, began far earlier. Now kids who go to college don't start
working full-time till 21 or 22. With some degrees, like MDs and PhDs, you may
not finish your training till 30.
Teenagers now are useless, except as cheap labor in industries like fast food,
which evolved to exploit precisely this fact. In almost any other kind of
work, they'd be a net loss. But they're also too young to be left
unsupervised. Someone has to watch over them, and the most efficient way to do
this is to collect them together in one place. Then a few adults can watch all
of them.
If you stop there, what you're describing is literally a prison, albeit a
part-time one. The problem is, many schools practically do stop there. The
stated purpose of schools is to educate the kids. But there is no external
pressure to do this well. And so most schools do such a bad job of teaching
that the kids don't really take it seriously-- not even the smart kids. Much
of the time we were all, students and teachers both, just going through the
motions.
In my high school French class we were supposed to read Hugo's _Les
Miserables._ I don't think any of us knew French well enough to make our way
through this enormous book. Like the rest of the class, I just skimmed the
Cliff's Notes. When we were given a test on the book, I noticed that the
questions sounded odd. They were full of long words that our teacher wouldn't
have used. Where had these questions come from? From the Cliff's Notes, it
turned out. The teacher was using them too. We were all just pretending.
There are certainly great public school teachers. The energy and imagination
of my fourth grade teacher, Mr. Mihalko, made that year something his students
still talk about, thirty years later. But teachers like him were individuals
swimming upstream. They couldn't fix the system.
In almost any group of people you'll find hierarchy. When groups of adults
form in the real world, it's generally for some common purpose, and the
leaders end up being those who are best at it. The problem with most schools
is, they have no purpose. But hierarchy there must be. And so the kids make
one out of nothing.
We have a phrase to describe what happens when rankings have to be created
without any meaningful criteria. We say that the situation _degenerates into a
popularity contest._ And that's exactly what happens in most American schools.
Instead of depending on some real test, one's rank depends mostly on one's
ability to increase one's rank. It's like the court of Louis XIV. There is no
external opponent, so the kids become one another's opponents.
When there is some real external test of skill, it isn't painful to be at the
bottom of the hierarchy. A rookie on a football team doesn't resent the skill
of the veteran; he hopes to be like him one day and is happy to have the
chance to learn from him. The veteran may in turn feel a sense of _noblesse
oblige_. And most importantly, their status depends on how well they do
against opponents, not on whether they can push the other down.
Court hierarchies are another thing entirely. This type of society debases
anyone who enters it. There is neither admiration at the bottom, nor _noblesse
oblige_ at the top. It's kill or be killed.
This is the sort of society that gets created in American secondary schools.
And it happens because these schools have no real purpose beyond keeping the
kids all in one place for a certain number of hours each day. What I didn't
realize at the time, and in fact didn't realize till very recently, is that
the twin horrors of school life, the cruelty and the boredom, both have the
same cause.
The mediocrity of American public schools has worse consequences than just
making kids unhappy for six years. It breeds a rebelliousness that actively
drives kids away from the things they're supposed to be learning.
Like many nerds, probably, I couldn't bring myself to read anything we'd been
assigned in school until years after high school. And I lost more than books. I
mistrusted words like "character" and "integrity" because they had been so
debased by adults. As they were used then, these words all seemed to mean the
same thing: obedience. The kids who got praised for these qualities tended to
be at best dull-witted prize bulls, and at worst facile schmoozers. If that
was what character and integrity were, I wanted no part of them.
The word I most misunderstood was "tact." As used by adults, it seemed to mean
keeping your mouth shut. I assumed it was derived from the same root as
"tacit" and "taciturn," and that it literally meant being quiet. I vowed that
I would never be tactful; they were never going to shut me up. In fact, it's
derived from the same root as "tactile," and what it means is to have a deft
touch. Tactful is the opposite of clumsy. I don't think I learned this until
college.
Nerds aren't the only losers in the popularity rat race. Nerds are unpopular
because they're distracted. There are other kids who deliberately opt out
because they're so disgusted with the whole process.
Teenage kids, even rebels, don't like to be alone, so when kids opt out of the
system, they tend to do it as a group. At the schools I went to, the focus of
rebellion was drug use, specifically marijuana. The kids in this tribe wore
black concert t-shirts and were called "freaks."
Freaks and nerds were allies, and there was a good deal of overlap between
them. Freaks were on the whole smarter than other kids, though never studying
(or at least never appearing to) was an important tribal value. I was more in
the nerd camp, but I was friends with a lot of freaks.
They used drugs, at least at first, for the social bonds they created. It was
something to do together, and because the drugs were illegal, it was a shared
badge of rebellion.
I'm not claiming that bad schools are the whole reason kids get into trouble
with drugs. After a while, drugs have their own momentum. No doubt some of the
freaks ultimately used drugs to escape from other problems-- trouble at home,
for example. But, in my school at least, the reason most kids _started_ using
drugs was rebellion. Fourteen-year-olds didn't start smoking pot because
they'd heard it would help them forget their problems. They started because
they wanted to join a different tribe.
Misrule breeds rebellion; this is not a new idea. And yet the authorities
still for the most part act as if drugs were themselves the cause of the
problem.
The real problem is the emptiness of school life. We won't see solutions till
adults realize that. The adults who may realize it first are the ones who were
themselves nerds in school. Do you want your kids to be as unhappy in eighth
grade as you were? I wouldn't. Well, then, is there anything we can do to fix
things? Almost certainly. There is nothing inevitable about the current
system. It has come about mostly by default.
Adults, though, are busy. Showing up for school plays is one thing. Taking on
the educational bureaucracy is another. Perhaps a few will have the energy to
try to change things. I suspect the hardest part is realizing that you can.
Nerds still in school should not hold their breath. Maybe one day a heavily
armed force of adults will show up in helicopters to rescue you, but they
probably won't be coming this month. Any immediate improvement in nerds' lives
is probably going to have to come from the nerds themselves.
Merely understanding the situation they're in should make it less painful.
Nerds aren't losers. They're just playing a different game, and a game much
closer to the one played in the real world. Adults know this. It's hard to
find successful adults now who don't claim to have been nerds in high school.
It's important for nerds to realize, too, that school is not life. School is a
strange, artificial thing, half sterile and half feral. It's all-encompassing,
like life, but it isn't the real thing. It's only temporary, and if you look,
you can see beyond it even while you're still in it.
If life seems awful to kids, it's neither because hormones are turning you all
into monsters (as your parents believe), nor because life actually is awful
(as you believe). It's because the adults, who no longer have any economic use
for you, have abandoned you to spend years cooped up together with nothing
real to do. _Any_ society of that type is awful to live in. You don't have to
look any further to explain why teenage kids are unhappy.
I've said some harsh things in this essay, but really the thesis is an
optimistic one-- that several problems we take for granted are in fact not
insoluble after all. Teenage kids are not inherently unhappy monsters. That
should be encouraging news to kids and adults both.
**Thanks** to Sarah Harlin, Trevor Blackwell, Robert Morris, Eric Raymond, and
Jackie Weicker for reading drafts of this essay, and Maria Daniels for
scanning photos.
July 2007
An investor wants to give you money for a certain percentage of your startup.
Should you take it? You're about to hire your first employee. How much stock
should you give him?
These are some of the hardest questions founders face. And yet both have the
same answer:
1/(1 - n)
Whenever you're trading stock in your company for anything, whether it's money
or an employee or a deal with another company, the test for whether to do it
is the same. You should give up n% of your company if what you trade it for
improves your average outcome enough that the (100 - n)% you have left is
worth more than the whole company was before.
For example, if an investor wants to buy half your company, how much does that
investment have to improve your average outcome for you to break even?
Obviously it has to double: if you trade half your company for something that
more than doubles the company's average outcome, you're net ahead. You have
half as big a share of something worth more than twice as much.
In the general case, if n is the fraction of the company you're giving up, the
deal is a good one if it makes the company worth more than 1/(1 - n).
For example, suppose Y Combinator offers to fund you in return for 7% of your
company. In this case, n is .07 and 1/(1 - n) is 1.075. So you should take the
deal if you believe we can improve your average outcome by more than 7.5%. If
we improve your outcome by 10%, you're net ahead, because the remaining .93
you hold is worth .93 x 1.1 = 1.023. [1]
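For readers who prefer code to algebra, here's a minimal sketch of the
breakeven test in Python. The function names are mine, not anything from the
essay; the code just restates 1/(1 - n) and the YC example above.

```python
def breakeven_multiplier(n):
    """How much a deal must multiply the company's average outcome
    for giving up a fraction n of it to break even: 1 / (1 - n)."""
    return 1 / (1 - n)

def worth_taking(n, improvement):
    """True if giving up fraction n for a deal that multiplies the
    average outcome by `improvement` leaves you net ahead."""
    return (1 - n) * improvement > 1

# The YC example: giving up 7% breaks even at ~7.5% improvement,
# and a 10% improvement puts you ahead (0.93 * 1.1 = 1.023).
print(breakeven_multiplier(0.07))   # 1.0752...
print(worth_taking(0.07, 1.10))     # True
```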
One of the things the equity equation shows us is that, financially at least,
taking money from a top VC firm can be a really good deal. Greg McAdoo from
Sequoia recently said at a YC dinner that when Sequoia invests alone they like
to take about 30% of a company. 1/.7 = 1.43, meaning that deal is worth taking
if they can improve your outcome by more than 43%. For the average startup,
that would be an extraordinary bargain. It would improve the average startup's
prospects by more than 43% just to be able to _say_ they were funded by
Sequoia, even if they never actually got the money.
The reason Sequoia is such a good deal is that the percentage of the company
they take is artificially low. They don't even try to get market price for
their investment; they limit their holdings to leave the founders enough stock
to feel the company is still theirs.
The catch is that Sequoia gets about 6000 business plans a year and funds
about 20 of them, so the odds of getting this great deal are 1 in 300. The
companies that make it through are not average startups.
Of course, there are other factors to consider in a VC deal. It's never just a
straight trade of money for stock. But if it were, taking money from a top
firm would generally be a bargain.
You can use the same formula when giving stock to employees, but it works in
the other direction. If i is the average outcome for the company with the
addition of some new person, then they're worth n such that i = 1/(1 - n).
Which means n = (i - 1)/i.
For example, suppose you're just two founders and you want to hire an
additional hacker who's so good you feel he'll increase the average outcome of
the whole company by 20%. n = (1.2 - 1)/1.2 = .167. So you'll break even if
you trade 16.7% of the company for him.
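Here's the same calculation run in the hiring direction, a hedged sketch using
the formula n = (i - 1)/i from the text (again, the function name is my own):

```python
def breakeven_equity_for_hire(i):
    """Largest fraction of the company you can trade for a hire who
    multiplies the average outcome by i: n = (i - 1) / i."""
    return (i - 1) / i

# The example from the text: a hacker who improves the outcome by 20%.
print(breakeven_equity_for_hire(1.2))   # 0.1666..., i.e. 16.7%
```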
That doesn't mean 16.7% is the right amount of stock to give him. Stock is not
the only cost of hiring someone: there's usually salary and overhead as well.
And if the company merely breaks even on the deal, there's no reason to do it.
I think to translate salary and overhead into stock you should multiply the
annual rate by about 1.5. Most startups grow fast or die; if you die you don't
have to pay the guy, and if you grow fast you'll be paying next year's salary
out of next year's valuation, which should be 3x this year's. If your
valuation grows 3x a year, the total cost in stock of a new hire's salary and
overhead is 1.5 years' cost at the present valuation. [2]
How much of an additional margin should the company need as the "activation
energy" for the deal? Since this is in effect the company's profit on a hire,
the market will determine that: if you're a hot opportunity, you can charge
more.
Let's run through an example. Suppose the company wants to make a "profit" of
50% on the new hire mentioned above. So subtract a third from 16.7% and we
have 11.1% as his "retail" price. Suppose further that he's going to cost $60k
a year in salary and overhead, x 1.5 = $90k total. If the company's valuation
is $2 million, $90k is 4.5%. 11.1% - 4.5% = an offer of 6.6%.
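Putting the whole worked example together, here's a sketch that reproduces the
6.6% offer. The function and parameter names are mine; the 1.5x salary
multiplier and the 50% margin come from the text above.

```python
def stock_offer(i, margin, annual_cost, valuation):
    """Sketch of the worked example: start from the breakeven equity
    for a hire who multiplies the outcome by i, discount it by the
    company's desired profit margin on the hire, then subtract the
    stock-equivalent of 1.5 years of salary and overhead."""
    breakeven = (i - 1) / i                     # 16.7% in the example
    retail = breakeven / (1 + margin)           # 11.1% at a 50% margin
    salary_in_stock = 1.5 * annual_cost / valuation   # 4.5% here
    return retail - salary_in_stock

# $60k/year in salary and overhead, $2M valuation, 50% margin.
print(stock_offer(1.2, 0.5, 60_000, 2_000_000))   # ~0.066, i.e. 6.6%
```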
Incidentally, notice how important it is for early employees to take little
salary. It comes right out of stock that could otherwise be given to them.
Obviously there is a great deal of play in these numbers. I'm not claiming
that stock grants can now be reduced to a formula. Ultimately you always have
to guess. But at least know what you're guessing. If you choose a number based
on your gut feel, or a table of typical grant sizes supplied by a VC firm,
understand what those are estimates of.
And more generally, when you make any decision involving equity, run it
through 1/(1 - n) to see if it makes sense. You should always feel richer
after trading equity. If the trade didn't increase the value of your remaining
shares enough to put you net ahead, you wouldn't have (or shouldn't have) done
it.
**Notes**
[1] This is why we can't believe anyone would think Y Combinator was a bad
deal. Does anyone really think we're so useless that in three months we can't
improve a startup's prospects by 7.5%?
[2] The obvious choice for your present valuation is the post-money valuation
of your last funding round. This probably undervalues the company, though,
because (a) unless your last round just happened, the company is presumably
worth more, and (b) the valuation of an early funding round usually reflects
some other contribution by the investors.
**Thanks** to Sam Altman, Trevor Blackwell, Paul Buchheit, Hutch Fishman,
David Hornik, Paul Kedrosky, Jessica Livingston, Gary Sabot, and Joshua
Schachter for reading drafts of this.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
August 2008
Raising money is the second hardest part of starting a startup. The hardest
part is making something people want: most startups that die, die because they
didn't do that. But the second biggest cause of death is probably the
difficulty of raising money. Fundraising is brutal.
One reason it's so brutal is simply the brutality of markets. People who've
spent most of their lives in schools or big companies may not have been
exposed to that. Professors and bosses usually feel some sense of
responsibility toward you; if you make a valiant effort and fail, they'll cut
you a break. Markets are less forgiving. Customers don't care how hard you
worked, only whether you solved their problems.
Investors evaluate startups the way customers evaluate products, not the way
bosses evaluate employees. If you're making a valiant effort and failing,
maybe they'll invest in your next startup, but not this one.
But raising money from investors is harder than selling to customers, because
there are so few of them. There's nothing like an efficient market. You're
unlikely to have more than 10 who are interested; it's difficult to talk to
more. So the randomness of any one investor's behavior can really affect you.
Problem number 3: investors are very random. All investors, including us, are
by ordinary standards incompetent. We constantly have to make decisions about
things we don't understand, and more often than not we're wrong.
And yet a lot is at stake. The amounts invested by different types of
investors vary from five thousand dollars to fifty million, but the amount
usually seems large for whatever type of investor it is. Investment decisions
are big decisions.
That combination—making big decisions about things they don't understand—tends
to make investors very skittish. VCs are notorious for leading founders on.
Some of the more unscrupulous do it deliberately. But even the most well-
intentioned investors can behave in a way that would seem crazy in everyday
life. One day they're full of enthusiasm and seem ready to write you a check
on the spot; the next they won't return your phone calls. They're not playing
games with you. They just can't make up their minds. [1]
If that weren't bad enough, these wildly fluctuating nodes are all linked
together. Startup investors all know one another, and (though they hate to
admit it) the biggest factor in their opinion of you is the opinion of other
investors. [2] Talk about a recipe for an unstable system. You get the
opposite of the damping that the fear/greed balance usually produces in
markets. No one is interested in a startup that's a "bargain" because everyone
else hates it.
So the inefficient market you get because there are so few players is
exacerbated by the fact that they act less than independently. The result is a
system like some kind of primitive, multi-celled sea creature, where you
irritate one extremity and the whole thing contracts violently.
Y Combinator is working to fix this. We're trying to increase the number of
investors just as we're increasing the number of startups. We hope that as the
number of both increases we'll get something more like an efficient market. As
t approaches infinity, Demo Day approaches an auction.
Unfortunately, t is still very far from infinity. What does a startup do now,
in the imperfect world we currently inhabit? The most important thing is not
to let fundraising get you down. Startups live or die on morale. If you let
the difficulty of raising money destroy your morale, it will become a self-
fulfilling prophecy.
**Bootstrapping (= Consulting)**
Some would-be founders may by now be thinking, why deal with investors at all?
If raising money is so painful, why do it?
One answer to that is obvious: because you need money to live on. It's a fine
idea in principle to finance your startup with its own revenues, but you can't
create instant customers. Whatever you make, you have to sell a certain amount
to break even. It will take time to grow your sales to that point, and it's
hard to predict, till you try, how long it will take.
We could not have bootstrapped Viaweb, for example. We charged quite a lot for
our software—about $140 per user per month—but it was at least a year before
our revenues would have covered even our paltry costs. We didn't have enough
saved to live on for a year.
If you factor out the "bootstrapped" companies that were actually funded by
their founders through savings or a day job, the remainder either (a) got
really lucky, which is hard to do on demand, or (b) began life as consulting
companies and gradually transformed themselves into product companies.
Consulting is the only option you can count on. But consulting is far from
free money. It's not as painful as raising money from investors, perhaps, but
the pain is spread over a longer period. Years, probably. And for many types
of startup, that delay could be fatal. If you're working on something so
unusual that no one else is likely to think of it, you can take your time.
Joshua Schachter gradually built Delicious on the side while working on Wall
Street. He got away with it because no one else realized it was a good idea.
But if you were building something as obviously necessary as online store
software at about the same time as Viaweb, and you were working on it on the
side while spending most of your time on client work, you were not in a good
position.
Bootstrapping sounds great in principle, but this apparently verdant territory
is one from which few startups emerge alive. The mere fact that bootstrapped
startups tend to be famous on that account should set off alarm bells. If it
worked so well, it would be the norm. [3]
Bootstrapping may get easier, because starting a company is getting cheaper.
But I don't think we'll ever reach the point where most startups can do
without outside funding. Technology tends to get dramatically cheaper, but
living expenses don't.
The upshot is, you can choose your pain: either the short, sharp pain of
raising money, or the chronic ache of consulting. For a given total amount of
pain, raising money is the better choice, because new technology is usually
more valuable now than later.
But although for most startups raising money will be the lesser evil, it's
still a pretty big evil—so big that it can easily kill you. Not merely in the
obvious sense that if you fail to raise money you might have to shut the
company down, but because the _process_ of raising money itself can kill you.
To survive it you need a set of techniques mostly orthogonal to the ones used
in convincing investors, just as mountain climbers need to know survival
techniques that are mostly orthogonal to those used in physically getting up
and down mountains.
**1\. Have low expectations.**
The reason raising money destroys so many startups' morale is not simply that
it's hard, but that it's so much harder than they expected. What kills you is
the disappointment. And the lower your expectations, the harder it is to be
disappointed.
Startup founders tend to be optimistic. This can work well in technology, at
least some of the time, but it's the wrong way to approach raising money.
Better to assume investors will always let you down. Acquirers too, while
we're at it. At YC one of our secondary mantras is "Deals fall through." No
matter what deal you have going on, assume it will fall through. The
predictive power of this simple rule is amazing.
There will be a tendency, as a deal progresses, to start to believe it will
happen, and then to depend on it happening. You must resist this. Tie yourself
to the mast. This is what kills you. Deals do not have a trajectory like most
other human interactions, where shared plans solidify linearly over time.
Deals often fall through at the last moment. Often the other party doesn't
really think about what they want till the last moment. So you can't use your
everyday intuitions about shared plans as a guide. When it comes to deals, you
have to consciously turn them off and become pathologically cynical.
This is harder to do than it sounds. It's very flattering when eminent
investors seem interested in funding you. It's easy to start to believe that
raising money will be quick and straightforward. But it hardly ever is.
**2\. Keep working on your startup.**
It sounds obvious to say that you should keep working on your startup while
raising money. Actually this is hard to do. Most startups don't manage to.
Raising money has a mysterious capacity to suck up all your attention. Even if
you only have one meeting a day with investors, somehow that one meeting will
burn up your whole day. It costs not just the time of the actual meeting, but
the time getting there and back, and the time preparing for it beforehand and
thinking about it afterward.
The best way to survive the distraction of meeting with investors is probably
to partition the company: to pick one founder to deal with investors while the
others keep the company going. This works better when a startup has 3 founders
than 2, and better when the leader of the company is not also the lead
developer. In the best case, the company keeps moving forward at about half
speed.
That's the best case, though. More often than not the company comes to a
standstill while raising money. And that is dangerous for so many reasons.
Raising money always takes longer than you expect. What seems like it's going
to be a 2 week interruption turns into a 4 month interruption. That can be
very demoralizing. And worse still, it can make you less attractive to
investors. They want to invest in companies that are dynamic. A company that
hasn't done anything new in 4 months doesn't seem dynamic, so they start to
lose interest. Investors rarely grasp this, but much of what they're
responding to when they lose interest in a startup is the damage done by their
own indecision.
The solution: put the startup first. Fit meetings with investors into the
spare moments in your development schedule, rather than doing development in
the spare moments between meetings with investors. If you keep the company
moving forward—releasing new features, increasing traffic, doing deals,
getting written about—those investor meetings are more likely to be
productive. Not just because your startup will seem more alive, but also
because it will be better for your own morale, which is one of the main ways
investors judge you.
**3\. Be conservative.**
As conditions get worse, the optimal strategy becomes more conservative. When
things go well you can take risks; when things are bad you want to play it
safe.
I advise approaching fundraising as if it were always going badly. The reason
is that between your ability to delude yourself and the wildly unstable nature
of the system you're dealing with, things probably either already are or could
easily become much worse than they seem.
What I tell most startups we fund is that if someone reputable offers you
funding on reasonable terms, take it. There have been startups that ignored
this advice and got away with it—startups that ignored a good offer in the
hope of getting a better one, and actually did. But in the same position I'd
give the same advice again. Who knows how many bullets were in the gun they
were playing Russian roulette with?
Corollary: if an investor seems interested, don't just let them sit. You can't
assume someone interested in investing will stay interested. In fact, you
can't even tell (_they_ can't even tell) if they're really interested till you
try to convert that interest into money. So if you have a hot prospect, either
close them now or write them off. And unless you already have enough funding,
that reduces to: close them now.
Startups don't win by getting great funding rounds, but by making great
products. So finish raising money and get back to work.
**4\. Be flexible.**
There are two questions VCs ask that you shouldn't answer: "Who else are you
talking to?" and "How much are you trying to raise?"
VCs don't expect you to answer the first question. They ask it just in case.
[4] They do seem to expect an answer to the second. But I don't think you
should just tell them a number. Not as a way to play games with them, but
because you shouldn't _have_ a fixed amount you need to raise.
The custom of a startup needing a fixed amount of funding is an obsolete one
left over from the days when startups were more expensive. A company that
needed to build a factory or hire 50 people obviously needed to raise a
certain minimum amount. But few technology startups are in that position
today.
We advise startups to tell investors there are several different routes they
could take depending on how much they raised. As little as $50k could pay for
food and rent for the founders for a year. A couple hundred thousand would let
them get office space and hire some smart people they know from school. A
couple million would let them really blow this thing out. The message (and not
just the message, but the fact) should be: we're going to succeed no matter
what. Raising more money just lets us do it faster.
If you're raising an angel round, the size of the round can even change on the
fly. In fact, it's just as well to make the round small initially, then expand
as needed, rather than trying to raise a large round and risk losing the
investors you already have if you can't raise the full amount. You may even
want to do a "rolling close," where the round has no predetermined size, but
instead you sell stock to investors one at a time as they say yes. That helps
break deadlocks, because you can start as soon as the first one is ready to
buy. [5]
**5\. Be independent.**
A startup with a couple founders in their early twenties can have expenses so
low that they could be profitable on as little as $2000 per month. That's
negligible as corporate revenues go, but the effect on your morale and your
bargaining position is anything but. At YC we use the phrase "ramen
profitable" to describe the situation where you're making just enough to pay
your living expenses. Once you cross into ramen profitable, everything
changes. You may still need investment to make it big, but you don't need it
this month.
You can't plan when you start a startup how long it will take to become
profitable. But if you find yourself in a position where a little more effort
expended on sales would carry you over the threshold of ramen profitable, do
it.
Investors like it when you're ramen profitable. It shows you've thought about
making money, instead of just working on amusing technical problems; it shows
you have the discipline to keep your expenses low; but above all, it means you
don't need them.
There is nothing investors like more than a startup that seems like it's going
to succeed even without them. Investors like it when they can help a startup,
but they don't like startups that would die without that help.
At YC we spend a lot of time trying to predict how the startups we've funded
will do, because we're trying to learn how to pick winners. We've now watched
the trajectories of so many startups that we're getting better at predicting
them. And when we're talking about startups we think are likely to succeed,
what we find ourselves saying is things like "Oh, those guys can take care of
themselves. They'll be fine." Not "those guys are really smart" or "those guys
are working on a great idea." [6] When we predict good outcomes for startups,
the qualities that come up in the supporting arguments are toughness,
adaptability, determination. Which means to the extent we're correct, those
are the qualities you need to win.
Investors know this, at least unconsciously. The reason they like it when you
don't need them is not simply that they like what they can't have, but because
that quality is what makes founders succeed.
[Sam Altman](http://www.youtube.com/watch?v=KhhId_WG7RA) has it. You could
parachute him into an island full of cannibals and come back in 5 years and
he'd be the king. If you're Sam Altman, you don't have to be profitable to
convey to investors that you'll succeed with or without them. (He wasn't, and
he did.) Not everyone has Sam's deal-making ability. I myself don't. But if
you don't, you can let the numbers speak for you.
**6\. Don't take rejection personally.**
Getting rejected by investors can make you start to doubt yourself. After all,
they're more experienced than you. If they think your startup is lame, aren't
they probably right?
Maybe, maybe not. The way to handle rejection is with precision. You shouldn't
simply ignore rejection. It might mean something. But you shouldn't
automatically get demoralized either.
To understand what rejection means, you have to understand first of all how
common it is. Statistically, the average VC is a rejection machine. David
Hornik, a partner at August, told me:
> The numbers for me ended up being something like 500 to 800 plans received
> and read, somewhere between 50 and 100 initial 1 hour meetings held, about
> 20 companies that I got interested in, about 5 that I got serious about and
> did a bunch of work, 1 to 2 deals done in a year. So the odds are against
> you. You may be a great entrepreneur, working on interesting stuff, etc. but
> it is still incredibly unlikely that you get funded.
This is less true with angels, but VCs reject practically everyone. The
structure of their business means a partner does at most 2 new investments a
year, no matter how many good startups approach him.
In addition to the odds being terrible, the average investor is, as I
mentioned, a pretty bad judge of startups. It's harder to judge startups than
most other things, because great startup ideas tend to seem wrong. A good
startup idea has to be not just good but novel. And to be both good and novel,
an idea probably has to seem bad to most people, or someone would already be
doing it and it wouldn't be novel.
That makes judging startups harder than most other things one judges. You have
to be an intellectual contrarian to be a good startup investor. That's a
problem for VCs, most of whom are not particularly imaginative. VCs are mostly
money guys, not people who make things. [7] Angels are better at appreciating
novel ideas, because most were founders themselves.
So when you get a rejection, use the data that's in it, and not what's not. If
an investor gives you specific reasons for not investing, look at your startup
and ask if they're right. If they're real problems, fix them. But don't just
take their word for it. You're supposed to be the domain expert; you have to
decide.
Though a rejection doesn't necessarily tell you anything about your startup,
it does suggest your pitch could be improved. Figure out what's not working
and change it. Don't just think "investors are stupid." Often they are, but
figure out precisely where you lose them.
Don't let rejections pile up as a depressing, undifferentiated heap. Sort them
and analyze them, and then instead of thinking "no one likes us," you'll know
precisely how big a problem you have, and what to do about it.
**7\. Be able to downshift into consulting (if appropriate).**
Consulting, as I mentioned, is a dangerous way to finance a startup. But it's
better than dying. It's a bit like anaerobic respiration: not the optimum
solution for the long term, but it can save you from an immediate threat. If
you're having trouble raising money from investors at all, it could save you
to be able to shift toward consulting.
This works better for some startups than others. It wouldn't have been a
natural fit for, say, Google, but if your company was making software for
building web sites, you could degrade fairly gracefully into consulting by
building sites for clients with it.
So long as you were careful not to get sucked permanently into consulting,
this could even have advantages. You'd understand your users well if you were
using the software for them. Plus as a consulting company you might be able to
get big-name users using your software that you wouldn't have gotten as a
product company.
At Viaweb we were forced to operate like a consulting company initially,
because we were so desperate for users that we'd offer to build merchants'
sites for them if they'd sign up. But we never charged for such work, because
we didn't want them to start treating us like actual consultants, and calling
us every time they wanted something changed on their site. We knew we had to
stay a product company, because only that scales.
**8\. Avoid inexperienced investors.**
Though novice investors seem unthreatening, they can be the most dangerous
sort, because they're so nervous. Especially in proportion to the amount they
invest. Raising $20,000 from a first-time angel investor can be as much work
as raising $2 million from a VC fund.
Their lawyers are generally inexperienced too. But while the investors can
admit they don't know what they're doing, their lawyers can't. One YC startup
negotiated terms for a tiny round with an angel, only to receive a 70-page
agreement from his lawyer. And since the lawyer could never admit, in front of
his client, that he'd screwed up, he instead had to insist on retaining all
the draconian terms in it, so the deal fell through.
Of course, someone has to take money from novice investors, or there would
never be any experienced ones. But if you do, either (a) drive the process
yourself, including supplying the
[paperwork](http://ycombinator.com/seriesaa.html), or (b) use them only to
fill up a larger round led by someone else.
**9\. Know where you stand.**
The most dangerous thing about investors is their indecisiveness. The worst
case scenario is the long no, the no that comes after months of meetings.
Rejections from investors are like design flaws: inevitable, but much less
costly if you discover them early.
So while you're talking to investors, constantly look for signs of where you
stand. How likely are they to offer you a term sheet? What do they have to be
convinced of first? You shouldn't necessarily always be asking these questions
outright—that could get annoying—but you should always be collecting data
about them.
Investors tend to resist committing except to the extent you push them to.
It's in their interest to collect the maximum amount of information while
making the minimum number of decisions. The best way to force them to act is,
of course, competing investors. But you can also apply some force by focusing
the discussion: by asking what specific questions they need answered to make
up their minds, and then answering them. If you get through several obstacles
and they keep raising new ones, assume that ultimately they're going to flake.
You have to be disciplined when collecting data about investors' intentions.
Otherwise their desire to lead you on will combine with your own desire to be
led on to produce completely inaccurate impressions.
Use the data to weight your strategy. You'll probably be talking to several
investors. Focus on the ones that are most likely to say yes. The value of a
potential investor is a combination of how good it would be if they said yes,
and how likely they are to say it. Put the most weight on the second factor.
Partly because the most important quality in an investor is simply investing.
But also because, as I mentioned, the biggest factor in investors' opinion of
you is other investors' opinion of you. If you're talking to several investors
and you manage to get one over the threshold of saying yes, it will make the
others much more interested. So you're not sacrificing the lukewarm investors
if you focus on the hot ones; convincing the hot investors is the best way to
convince the lukewarm ones.
**Future**
I'm hopeful things won't always be so awkward. I hope that as startups get
cheaper and the number of investors increases, raising money will become, if
not easy, at least straightforward.
In the meantime, the brokenness of the funding process offers a big
opportunity. Most investors have no idea how dangerous they are. They'd be
surprised to hear that raising money from them is something that has to be
treated as a threat to a company's survival. They just think they need a
little more information to make up their minds. They don't get that there are
10 other investors who also want a little more information, and that the
process of talking to them all can bring a startup to a standstill for months.
Because investors don't understand the cost of dealing with them, they don't
realize how much room there is for a potential competitor to undercut them. I
know from my own experience how much faster investors could decide, because
we've brought our own time down to 20 minutes (5 minutes of reading an
application plus a 10 minute interview plus 5 minutes of discussion). If you
were investing more money you'd want to take longer, of course. But if we can
decide in 20 minutes, should it take anyone longer than a couple days?
Opportunities like this don't sit unexploited forever, even in an industry as
conservative as venture capital. So either existing investors will start to
make up their minds faster, or new investors will emerge who do.
In the meantime founders have to treat raising money as a dangerous process.
Fortunately, I can fix the biggest danger right here. The biggest danger is
surprise. It's that startups will underestimate the difficulty of raising
money—that they'll cruise through all the initial steps, but when they turn to
raising money they'll find it surprisingly hard, get demoralized, and give up.
So I'm telling you in advance: raising money is hard.
**Notes**
[1] When investors can't make up their minds, they sometimes describe it as if
it were a property of the startup. "You're too early for us," they sometimes
say. But which of them, if they were taken back in a time machine to the hour
Google was founded, wouldn't offer to invest at any valuation the founders
chose? An hour old is not too early if it's the right startup. What "you're
too early" really means is "we can't figure out yet whether you'll succeed."
[2] Investors influence one another both directly and indirectly. They
influence one another directly through the "buzz" that surrounds a hot
startup. But they also influence one another indirectly _through the
founders._ When a lot of investors are interested in you, it increases your
confidence in a way that makes you much more attractive to investors.
No VC will admit they're influenced by buzz. Some genuinely aren't. But there
are few who can say they're not influenced by confidence.
[3] One VC who read this essay wrote:
"We try to avoid companies that got bootstrapped with consulting. It creates
very bad behaviors/instincts that are hard to erase from a company's culture."
[4] The optimal way to answer the first question is to say that it would be
improper to name names, while simultaneously implying that you're talking to a
bunch of other VCs who are all about to give you term sheets. If you're the
sort of person who understands how to do that, go ahead. If not, don't even
try. Nothing annoys VCs more than clumsy efforts to manipulate them.
[5] The disadvantage of expanding a round on the fly is that the valuation is
fixed at the start, so if you get a sudden rush of interest, you may have to
decide between turning some investors away and selling more of the company
than you meant to. That's a good problem to have, however.
[6] I wouldn't say that intelligence doesn't matter in startups. We're only
comparing YC startups, who've already made it over a certain threshold.
[7] But not all are. Though most VCs are suits at heart, the most successful
ones tend not to be. Oddly enough, the best VCs tend to be the least VC-like.
**Thanks** to Trevor Blackwell, David Hornik, Jessica Livingston, Robert
Morris, and Fred Wilson for reading drafts of this.
June 2021
It might not seem there's much to learn about how to work hard. Anyone who's
been to school knows what it entails, even if they chose not to do it. There
are 12 year olds who work amazingly hard. And yet when I ask if I know more
about working hard now than when I was in school, the answer is definitely
yes.
One thing I know is that if you want to do great things, you'll have to work
very hard. I wasn't sure of that as a kid. Schoolwork varied in difficulty;
one didn't always have to work super hard to do well. And some of the things
famous adults did, they seemed to do almost effortlessly. Was there, perhaps,
some way to evade hard work through sheer brilliance? Now I know the answer to
that question. There isn't.
The reason some subjects seemed easy was that my school had low standards. And
the reason famous adults seemed to do things effortlessly was years of
practice; they made it look easy.
Of course, those famous adults usually had a lot of natural ability too. There
are three ingredients in great work: natural ability, practice, and effort.
You can do pretty well with just two, but to do the best work you need all
three: you need great natural ability _and_ to have practiced a lot _and_ to
be trying very hard. [1]
Bill Gates, for example, was among the smartest people in business in his era,
but he was also among the hardest working. "I never took a day off in my
twenties," he said. "Not one." It was similar with Lionel Messi. He had great
natural ability, but when his youth coaches talk about him, what they remember
is not his talent but his dedication and his desire to win. P. G. Wodehouse
would probably get my vote for best English writer of the 20th century, if I
had to choose. Certainly no one ever made it look easier. But no one ever
worked harder. At 74, he wrote
> with each new book of mine I have, as I say, the feeling that this time I
> have picked a lemon in the garden of literature. A good thing, really, I
> suppose. Keeps one up on one's toes and makes one rewrite every sentence ten
> times. Or in many cases twenty times.
Sounds a bit extreme, you think. And yet Bill Gates sounds even more extreme.
Not one day off in ten years? These two had about as much natural ability as
anyone could have, and yet they also worked about as hard as anyone could
work. You need both.
That seems so obvious, and yet in practice we find it slightly hard to grasp.
There's a faint xor between talent and hard work. It comes partly from popular
culture, where it seems to run very deep, and partly from the fact that the
outliers are so rare. If great talent and great drive are both rare, then
people with both are rare squared. Most people you meet who have a lot of one
will have less of the other. But you'll need both if you want to be an outlier
yourself. And since you can't really change how much natural talent you have,
in practice doing great work, insofar as you can, reduces to working very
hard.
It's straightforward to work hard if you have clearly defined, externally
imposed goals, as you do in school. There is some technique to it: you have to
learn not to lie to yourself, not to procrastinate (which is a form of lying
to yourself), not to get distracted, and not to give up when things go wrong.
But this level of discipline seems to be within the reach of quite young
children, if they want it.
What I've learned since I was a kid is how to work toward goals that are
neither clearly defined nor externally imposed. You'll probably have to learn
both if you want to do really great things.
The most basic level of which is simply to feel you should be working without
anyone telling you to. Now, when I'm not working hard, alarm bells go off. I
can't be sure I'm getting anywhere when I'm working hard, but I can be sure
I'm getting nowhere when I'm not, and it feels awful. [2]
There wasn't a single point when I learned this. Like most little kids, I
enjoyed the feeling of achievement when I learned or did something new. As I
grew older, this morphed into a feeling of disgust when I wasn't achieving
anything. The one precisely dateable landmark I have is when I stopped
watching TV, at age 13.
Several people I've talked to remember getting serious about work around this
age. When I asked Patrick Collison when he started to find idleness
distasteful, he said
> I think around age 13 or 14. I have a clear memory from around then of
> sitting in the sitting room, staring outside, and wondering why I was
> wasting my summer holiday.
Perhaps something changes at adolescence. That would make sense.
Strangely enough, the biggest obstacle to getting serious about work was
probably school, which made work (what they called work) seem boring and
pointless. I had to learn what real work was before I could wholeheartedly
desire to do it. That took a while, because even in college a lot of the work
is pointless; there are entire departments that are pointless. But as I
learned the shape of real work, I found that my desire to do it slotted into
it as if they'd been made for each other.
I suspect most people have to learn what work is before they can love it.
Hardy wrote eloquently about this in _A Mathematician's Apology_:
> I do not remember having felt, as a boy, any _passion_ for mathematics, and
> such notions as I may have had of the career of a mathematician were far
> from noble. I thought of mathematics in terms of examinations and
> scholarships: I wanted to beat other boys, and this seemed to be the way in
> which I could do so most decisively.
He didn't learn what math was really about till part way through college, when
he read Jordan's _Cours d'analyse_.
> I shall never forget the astonishment with which I read that remarkable
> work, the first inspiration for so many mathematicians of my generation, and
> learnt for the first time as I read it what mathematics really meant.
There are two separate kinds of fakeness you need to learn to discount in
order to understand what real work is. One is the kind Hardy encountered in
school. Subjects get distorted when they're adapted to be taught to kids —
often so distorted that they're nothing like the work done by actual
practitioners. [3] The other kind of fakeness is intrinsic to certain types of
work. Some types of work are inherently bogus, or at best mere busywork.
There's a kind of solidity to real work. It's not all writing the
_Principia_, but it all feels necessary. That's a vague criterion, but it's deliberately
vague, because it has to cover a lot of different types. [4]
Once you know the shape of real work, you have to learn how many hours a day
to spend on it. You can't solve this problem by simply working every waking
hour, because in many kinds of work there's a point beyond which the quality
of the result will start to decline.
That limit varies depending on the type of work and the person. I've done
several different kinds of work, and the limits were different for each. My
limit for the harder types of writing or programming is about five hours a
day. Whereas when I was running a startup, I could work all the time. At least
for the three years I did it; if I'd kept going much longer, I'd probably have
needed to take occasional vacations. [5]
The only way to find the limit is by crossing it. Cultivate a sensitivity to
the quality of the work you're doing, and then you'll notice if it decreases
because you're working too hard. Honesty is critical here, in both directions:
you have to notice when you're being lazy, but also when you're working too
hard. And if you think there's something admirable about working too hard, get
that idea out of your head. You're not merely getting worse results, but
getting them because you're showing off — if not to other people, then to
yourself. [6]
Finding the limit of working hard is a constant, ongoing process, not
something you do just once. Both the difficulty of the work and your ability
to do it can vary hour to hour, so you need to be constantly judging both how
hard you're trying and how well you're doing.
Trying hard doesn't mean constantly pushing yourself to work, though. There
may be some people who do, but I think my experience is fairly typical, and I
only have to push myself occasionally when I'm starting a project or when I
encounter some sort of check. That's when I'm in danger of procrastinating.
But once I get rolling, I tend to keep going.
What keeps me going depends on the type of work. When I was working on Viaweb,
I was driven by fear of failure. I barely procrastinated at all then, because
there was always something that needed doing, and if I could put more distance
between me and the pursuing beast by doing it, why wait? [7] Whereas what
drives me now, writing essays, is the flaws in them. Between essays I fuss for
a few days, like a dog circling while it decides exactly where to lie down.
But once I get started on one, I don't have to push myself to work, because
there's always some error or omission already pushing me.
I do make some amount of effort to focus on important topics. Many problems
have a hard core at the center, surrounded by easier stuff at the edges.
Working hard means aiming toward the center to the extent you can. Some days
you may not be able to; some days you'll only be able to work on the easier,
peripheral stuff. But you should always be aiming as close to the center as
you can without stalling.
The bigger question of what to do with your life is one of these problems with
a hard core. There are important problems at the center, which tend to be
hard, and less important, easier ones at the edges. So as well as the small,
daily adjustments involved in working on a specific problem, you'll
occasionally have to make big, lifetime-scale adjustments about which type of
work to do. And the rule is the same: working hard means aiming toward the
center — toward the most ambitious problems.
By center, though, I mean the actual center, not merely the current consensus
about the center. The consensus about which problems are most important is
often mistaken, both in general and within specific fields. If you disagree
with it, and you're right, that could represent a valuable opportunity to do
something new.
The more ambitious types of work will usually be harder, but although you
should not be in denial about this, neither should you treat difficulty as an
infallible guide in deciding what to do. If you discover some ambitious type
of work that's a bargain in the sense of being easier for you than other
people, either because of the abilities you happen to have, or because of some
new way you've found to approach it, or simply because you're more excited
about it, by all means work on that. Some of the best work is done by people
who find an easy way to do something hard.
As well as learning the shape of real work, you need to figure out which kind
you're suited for. And that doesn't just mean figuring out which kind your
natural abilities match the best; it doesn't mean that if you're 7 feet tall,
you have to play basketball. What you're suited for depends not just on your
talents but perhaps even more on your interests. A [_deep
interest_](genius.html) in a topic makes people work harder than any amount of
discipline can.
It can be harder to discover your interests than your talents. There are fewer
types of talent than interest, and they start to be judged early in childhood,
whereas interest in a topic is a subtle thing that may not mature till your
twenties, or even later. The topic may not even exist earlier. Plus there are
some powerful sources of error you need to learn to discount. Are you really
interested in x, or do you want to work on it because you'll make a lot of
money, or because other people will be impressed with you, or because your
parents want you to? [8]
The difficulty of figuring out what to work on varies enormously from one
person to another. That's one of the most important things I've learned about
work since I was a kid. As a kid, you get the impression that everyone has a
calling, and all they have to do is figure out what it is. That's how it works
in movies, and in the streamlined biographies fed to kids. Sometimes it works
that way in real life. Some people figure out what to do as children and just
do it, like Mozart. But others, like Newton, turn restlessly from one kind of
work to another. Maybe in retrospect we can identify one as their calling — we
can wish Newton spent more time on math and physics and less on alchemy and
theology — but this is an [_illusion_](disc.html) induced by hindsight bias.
There was no voice calling to him that he could have heard.
So while some people's lives converge fast, there will be others whose lives
never converge. And for these people, figuring out what to work on is not so
much a prelude to working hard as an ongoing part of it, like one of a set of
simultaneous equations. For these people, the process I described earlier has
a third component: along with measuring both how hard you're working and how
well you're doing, you have to think about whether you should keep working in
this field or switch to another. If you're working hard but not getting good
enough results, you should switch. It sounds simple expressed that way, but in
practice it's very difficult. You shouldn't give up on the first day just
because you work hard and don't get anywhere. You need to give yourself time
to get going. But how much time? And what should you do if work that was going
well stops going well? How much time do you give yourself then? [9]
What even counts as good results? That can be really hard to decide. If you're
exploring an area few others have worked in, you may not even know what good
results look like. History is full of examples of people who misjudged the
importance of what they were working on.
The best test of whether it's worthwhile to work on something is whether you
find it interesting. That may sound like a dangerously subjective measure, but
it's probably the most accurate one you're going to get. You're the one
working on the stuff. Who's in a better position than you to judge whether
it's important, and what's a better predictor of its importance than whether
it's interesting?
For this test to work, though, you have to be honest with yourself. Indeed,
that's the most striking thing about the whole question of working hard: how
at each point it depends on being honest with yourself.
Working hard is not just a dial you turn up to 11. It's a complicated, dynamic
system that has to be tuned just right at each point. You have to understand
the shape of real work, see clearly what kind you're best suited for, aim as
close to the true core of it as you can, accurately judge at each moment both
what you're capable of and how you're doing, and put in as many hours each day
as you can without harming the quality of the result. This network is too
complicated to trick. But if you're consistently honest and clear-sighted, it
will automatically assume an optimal shape, and you'll be productive in a way
few people are.
**Notes**
[1] In "The Bus Ticket Theory of Genius" I said the three ingredients in great
work were natural ability, determination, and interest. That's the formula in
the preceding stage; determination and interest yield practice and effort.
[2] I mean this at a resolution of days, not hours. You'll often get somewhere
while not working in the sense that the solution to a problem comes to you
while taking a [_shower_](top.html), or even in your sleep, but only because
you were working hard on it the day before.
It's good to go on vacation occasionally, but when I go on vacation, I like to
learn new things. I wouldn't like just sitting on a beach.
[3] The thing kids do in school that's most like the real version is sports.
Admittedly because many sports originated as games played in schools. But in
this one area, at least, kids are doing exactly what adults do.
In the average American high school, you have a choice of pretending to do
something serious, or seriously doing something pretend. Arguably the latter
is no worse.
[4] Knowing what you want to work on doesn't mean you'll be able to. Most
people have to spend a lot of their time working on things they don't want to,
especially early on. But if you know what you want to do, you at least know
what direction to nudge your life in.
[5] The lower time limits for intense work suggest a solution to the problem
of having less time to work after you have kids: switch to harder problems. In
effect I did that, though not deliberately.
[6] Some cultures have a tradition of performative hard work. I don't love
this idea, because (a) it makes a parody of something important and (b) it
causes people to wear themselves out doing things that don't matter. I don't
know enough to say for sure whether it's net good or bad, but my guess is bad.
[7] One of the reasons people work so hard on startups is that startups can
fail, and when they do, that failure tends to be both decisive and
conspicuous.
[8] It's ok to work on something to make a lot of money. You need to solve the
money problem somehow, and there's nothing wrong with doing that efficiently
by trying to make a lot at once. I suppose it would even be ok to be
interested in money for its own sake; whatever floats your boat. Just so long
as you're conscious of your motivations. The thing to avoid is _unconsciously_
letting the need for money warp your ideas about what kind of work you find
most interesting.
[9] Many people face this question on a smaller scale with individual
projects. But it's easier both to recognize and to accept a dead end in a
single project than to abandon some type of work entirely. The more determined
you are, the harder it gets. Like a Spanish Flu victim, you're fighting your
own immune system: instead of giving up, you tell yourself, I should just try
harder. And who can say you're not right?
**Thanks** to Trevor Blackwell, John Carmack, John Collison, Patrick Collison,
Robert Morris, Geoff Ralston, and Harj Taggar for reading drafts of this.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
Watch how this essay was [written](https://byronm.com/13sentences.html).
February 2009
One of the things I always tell startups is a principle I learned from Paul
Buchheit: it's better to make a few people really happy than to make a lot of
people semi-happy. I was saying recently to a reporter that if I could only
tell startups 10 things, this would be one of them. Then I thought: what would
the other 9 be?
When I made the list there turned out to be 13:
**1\. Pick good cofounders.**
Cofounders are for a startup what location is for real estate. You can change
anything about a house except where it is. In a startup you can change your
idea easily, but changing your cofounders is hard. [1] And the success of a
startup is almost always a function of its founders.
**2\. Launch fast.**
The reason to launch fast is not so much that it's critical to get your
product to market early, but that you haven't really started working on it
till you've launched. Launching teaches you what you should have been
building. Till you know that you're wasting your time. So the main value of
whatever you launch with is as a pretext for engaging users.
**3\. Let your idea evolve.**
This is the second half of launching fast. Launch fast and iterate. It's a big
mistake to treat a startup as if it were merely a matter of implementing some
brilliant initial idea. As in an essay, most of the ideas appear in the
implementing.
**4\. Understand your users.**
You can envision the wealth created by a startup as a rectangle, where one
side is the number of users and the other is how much you improve their lives.
[2] The second dimension is the one you have most control over. And indeed,
the growth in the first will be driven by how well you do in the second. As in
science, the hard part is not answering questions but asking them: the hard
part is seeing something new that users lack. The better you understand them
the better the odds of doing that. That's why so many successful startups make
something the founders needed.
**5\. Better to make a few users love you than a lot ambivalent.**
Ideally you want to make large numbers of users love you, but you can't expect
to hit that right away. Initially you have to choose between satisfying all
the needs of a subset of potential users, or satisfying a subset of the needs
of all potential users. Take the first. It's easier to expand userwise than
satisfactionwise. And perhaps more importantly, it's harder to lie to
yourself. If you think you're 85% of the way to a great product, how do you
know it's not 70%? Or 10%? Whereas it's easy to know how many users you have.
**6\. Offer surprisingly good customer service.**
Customers are used to being maltreated. Most of the companies they deal with
are quasi-monopolies that get away with atrocious customer service. Your own
ideas about what's possible have been unconsciously lowered by such
experiences. Try making your customer service not merely good, but
[surprisingly good](http://www.diaryofawebsite.com/blog/2008/07/wufoo-and-the-art-of-customer-service/). Go out of your way to make people happy. They'll be
overwhelmed; you'll see. In the earliest stages of a startup, it pays to offer
customer service on a level that wouldn't scale, because it's a way of
learning about your users.
**7\. You make what you measure.**
I learned this one from Joe Kraus. [3] Merely measuring something has an
uncanny tendency to improve it. If you want to make your user numbers go up,
put a big piece of paper on your wall and every day plot the number of users.
You'll be delighted when it goes up and disappointed when it goes down. Pretty
soon you'll start noticing what makes the number go up, and you'll start to do
more of that. Corollary: be careful what you measure.
**8\. Spend little.**
I can't emphasize enough how important it is for a startup to be cheap. Most
startups fail before they make something people want, and the most common form
of failure is running out of money. So being cheap is (almost) interchangeable
with iterating rapidly. [4] But it's more than that. A culture of cheapness
keeps companies young in something like the way exercise keeps people young.
**9\. Get ramen profitable.**
"Ramen profitable" means a startup makes just enough to pay the founders'
living expenses. It's not rapid prototyping for business models (though it can
be), but more a way of hacking the investment process. Once you cross over
into ramen profitable, it completely changes your relationship with investors.
It's also great for morale.
**10\. Avoid distractions.**
Nothing kills startups like distractions. The worst type are those that pay
money: day jobs, consulting, profitable side-projects. The startup may have
more long-term potential, but you'll always interrupt working on it to answer
calls from people paying you now. Paradoxically,
[fundraising](fundraising.html) is this type of distraction, so try to
minimize that too.
**11\. Don't get demoralized.**
Though the immediate cause of death in a startup tends to be running out of
money, the underlying cause is usually lack of focus. Either the company is
run by stupid people (which can't be fixed with advice) or the people are
smart but got demoralized. Starting a startup is a huge moral weight.
Understand this and make a conscious effort not to be ground down by it, just
as you'd be careful to bend at the knees when picking up a heavy box.
**12\. Don't give up.**
Even if you get demoralized, [don't give up](die.html). You can get
surprisingly far by just not giving up. This isn't true in all fields. There
are a lot of people who couldn't become good mathematicians no matter how long
they persisted. But startups aren't like that. Sheer effort is usually enough,
so long as you keep morphing your idea.
**13\. Deals fall through.**
One of the most useful skills we learned from Viaweb was not getting our hopes
up. We probably had 20 deals of various types fall through. After the first 10
or so we learned to treat deals as background processes that we should ignore
till they terminated. It's very dangerous to morale to start to depend on
deals closing, not just because they so often don't, but because it makes them
less likely to.
Having gotten it down to 13 sentences, I asked myself which I'd choose if I
could only keep one.
Understand your users. That's the key. The essential task in a startup is to
create wealth; the dimension of wealth you have most control over is how much
you improve users' lives; and the hardest part of that is knowing what to make
for them. Once you know what to make, it's mere effort to make it, and most
decent hackers are capable of that.
Understanding your users is part of half the principles in this list. That's
the reason to launch early, to understand your users. Evolving your idea is
the embodiment of understanding your users. Understanding your users well will
tend to push you toward making something that makes a few people deeply happy.
The most important reason for having surprisingly good customer service is
that it helps you understand your users. And understanding your users will
even ensure your morale, because when everything else is collapsing around
you, having just ten users who love you will keep you going.
**Notes**
[1] Strictly speaking it's impossible without a time machine.
[2] In practice it's more like a ragged comb.
[3] Joe thinks one of the founders of Hewlett Packard said it first, but he
doesn't remember which.
[4] They'd be interchangeable if markets stood still. Since they don't,
working twice as fast is better than having twice as much time.
March 2005
A couple months ago I got an email from a recruiter asking if I was interested
in being a "technologist in residence" at a new venture capital fund. I think
the idea was to play Karl Rove to the VCs' George Bush.
I considered it for about four seconds. Work for a VC fund? Ick.
One of my most vivid memories from our startup is going to visit Greylock, the
famous Boston VCs. They were the most arrogant people I've met in my life. And
I've met a lot of arrogant people. [1]
I'm not alone in feeling this way, of course. Even a VC friend of mine
dislikes VCs. "Assholes," he says.
But lately I've been learning more about how the VC world works, and a few
days ago it hit me that there's a reason VCs are the way they are. It's not so
much that the business attracts jerks, or even that the power they wield
corrupts them. The real problem is the way they're paid.
The problem with VC funds is that they're _funds_. Like the managers of mutual
funds or hedge funds, VCs get paid a percentage of the money they manage:
about 2% a year in management fees, plus a percentage of the gains. So they
want the fund to be huge-- hundreds of millions of dollars, if possible. But
that means each partner ends up being responsible for investing a lot of
money. And since one person can only manage so many deals, each deal has to be
for multiple millions of dollars.
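To make that arithmetic concrete, here's a minimal sketch with made-up numbers; the fund size, partner count, and deals per partner are all assumptions, not figures about any real firm:

```python
# Why VC deals have to be big: a back-of-the-envelope sketch.
# All three inputs are illustrative assumptions.

fund_size = 400_000_000   # dollars under management
partners = 8              # general partners who must deploy it
deals_per_partner = 10    # roughly how many boards one person can handle

capital_per_partner = fund_size / partners
deal_size = capital_per_partner / deals_per_partner

print(f"each partner must invest: ${capital_per_partner:,.0f}")
print(f"so each deal is about:    ${deal_size:,.0f}")
# each partner must invest: $50,000,000
# so each deal is about:    $5,000,000
```

The exact numbers don't matter; the point is that the deal size is dictated by the structure of the fund, not by what any given startup needs.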
This turns out to explain nearly all the characteristics of VCs that founders
hate.
It explains why VCs take so agonizingly long to make up their minds, and why
their due diligence feels like a body cavity search. [2] With so much at
stake, they have to be paranoid.
It explains why they steal your ideas. Every founder knows that VCs will tell
your secrets to your competitors if they end up investing in them. It's not
unheard of for VCs to meet you when they have no intention of funding you,
just to pick your brain for a competitor. This prospect makes naive founders
clumsily secretive. Experienced founders treat it as a cost of doing business.
Either way it sucks. But again, the only reason VCs are so sneaky is the giant
deals they do. With so much at stake, they have to be devious.
It explains why VCs tend to interfere in the companies they invest in. They
want to be on your board not just so that they can advise you, but so that
they can watch you. Often they even install a new CEO. Yes, he may have
extensive business experience. But he's also their man: these newly installed
CEOs always play something of the role of a political commissar in a Red Army
unit. With so much at stake, VCs can't resist micromanaging you.
The huge investments themselves are something founders would dislike, if they
realized how damaging they can be. VCs don't invest $x million because that's
the amount you need, but because that's the amount the structure of their
business requires them to invest. Like steroids, these sudden huge investments
can do more harm than good. Google survived enormous VC funding because it
could legitimately absorb large amounts of money. They had to buy a lot of
servers and a lot of bandwidth to crawl the whole Web. Less fortunate startups
just end up hiring armies of people to sit around having meetings.
In principle you could take a huge VC investment, put it in treasury bills,
and continue to operate frugally. You just try it.
And of course giant investments mean giant valuations. They have to, or
there's not enough stock left to keep the founders interested. You might think
a high valuation is a great thing. Many founders do. But you can't eat paper.
You can't benefit from a high valuation unless you can somehow achieve what
those in the business call a "liquidity event," and the higher your
valuation, the narrower your options for doing that. Many a founder would be
happy to sell his company for $15 million, but VCs who've just invested at a
pre-money valuation of $8 million won't hear of that. You're rolling the dice
again, whether you like it or not.
Back in 1997, one of our competitors raised $20 million in a single round of
VC funding. This was at the time more than the valuation of our entire
company. Was I worried? Not at all: I was delighted. It was like watching a
car you're chasing turn down a street that you know has no outlet.
Their smartest move at that point would have been to take every penny of the
$20 million and use it to buy us. We would have sold. Their investors would
have been furious of course. But I think the main reason they never considered
this was that they never imagined we could be had so cheap. They probably
assumed we were on the same VC gravy train they were.
In fact we only spent about $2 million in our entire existence. And that gave
us flexibility. We could sell ourselves to Yahoo for $50 million, and everyone
was delighted. If our competitor had done that, the last round of investors
would presumably have lost money. I assume they could have vetoed such a deal.
But no one in those days was paying a lot more than Yahoo. So unless their
founders could pull off an IPO (which would be difficult with Yahoo as a
competitor), they had no choice but to ride the thing down.
The puffed-up companies that went public during the Bubble didn't do it just
because they were pulled into it by unscrupulous investment bankers. Most were
pushed just as hard from the other side by VCs who'd invested at high
valuations, leaving an IPO as the only way out. The only people dumber were
retail investors. So it was literally IPO or bust. Or rather, IPO then bust,
or just bust.
Add up all the evidence of VCs' behavior, and the resulting personality is not
attractive. In fact, it's the classic villain: alternately cowardly, greedy,
sneaky, and overbearing.
I used to take it for granted that VCs were like this. Complaining that VCs
were jerks used to seem as naive to me as complaining that users didn't read
the reference manual. Of course VCs were jerks. How could it be otherwise?
But I realize now that they're not intrinsically jerks. VCs are like car
salesmen or bureaucrats: the nature of their work turns them into jerks.
I've met a few VCs I like. Mike Moritz seems a good guy. He even has a sense
of humor, which is almost unheard of among VCs. From what I've read about John
Doerr, he sounds like a good guy too, almost a hacker. But they work for the
very best VC funds. And my theory explains why they'd tend to be different:
just as the very most popular kids don't have to persecute
[nerds](nerds.html), the very best VCs don't have to act like VCs. They get
the pick of all the best deals. So they don't have to be so paranoid and
sneaky, and they can choose those rare companies, like Google, that will
actually benefit from the giant sums they're compelled to invest.
VCs often complain that in their business there's too much money chasing too
few deals. Few realize that this also describes a flaw in the way funding
works at the level of individual firms.
Perhaps this was the sort of strategic insight I was supposed to come up with
as a "technologist in residence." If so, the good news is that they're getting
it for free. The bad news is it means that if you're not one of the very top
funds, you're condemned to be the bad guys.
**Notes**
[1] After Greylock booted founder Philip Greenspun out of ArsDigita, he wrote
a hilarious but also very informative
[essay](http://www.waxy.org/random/arsdigita/) about it.
[2] Since most VCs aren't tech guys, the technology side of their due
diligence tends to be like a body cavity search by someone with a faulty
knowledge of human anatomy. After a while we were quite sore from VCs
attempting to probe our nonexistent database orifice.
No, we don't use Oracle. We just store the data in files. Our secret is to use
an OS that doesn't lose our data. Which OS? FreeBSD. Why do you use that
instead of Windows NT? Because it's better and it doesn't cost anything. What,
you're using a _freeware_ OS?
How many times that conversation was repeated. Then when we got to Yahoo, we
found they used FreeBSD and stored their data in files too.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
November 2005
Venture funding works like gears. A typical startup goes through several
rounds of funding, and at each round you want to take just enough money to
reach the speed where you can shift into the next gear.
Few startups get it quite right. Many are underfunded. A few are overfunded,
which is like trying to start driving in third gear.
I think it would help founders to understand funding better—not just the
mechanics of it, but what investors are thinking. I was surprised recently
when I realized that all the worst problems we faced in our startup were due
not to competitors, but investors. Dealing with competitors was easy by
comparison.
I don't mean to suggest that our investors were nothing but a drag on us. They
were helpful in negotiating deals, for example. I mean more that conflicts
with investors are particularly nasty. Competitors punch you in the jaw, but
investors have you by the balls.
Apparently our situation was not unusual. And if trouble with investors is one
of the biggest threats to a startup, managing them is one of the most
important skills founders need to learn.
Let's start by talking about the five sources of startup funding. Then we'll
trace the life of a hypothetical (very fortunate) startup as it shifts gears
through successive rounds.
**Friends and Family**
A lot of startups get their first funding from friends and family. Excite did,
for example: after the founders graduated from college, they borrowed $15,000
from their parents to start a company. With the help of some part-time jobs
they made it last 18 months.
If your friends or family happen to be rich, the line blurs between them and
angel investors. At Viaweb we got our first $10,000 of seed money from our
friend Julian, but he was sufficiently rich that it's hard to say whether he
should be classified as a friend or angel. He was also a lawyer, which was
great, because it meant we didn't have to pay legal bills out of that initial
small sum.
The advantage of raising money from friends and family is that they're easy to
find. You already know them. There are three main disadvantages: you mix
together your business and personal life; they will probably not be as well
connected as angels or venture firms; and they may not be accredited
investors, which could complicate your life later.
The SEC defines an "accredited investor" as someone with over a million
dollars in liquid assets or an income of over $200,000 a year. The regulatory
burden is much lower if a company's shareholders are all accredited investors.
Once you take money from the general public you're more restricted in what you
can do. [1]
A startup's life will be more complicated, legally, if any of the investors
aren't accredited. In an IPO, it might not merely add expense, but change the
outcome. A lawyer I asked about it said:
> When the company goes public, the SEC will carefully study all prior
> issuances of stock by the company and demand that it take immediate action
> to cure any past violations of securities laws. Those remedial actions can
> delay, stall or even kill the IPO.
Of course the odds of any given startup doing an IPO are small. But not as
small as they might seem. A lot of startups that end up going public didn't
seem likely to at first. (Who could have guessed that the company Wozniak and
Jobs started in their spare time selling plans for microcomputers would yield
one of the biggest IPOs of the decade?) Much of the value of a startup
consists of that tiny probability multiplied by the huge outcome.
It wasn't because they weren't accredited investors that I didn't ask my
parents for seed money, though. When we were starting Viaweb, I didn't know
about the concept of an accredited investor, and didn't stop to think about
the value of investors' connections. The reason I didn't take money from my
parents was that I didn't want them to lose it.
**Consulting**
Another way to fund a startup is to get a job. The best sort of job is a
consulting project in which you can build whatever software you wanted to sell
as a startup. Then you can gradually transform yourself from a consulting
company into a product company, and have your clients pay your development
expenses.
This is a good plan for someone with kids, because it takes most of the risk
out of starting a startup. There never has to be a time when you have no
revenues. Risk and reward are usually proportionate, however: you should
expect a plan that cuts the risk of starting a startup also to cut the average
return. In this case, you trade decreased financial risk for increased risk
that your company won't succeed as a startup.
But isn't the consulting company itself a startup? No, not generally. A
company has to be more than small and newly founded to be a startup. There are
millions of small businesses in America, but only a few thousand are startups.
To be a startup, a company has to be a product business, not a service
business. By which I mean not that it has to make something physical, but that
it has to have one thing it sells to many people, rather than doing custom
work for individual clients. Custom work doesn't scale. To be a startup you
need to be the band that sells a million copies of a song, not the band that
makes money by playing at individual weddings and bar mitzvahs.
The trouble with consulting is that clients have an awkward habit of calling
you on the phone. Most startups operate close to the margin of failure, and
the distraction of having to deal with clients could be enough to put you over
the edge. Especially if you have competitors who get to work full time on just
being a startup.
So you have to be very disciplined if you take the consulting route. You have
to work actively to prevent your company growing into a "weed tree," dependent
on this source of easy but low-margin money. [2]
Indeed, the biggest danger of consulting may be that it gives you an excuse
for failure. In a startup, as in grad school, a lot of what ends up driving
you are the expectations of your family and friends. Once you start a startup
and tell everyone that's what you're doing, you're now on a path labelled "get
rich or bust." You now have to get rich, or you've failed.
Fear of failure is an extraordinarily powerful force. Usually it prevents
people from starting things, but once you publish some definite ambition, it
switches directions and starts working in your favor. I think it's a pretty
clever piece of jiujitsu to set this irresistible force against the slightly
less immovable object of becoming rich. You won't have it driving you if your
stated ambition is merely to start a consulting company that you will one day
morph into a startup.
An advantage of consulting, as a way to develop a product, is that you know
you're making something at least one customer wants. But if you have what it
takes to start a startup you should have sufficient vision not to need this
crutch.
**Angel Investors**
_Angels_ are individual rich people. The word was first used for backers of
Broadway plays, but now applies to individual investors generally. Angels
who've made money in technology are preferable, for two reasons: they
understand your situation, and they're a source of contacts and advice.
The contacts and advice can be more important than the money. When del.icio.us
took money from investors, they took money from, among others, Tim O'Reilly.
The amount he put in was small compared to the VCs who led the round, but Tim
is a smart and influential guy and it's good to have him on your side.
You can do whatever you want with money from consulting or friends and family.
With angels we're now talking about venture funding proper, so it's time to
introduce the concept of _exit strategy_. Younger would-be founders are often
surprised that investors expect them either to sell the company or go public.
The reason is that investors need to get their capital back. They'll only
consider companies that have an exit strategy—meaning companies that could get
bought or go public.
This is not as selfish as it sounds. There are few large, private technology
companies. Those that don't fail all seem to get bought or go public. The
reason is that employees are investors too—of their time—and they want just as
much to be able to cash out. If your competitors offer employees stock options
that might make them rich, while you make it clear you plan to stay private,
your competitors will get the best people. So the principle of an "exit" is
not just something forced on startups by investors, but part of what it means
to be a startup.
Another concept we need to introduce now is valuation. When someone buys
shares in a company, that implicitly establishes a value for it. If someone
pays $20,000 for 10% of a company, the company is in theory worth $200,000. I
say "in theory" because in early stage investing, valuations are voodoo. As a
company gets more established, its valuation gets closer to an actual market
value. But in a newly founded startup, the valuation number is just an
artifact of the respective contributions of everyone involved.
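To spell out the arithmetic, here's a one-line sketch using the numbers from the paragraph above:

```python
# Implied valuation from a share purchase: the price paid divided by
# the fraction of the company bought.
investment = 20_000   # dollars paid
fraction = 0.10       # 10% of the company

print(f"implied valuation: ${investment / fraction:,.0f}")  # $200,000
```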
Startups often "pay" investors who will help the company in some way by
letting them invest at low valuations. If I had a startup and Steve Jobs
wanted to invest in it, I'd give him the stock for $10, just to be able to
brag that he was an investor. Unfortunately, it's impractical (if not illegal)
to adjust the valuation of the company up and down for each investor.
Startups' valuations are supposed to rise over time. So if you're going to
sell cheap stock to eminent angels, do it early, when it's natural for the
company to have a low valuation.
Some angel investors join together in syndicates. Any city where people start
startups will have one or more of them. In Boston the biggest is the [Common
Angels](http://commonangels.com/home.html). In the Bay Area it's the [Band of
Angels](http://bandangels.com/). You can find groups near you through the
[Angel Capital Association](http://angelcapitalassociation.org/). [3] However,
most angel investors don't belong to these groups. In fact, the more prominent
the angel, the less likely they are to belong to a group.
Some angel groups charge you money to pitch your idea to them. Needless to
say, you should never do this.
One of the dangers of taking investment from individual angels, rather than
through an angel group or investment firm, is that they have less reputation
to protect. A big-name VC firm will not screw you too outrageously, because
other founders would avoid them if word got out. With individual angels you
don't have this protection, as we found to our dismay in our own startup. In
many startups' lives there comes a point when you're at the investors'
mercy—when you're out of money and the only place to get more is your existing
investors. When we got into such a scrape, our investors took advantage of it
in a way that a name-brand VC probably wouldn't have.
Angels have a corresponding advantage, however: they're also not bound by all
the rules that VC firms are. And so they can, for example, allow founders to
cash out partially in a funding round, by selling some of their stock directly
to the investors. I think this will become more common; the average founder is
eager to do it, and selling, say, half a million dollars worth of stock will
not, as VCs fear, cause most founders to be any less committed to the
business.
The same angels who tried to screw us also let us do this, and so on balance
I'm grateful rather than angry. (As in families, relations between founders
and investors can be complicated.)
The best way to find angel investors is through personal introductions. You
could try to cold-call angel groups near you, but angels, like VCs, will pay
more attention to deals recommended by someone they respect.
Deal terms with angels vary a lot. There are no generally accepted standards.
Sometimes angels' deal terms are as fearsome as VCs'. Other angels,
particularly in the earliest stages, will invest based on a two-page
agreement.
Angels who only invest occasionally may not themselves know what terms they
want. They just want to invest in this startup. What kind of anti-dilution
protection do they want? Hell if they know. In these situations, the deal
terms tend to be random: the angel asks his lawyer to create a vanilla
agreement, and the terms end up being whatever the lawyer considers vanilla.
Which in practice usually means, whatever existing agreement he finds lying
around his firm. (Few legal documents are created from scratch.)
These heaps o' boilerplate are a problem for small startups, because they tend
to grow into the union of all preceding documents. I know of one startup that
got from an angel investor what amounted to a five hundred pound handshake:
after deciding to invest, the angel presented them with a 70-page agreement.
The startup didn't have enough money to pay a lawyer even to read it, let
alone negotiate the terms, so the deal fell through.
One solution to this problem would be to have the startup's lawyer produce the
agreement, instead of the angel's. Some angels might balk at this, but others
would probably welcome it.
Inexperienced angels often get cold feet when the time comes to write that big
check. In our startup, one of the two angels in the initial round took months
to pay us, and only did after repeated nagging from our lawyer, who was also,
fortunately, his lawyer.
It's obvious why investors delay. Investing in startups is risky! When a
company is only two months old, every _day_ you wait gives you 1.7% more data
about their trajectory (one more day out of roughly sixty). But the investor is
already being compensated for that
risk in the low price of the stock, so it is unfair to delay.
Fair or not, investors do it if you let them. Even VCs do it. And funding
delays are a big distraction for founders, who ought to be working on their
company, not worrying about investors. What's a startup to do? With both
investors and acquirers, the only leverage you have is competition. If an
investor knows you have other investors lined up, he'll be a lot more eager to
close-- and not just because he'll worry about losing the deal, but because if
other investors are interested, you must be worth investing in. It's the same
with acquisitions. No one wants to buy you till someone else wants to buy you,
and then everyone wants to buy you.
The key to closing deals is never to stop pursuing alternatives. When an
investor says he wants to invest in you, or an acquirer says they want to buy
you, _don't believe it till you get the check._ Your natural tendency when an
investor says yes will be to relax and go back to writing code. Alas, you
can't; you have to keep looking for more investors, if only to get this one to
act. [4]
**Seed Funding Firms**
Seed firms are like angels in that they invest relatively small amounts at
early stages, but like VCs in that they're companies that do it as a business,
rather than individuals making occasional investments on the side.
Till now, nearly all seed firms have been so-called "incubators," so [Y
Combinator](http://ycombinator.com) gets called one too, though the only thing
we have in common is that we invest in the earliest phase.
According to the National Association of Business Incubators, there are about
800 incubators in the US. This is an astounding number, because I know the
founders of a lot of startups, and I can't think of one that began in an
incubator.
What is an incubator? I'm not sure myself. The defining quality seems to be
that you work in their space. That's where the name "incubator" comes from.
They seem to vary a great deal in other respects. At one extreme is the sort
of pork-barrel project where a town gets money from the state government to
renovate a vacant building as a "high-tech incubator," as if it were merely
lack of the right sort of office space that had till now prevented the town
from becoming a [startup hub](siliconvalley.html). At the other extreme are
places like Idealab, which generates ideas for new startups internally and
hires people to work for them.
The classic Bubble incubators, most of which now seem to be dead, were like VC
firms except that they took a much bigger role in the startups they funded. In
addition to working in their space, you were supposed to use their office
staff, lawyers, accountants, and so on.
Whereas incubators tend (or tended) to exert more control than VCs, Y
Combinator exerts less. And we think it's better if startups operate out of
their own premises, however crappy, than the offices of their investors. So
it's annoying that we keep getting called an "incubator," but perhaps
inevitable, because there's only one of us so far and no word yet for what we
are. If we have to be called something, the obvious name would be "excubator."
(The name is more excusable if one considers it as meaning that we enable
people to escape cubicles.)
Because seed firms are companies rather than individual people, reaching them
is easier than reaching angels. Just go to their web site and send them an
email. The importance of personal introductions varies, but is less than with
angels or VCs.
The fact that seed firms are companies also means the investment process is
more standardized. (This is generally true with angel groups too.) Seed firms
will probably have set deal terms they use for every startup they fund. The
fact that the deal terms are standard doesn't mean they're favorable to you,
but if other startups have signed the same agreements and things went well for
them, it's a sign the terms are reasonable.
Seed firms differ from angels and VCs in that they invest exclusively in the
earliest phases—often when the company is still just an idea. Angels and even
VC firms occasionally do this, but they also invest at later stages.
The problems are different in the early stages. For example, in the first
couple months a startup may completely redefine their [idea](ideas.html). So
seed investors usually care less about the idea than the people. This is true
of all venture funding, but especially so in the seed stage.
Like VCs, one of the advantages of seed firms is the advice they offer. But
because seed firms operate in an earlier phase, they need to offer different
kinds of advice. For example, a seed firm should be able to give advice about
how to approach VCs, which VCs obviously don't need to do; whereas VCs should
be able to give advice about how to hire an "executive team," which is not an
issue in the seed stage.
In the earliest phases, a lot of the problems are technical, so seed firms
should be able to help with technical as well as business problems.
Seed firms and angel investors generally want to invest in the initial phases
of a startup, then hand them off to VC firms for the next round. Occasionally
startups go from seed funding direct to acquisition, however, and I expect
this to become increasingly common.
Google has been aggressively pursuing this route, and now
[Yahoo](http://ycombinator.com/buckman.html) is too. Both now compete directly
with VCs. And this is a smart move. Why wait for further funding rounds to
jack up a startup's price? When a startup reaches the point where VCs have
enough information to invest in it, the acquirer should have enough
information to buy it. More information, in fact; with their technical depth,
the acquirers should be better at picking winners than VCs.
**Venture Capital Funds**
VC firms are like seed firms in that they're actual companies, but they invest
other people's money, and much larger amounts of it. VC investments average
several million dollars. So they tend to come later in the life of a startup,
are harder to get, and come with tougher terms.
The word "venture capitalist" is sometimes used loosely for any venture
investor, but there is a sharp difference between VCs and other investors: VC
firms are organized as _funds_ , much like hedge funds or mutual funds. The
fund managers, who are called "general partners," get about 2% of the fund
annually as a management fee, plus about 20% of the fund's gains.
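A minimal sketch of how that compensation structure works out over a fund's life; the fund size, lifetime, and final value below are illustrative assumptions, not figures from the essay:

```python
# The "2 and 20" structure: annual management fees plus carried interest.
# Fund size, lifetime, and final value are illustrative assumptions.

fund_size = 200_000_000
years = 10                   # assumed fund lifetime
final_value = 500_000_000    # assumed value returned by the portfolio

fees = 0.02 * fund_size * years
carry = 0.20 * max(final_value - fund_size, 0.0)

print(f"management fees over the fund's life: ${fees:,.0f}")   # $40,000,000
print(f"carried interest on the gains:        ${carry:,.0f}")  # $60,000,000
```

Real funds typically step the fees down over time, but even the simplified version shows why managers benefit from bigger funds: the fees scale with fund size whether or not the investments work out.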
There is a very sharp dropoff in performance among VC firms, because in the VC
business both success and failure are self-perpetuating. When an investment
scores spectacularly, as Google did for Kleiner and Sequoia, it generates a
lot of good publicity for the VCs. And many founders prefer to take money from
successful VC firms, because of the legitimacy it confers. Hence a vicious
(for the losers) cycle: VC firms that have been doing badly will only get the
deals the bigger fish have rejected, causing them to continue to do badly.
As a result, of the thousand or so VC funds in the US now, only about 50 are
likely to make money, and it is very hard for a new fund to break into this
group.
In a sense, the lower-tier VC firms are a bargain for founders. They may not
be quite as smart or as well connected as the big-name firms, but they are
much hungrier for deals. This means you should be able to get better terms
from them.
Better how? The most obvious is valuation: they'll take less of your company.
But as well as money, there's power. I think founders will increasingly be
able to stay on as CEO, and on terms that will make it fairly hard to fire
them later.
The most dramatic change, I predict, is that VCs will allow founders to cash
out partially by [selling](vcsqueeze.html) some of their stock direct to the
VC firm. VCs have traditionally resisted letting founders get anything before
the ultimate "liquidity event." But they're also desperate for deals. And
since I know from my own experience that the rule against buying stock from
founders is a stupid one, this is a natural place for things to give as
venture funding becomes more and more a seller's market.
The disadvantage of taking money from less known firms is that people will
assume, correctly or not, that you were turned down by the more exalted ones.
But, like where you went to college, the name of your VC stops mattering once
you have some performance to measure. So the more confident you are, the less
you need a brand-name VC. We funded Viaweb entirely with angel money; it never
occurred to us that the backing of a well known VC firm would make us seem
more impressive. [5]
Another danger of less known firms is that, like angels, they have less
reputation to protect. I suspect it's the lower-tier firms that are
responsible for most of the tricks that have given VCs such a bad reputation
among hackers. They are doubly hosed: the general partners themselves are less
able, and yet they have harder problems to solve, because the top VCs skim off
all the best deals, leaving the lower-tier firms exactly the startups that are
likely to blow up.
For example, lower-tier firms are much more likely to pretend to want to do a
deal with you just to lock you up while they decide if they really want to.
One experienced CFO said:
> The better ones usually will not give a term sheet unless they really want
> to do a deal. The second or third tier firms have a much higher break
> rate—it could be as high as 50%.
It's obvious why: the lower-tier firms' biggest fear, when chance throws them
a bone, is that one of the big dogs will notice and take it away. The big dogs
don't have to worry about that.
Falling victim to this trick could really hurt you. As one VC told me:
> If you were talking to four VCs, told three of them that you accepted a term
> sheet, and then have to call them back to tell them you were just kidding,
> you are absolutely damaged goods.
Here's a partial solution: when a VC offers you a term sheet, ask how many of
their last 10 term sheets turned into deals. This will at least force them to
lie outright if they want to mislead you.
Not all the people who work at VC firms are partners. Most firms also have a
handful of junior employees called something like associates or analysts. If
you get a call from a VC firm, go to their web site and check whether the
person you talked to is a partner. Odds are it will be a junior person; they
scour the web looking for startups their bosses could invest in. The junior
people will tend to seem very positive about your company. They're not
pretending; they _want_ to believe you're a hot prospect, because it would be
a huge coup for them if their firm invested in a company they discovered.
Don't be misled by this optimism. It's the partners who decide, and they view
things with a colder eye.
Because VCs invest large amounts, the money comes with more restrictions. Most
only come into effect if the company gets into trouble. For example, VCs
generally write it into the deal that in any sale, they get their investment
back first. So if the company gets sold at a low price, the founders could get
nothing. Some VCs now require that in any sale they get 4x their investment
back before the common stock holders (that is, you) get anything, but this is
an abuse that should be resisted.
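Here's a minimal sketch of how a liquidation preference like that plays out in a sale. It models a simple non-participating preference, and all the numbers are made up:

```python
# How a liquidation preference splits sale proceeds (simplified:
# a non-participating preference; all numbers are illustrative).

def split(sale_price, investment, multiple, vc_ownership):
    """VC takes multiple * investment off the top (capped at the sale
    price), or converts to common if that pays more; everyone else
    gets what's left."""
    preference = min(multiple * investment, sale_price)
    as_common = vc_ownership * sale_price
    vc = max(preference, as_common)
    return vc, sale_price - vc

for price in (5_000_000, 20_000_000, 100_000_000):
    vc, rest = split(price, investment=5_000_000, multiple=4, vc_ownership=0.3)
    print(f"${price:>11,} sale -> VC gets ${vc:,.0f}, common gets ${rest:,.0f}")
# With a 4x preference on a $5M investment, the common holders get
# nothing until the sale price clears $20M.
```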
Another difference with large investments is that the founders are usually
required to accept "vesting"—to surrender their stock and earn it back over
the next 4-5 years. VCs don't want to invest millions in a company the
founders could just walk away from. Financially, vesting has little effect,
but in some situations it could mean founders will have less power. If VCs got
de facto control of the company and fired one of the founders, he'd lose any
unvested stock unless there was specific protection against this. So vesting
would in that situation force founders to toe the line.
The most noticeable change when a startup takes serious funding is that the
founders will no longer have complete control. Ten years ago VCs used to
insist that founders step down as CEO and hand the job over to a business guy
they supplied. This is less the rule now, partly because the disasters of the
Bubble showed that generic business guys don't make such great CEOs.
But while founders will increasingly be able to stay on as CEO, they'll have
to cede some power, because the board of directors will become more powerful.
In the seed stage, the board is generally a formality; if you want to talk to
the other board members, you just yell into the next room. This stops with
VC-scale money. In a typical VC funding deal, the board of directors might be
composed of two VCs, two founders, and one outside person acceptable to both.
The board will have ultimate power, which means the founders now have to
convince instead of commanding.
This is not as bad as it sounds, however. Bill Gates is in the same position;
he doesn't have majority control of Microsoft; in principle he also has to
convince instead of commanding. And yet he seems pretty commanding, doesn't
he? As long as things are going smoothly, boards don't interfere much. The
danger comes when there's a bump in the road, as happened to Steve Jobs at
Apple.
Like angels, VCs prefer to invest in deals that come to them through people
they know. So while nearly all VC funds have some address you can send your
business plan to, VCs privately admit the chance of getting funding by this
route is near zero. One recently told me that he did not know a single startup
that got funded this way.
I suspect VCs accept business plans "over the transom" more as a way to keep
tabs on industry trends than as a source of deals. In fact, I would strongly
advise against mailing your business plan randomly to VCs, because they treat
this as evidence of laziness. Do the extra work of getting personal
introductions. As one VC put it:
> I'm not hard to find. I know a lot of people. If you can't find some way to
> reach me, how are you going to create a successful company?
One of the most difficult problems for startup founders is deciding when to
approach VCs. You really only get one chance, because they rely heavily on
first impressions. And you can't approach some and save others for later,
because (a) they ask who else you've talked to and when and (b) they talk
among themselves. If you're talking to one VC and he finds out that you were
rejected by another several months ago, you'll definitely seem shopworn.
So when do you approach VCs? When you can convince them. If the founders have
impressive resumes and the idea isn't hard to understand, you could approach
VCs quite early. Whereas if the founders are unknown and the idea is very
novel, you might have to launch the thing and show that users loved it before
VCs would be convinced.
If several VCs are interested in you, they will sometimes be willing to split
the deal between them. They're more likely to do this if they're close in the
VC pecking order. Such deals may be a net win for founders, because you get
multiple VCs interested in your success, and you can ask each for advice about
the other. One founder I know wrote:
> Two-firm deals are great. It costs you a little more equity, but being able
> to play the two firms off each other (as well as ask one if the other is
> being out of line) is invaluable.
When you do negotiate with VCs, remember that they've done this a lot more
than you have. They've invested in dozens of startups, whereas this is
probably the first you've founded. But don't let them or the situation
intimidate you. The average founder is smarter than the average VC. So just do
what you'd do in any complex, unfamiliar situation: proceed deliberately, and
question anything that seems odd.
It is, unfortunately, common for VCs to put terms in an agreement whose
consequences surprise founders later, and also common for VCs to defend things
they do by saying that they're standard in the industry. Standard, schmandard;
the whole industry is only a few decades old, and rapidly evolving. The
concept of "standard" is a useful one when you're operating on a small scale
(Y Combinator uses identical terms for every deal because for tiny seed-stage
investments it's not worth the overhead of negotiating individual deals), but
it doesn't apply at the VC level. On that scale, every negotiation is unique.
Most successful startups get money from more than one of the preceding five
sources. [6] And, confusingly, the names of funding sources also tend to be
used as the names of different rounds. The best way to explain how it all
works is to follow the case of a hypothetical startup.
**Stage 1: Seed Round**
Our startup begins when a group of three friends have an idea-- either an idea
for something they might build, or simply the idea "let's start a company."
Presumably they already have some source of food and shelter. But if you have
food and shelter, you probably also have something you're supposed to be
working on: either classwork, or a job. So if you want to work full-time on a
startup, your money situation will probably change too.
A lot of startup founders say they started the company without any idea of
what they planned to do. This is actually less common than it seems: many have
to claim they thought of the idea after quitting because otherwise their
former employer would own it.
The three friends decide to take the leap. Since most startups are in
competitive businesses, you not only want to work full-time on them, but more
than full-time. So some or all of the friends quit their jobs or leave school.
(Some of the founders in a startup can stay in grad school, but at least one
has to make the company his full-time job.)
They're going to run the company out of one of their apartments at first, and
since they don't have any users they don't have to pay much for
infrastructure. Their main expenses are setting up the company, which costs a
couple thousand dollars in legal work and registration fees, and the living
expenses of the founders.
The phrase "seed investment" covers a broad range. To some VC firms it means
$500,000, but to most startups it means several months' living expenses. We'll
suppose our group of friends start with $15,000 from their friend's rich
uncle, who they give 5% of the company in return. There's only common stock at
this stage. They leave 20% as an options pool for later employees (but they
set things up so that they can issue this stock to themselves if they get
bought early and most is still unissued), and the three founders each get 25%.
By living really cheaply they think they can make the remaining money last
five months. When you have five months' runway left, how soon do you need to
start looking for your next round? Answer: immediately. It takes time to find
investors, and time (always more than you expect) for the deal to close even
after they say yes. So if our group of founders know what they're doing
they'll start sniffing around for angel investors right away. But of course
their main job is to build version 1 of their software.
The friends might have liked to have more money in this first phase, but being
slightly underfunded teaches them an important lesson. For a startup,
cheapness is power. The lower your costs, the more options you have—not just
at this stage, but at every point till you're profitable. When you have a high
"burn rate," you're always under time pressure, which means (a) you don't have
time for your ideas to evolve, and (b) you're often forced to take deals you
don't like.
Every startup's rule should be: spend little, and work fast.
After ten weeks' work the three friends have built a prototype that gives one
a taste of what their product will do. It's not what they originally set out
to do—in the process of writing it, they had some new ideas. And it only does
a fraction of what the finished product will do, but that fraction includes
stuff that no one else has done before.
They've also written at least a skeleton business plan, addressing the five
fundamental questions: what they're going to do, why users need it, how large
the market is, how they'll make money, and who the competitors are and why
this company is going to beat them. (That last has to be more specific than
"they suck" or "we'll work really hard.")
If you have to choose between spending time on the demo or the business plan,
spend most on the demo. Software is not only more convincing, but a better way
to explore ideas.
**Stage 2: Angel Round**
While writing the prototype, the group has been traversing their network of
friends in search of angel investors. They find some just as the prototype is
demoable. When they demo it, one of the angels is willing to invest. Now the
group is looking for more money: they want enough to last for a year, and
maybe to hire a couple friends. So they're going to raise $200,000.
The angel agrees to invest at a pre-money valuation of $1 million. The company
issues $200,000 worth of new shares to the angel; if there were 1000 shares
before the deal, this means 200 additional shares. The angel now owns 200/1200
shares, or a sixth of the company, and all the previous shareholders'
percentage ownership is diluted by a sixth. After the deal, the capitalization
table looks like this:

    shareholder      shares   percent
    ---------------------------------
    angel               200      16.7
    uncle                50       4.2
    each founder        250      20.8
    option pool         200      16.7
                       ----     -----
    total              1200     100

To keep things simple, I had the angel do a straight cash for stock deal. In
reality the angel might be
more likely to make the investment in the form of a convertible loan. A
convertible loan is a loan that can be converted into stock later; it works
out the same as a stock purchase in the end, but gives the angel more
protection against being squashed by VCs in future rounds.
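If you want to check the arithmetic, here's a minimal sketch in Python of the straight cash-for-stock case above (it doesn't model the convertible-loan variant):

    # Angel round math from the example: $200k at a $1M pre-money valuation.
    pre_money = 1_000_000
    investment = 200_000
    existing_shares = 1000

    price = pre_money / existing_shares          # $1000 per share
    new_shares = investment / price              # 200 shares for the angel
    total = existing_shares + new_shares         # 1200

    print(f"angel: {new_shares / total:.1%}")    # 16.7%, a sixth
    # Everyone who held stock before is diluted by that same sixth:
    for holder, shares in [("uncle", 50), ("each founder", 250),
                           ("option pool", 200)]:
        print(f"{holder}: {shares / total:.1%}")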
Who pays the legal bills for this deal? The startup, remember, only has a
couple thousand left. In practice this turns out to be a sticky problem that
usually gets solved in some improvised way. Maybe the startup can find lawyers
who will do it cheaply in the hope of future work if the startup succeeds.
Maybe someone has a lawyer friend. Maybe the angel pays for his lawyer to
represent both sides. (Make sure if you take the latter route that the lawyer
is _representing_ you rather than merely advising you, or his only duty is to
the investor.)
An angel investing $200k would probably expect a seat on the board of
directors. He might also want preferred stock, meaning a special class of
stock that has some additional rights over the common stock everyone else has.
Typically these rights include vetoes over major strategic decisions,
protection against being diluted in future rounds, and the right to get one's
investment back first if the company is sold.
Some investors might expect the founders to accept vesting for a sum this
size, and others wouldn't. VCs are more likely to require vesting than angels.
At Viaweb we managed to raise $2.5 million from angels without ever accepting
vesting, largely because we were so inexperienced that we were appalled at the
idea. In practice this turned out to be good, because it made us harder to
push around.
Our experience was unusual; vesting is the norm for amounts that size. Y
Combinator doesn't require vesting, because (a) we invest such small amounts,
and (b) we think it's unnecessary, and that the hope of getting rich is enough
motivation to keep founders at work. But maybe if we were investing millions
we would think differently.
I should add that vesting is also a way for founders to protect themselves
against one another. It solves the problem of what to do if one of the
founders quits. So some founders impose it on themselves when they start the
company.
The angel deal takes two weeks to close, so we are now three months into the
life of the company.
The point after you get the first big chunk of angel money will usually be the
happiest phase in a startup's life. It's a lot like being a postdoc: you have
no immediate financial worries, and few responsibilities. You get to work on
juicy kinds of work, like designing software. You don't have to spend time on
bureaucratic stuff, because you haven't hired any bureaucrats yet. Enjoy it
while it lasts, and get as much done as you can, because you will never again
be so productive.
With an apparently inexhaustible sum of money sitting safely in the bank, the
founders happily set to work turning their prototype into something they can
release. They hire one of their friends—at first just as a consultant, so they
can try him out—and then a month later as employee #1. They pay him the
smallest salary he can live on, plus 3% of the company in restricted stock,
vesting over four years. (So after this the option pool is down to 13.7%). [7]
They also spend a little money on a freelance graphic designer.
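The 13.7% in note [7] falls straight out of the share counts; a quick sketch using the post-angel cap table above:

    # Where the 13.7% comes from: the hire's 3% is carved out of the pool.
    total_shares = 1200                     # post-angel total from the table
    option_pool = 200                       # 16.7% of the company

    employee_grant = round(0.03 * total_shares)   # 36 restricted shares
    option_pool -= employee_grant                 # no new shares are issued

    print(f"employee: {employee_grant / total_shares:.1%}")    # 3.0%
    print(f"option pool: {option_pool / total_shares:.1%}")    # 13.7%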
How much stock do you give early employees? That varies so much that there's
no conventional number. If you get someone really good, really early, it might
be wise to give him as much stock as the founders. The one universal rule is
that the amount of stock an employee gets decreases polynomially with the age
of the company. In other words, you get rich as a power of how early you were.
So if some friends want you to come work for their startup, don't wait several
months before deciding.
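As a purely illustrative model of that power law (the base and exponent here are invented, not measured from real grants):

    # Toy model: equity falls off as a power of how late you join.
    # k and a are invented constants, for illustration only.
    def equity_pct(month, k=3.0, a=1.5):
        return k * month ** -a           # ~3% for employee #1 at month 1

    for month in [1, 3, 6, 12, 24]:
        print(f"joined month {month:2}: ~{equity_pct(month):.2f}%")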
A month later, at the end of month four, our group of founders have something
they can launch. Gradually through word of mouth they start to get users.
Seeing the system in use by real users—people they don't know—gives them lots
of new ideas. Also they find they now worry obsessively about the status of
their server. (How relaxing founders' lives must have been when startups wrote
VisiCalc.)
By the end of month six, the system is starting to have a solid core of
features, and a small but devoted following. People start to write about it,
and the founders are starting to feel like experts in their field.
We'll assume that their startup is one that could put millions more to use.
Perhaps they need to spend a lot on marketing, or build some kind of expensive
infrastructure, or hire highly paid salesmen. So they decide to start talking
to VCs. They get introductions to VCs from various sources: their angel
investor connects them with a couple; they meet a few at conferences; a couple
VCs call them after reading about them.
**Stage 3: Series A Round**
Armed with their now somewhat fleshed-out business plan and able to demo a
real, working system, the founders visit the VCs they have introductions to.
They find the VCs intimidating and inscrutable. They all ask the same
question: who else have you pitched to? (VCs are like high school girls:
they're acutely aware of their position in the VC pecking order, and their
interest in a company is a function of the interest other VCs show in it.)
One of the VC firms says they want to invest and offers the founders a term
sheet. A term sheet is a summary of what the deal terms will be when and if
they do a deal; lawyers will fill in the details later. By accepting the term
sheet, the startup agrees to turn away other VCs for some set amount of time
while this firm does the "due diligence" required for the deal. Due diligence
is the corporate equivalent of a background check: the purpose is to uncover
any hidden bombs that might sink the company later, like serious design flaws
in the product, pending lawsuits against the company, intellectual property
issues, and so on. VCs' legal and financial due diligence is pretty thorough,
but the technical due diligence is generally a joke. [8]
The due diligence discloses no ticking bombs, and six weeks later they go
ahead with the deal. Here are the terms: a $2 million investment at a pre-
money valuation of $4 million, meaning that after the deal closes the VCs will
own a third of the company (2 / (4 + 2)). The VCs also insist that prior to
the deal the option pool be enlarged by an additional hundred shares. So the
total number of new shares issued is 750, and the cap table becomes:
    shareholder      shares   percent
    ---------------------------------
    VCs                 650      33.3
    angel               200      10.3
    uncle                50       2.6
    each founder        250      12.8
    employee             36*      1.8
    option pool         264      13.5
                       ----     -----
    total              1950     100

    *unvested

This picture is unrealistic in
several respects. For example, while the percentages might end up looking like
this, it's unlikely that the VCs would keep the existing numbers of shares. In
fact, every bit of the startup's paperwork would probably be replaced, as if
the company were being founded anew. Also, the money might come in several
tranches, the later ones subject to various conditions—though this is
apparently more common in deals with lower-tier VCs (whose lot in life is to
fund more dubious startups) than with the top firms.
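Here's a sketch in Python checking the Series A numbers, including the hundred-share pool enlargement the VCs insisted on:

    # Series A math from the example: $2M at a $4M pre-money valuation.
    pre_money, investment = 4_000_000, 2_000_000
    vc_fraction = investment / (pre_money + investment)     # exactly 1/3

    pre_deal_total = 1200 + 100    # old total plus the enlarged option pool
    vc_shares = vc_fraction / (1 - vc_fraction) * pre_deal_total   # 650
    total = pre_deal_total + vc_shares                             # 1950

    holders = [("VCs", vc_shares), ("angel", 200), ("uncle", 50),
               ("each founder", 250), ("employee", 36), ("option pool", 264)]
    for holder, shares in holders:
        print(f"{holder}: {shares / total:.1%}")   # matches the table above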
And of course any VCs reading this are probably rolling on the floor laughing
at how my hypothetical VCs let the angel keep his 10.3% of the company. I
admit, this is the Bambi version; in simplifying the picture, I've also made
everyone nicer. In the real world, VCs regard angels the way a jealous husband
feels about his wife's previous boyfriends. To them the company didn't exist
before they invested in it. [9]
I don't want to give the impression you have to do an angel round before going
to VCs. In this example I stretched things out to show multiple sources of
funding in action. Some startups could go directly from seed funding to a VC
round; several of the companies we've funded have.
The founders are required to vest their shares over four years, and the board
is now reconstituted to consist of two VCs, two founders, and a fifth person
acceptable to both. The angel investor cheerfully surrenders his board seat.
At this point there is nothing new our startup can teach us about funding—or
at least, nothing good. [10] The startup will almost certainly hire more
people at this point; those millions must be put to work, after all. The
company may do additional funding rounds, presumably at higher valuations.
They may if they are extraordinarily fortunate do an IPO, which we should
remember is also in principle a round of funding, regardless of its de facto
purpose. But that, if not beyond the bounds of possibility, is beyond the
scope of this article.
**Deals Fall Through**
Anyone who's been through a startup will find the preceding portrait to be
missing something: disasters. If there's one thing all startups have in
common, it's that something is always going wrong. And nowhere more than in
matters of funding.
For example, our hypothetical startup never spent more than half of one round
before securing the next. That's more ideal than typical. Many startups—even
successful ones—come close to running out of money at some point. Terrible
things happen to startups when they run out of money, because they're designed
for growth, not adversity.
But the most unrealistic thing about the series of deals I've described is
that they all closed. In the startup world, closing is not what deals do. What
deals do is fall through. If you're starting a startup you would do well to
remember that. Birds fly; fish swim; deals fall through.
Why? Partly the reason deals seem to fall through so often is that you lie to
yourself. You want the deal to close, so you start to believe it will. But
even correcting for this, startup deals fall through alarmingly often—far more
often than, say, deals to buy real estate. The reason is that it's such a
risky environment. People about to fund or acquire a startup are prone to
wicked cases of buyer's remorse. They don't really grasp the risk they're
taking till the deal's about to close. And then they panic. And not just
inexperienced angel investors, but big companies too.
So if you're a startup founder wondering why some angel investor isn't
returning your phone calls, you can at least take comfort in the thought that
the same thing is happening to other deals a hundred times the size.
The example of a startup's history that I've presented is like a
skeleton—accurate so far as it goes, but needing to be fleshed out to be a
complete picture. To get a complete picture, just add in every possible
disaster.
A frightening prospect? In a way. And yet also in a way encouraging. The very
uncertainty of startups frightens away almost everyone. People overvalue
stability—especially [young](hiring.html) people, who ironically need it
least. And so in starting a startup, as in any really bold undertaking, merely
deciding to do it gets you halfway there. On the day of the race, most of the
other runners won't show up.
**Notes**
[1] The aim of such regulations is to protect widows and orphans from crooked
investment schemes; people with a million dollars in liquid assets are assumed
to be able to protect themselves. The unintended consequence is that the
investments that generate the highest returns, like hedge funds, are available
only to the rich.
[2] Consulting is where product companies go to die. IBM is the most famous
example. So starting as a consulting company is like starting out in the grave
and trying to work your way up into the world of the living.
[3] If "near you" doesn't mean the Bay Area, Boston, or Seattle, consider
moving. It's not a coincidence you haven't heard of many startups from
Philadelphia.
[4] Investors are often compared to sheep. And they are like sheep, but that's
a rational response to their situation. Sheep act the way they do for a
reason. If all the other sheep head for a certain field, it's probably good
grazing. And when a wolf appears, is he going to eat a sheep in the middle of
the flock, or one near the edge?
[5] This was partly confidence, and partly simple ignorance. We didn't know
ourselves which VC firms were the impressive ones. We thought software was all
that mattered. But that turned out to be the right direction to be naive in:
it's much better to overestimate than underestimate the importance of making a
good product.
[6] I've omitted one source: government grants. I don't think these are even
worth thinking about for the average startup. Governments may mean well when
they set up grant programs to encourage startups, but what they give with one
hand they take away with the other: the process of applying is inevitably so
arduous, and the restrictions on what you can do with the money so burdensome,
that it would be easier to take a job to get the money.
You should be especially suspicious of grants whose purpose is some kind of
social engineering—e.g. to encourage more startups to be started in
Mississippi. Free money to start a startup in a place where few succeed is
hardly free.
Some government agencies run venture funding groups, which make investments
rather than giving grants. For example, the CIA runs a venture fund called In-
Q-Tel that is modelled on private sector funds and apparently generates good
returns. They would probably be worth approaching—if you don't mind taking
money from the CIA.
[7] Options have largely been replaced with restricted stock, which amounts to
the same thing. Instead of earning the right to buy stock, the employee gets
the stock up front, and earns the right not to have to give it back. The
shares set aside for this purpose are still called the "option pool."
[8] First-rate technical people do not generally hire themselves out to do due
diligence for VCs. So the most difficult part for startup founders is often
responding politely to the inane questions of the "expert" they send to look
you over.
[9] VCs regularly wipe out angels by issuing arbitrary amounts of new stock.
They seem to have a standard piece of casuistry for this situation: that the
angels are no longer working to help the company, and so don't deserve to keep
their stock. This of course reflects a willful misunderstanding of what
investment means; like any investor, the angel is being compensated for risks
he took earlier. By a similar logic, one could argue that the VCs should be
deprived of their shares when the company goes public.
[10] One new thing the company might encounter is a _down round_ , or a
funding round at valuation lower than the previous round. Down rounds are bad
news; it is generally the common stock holders who take the hit. Some of the
most fearsome provisions in VC deal terms have to do with down rounds—like
"full ratchet anti-dilution," which is as frightening as it sounds.
Founders are tempted to ignore these clauses, because they think the company
will either be a big success or a complete bust. VCs know otherwise: it's not
uncommon for startups to have moments of adversity before they ultimately
succeed. So it's worth negotiating anti-dilution provisions, even though you
don't think you need to, and VCs will try to make you feel that you're being
gratuitously troublesome.
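To show why full ratchet provisions are frightening, here's a deliberately simplified sketch (it ignores the new round's own shares, and the down-round price is invented):

    # Toy full-ratchet illustration using the Series A numbers from earlier.
    vc_invested = 2_000_000
    vc_shares = 650                  # effectively ~$3077 per share
    other_shares = 1950 - 650       # everyone else's shares

    down_round_price = 1000          # invented: a later round prices lower
    # Full ratchet reprices the VCs as if they had paid the new, lower price:
    ratcheted_vc_shares = vc_invested / down_round_price    # 2000 shares

    total = ratcheted_vc_shares + other_shares
    print(f"VCs: {ratcheted_vc_shares / total:.1%}")     # ~60.6%, up from 33.3%
    print(f"common holders: {other_shares / total:.1%}") # the hit lands on them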
**Thanks** to Sam Altman, Hutch Fishman, Steve Huffman, Jessica Livingston,
Sesha Pratap, Stan Reiss, Andy Singleton, Zak Stone, and Aaron Swartz for
reading drafts of this.
|
January 2016
Since the 1970s, economic inequality in the US has increased dramatically. And
in particular, the rich have gotten a lot richer. Nearly everyone who writes
about the topic says that economic inequality should be decreased.
I'm interested in this question because I was one of the founders of a company
called Y Combinator that helps people start startups. Almost by definition, if
a startup succeeds, its founders become rich. Which means by helping startup
founders I've been helping to increase economic inequality. If economic
inequality should be decreased, I shouldn't be helping founders. No one should
be.
But that doesn't sound right. What's going on here? What's going on is that
while economic inequality is a single measure (or more precisely, two:
variation in income, and variation in wealth), it has multiple causes. Many of
these causes are bad, like tax loopholes and drug addiction. But some are
good, like Larry Page and Sergey Brin starting the company you use to find
things online.
If you want to understand economic inequality — and more importantly, if you
actually want to fix the bad aspects of it — you have to tease apart the
components. And yet the trend in nearly everything written about the subject
is to do the opposite: to squash together all the aspects of economic
inequality as if it were a single phenomenon.
Sometimes this is done for ideological reasons. Sometimes it's because the
writer only has very high-level data and so draws conclusions from that, like
the proverbial drunk who looks for his keys under the lamppost, instead of
where he dropped them, because the light is better there. Sometimes it's
because the writer doesn't understand critical aspects of inequality, like the
role of technology in wealth creation. Much of the time, perhaps most of the
time, writing about economic inequality combines all three.
___
The most common mistake people make about economic inequality is to treat it
as a single phenomenon. The most naive version of which is the one based on
the pie fallacy: that the rich get rich by taking money from the poor.
Usually this is an assumption people start from rather than a conclusion they
arrive at by examining the evidence. Sometimes the pie fallacy is stated
explicitly:
> ...those at the top are grabbing an increasing fraction of the nation's
> income — so much of a larger share that what's left over for the rest is
> diminished.... [1]
Other times it's more unconscious. But the unconscious form is very
widespread. I think because we grow up in a world where the pie fallacy is
actually true. To kids, wealth _is_ a fixed pie that's shared out, and if one
person gets more, it's at the expense of another. It takes a conscious effort
to remind oneself that the real world doesn't work that way.
In the real world you can create wealth as well as taking it from others. A
woodworker creates wealth. He makes a chair, and you willingly give him money
in return for it. A high-frequency trader does not. He makes a dollar only
when someone on the other end of a trade loses a dollar.
If the rich people in a society got that way by taking wealth from the poor,
then you have the degenerate case of economic inequality, where the cause of
poverty is the same as the cause of wealth. But instances of inequality don't
have to be instances of the degenerate case. If one woodworker makes 5 chairs
and another makes none, the second woodworker will have less money, but not
because anyone took anything from him.
Even people sophisticated enough to know about the pie fallacy are led toward
it by the custom of describing economic inequality as a ratio of one
quantile's income or wealth to another's. It's so easy to slip from talking
about income shifting from one quantile to another, as a figure of speech,
into believing that is literally what's happening.
Except in the degenerate case, economic inequality can't be described by a
ratio or even a curve. In the general case it consists of multiple ways people
become poor, and multiple ways people become rich. Which means to understand
economic inequality in a country, you have to go find individual people who
are poor or rich and figure out why. [2]
If you want to understand _change_ in economic inequality, you should ask what
those people would have done when it was different. This is one way I know the
rich aren't all getting richer simply from some new system for transferring
wealth to them from everyone else. When you use the would-have method with
startup founders, you find what most would have done [_back in
1960_](re.html), when economic inequality was lower, was to join big companies
or become professors. Before Mark Zuckerberg started Facebook, his default
expectation was that he'd end up working at Microsoft. The reason he and most
other startup founders are richer than they would have been in the mid 20th
century is not because of some right turn the country took during the Reagan
administration, but because progress in technology has made it much easier to
start a new company that [_grows fast_](growth.html).
Traditional economists seem strangely averse to studying individual humans. It
seems to be a rule with them that everything has to start with statistics. So
they give you very precise numbers about variation in wealth and income, then
follow it with the most naive speculation about the underlying causes.
But while there are a lot of people who get rich through rent-seeking of
various forms, and a lot who get rich by playing zero-sum games, there are
also a significant number who get rich by creating wealth. And creating
wealth, as a source of economic inequality, is different from taking it — not
just morally, but also practically, in the sense that it is harder to
eradicate. One reason is that variation in productivity is accelerating. The
rate at which individuals can create wealth depends on the technology
available to them, and that grows exponentially. The other reason creating
wealth is such a tenacious source of inequality is that it can expand to
accommodate a lot of people.
___
I'm all for shutting down the crooked ways to get rich. But that won't
eliminate great variations in wealth, because as long as you leave open the
option of getting rich by creating wealth, people who want to get rich will do
that instead.
Most people who get rich tend to be fairly driven. Whatever their other flaws,
laziness is usually not one of them. Suppose new policies make it hard to make
a fortune in finance. Does it seem plausible that the people who currently go
into finance to make their fortunes will continue to do so, but be content to
work for ordinary salaries? The reason they go into finance is not because
they love finance but because they want to get rich. If the only way left to
get rich is to start startups, they'll start startups. They'll do well at it
too, because determination is the main factor in the success of a startup. [3]
And while it would probably be a good thing for the world if people who wanted
to get rich switched from playing zero-sum games to creating wealth, that
would not only not eliminate great variations in wealth, but might even
exacerbate them. In a zero-sum game there is at least a limit to the upside.
Plus a lot of the new startups would create new technology that further
accelerated variation in productivity.
Variation in productivity is far from the only source of economic inequality,
but it is the irreducible core of it, in the sense that you'll have that left
when you eliminate all other sources. And if you do, that core will be big,
because it will have expanded to include the efforts of all the refugees. Plus
it will have a large Baumol penumbra around it: anyone who could get rich by
creating wealth on their own account will have to be paid enough to prevent
them from doing it.
You can't prevent great variations in wealth without preventing people from
getting rich, and you can't do that without preventing them from starting
startups.
So let's be clear about that. Eliminating great variations in wealth would
mean eliminating startups. And that doesn't seem a wise move. Especially since
it would only mean you eliminated startups in your own country. Ambitious
people already move halfway around the world to further their careers, and
startups can operate from anywhere nowadays. So if you made it impossible to
get rich by creating wealth in your country, people who wanted to do that
would just leave and do it somewhere else. Which would certainly get you a
lower Gini coefficient, along with a lesson in being careful what you ask for.
[4]
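To make that concrete, here's a minimal sketch in Python of how a Gini coefficient is computed, on invented wealth numbers, before and after the richest person leaves:

    # Gini coefficient on toy data. Removing the richest household lowers
    # the measure even though nobody who stayed is any better off.
    def gini(xs):
        xs = sorted(xs)
        n, total = len(xs), sum(xs)
        # Standard formula: G = 2*sum(i*x_i)/(n*total) - (n+1)/n, i = 1..n
        weighted = sum((i + 1) * x for i, x in enumerate(xs))
        return 2 * weighted / (n * total) - (n + 1) / n

    wealth = [10, 20, 30, 40, 900]            # invented: one founder, four others
    print(f"before: {gini(wealth):.2f}")      # ~0.72
    print(f"after:  {gini(wealth[:-1]):.2f}") # founder leaves: ~0.25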
I think rising economic inequality is the inevitable fate of countries that
don't choose something worse. We had a 40 year stretch in the middle of the
20th century that convinced some people otherwise. But as I explained in [_The
Refragmentation_](re.html), that was an anomaly — a unique combination of
circumstances that compressed American society not just economically but
culturally too. [5]
And while some of the growth in economic inequality we've seen since then has
been due to bad behavior of various kinds, there has simultaneously been a
huge increase in individuals' ability to create wealth. Startups are almost
entirely a product of this period. And even within the startup world, there
has been a qualitative change in the last 10 years. Technology has decreased
the cost of starting a startup so much that founders now have the upper hand
over investors. Founders get less diluted, and it is now common for them to
retain [_board control_](control.html) as well. Both further increase economic
inequality, the former because founders own more stock, and the latter
because, as investors have learned, founders tend to be better at running
their companies than investors.
While the surface manifestations change, the underlying forces are very, very
old. The acceleration of productivity we see in Silicon Valley has been
happening for thousands of years. If you look at the history of stone tools,
technology was already accelerating in the Mesolithic. The acceleration would
have been too slow to perceive in one lifetime. Such is the nature of the
leftmost part of an exponential curve. But it was the same curve.
You do not want to design your society in a way that's incompatible with this
curve. The evolution of technology is one of the most powerful forces in
history.
Louis Brandeis said "We may have democracy, or we may have wealth concentrated
in the hands of a few, but we can't have both." That sounds plausible. But if
I have to choose between ignoring him and ignoring an exponential curve that
has been operating for thousands of years, I'll bet on the curve. Ignoring any
trend that has been operating for thousands of years is dangerous. But
exponential growth, especially, tends to bite you.
___
If accelerating variation in productivity is always going to produce some
baseline growth in economic inequality, it would be a good idea to spend some
time thinking about that future. Can you have a healthy society with great
variation in wealth? What would it look like?
Notice how novel it feels to think about that. The public conversation so far
has been exclusively about the need to decrease economic inequality. We've
barely given a thought to how to live with it.
I'm hopeful we'll be able to. Brandeis was a product of the Gilded Age, and
things have changed since then. It's harder to hide wrongdoing now. And to get
rich now you don't have to buy politicians the way railroad or oil magnates
did. [6] The great concentrations of wealth I see around me in Silicon Valley
don't seem to be destroying democracy.
There are lots of things wrong with the US that have economic inequality as a
symptom. We should fix those things. In the process we may decrease economic
inequality. But we can't start from the symptom and hope to fix the underlying
causes. [7]
The most obvious is poverty. I'm sure most of those who want to decrease
economic inequality want to do it mainly to help the poor, not to hurt the
rich. [8] Indeed, a good number are merely being sloppy by speaking of
decreasing economic inequality when what they mean is decreasing poverty. But
this is a situation where it would be good to be precise about what we want.
Poverty and economic inequality are not identical. When the city is turning
off your [_water_](http://www.theatlantic.com/business/archive/2014/07/what-
happens-when-detroit-shuts-off-the-water-of-100000-people/374548/) because you
can't pay the bill, it doesn't make any difference what Larry Page's net worth
is compared to yours. He might only be a few times richer than you, and it
would still be just as much of a problem that your water was getting turned
off.
Closely related to poverty is lack of social mobility. I've seen this myself:
you don't have to grow up rich or even upper middle class to get rich as a
startup founder, but few successful founders grew up desperately poor. But
again, the problem here is not simply economic inequality. There is an
enormous difference in wealth between the household Larry Page grew up in and
that of a successful startup founder, but that didn't prevent him from joining
their ranks. It's not economic inequality per se that's blocking social
mobility, but some specific combination of things that go wrong when kids grow
up sufficiently poor.
One of the most important principles in Silicon Valley is that "you make what
you measure." It means that if you pick some number to focus on, it will tend
to improve, but that you have to choose the right number, because only the one
you choose will improve; another that seems conceptually adjacent might not.
For example, if you're a university president and you decide to focus on
graduation rates, then you'll improve graduation rates. But only graduation
rates, not how much students learn. Students could learn less, if to improve
graduation rates you made classes easier.
Economic inequality is sufficiently far from identical with the various
problems that have it as a symptom that we'll probably only hit whichever of
the two we aim at. If we aim at economic inequality, we won't fix these
problems. So I say let's aim at the problems.
For example, let's attack poverty, and if necessary damage wealth in the
process. That's much more likely to work than attacking wealth in the hope
that you will thereby fix poverty. [9] And if there are people getting rich by
tricking consumers or lobbying the government for anti-competitive regulations
or tax loopholes, then let's stop them. Not because it's causing economic
inequality, but because it's stealing. [10]
If all you have is statistics, it seems like that's what you need to fix. But
behind a broad statistical measure like economic inequality there are some
things that are good and some that are bad, some that are historical trends
with immense momentum and others that are random accidents. If we want to fix
the world behind the statistics, we have to understand it, and focus our
efforts where they'll do the most good.
**Notes**
[1] Stiglitz, Joseph. _The Price of Inequality_. Norton, 2012. p. 32.
[2] Particularly since economic inequality is a matter of outliers, and
outliers are disproportionately likely to have gotten where they are by ways
that have little to do with the sort of things economists usually think about,
like wages and productivity, but rather by, say, ending up on the wrong side
of the "War on Drugs."
[3] Determination is the most important factor in deciding between success and
failure, which in startups tend to be sharply differentiated. But it takes
more than determination to create one of the hugely successful startups.
Though most founders start out excited about the idea of getting rich, purely
mercenary founders will usually take one of the big acquisition offers most
successful startups get on the way up. The founders who go on to the next
stage tend to be driven by a sense of mission. They have the same attachment
to their companies that an artist or writer has to their work. But it is very
hard to predict at the outset which founders will do that. It's not simply a
function of their initial attitude. Starting a company changes people.
[4] After reading a draft of this essay, Richard Florida told me how he had
once talked to a group of Europeans "who said they wanted to make Europe more
entrepreneurial and more like Silicon Valley. I said by definition this will
give you more inequality. They thought I was insane — they could not process
it."
[5] Economic inequality has been decreasing globally. But this is mainly due
to the erosion of the kleptocracies that formerly dominated all the poorer
countries. Once the playing field is leveler politically, we'll see economic
inequality start to rise again. The US is the bellwether. The situation we
face here, the rest of the world will face sooner or later.
[6] Some people still get rich by buying politicians. My point is that it's no
longer a precondition.
[7] As well as problems that have economic inequality as a symptom, there are
those that have it as a cause. But in most if not all, economic inequality is
not the primary cause. There is usually some injustice that is allowing
economic inequality to turn into other forms of inequality, and that injustice
is what we need to fix. For example, the police in the US treat the poor worse
than the rich. But the solution is not to make people richer. It's to make the
police treat people more equitably. Otherwise they'll continue to maltreat
people who are weak in other ways.
[8] Some who read this essay will say that I'm clueless or even being
deliberately misleading by focusing so much on the richer end of economic
inequality — that economic inequality is really about poverty. But that is
exactly the point I'm making, though sloppier language than I'd use to make
it. The real problem is poverty, not economic inequality. And if you conflate
them you're aiming at the wrong target.
Others will say I'm clueless or being misleading by focusing on people who get
rich by creating wealth — that startups aren't the problem, but corrupt
practices in finance, healthcare, and so on. Once again, that is exactly my
point. The problem is not economic inequality, but those specific abuses.
It's a strange task to write an essay about why something isn't the problem,
but that's the situation you find yourself in when so many people mistakenly
think it is.
[9] Particularly since many causes of poverty are only partially driven by
people trying to make money from them. For example, America's abnormally high
incarceration rate is a major cause of poverty. But although [_for-profit
prison
companies_](https://www.washingtonpost.com/posteverything/wp/2015/04/28/how-
for-profit-prisons-have-become-the-biggest-lobby-no-one-is-talking-about/) and
[_prison guard unions_](http://mic.com/articles/41531/union-of-the-snake-how-
california-s-prison-guards-subvert-democracy) both spend a lot lobbying for
harsh sentencing laws, they are not the original source of them.
[10] Incidentally, tax loopholes are definitely not a product of some power
shift due to recent increases in economic inequality. The golden age of
economic equality in the mid 20th century was also the golden age of tax
avoidance. Indeed, it was so widespread and so effective that I'm skeptical
whether economic inequality was really so low then as we think. In a period
when people are trying to hide wealth from the government, it will tend to be
hidden from statistics too. One sign of the potential magnitude of the problem
is the discrepancy between government receipts as a percentage of GDP, which
have remained more or less constant during the entire period from the end of
World War II to the present, and tax rates, which have varied dramatically.
**Thanks** to Sam Altman, Tiffani Ashley Bell, Patrick Collison, Ron Conway,
Richard Florida, Ben Horowitz, Jessica Livingston, Robert Morris, Tim
O'Reilly, Max Roser, and Alexia Tsotsis for reading drafts of this.
**Note:** This is a new version from which I removed a pair of metaphors that
made a lot of people mad, essentially by macroexpanding them. If anyone wants
to see the old version, I put it [_here_](ineqold.html).
|
February 2008
A user on Hacker News recently posted a
[comment](http://news.ycombinator.com/item?id=116938) that set me thinking:
> Something about hacker culture that never really set well with me was this —
> the nastiness. ... I just don't understand why people troll like they do.
I've thought a lot over the last couple years about the problem of trolls.
It's an old one, as old as forums, but we're still just learning what the
causes are and how to address them.
There are two senses of the word "troll." In the original sense it meant
someone, usually an outsider, who deliberately stirred up fights in a forum by
saying controversial things. [1] For example, someone who didn't use a certain
programming language might go to a forum for users of that language and make
disparaging remarks about it, then sit back and watch as people rose to the
bait. This sort of trolling was in the nature of a practical joke, like
letting a bat loose in a room full of people.
The definition then spread to people who behaved like assholes in forums,
whether intentionally or not. Now when people talk about trolls they usually
mean this broader sense of the word. Though in a sense this is historically
inaccurate, it is in other ways more accurate, because when someone is being
an asshole it's usually uncertain even in their own mind how much is
deliberate. That is arguably one of the defining qualities of an asshole.
I think trolling in the broader sense has four causes. The most important is
distance. People will say things in anonymous forums that they'd never dare
say to someone's face, just as they'll do things in cars that they'd never do
as pedestrians — like tailgate people, or honk at them, or cut them off.
Trolling tends to be particularly bad in forums related to computers, and I
think that's due to the kind of people you find there. Most of them (myself
included) are more comfortable dealing with abstract ideas than with people.
Hackers can be abrupt even in person. Put them on an anonymous forum, and the
problem gets worse.
The third cause of trolling is incompetence. If you disagree with something,
it's easier to say "you suck" than to figure out and explain exactly what you
disagree with. You're also safe that way from refutation. In this respect
trolling is a lot like graffiti. Graffiti happens at the intersection of
ambition and incompetence: people want to make their mark on the world, but
have no other way to do it than literally making a mark on the world. [2]
The final contributing factor is the culture of the forum. Trolls are like
children (many _are_ children) in that they're capable of a wide range of
behavior depending on what they think will be tolerated. In a place where
rudeness isn't tolerated, most can be polite. But vice versa as well.
There's a sort of Gresham's Law of trolls: trolls are willing to use a forum
with a lot of thoughtful people in it, but thoughtful people aren't willing to
use a forum with a lot of trolls in it. Which means that once trolling takes
hold, it tends to become the dominant culture. That had already happened to
Slashdot and Digg by the time I paid attention to comment threads there, but I
watched it happen to Reddit.
News.YC is, among other things, an experiment to see if this fate can be
avoided. The site's [guidelines](http://ycombinator.com/newsguidelines.html)
explicitly ask people not to say things they wouldn't say face to face. If
someone starts being rude, other users will step in and tell them to stop. And
when people seem to be deliberately trolling, we ban them ruthlessly.
Technical tweaks may also help. On Reddit, votes on your comments don't affect
your karma score, but they do on News.YC. And it does seem to influence people
when they can see their reputation in the eyes of their peers drain away after
making an asshole remark. Often users have second thoughts and delete such
comments.
One might worry this would prevent people from expressing controversial ideas,
but empirically that doesn't seem to be what happens. When people say
something substantial that gets modded down, they stubbornly leave it up. What
people delete are wisecracks, because they have less invested in them.
So far the experiment seems to be working. The level of conversation on
News.YC is as high as on any forum I've seen. But we still only have about
8,000 uniques a day. The conversations on Reddit were good when it was that
small. The challenge is whether we can keep things this way.
I'm optimistic we will. We're not depending just on technical tricks. The core
users of News.YC are mostly refugees from other sites that were overrun by
trolls. They feel about trolls roughly the way refugees from Cuba or Eastern
Europe feel about dictatorships. So there are a lot of people working to keep
this from happening again.
**Notes**
[1] I mean forum in the general sense of a place to exchange views. The
original Internet forums were not web sites but Usenet newsgroups.
[2] I'm talking here about everyday tagging. Some graffiti is quite impressive
(anything becomes art if you do it well enough) but the median tag is just
visual spam.
|
April 2007
_(This essay is derived from a keynote talk at the 2007 ASES Summit at
Stanford.)_
The world of investors is a foreign one to most hackers—partly because
investors are so unlike hackers, and partly because they tend to operate in
secret. I've been dealing with this world for many years, both as a founder
and an investor, and I still don't fully understand it.
In this essay I'm going to list some of the more surprising things I've
learned about investors. Some I only learned in the past year.
Teaching hackers how to deal with investors is probably the second most
important thing we do at Y Combinator. The most important thing for a startup
is to make something good. But everyone knows that's important. The dangerous
thing about investors is that hackers don't know how little they know about
this strange world.
**1\. The investors are what make a startup hub.**
About a year ago I tried to figure out what you'd need to reproduce [Silicon
Valley](siliconvalley.html). I decided the critical ingredients were rich
people and nerds—investors and founders. People are all you need to make
technology, and all the other people will move.
If I had to narrow that down, I'd say investors are the limiting factor. Not
because they contribute more to the startup, but simply because they're least
willing to move. They're rich. They're not going to move to Albuquerque just
because there are some smart hackers there they could invest in. Whereas
hackers will move to the Bay Area to find investors.
**2\. Angel investors are the most critical.**
There are several types of investors. The two main categories are angels and
VCs: VCs invest other people's money, and angels invest their own.
Though they're less well known, the angel investors are probably the more
critical ingredient in creating a silicon valley. Most companies that VCs
invest in would never have made it that far if angels hadn't invested first.
VCs say between half and three quarters of companies that raise series A
rounds have taken some outside investment already. [1]
Angels are willing to fund riskier projects than VCs. They also give valuable
advice, because (unlike VCs) many have been startup founders themselves.
Google's story shows the key role angels play. A lot of people know Google
raised money from Kleiner and Sequoia. What most don't realize is how late.
That VC round was a series B round; the premoney valuation was $75 million.
Google was already a successful company at that point. Really, Google was
funded with angel money.
It may seem odd that the canonical Silicon Valley startup was funded by
angels, but this is not so surprising. Risk is always proportionate to reward.
So the most successful startup of all is likely to have seemed an extremely
risky bet at first, and that is exactly the kind VCs won't touch.
Where do angel investors come from? From other startups. So startup hubs like
Silicon Valley benefit from something like the marketplace effect, but shifted
in time: startups are there because startups were there.
**3\. Angels don't like publicity.**
If angels are so important, why do we hear more about VCs? Because VCs like
publicity. They need to market themselves to the investors who are their
"customers"—the endowments and pension funds and rich families whose money
they invest—and also to founders who might come to them for funding.
Angels don't need to market themselves to investors because they invest their
own money. Nor do they want to market themselves to founders: they don't want
random people pestering them with business plans. Actually, neither do VCs.
Both angels and VCs get deals almost exclusively through personal
introductions. [2]
The reason VCs want a strong brand is not to draw in more business plans over
the transom, but so they win deals when competing against other VCs. Whereas
angels are rarely in direct competition, because (a) they do fewer deals, (b)
they're happy to split them, and (c) they invest at a point where the stream
is broader.
**4\. Most investors, especially VCs, are not like founders.**
Some angels are, or were, hackers. But most VCs are a different type of
people: they're dealmakers.
If you're a hacker, here's a thought experiment you can run to understand why
there are basically no hacker VCs: How would you like a job where you never
got to make anything, but instead spent all your time listening to other
people pitch (mostly terrible) projects, deciding whether to fund them, and
sitting on their boards if you did? That would not be fun for most hackers.
Hackers like to make things. This would be like being an administrator.
Because most VCs are a different species of people from founders, it's hard to
know what they're thinking. If you're a hacker, the last time you had to deal
with these guys was in high school. Maybe in college you walked past their
fraternity on your way to the lab. But don't underestimate them. They're as
expert in their world as you are in yours. What they're good at is reading
people, and making deals work to their advantage. Think twice before you try
to beat them at that.
**5\. Most investors are momentum investors.**
Because most investors are dealmakers rather than technology people, they
generally don't understand what you're doing. I knew as a founder that most
VCs didn't get technology. I also knew some made a lot of money. And yet it
never occurred to me till recently to put those two ideas together and ask
"How can VCs make money by investing in stuff they don't understand?"
The answer is that they're like momentum investors. You can (or could once)
make a lot of money by noticing sudden changes in stock prices. When a stock
jumps upward, you buy, and when it suddenly drops, you sell. In effect you're
insider trading, without knowing what you know. You just know someone knows
something, and that's making the stock move.
This is how most venture investors operate. They don't try to look at
something and predict whether it will take off. They win by noticing that
something _is_ taking off a little sooner than everyone else. That generates
almost as good returns as actually being able to pick winners. They may have
to pay a little more than they would if they got in at the very beginning, but
only a little.
Investors always say what they really care about is the team. Actually what
they care most about is your traffic, then what other investors think, then
the team. If you don't yet have any traffic, they fall back on number 2, what
other investors think. And this, as you can imagine, produces wild
oscillations in the "stock price" of a startup. One week everyone wants you,
and they're begging not to be cut out of the deal. But all it takes is for one
big investor to cool on you, and the next week no one will return your phone
calls. We regularly have startups go from hot to cold or cold to hot in a
matter of days, and literally nothing has changed.
There are two ways to deal with this phenomenon. If you're feeling really
confident, you can try to ride it. You can start by asking a comparatively
lowly VC for a small amount of money, and then after generating interest
there, ask more prestigious VCs for larger amounts, stirring up a crescendo of
buzz, and then "sell" at the top. This is extremely risky, and takes months
even if you succeed. I wouldn't try it myself. My advice is to err on the side
of safety: when someone offers you a decent deal, just take it and get on with
building the company. Startups win or lose based on the quality of their
product, not the quality of their funding deals.
**6\. Most investors are looking for big hits.**
Venture investors like companies that could go public. That's where the big
returns are. They know the odds of any individual startup going public are
small, but they want to invest in those that at least have a _chance_ of going
public.
Currently the way VCs seem to operate is to invest in a bunch of companies,
most of which fail, and one of which is Google. Those few big wins compensate
for losses on their other investments. What this means is that most VCs will
only invest in you if you're a potential Google. They don't care about
companies that are a safe bet to be acquired for $20 million. There needs to
be a chance, however small, of the company becoming really big.
Angels are different in this respect. They're happy to invest in a company
where the most likely outcome is a $20 million acquisition if they can do it
at a low enough valuation. But of course they like companies that could go
public too. So having an ambitious long-term plan pleases everyone.
If you take VC money, you have to mean it, because the structure of VC deals
prevents early acquisitions. If you take VC money, they won't let you sell
early.
**7\. VCs want to invest large amounts.**
The fact that they're running investment funds makes VCs want to invest large
amounts. A typical VC fund is now hundreds of millions of dollars. If $400
million has to be invested by 10 partners, they have to invest $40 million
each. VCs usually sit on the boards of companies they fund. If the average
deal size was $1 million, each partner would have to sit on 40 boards, which
would not be fun. So they prefer bigger deals, where they can put a lot of
money to work at once.
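The constraint is easy to see as arithmetic, using the fund size from the paragraph above:

    # Why big funds want big deals: partners can only sit on so many boards.
    fund_size, partners = 400_000_000, 10
    per_partner = fund_size / partners       # $40M each to put to work

    for deal in (1_000_000, 5_000_000, 10_000_000):
        print(f"${deal/1e6:.0f}M deals -> {per_partner/deal:.0f} boards each")
    # $1M deals mean 40 boards per partner, which is why they don't happen.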
VCs don't regard you as a bargain if you don't need a lot of money. That may
even make you less attractive, because it means their investment creates less
of a barrier to entry for competitors.
Angels are in a different position because they're investing their own money.
They're happy to invest small amounts—sometimes as little as $20,000—as long
as the potential returns look good enough. So if you're doing something
inexpensive, go to angels.
**8\. Valuations are fiction.**
VCs admit that valuations are an artifact. They decide how much money you need
and how much of the company they want, and those two constraints yield a
valuation.
Valuations increase as the size of the investment does. A company that an
angel is willing to put $50,000 into at a valuation of a million can't take $6
million from VCs at that valuation. That would leave the founders less than a
seventh of the company between them (since the option pool would also come out
of that seventh). Most VCs wouldn't want that, which is why you never hear of
deals where a VC invests $6 million at a premoney valuation of $1 million.
If valuations change depending on the amount invested, that shows how far they
are from reflecting any kind of value of the company.
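A sketch of that constraint, using the numbers from the paragraph above:

    # Why a VC won't invest $6M at a $1M pre-money valuation.
    def investor_share(investment, pre_money):
        return investment / (pre_money + investment)

    print(f"angel, $50k: {investor_share(50_000, 1_000_000):.1%}")    # ~4.8%
    print(f"VC, $6M:     {investor_share(6_000_000, 1_000_000):.1%}") # ~85.7%
    # At $6M on $1M pre, founders, angel, and option pool would split the
    # remaining seventh, with the pool coming out of the founders' share.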
Since valuations are made up, founders shouldn't care too much about them.
That's not the part to focus on. In fact, a high valuation can be a bad thing.
If you take funding at a premoney valuation of $10 million, you won't be
selling the company for 20. You'll have to sell for over 50 for the VCs to get
even a 5x return, which is low to them. More likely they'll want you to hold
out for 100. But needing to get a high price decreases the chance of getting
bought at all; many companies can buy you for $10 million, but only a handful
for 100. And since a startup is like a pass/fail course for the founders, what
you want to optimize is your chance of a good outcome, not the percentage of
the company you keep.
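And the flip side, what a high valuation implies about exits; the round size here is an assumed illustration:

    # If investors put I into a company at pre-money P, they own I/(P+I),
    # so a sale at price S returns them S*I/(P+I): an n-x return needs
    # S = n*(P+I).
    P, I = 10_000_000, 2_000_000     # I is an invented round size
    for n in (2, 5):
        print(f"{n}x return needs a sale of ${n*(P+I)/1e6:.0f}M")
    # 5x means selling for $60M here: "over 50", beyond most acquirers.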
So why do founders chase high valuations? They're tricked by misplaced
ambition. They feel they've achieved more if they get a higher valuation. They
usually know other founders, and if they get a higher valuation they can say
"mine is bigger than yours." But funding is not the real test. The real test
is the final outcome for the founder, and getting too high a valuation may
just make a good outcome less likely.
The one advantage of a high valuation is that you get less dilution. But there
is another less sexy way to achieve that: just take less money.
**9\. Investors look for founders like the current stars.**
Ten years ago investors were looking for the next Bill Gates. This was a
mistake, because Microsoft was a very anomalous startup. They started almost
as a contract programming operation, and the reason they became huge was that
IBM happened to drop the PC standard in their lap.
Now all the VCs are looking for the next Larry and Sergey. This is a good
trend, because Larry and Sergey are closer to the ideal startup founders.
Historically investors thought it was important for a founder to be an expert
in business. So they were willing to fund teams of MBAs who planned to use the
money to pay programmers to build their product for them. This is like funding
Steve Ballmer in the hope that the programmer he'll hire is Bill Gates—kind of
backward, as the events of the Bubble showed. Now most VCs know they should be
funding technical guys. This is more pronounced among the very top funds; the
lamer ones still want to fund MBAs.
If you're a hacker, it's good news that investors are looking for Larry and
Sergey. The bad news is, the only investors who can do it right are the ones
who knew them when they were a couple of CS grad students, not the confident
media stars they are today. What investors still don't get is how clueless and
tentative great founders can seem at the very beginning.
**10\. The contribution of investors tends to be underestimated.**
Investors do more for startups than give them money. They're helpful in doing
deals and arranging introductions, and some of the smarter ones, particularly
angels, can give good advice about the product.
In fact, I'd say what separates the great investors from the mediocre ones is
the quality of their advice. Most investors give advice, but the top ones give
_good_ advice.
Whatever help investors give a startup tends to be underestimated. It's to
everyone's advantage to let the world think the founders thought of
everything. The goal of the investors is for the company to become valuable,
and the company seems more valuable if it seems like all the good ideas came
from within.
This trend is compounded by the obsession that the press has with founders. In
a company founded by two people, 10% of the ideas might come from the first
guy they hire. Arguably they've done a bad job of hiring otherwise. And yet
this guy will be almost entirely overlooked by the press.
I say this as a founder: the contribution of founders is always overestimated.
The danger here is that new founders, looking at existing founders, will think
they're supermen whom one couldn't possibly equal. Actually they
have a hundred different types of support people just offscreen making the
whole show possible. [3]
**11\. VCs are afraid of looking bad.**
I've been very surprised to discover how timid most VCs are. They seem to be
afraid of looking bad to their partners, and perhaps also to the limited
partners—the people whose money they invest.
You can measure this fear in how much less risk VCs are willing to take. You
can tell they won't make investments for their fund that they might be willing
to make themselves as angels. Though it's not quite accurate to say that VCs
are less willing to take risks. They're less willing to do things that might
look bad. That's not the same thing.
For example, most VCs would be very reluctant to invest in a startup founded
by a pair of 18 year old hackers, no matter how brilliant, because if the
startup failed their partners could turn on them and say "What, you invested
$x million of our money in a pair of 18 year olds?" Whereas if a VC invested
in a startup founded by three former banking executives in their 40s who
planned to outsource their product development—which to my mind is actually a
lot riskier than investing in a pair of really smart 18 year olds—he couldn't
be faulted, if it failed, for making such an apparently prudent investment.
As a friend of mine said, "Most VCs can't do anything that would sound bad to
the kind of doofuses who run pension funds." Angels can take greater risks
because they don't have to answer to anyone.
**12\. Being turned down by investors doesn't mean much.**
Some founders are quite dejected when they get turned down by investors. They
shouldn't take it so much to heart. To start with, investors are often wrong.
It's hard to think of a successful startup that wasn't turned down by
investors at some point. Lots of VCs rejected Google. So obviously the
reaction of investors is not a very meaningful test.
Investors will often reject you for what seem to be superficial reasons. I
read of one VC who [turned
down](http://ricksegal.typepad.com/pmv/2007/02/a_fatal_paper_c.html) a startup
simply because they'd given away so many little bits of stock that the deal
required too many signatures to close. [4] The reason investors can get away
with this is that they see so many deals. It doesn't matter if they
underestimate you because of some surface imperfection, because the next best
deal will be [almost as good](judgement.html). Imagine picking out apples at a
grocery store. You grab one with a little bruise. Maybe it's just a surface
bruise, but why even bother checking when there are so many other unbruised
apples to choose from?
Investors would be the first to admit they're often wrong. So when you get
rejected by investors, don't think "we suck," but instead ask "do we suck?"
Rejection is a question, not an answer.
**13\. Investors are emotional.**
I've been surprised to discover how emotional investors can be. You'd expect
them to be cold and calculating, or at least businesslike, but often they're
not. I'm not sure if it's their position of power that makes them this way, or
the large sums of money involved, but investment negotiations can easily turn
personal. If you offend investors, they'll leave in a huff.
A while ago an eminent VC firm offered a series A round to a startup we'd
seed-funded. Then they heard a rival VC firm was also interested. They were so
afraid that they'd be rejected in favor of this other firm that they gave the
startup what's known as an "exploding termsheet." They had, I think, 24 hours
to say yes or no, or the deal was off. Exploding termsheets are a somewhat
dubious device, but not uncommon. What surprised me was their reaction when I
called to talk about it. I asked if they'd still be interested in the startup
if the rival VC didn't end up making an offer, and they said no. What rational
basis could they have had for saying that? If they thought the startup was
worth investing in, what difference should it make what some other VC thought?
Surely it was their duty to their limited partners simply to invest in the
best opportunities they found; they should be delighted if the other VC said
no, because it would mean they'd overlooked a good opportunity. But of course
there was no rational basis for their decision. They just couldn't stand the
idea of taking this rival firm's rejects.
In this case the exploding termsheet was not (or not only) a tactic to
pressure the startup. It was more like the high school trick of breaking up
with someone before they can break up with you. In an [earlier
essay](startupfunding.html) I said that VCs were a lot like high school girls.
A few VCs have joked about that characterization, but none have disputed it.
**14\. The negotiation never stops till the closing.**
Most deals, for investment or acquisition, happen in two phases. There's an
initial phase of negotiation about the big questions. If this succeeds you get
a termsheet, so called because it outlines the key terms of a deal. A
termsheet is not legally binding, but it is a definite step. It's supposed to
mean that a deal is going to happen, once the lawyers work out all the
details. In theory these details are minor ones; by definition all the
important points are supposed to be covered in the termsheet.
Inexperience and wishful thinking combine to make founders feel that when they
have a termsheet, they have a deal. They want there to be a deal; everyone
acts like they have a deal; so there must be a deal. But there isn't and may
not be for several months. A lot can change for a startup in several months.
It's not uncommon for investors and acquirers to get buyer's remorse. So you
have to keep pushing, keep selling, all the way to the close. Otherwise all
the "minor" details left unspecified in the termsheet will be interpreted to
your disadvantage. The other side may even break the deal; if they do that,
they'll usually seize on some technicality or claim you misled them, rather
than admitting they changed their minds.
It can be hard to keep the pressure on an investor or acquirer all the way to
the closing, because the most effective pressure is competition from other
investors or acquirers, and these tend to drop away when you get a termsheet.
You should try to stay as close friends as you can with these rivals, but the
most important thing is just to keep up the momentum in your startup. The
investors or acquirers chose you because you seemed hot. Keep doing whatever
made you seem hot. Keep releasing new features; keep getting new users; keep
getting mentioned in the press and in blogs.
**15\. Investors like to co-invest.**
I've been surprised how willing investors are to split deals. You might think
that if they found a good deal they'd want it all to themselves, but they seem
positively eager to syndicate. This is understandable with angels; they invest
on a smaller scale and don't like to have too much money tied up in any one
deal. But VCs also share deals a lot. Why?
Partly I think this is an artifact of the rule I quoted earlier: after
traffic, VCs care most what other VCs think. A deal that has multiple VCs
interested in it is more likely to close, so of deals that close, more will
have multiple investors.
There is one rational reason to want multiple VCs in a deal: Any investor who
co-invests with you is one less investor who could fund a competitor.
Apparently Kleiner and Sequoia didn't like splitting the Google deal, but it
did at least have the advantage, from each one's point of view, that there
probably wouldn't be a competitor funded by the other. Splitting deals thus
has similar advantages to confusing paternity.
But I think the main reason VCs like splitting deals is the fear of looking
bad. If another firm shares the deal, then in the event of failure it will
seem to have been a prudent choice—a consensus decision, rather than just the
whim of an individual partner.
**16\. Investors collude.**
Investing is not covered by antitrust law. At least, it better not be, because
investors regularly do things that would be illegal otherwise. I know
personally of cases where one investor has talked another out of making a
competitive offer, using the promise of sharing future deals.
In principle investors are all competing for the same deals, but the spirit of
cooperation is stronger than the spirit of competition. The reason, again, is
that there are so many deals. Though a professional investor may have a closer
relationship with a founder he invests in than with other investors, his
relationship with the founder is only going to last a couple years, whereas
his relationship with other firms will last his whole career. There isn't so
much at stake in his interactions with other investors, but there will be a
lot of them. Professional investors are constantly trading little favors.
Another reason investors stick together is to preserve the power of investors
as a whole. So you will not, as of this writing, be able to get investors into
an auction for your series A round. They'd rather lose the deal than establish
a precedent of VCs competitively bidding against one another. An efficient
startup funding market may be coming in the distant future; things tend to
move in that direction; but it's certainly not here now.
**17\. Large-scale investors care about their portfolio, not any individual
company.**
The reason startups work so well is that everyone with power also has equity.
The only way any of them can succeed is if they all do. This makes everyone
naturally pull in the same direction, subject to differences of opinion about
tactics.
The problem is, larger-scale investors don't have exactly the same motivation.
Close, but not identical. They don't need any given startup to succeed, like
founders do, just their portfolio as a whole to. So in borderline cases the
rational thing for them to do is to sacrifice unpromising startups.
Large-scale investors tend to put startups in three categories: successes,
failures, and the "living dead"—companies that are plugging along but don't
seem likely in the immediate future to get bought or go public. To the
founders, "living dead" sounds harsh. These companies may be far from failures
by ordinary standards. But they might as well be from a venture investor's
point of view, and they suck up just as much time and attention as the
successes. So if such a company has two possible strategies, a conservative
one that's slightly more likely to work in the end, or a risky one that within
a short time will either yield a giant success or kill the company, VCs will
push for the kill-or-cure option. To them the company is already a write-off.
Better to have resolution, one way or the other, as soon as possible.
If a startup gets into real trouble, instead of trying to save it VCs may just
sell it at a low price to another of their portfolio companies. Philip
Greenspun said in [_Founders at
Work_](http://www.amazon.com/gp/product/1590597141) that ArsDigita's VCs did
this to them.
**18\. Investors have different risk profiles from founders.**
Most people would rather a 100% chance of $1 million than a 20% chance of $10
million. Investors are rich enough to be rational and prefer the latter. So
they'll always tend to encourage founders to keep rolling the dice. If a
company is doing well, investors will want founders to turn down most
acquisition offers. And indeed, most startups that turn down acquisition
offers ultimately do better. But it's still hair-raising for the founders,
because they might end up with nothing. When someone's offering to buy you for
a price at which your stock is worth $5 million, saying no is equivalent to
having $5 million and betting it all on one spin of the roulette wheel.
Investors will tell you the company is worth more. And they may be right. But
that doesn't mean it's wrong to sell. Any financial advisor who put all his
client's assets in the stock of a single, private company would probably lose
his license for it.
More and more, investors are letting founders cash out partially. That should
correct the problem. Most founders have such low standards that they'll feel
rich with a sum that doesn't seem huge to investors. But this custom is
spreading too slowly, because VCs are afraid of seeming irresponsible. No one
wants to be the first VC to give someone fuck-you money and then actually get
told "fuck you." But until this does start to happen, we know VCs are being
too conservative.
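
One way to see both the divergence and why partial cash-outs correct it is to compare a diversified investor's expected value with a founder's expected utility. Here's a rough sketch; the log utility function is my stand-in for risk aversion, not anything from the essay:

```python
import math

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def founder_utility(outcomes, other_wealth=100_000):
    """Expected log utility: a standard, if crude, model of how
    someone with most of their wealth in one bet values risk.
    other_wealth is whatever the founder has outside the startup."""
    return sum(p * math.log(other_wealth + x) for p, x in outcomes)

sure_million = [(1.0, 1_000_000)]
long_shot    = [(0.2, 10_000_000), (0.8, 0)]

# A diversified investor just compares expected values:
print(expected_value(sure_million))  # 1,000,000
print(expected_value(long_shot))     # 2,000,000 <- keep rolling the dice

# A founder with $100k outside the company prefers the sure million:
print(founder_utility(sure_million))  # ~13.91
print(founder_utility(long_shot))     # ~12.44

# Let the founder take $5M off the table first and the preference
# flips, which is why partial cash-outs align founders with investors:
print(founder_utility(sure_million, other_wealth=5_000_000))  # ~15.61
print(founder_utility(long_shot,    other_wealth=5_000_000))  # ~15.64
```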
**19\. Investors vary greatly.**
Back when I was a founder I used to think all VCs were the same. And in fact
they do all [look](http://www.redpoint.com/team/) the same. They're all what
hackers call "suits." But since I've been dealing with VCs more I've learned
that some suits are smarter than others.
They're also in a business where winners tend to keep winning and losers to
keep losing. When a VC firm has been successful in the past, everyone wants
funding from them, so they get the pick of all the new deals. The self-
reinforcing nature of the venture funding market means that the top ten firms
live in a completely different world from, say, the hundredth. As well as
being smarter, they tend to be calmer and more upstanding; they don't need to
do iffy things to get an edge, and don't want to because they have more brand
to protect.
There are only two kinds of VCs you want to take money from, if you have the
luxury of choosing: the "top tier" VCs, meaning about the top 20 or so firms,
plus a few new ones that are not among the top 20 only because they haven't
been around long enough.
It's particularly important to raise money from a top firm if you're a hacker,
because they're more confident. That means they're less likely to stick you
with a business guy as CEO, like VCs used to do in the 90s. If you seem smart
and want to do it, they'll let you run the company.
**20\. Investors don't realize how much it costs to raise money from them.**
Raising money is a huge time suck at just the point where startups can least
afford it. It's not unusual for it to take five or six months to close a
funding round. Six weeks is fast. And raising money is not just something you
can leave running as a background process. When you're raising money, it's
inevitably the main focus of the company. Which means building the product
isn't.
Suppose a Y Combinator company starts talking to VCs after demo day, and is
successful in raising money from them, closing the deal after a comparatively
short 8 weeks. Since demo day occurs after 10 weeks, the company is now 18
weeks old. Raising money, rather than working on the product, has been the
company's main focus for 44% of its existence. And mind you, this is an example
where things turned out _well_.
When a startup does return to working on the product after a funding round
finally closes, it's as if they were returning to work after a months-long
illness. They've lost most of their momentum.
Investors have no idea how much they damage the companies they invest in by
taking so long to do it. But companies do. So there is a big opportunity here
for a new kind of venture fund that invests smaller amounts at lower
valuations, but promises to either close or say no very quickly. If there were
such a firm, I'd recommend it to startups in preference to any other, no
matter how prestigious. Startups live on speed and momentum.
**21\. Investors don't like to say no.**
The reason funding deals take so long to close is mainly that investors can't
make up their minds. VCs are not big companies; they can do a deal in 24 hours
if they need to. But they usually let the initial meetings stretch out over a
couple weeks. The reason is the selection algorithm I mentioned earlier. Most
don't try to predict whether a startup will win, but to notice quickly that it
already is winning. They care what the market thinks of you and what other VCs
think of you, and they can't judge those just from meeting you.
Because they're investing in things that (a) change fast and (b) they don't
understand, a lot of investors will reject you in a way that can later be
claimed not to have been a rejection. Unless you know this world, you may not
even realize you've been rejected. Here's a VC saying no:
> We're really excited about your project, and we want to keep in close touch
> as you develop it further.
Translated into more straightforward language, this means: We're not investing
in you, but we may change our minds if it looks like you're taking off.
Sometimes they're more candid and say explicitly that they need to "see some
traction." They'll invest in you if you start to get lots of users. But so
would any VC. So all they're saying is that you're still at square 1.
Here's a test for deciding whether a VC's response was yes or no. Look down at
your hands. Are you holding a termsheet?
**22\. You need investors.**
Some founders say "Who needs investors?" Empirically the answer seems to be:
everyone who wants to succeed. Practically every successful startup takes
outside investment at some point.
Why? What the people who think they don't need investors forget is that they
will have competitors. The question is not whether you _need_ outside
investment, but whether it could help you at all. If the answer is yes, and
you don't take investment, then competitors who do will have an advantage over
you. And in the startup world a little advantage can expand into a lot.
Mike Moritz famously said that he invested in Yahoo because he thought they
had a few weeks' lead over their competitors. That may not have mattered quite
so much as he thought, because Google came along three years later and kicked
Yahoo's ass. But there is something in what he said. Sometimes a small lead
can grow into the yes half of a binary choice.
Maybe as it gets cheaper to start a startup, it will start to be possible to
succeed in a competitive market without outside funding. There are certainly
costs to raising money. But as of this writing the empirical evidence says
it's a net win.
**23\. Investors like it when you don't need them.**
A lot of founders approach investors as if they needed their permission to
start a company—as if it were like getting into college. But you don't need
investors to start most companies; they just make it easier.
And in fact, investors greatly prefer it if you don't need them. What excites
them, both consciously and unconsciously, is the sort of startup that
approaches them saying "the train's leaving the station; are you in or out?"
not the one saying "please can we have some money to start a company?"
Most investors are "bottoms" in the sense that the startups they like most are
those that are rough with them. When Google stuck Kleiner and Sequoia with a
$75 million premoney valuation, their reaction was probably "Ouch! That feels
so good." And they were right, weren't they? That deal probably made them more
than any other they've done.
The thing is, VCs are pretty good at reading people. So don't try to act tough
with them unless you really are the next Google, or they'll see through you in
a second. Instead of acting tough, what most startups should do is simply
always have a backup plan. Always have some alternative plan for getting
started if any given investor says no. Having one is the best insurance
against needing one.
So you shouldn't start a startup that's expensive to start, because then
you'll be at the mercy of investors. If you ultimately want to do something
that will cost a lot, start by doing a cheaper subset of it, and expand your
ambitions when and if you raise more money.
Apparently the most likely animals to be left alive after a nuclear war are
cockroaches, because they're so hard to kill. That's what you want to be as a
startup, initially. Instead of a beautiful but fragile flower that needs to
have its stem in a plastic tube to support itself, better to be small, ugly,
and indestructible.
**Notes**
[1] I may be underestimating VCs. They may play some behind-the-scenes role in
IPOs, which you ultimately need if you want to create a silicon valley.
[2] A few VCs have an email address you can send your business plan to, but
the number of startups that get funded this way is basically zero. You should
always get a personal introduction—and to a partner, not an associate.
[3] Several people have told us that the most valuable thing about [startup
school](http://startupschool.org) was that they got to see famous startup
founders and realized they were just ordinary guys. Though we're happy to
provide this service, this is not generally the way we pitch startup school to
potential speakers.
[4] Actually this sounds to me like a VC who got buyer's remorse, then used a
technicality to get out of the deal. But it's telling that it even seemed a
plausible excuse.
**Thanks** to Sam Altman, Paul Buchheit, Hutch Fishman, and Robert Morris for
reading drafts of this, and to Kenneth King of ASES for inviting me to speak.
|
October 2015
Here's a simple trick for getting more people to read what you write: write in
spoken language.
Something comes over most people when they start writing. They write in a
different language than they'd use if they were talking to a friend. The
sentence structure and even the words are different. No one uses "pen" as a
verb in spoken English. You'd feel like an idiot using "pen" instead of
"write" in a conversation with a friend.
The last straw for me was a sentence I read a couple days ago:
> The mercurial Spaniard himself declared: "After Altamira, all is decadence."
It's from Neil Oliver's _A History of Ancient Britain_. I feel bad making an
example of this book, because it's no worse than lots of others. But just
imagine calling Picasso "the mercurial Spaniard" when talking to a friend.
Even one sentence of this would raise eyebrows in conversation. And yet people
write whole books of it.
Ok, so written and spoken language are different. Does that make written
language worse?
If you want people to read and understand what you write, yes. Written
language is more complex, which makes it more work to read. It's also more
formal and distant, which gives the reader's attention permission to drift.
But perhaps worst of all, the complex sentences and fancy words give you, the
writer, the false impression that you're saying more than you actually are.
You don't need complex sentences to express complex ideas. When specialists in
some abstruse topic talk to one another about ideas in their field, they don't
use sentences any more complex than they do when talking about what to have
for lunch. They use different words, certainly. But even those they use no
more than necessary. And in my experience, the harder the subject, the more
informally experts speak. Partly, I think, because they have less to prove,
and partly because the harder the ideas you're talking about, the less you can
afford to let language get in the way.
Informal language is the athletic clothing of ideas.
I'm not saying spoken language always works best. Poetry is as much music as
text, so you can say things you wouldn't say in conversation. And there are a
handful of writers who can get away with using fancy language in prose. And
then of course there are cases where writers don't want to make it easy to
understand what they're saying—in corporate announcements of bad news, for
example, or at the more
[_bogus_](https://scholar.google.com/scholar?hl=en&as_sdt=1,5&q=transgression+narrative+postmodern+gender)
end of the humanities. But for nearly everyone else, spoken language is
better.
It seems to be hard for most people to write in spoken language. So perhaps
the best solution is to write your first draft the way you usually would, then
afterward look at each sentence and ask "Is this the way I'd say this if I
were talking to a friend?" If it isn't, imagine what you would say, and use
that instead. After a while this filter will start to operate as you write.
When you write something you wouldn't say, you'll hear the clank as it hits
the page.
Before I publish a new essay, I read it out loud and fix everything that
doesn't sound like conversation. I even fix bits that are phonetically
awkward; I don't know if that's necessary, but it doesn't cost much.
This trick may not always be enough. I've seen writing so far removed from
spoken language that it couldn't be fixed sentence by sentence. For cases like
that there's a more drastic solution. After writing the first draft, try
explaining to a friend what you just wrote. Then replace the draft with what
you said to your friend.
People often tell me how much my essays sound like me talking. The fact that
this seems worthy of comment shows how rarely people manage to write in spoken
language. Otherwise everyone's writing would sound like them talking.
If you simply manage to write in spoken language, you'll be ahead of 95% of
writers. And it's so easy to do: just don't let a sentence through unless it's
the way you'd say it to a friend.
**Thanks** to Patrick Collison and Jessica Livingston for reading drafts of
this.
|
August 2009
Kate Courteau is the architect who designed Y Combinator's office. Recently we
managed to recruit her to help us run YC when she's not busy with
architectural projects. Though she'd heard a lot about YC since the beginning,
the last 9 months have been a total immersion.
I've been around the startup world for so long that it seems normal to me, so
I was curious to hear what had surprised her most about it. This was her list:
**1\. How many startups fail.**
Kate knew in principle that startups were very risky, but she was surprised to
see how constant the threat of failure was — not just for the minnows, but
even for the famous startups whose founders came to speak at YC dinners.
**2\. How much startups' ideas change.**
As usual, by Demo Day about half the startups were doing something
significantly different from what they started with. We encourage that. Starting a
startup is like science in that you have to follow the truth wherever it
leads. In the rest of the world, people don't start things till they're sure
what they want to do, and once started they tend to continue on their initial
path even if it's mistaken.
**3\. How little money it can take to start a startup.**
In Kate's world, everything is still physical and expensive. You can barely
renovate a bathroom for the cost of starting a startup.
**4\. How scrappy founders are.**
That was her actual word. I agree with her, but till she mentioned this it
never occurred to me how little this quality is appreciated in most of the
rest of the world. It wouldn't be a compliment in most organizations to call
someone scrappy.
What does it mean, exactly? It's basically the diminutive form of belligerent.
Someone who's scrappy manages to be both threatening and undignified at the
same time. Which seems to me exactly what one would want to be, in any kind of
work. If you're not threatening, you're probably not doing anything new, and
dignity is merely a sort of plaque.
**5\. How tech-saturated Silicon Valley is.**
"It seems like everybody here is in the industry." That isn't literally true,
but there is a qualitative difference between Silicon Valley and other places.
You tend to keep your voice down, because there's a good chance the person at
the next table would know some of the people you're talking about. I never
felt that in Boston. The good news is, there's also a good chance the person
at the next table could help you in some way.
**6\. That the speakers at YC were so consistent in their advice.**
Actually, I've noticed this too. I always worry the speakers will put us in an
embarrassing position by contradicting what we tell the startups, but it
happens surprisingly rarely.
When I asked her what specific things she remembered speakers always saying,
she mentioned: that the way to succeed was to launch something fast, listen to
users, and then iterate; that startups required resilience because they were
always an emotional rollercoaster; and that most VCs were sheep.
I've been impressed by how consistently the speakers advocate launching fast
and iterating. That was contrarian advice 10 years ago, but it's clearly now
the established practice.
**7\. How casual successful startup founders are.**
Most of the famous founders in Silicon Valley are people you'd overlook on the
street. It's not merely that they don't dress up. They don't project any kind
of aura of power either. "They're not trying to impress anyone."
Interestingly, while Kate said that she could never pick out successful
founders, she could recognize VCs, both by the way they dressed and the way
they carried themselves.
**8\. How important it is for founders to have people to ask for advice.**
(I swear I didn't prompt this one.) Without advice "they'd just be sort of
lost." Fortunately, there are a lot of people to help them. There's a strong
tradition within YC of helping other YC-funded startups. But we didn't invent
that idea: it's just a slightly more concentrated form of existing Valley
culture.
**9\. What a solitary task startups are.**
Architects are constantly interacting face to face with other people, whereas
doing a technology startup, at least, tends to require long stretches of
uninterrupted time to work. "You could do it in a box."
By inverting this list, we can get a portrait of the "normal" world. It's
populated by people who talk a lot with one another as they work slowly but
harmoniously on conservative, expensive projects whose destinations are
decided in advance, and who carefully adjust their manner to reflect their
position in the hierarchy.
That's also a fairly accurate description of the past. So startup culture may
not merely be different in the way you'd expect any subculture to be, but a
leading indicator.
|
March 2012
I'm not a very good speaker. I say "um" a lot. Sometimes I have to pause when
I lose my train of thought. I wish I were a better speaker. But I don't wish I
were a better speaker like I wish I were a better writer. What I really want
is to have good ideas, and that's a much bigger part of being a good writer
than being a good speaker.
Having good ideas is most of writing well. If you know what you're talking
about, you can say it in the plainest words and you'll be perceived as having
a good style. With speaking it's the opposite: having good ideas is an
alarmingly small component of being a good speaker.
I first noticed this at a conference several years ago. There was another
speaker who was much better than me. He had all of us roaring with laughter. I
seemed awkward and halting by comparison. Afterward I put my talk online like
I usually do. As I was doing it I tried to imagine what a transcript of the
other guy's talk would be like, and it was only then I realized he hadn't said
very much.
Maybe this would have been obvious to someone who knew more about speaking,
but it was a revelation to me how much less ideas mattered in speaking than
writing. [1]
A few years later I heard a talk by someone who was not merely a better
speaker than me, but a famous speaker. Boy was he good. So I decided I'd pay
close attention to what he said, to learn how he did it. After about ten
sentences I found myself thinking "I don't want to be a good speaker."
Being a really good speaker is not merely orthogonal to having good ideas, but
in many ways pushes you in the opposite direction. For example, when I give a
talk, I usually write it out beforehand. I know that's a mistake; I know
delivering a prewritten talk makes it harder to engage with an audience. The
way to get the attention of an audience is to give them _your_ full attention,
and when you're delivering a prewritten talk, your attention is always divided
between the audience and the talk — even if you've memorized it. If you want
to engage an audience, it's better to start with no more than an outline of
what you want to say and ad lib the individual sentences. But if you do that,
you might spend no more time thinking about each sentence than it takes to say
it. [2] Occasionally the stimulation of talking to a live audience makes you
think of new things, but in general this is not going to generate ideas as
well as writing does, where you can spend as long on each sentence as you
want.
If you rehearse a prewritten speech enough, you can get asymptotically close
to the sort of engagement you get when speaking ad lib. Actors do. But here
again there's a tradeoff between smoothness and ideas. All the time you spend
practicing a talk, you could instead spend making it better. Actors don't face
that temptation, except in the rare cases where they've written the script,
but any speaker does. Before I give a talk I can usually be found sitting in a
corner somewhere with a copy printed out on paper, trying to rehearse it in my
head. But I always end up spending most of the time rewriting it instead.
Every talk I give ends up being given from a manuscript full of things crossed
out and rewritten. Which of course makes me um even more, because I haven't
had any time to practice the new bits. [3]
Depending on your audience, there are even worse tradeoffs than these.
Audiences like to be flattered; they like jokes; they like to be swept off
their feet by a vigorous stream of words. As you decrease the intelligence of
the audience, being a good speaker is increasingly a matter of being a good
bullshitter. That's true in writing too of course, but the descent is steeper
with talks. Any given person is dumber as a member of an audience than as a
reader. Just as a speaker ad libbing can only spend as long thinking about
each sentence as it takes to say it, a person hearing a talk can only spend as
long thinking about each sentence as it takes to hear it. Plus people in an
audience are always affected by the reactions of those around them, and the
reactions that spread from person to person in an audience are
disproportionately the more brutish sort, just as low notes travel through
walls better than high ones. Every audience is an incipient mob, and a good
speaker uses that. Part of the reason I laughed so much at the talk by the
good speaker at that conference was that everyone else did. [4]
So are talks useless? They're certainly inferior to the written word as a
source of ideas. But that's not all talks are good for. When I go to a talk,
it's usually because I'm interested in the speaker. Listening to a talk is the
closest most of us can get to having a conversation with someone like the
president, who doesn't have time to meet individually with all the people who
want to meet him.
Talks are also good at motivating me to do things. It's probably no
coincidence that so many famous speakers are described as motivational
speakers. That may be what public speaking is really for. It's probably what
it was originally for. The emotional reactions you can elicit with a talk can
be a powerful force. I wish I could say that this force was more often used
for good than ill, but I'm not sure.
**Notes**
[1] I'm not talking here about academic talks, which are a different type of
thing. While the audience at an academic talk might appreciate a joke, they
will (or at least should) make a conscious effort to see what new ideas you're
presenting.
[2] That's the lower bound. In practice you can often do better, because talks
are usually about things you've written or talked about before, and when you
ad lib, you end up reproducing some of those sentences. Like early medieval
architecture, impromptu talks are made of spolia. Which feels a bit dishonest,
incidentally, because you have to deliver these sentences as if you'd just
thought of them.
[3] Robert Morris points out that there is a way in which practicing talks
makes them better: reading a talk out loud can expose awkward parts. I agree
and in fact I read most things I write out loud at least once for that reason.
[4] For sufficiently small audiences, it may not be true that being part of an
audience makes people dumber. The real decline seems to set in when the
audience gets too big for the talk to feel like a conversation — maybe around
10 people.
**Thanks** to Sam Altman and Robert Morris for reading drafts of this.
|
February 2015
One of the most valuable exercises you can try if you want to understand
startups is to look at the most successful companies and explain why they were
not as lame as they seemed when they first launched. Because they practically
all seemed lame at first. Not just small, lame. Not just the first step up a
big mountain. More like the first step into a swamp.
A Basic interpreter for the Altair? How could that ever grow into a giant
company? People sleeping on airbeds in strangers' apartments? A web site for
college students to stalk one another? A wimpy little single-board computer
for hobbyists that used a TV as a monitor? A new search engine, when there
were already about 10, and they were all trying to de-emphasize search? These
ideas didn't just seem small. They seemed wrong. They were the kind of ideas
you could not merely ignore, but ridicule.
Often the founders themselves didn't know why their ideas were promising. They
were attracted to these ideas by instinct, because they were [living in the
future](startupideas.html) and they sensed that something was missing. But
they could not have put into words exactly how their ugly ducklings were going
to grow into big, beautiful swans.
Most people's first impulse when they hear about a lame-sounding new startup
idea is to make fun of it. Even a lot of people who should know better.
When I encounter a startup with a lame-sounding idea, I ask "What Microsoft is
this the Altair Basic of?" Now it's a puzzle, and the burden is on me to solve
it. Sometimes I can't think of an answer, especially when the idea is a made-
up one. But it's remarkable how often there does turn out to be an answer.
Often it's one the founders themselves hadn't seen yet.
Intriguingly, there are sometimes multiple answers. I talked to a startup a
few days ago that could grow into 3 distinct Microsofts. They'd probably vary
in size by orders of magnitude. But you can never predict how big a Microsoft
is going to be, so in cases like that I encourage founders to follow whichever
path is most immediately exciting to them. Their instincts got them this far.
Why stop now?
|
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
October 2008
The economic situation is apparently so grim that some experts fear we may be
in for a stretch as bad as the mid seventies.
When Microsoft and Apple were founded.
As those examples suggest, a recession may not be such a bad time to start a
startup. I'm not claiming it's a particularly good time either. The truth is
more boring: the state of the economy doesn't matter much either way.
If we've learned one thing from funding so many startups, it's that they
succeed or fail based on the qualities of the founders. The economy has some
effect, certainly, but as a predictor of success it's rounding error compared
to the founders.
Which means that what matters is who you are, not when you do it. If you're
the right sort of person, you'll win even in a bad economy. And if you're not,
a good economy won't save you. Someone who thinks "I better not start a
startup now, because the economy is so bad" is making the same mistake as the
people who thought during the Bubble "all I have to do is start a startup, and
I'll be rich."
So if you want to improve your chances, you should think far more about who
you can recruit as a cofounder than the state of the economy. And if you're
worried about threats to the survival of your company, don't look for them in
the news. Look in the mirror.
But for any given team of founders, would it not pay to wait till the economy
is better before taking the leap? If you're starting a restaurant, maybe, but
not if you're working on technology. Technology progresses more or less
independently of the stock market. So for any given idea, the payoff for
acting fast in a bad economy will be higher than for waiting. Microsoft's
first product was a Basic interpreter for the Altair. That was exactly what
the world needed in 1975, but if Gates and Allen had decided to wait a few
years, it would have been too late.
Of course, the idea you have now won't be the last you have. There are always
new ideas. But if you have a specific idea you want to act on, act now.
That doesn't mean you can ignore the economy. Both customers and investors
will be feeling pinched. It's not necessarily a problem if customers feel
pinched: you may even be able to benefit from it, by making things that [save
money](http://bountii.com). Startups often make things cheaper, so in that
respect they're better positioned to prosper in a recession than big
companies.
Investors are more of a problem. Startups generally need to raise some amount
of external funding, and investors tend to be less willing to invest in bad
times. They shouldn't be. Everyone knows you're supposed to buy when times are
bad and sell when times are good. But of course what makes investing so
counterintuitive is that in equity markets, good times are defined as everyone
thinking it's time to buy. You have to be a contrarian to be correct, and by
definition only a minority of investors can be.
So just as investors in 1999 were tripping over one another trying to buy into
lousy startups, investors in 2009 will presumably be reluctant to invest even
in good ones.
You'll have to adapt to this. But that's nothing new: startups always have to
adapt to the whims of investors. Ask any founder in any economy if they'd
describe investors as fickle, and watch the face they make. Last year you had
to be prepared to explain how your startup was viral. Next year you'll have to
explain how it's recession-proof.
(Those are both good things to be. The mistake investors make is not the
criteria they use but that they always tend to focus on one to the exclusion
of the rest.)
Fortunately the way to make a startup recession-proof is to do exactly what
you should do anyway: run it as cheaply as possible. For years I've been
telling founders that the surest route to success is to be the cockroaches of
the corporate world. The immediate cause of death in a startup is always
running out of money. So the cheaper your company is to operate, the harder it
is to kill. And fortunately it has gotten very cheap to run a startup. A
recession will if anything make it cheaper still.
If nuclear winter really is here, it may be safer to be a cockroach even than
to keep your job. Customers may drop off individually if they can no longer
afford you, but you're not going to lose them all at once; markets don't
"reduce headcount."
What if you quit your job to start a startup that fails, and you can't find
another? That could be a problem if you work in sales or marketing. In those
fields it can take months to find a new job in a bad economy. But hackers seem
to be more liquid. Good hackers can always get some kind of job. It might not
be your dream job, but you're not going to starve.
Another advantage of bad times is that there's less competition. Technology
trains leave the station at regular intervals. If everyone else is cowering in
a corner, you may have a whole car to yourself.
You're an investor too. As a founder, you're buying stock with work: the
reason Larry and Sergey are so rich is not so much that they've done work
worth tens of billions of dollars, but that they were the first investors in
Google. And like any investor you should buy when times are bad.
Were you nodding in agreement, thinking "stupid investors" a few paragraphs
ago when I was talking about how investors are reluctant to put money into
startups in bad markets, even though that's the time they should rationally be
most willing to buy? Well, founders aren't much better. When times get bad,
hackers go to grad school. And no doubt that will happen this time too. In
fact, what makes the preceding paragraph true is that most readers won't
believe it—at least to the extent of acting on it.
So maybe a recession is a good time to start a startup. It's hard to say
whether advantages like lack of competition outweigh disadvantages like
reluctant investors. But it doesn't matter much either way. It's the people
that matter. And for a given set of people working on a given technology, the
time to act is always now.
|
November 2008
One of the differences between big companies and startups is that big
companies tend to have developed procedures to protect themselves against
mistakes. A startup walks like a toddler, bashing into things and falling over
all the time. A big company is more deliberate.
The gradual accumulation of checks in an organization is a kind of learning,
based on disasters that have happened to it or others like it. After giving a
contract to a supplier who goes bankrupt and fails to deliver, for example, a
company might require all suppliers to prove they're solvent before submitting
bids.
As companies grow they invariably get more such checks, either in response to
disasters they've suffered, or (probably more often) by hiring people from
bigger companies who bring with them customs for protecting against new types
of disasters.
It's natural for organizations to learn from mistakes. The problem is, people
who propose new checks almost never consider that the check itself has a cost.
_Every check has a cost._ For example, consider the case of making suppliers
verify their solvency. Surely that's mere prudence? But in fact it could have
substantial costs. There's obviously the direct cost in time of the people on
both sides who supply and check proofs of the supplier's solvency. But the
real costs are the ones you never hear about: the company that would be the
best supplier, but doesn't bid because they can't spare the effort to get
verified. Or the company that would be the best supplier, but falls just short
of the threshold for solvency—which will of course have been set on the high
side, since there is no apparent cost of increasing it.
Whenever someone in an organization proposes to add a new check, they should
have to explain not just the benefit but the cost. No matter how bad a job
they did of analyzing it, this meta-check would at least remind everyone there
had to _be_ a cost, and send them looking for it.
If companies started doing that, they'd find some surprises. Joel Spolsky
recently spoke at Y Combinator about selling software to corporate customers.
He said that in most companies software costing up to about $1000 could be
bought by individual managers without any additional approvals. Above that
threshold, software purchases generally had to be approved by a committee. But
babysitting this process was so expensive for software vendors that it didn't
make sense to charge less than $50,000. Which means if you're making something
you might otherwise have charged $5000 for, you have to sell it for $50,000
instead.
The purpose of the committee is presumably to ensure that the company doesn't
waste money. And yet the result is that the company pays 10 times as much.
Checks on purchases will always be expensive, because the harder it is to sell
something to you, the more it has to cost. And not merely linearly, either. If
you're hard enough to sell to, the people who are best at making things don't
want to bother. The only people who will sell to you are companies that
specialize in selling to you. Then you've sunk to a whole new level of
inefficiency. Market mechanisms no longer protect you, because the good
suppliers are no longer in the market.
Such things happen constantly to the biggest organizations of all,
governments. But checks instituted by governments can cause much worse
problems than merely overpaying. Checks instituted by governments can cripple
a country's whole economy. Up till about 1400, China was richer and more
technologically advanced than Europe. One reason Europe pulled ahead was that
the Chinese government restricted long trading voyages. So it was left to the
Europeans to explore and eventually to dominate the rest of the world,
including China.
In more recent times, Sarbanes-Oxley has practically destroyed the US IPO
market. That wasn't the intention of the legislators who wrote it. They just
wanted to add a few more checks on public companies. But they forgot to
consider the cost. They forgot that companies about to go public are usually
rather stretched, and that the weight of a few extra checks that might be easy
for General Electric to bear are enough to prevent younger companies from
being public at all.
Once you start to think about the cost of checks, you can start to ask other
interesting questions. Is the cost increasing or decreasing? Is it higher in
some areas than others? Where does it increase discontinuously? If large
organizations started to ask questions like that, they'd learn some
frightening things.
I think the cost of checks may actually be increasing. The reason is that
software plays an increasingly important role in companies, and the people who
write software are particularly harmed by checks.
Programmers are unlike many types of workers in that the best ones actually
prefer to work hard. This doesn't seem to be the case in most types of work.
When I worked in fast food, we didn't prefer the busy times. And when I used
to mow lawns, I definitely didn't prefer it when the grass was long after a
week of rain.
Programmers, though, like it better when they write more code. Or more
precisely, when they release more code. Programmers like to make a difference.
Good ones, anyway.
For good programmers, one of the best things about working for a startup is
that there are few checks on releases. In true startups, there are no external
checks at all. If you have an idea for a new feature in the morning, you can
write it and push it to the production servers before lunch. And when you can
do that, you have more ideas.
At big companies, software has to go through various approvals before it can
be launched. And the cost of doing this can be enormous—in fact,
discontinuous. I was talking recently to a group of three programmers whose
startup had been acquired a few years before by a big company. When they'd
been independent, they could release changes instantly. Now, they said, the
absolute fastest they could get code released on the production servers was
two weeks.
This didn't merely make them less productive. It made them hate working for
the acquirer.
Here's a sign of how much programmers like to be able to work hard: these guys
would have _paid_ to be able to release code immediately, the way they used
to. I asked them if they'd trade 10% of the acquisition price for the ability
to release code immediately, and all three instantly said yes. Then I asked
what was the maximum percentage of the acquisition price they'd trade for it.
They said they didn't want to think about it, because they didn't want to know
how high they'd go, but I got the impression it might be as much as half.
They'd have sacrificed hundreds of thousands of dollars, perhaps millions,
just to be able to deliver more software to users. And you know what? It would
have been perfectly safe to let them. In fact, the acquirer would have been
better off; not only wouldn't these guys have broken anything, they'd have
gotten a lot more done. So the acquirer is in fact getting worse performance
at greater cost. Just like the committee approving software purchases.
And just as the greatest danger of being hard to sell to is not that you
overpay but that the best suppliers won't even sell to you, the greatest
danger of applying too many checks to your programmers is not that you'll make
them unproductive, but that good programmers won't even want to work for you.
Steve Jobs's famous maxim "artists ship" works both ways. Artists aren't
merely capable of shipping. They insist on it. So if you don't let people
ship, you won't have any artists.
|
December 2014
American technology companies want the government to make immigration easier
because they say they can't find enough programmers in the US. Anti-
immigration people say that instead of letting foreigners take these jobs, we
should train more Americans to be programmers. Who's right?
The technology companies are right. What the anti-immigration people don't
understand is that there is a huge variation in ability between competent
programmers and exceptional ones, and while you can train people to be
competent, you can't train them to be exceptional. Exceptional programmers
have an aptitude for and [_interest in_](genius.html) programming that is not
merely the product of training. [1]
The US has less than 5% of the world's population. Which means if the
qualities that make someone a great programmer are evenly distributed, 95% of
great programmers are born outside the US.
The anti-immigration people have to invent some explanation to account for all
the effort technology companies have expended trying to make immigration
easier. So they claim it's because they want to drive down salaries. But if
you talk to startups, you find practically every one over a certain size has
gone through legal contortions to get programmers into the US, where they then
paid them the same as they'd have paid an American. Why would they go to extra
trouble to get programmers for the same price? The only explanation is that
they're telling the truth: there are just not enough great programmers to go
around. [2]
I asked the CEO of a startup with about 70 programmers how many more he'd hire
if he could get all the great programmers he wanted. He said "We'd hire 30
tomorrow morning." And this is one of the hot startups that always win
recruiting battles. It's the same all over Silicon Valley. Startups are that
constrained for talent.
It would be great if more Americans were trained as programmers, but no amount
of training can flip a ratio as overwhelming as 95 to 5. Especially since
programmers are being trained in other countries too. Barring some cataclysm,
it will always be true that most great programmers are born outside the US. It
will always be true that most people who are great at anything are born
outside the US. [3]
Exceptional performance implies immigration. A country with only a few percent
of the world's population will be exceptional in some field only if there are
a lot of immigrants working in it.
But this whole discussion has taken something for granted: that if we let more
great programmers into the US, they'll want to come. That's true now, and we
don't realize how lucky we are that it is. If we want to keep this option
open, the best way to do it is to take advantage of it: the more of the
world's great programmers are here, the more the rest will want to come here.
And if we don't, the US could be seriously fucked. I realize that's strong
language, but the people dithering about this don't seem to realize the power
of the forces at work here. Technology gives the best programmers huge
leverage. The world market in programmers seems to be becoming dramatically
more liquid. And since good people like good colleagues, that means the best
programmers could collect in just a few hubs. Maybe mostly in one hub.
What if most of the great programmers collected in one hub, and it wasn't
here? That scenario may seem unlikely now, but it won't be if things change as
much in the next 50 years as they did in the last 50.
We have the potential to ensure that the US remains a technology superpower
just by letting in a few thousand great programmers a year. What a colossal
mistake it would be to let that opportunity slip. It could easily be the
defining mistake this generation of American politicians later become famous
for. And unlike other potential mistakes on that scale, it costs nothing to
fix.
So please, get on with it.
**Notes**
[1] How much better is a great programmer than an ordinary one? So much better
that you can't even measure the difference directly. A great programmer
doesn't merely do the same work faster. A great programmer will invent things
an ordinary programmer would never even think of. This doesn't mean a great
programmer is infinitely more valuable, because any invention has a finite
market value. But it's easy to imagine cases where a great programmer might
invent things worth 100x or even 1000x an average programmer's salary.
[2] There are a handful of consulting firms that rent out big pools of foreign
programmers they bring in on H-1B visas. By all means crack down on these. It
should be easy to write legislation that distinguishes them, because they are
so different from technology companies. But it is dishonest of the anti-
immigration people to claim that companies like Google and Facebook are driven
by the same motives. An influx of inexpensive but mediocre programmers is the
last thing they'd want; it would destroy them.
[3] Though this essay talks about programmers, the group of people we need to
import is broader, ranging from designers to programmers to electrical
engineers. The best one could do as a general term might be "digital talent."
It seemed better to make the argument a little too narrow than to confuse
everyone with a neologism.
**Thanks** to Sam Altman, John Collison, Patrick Collison, Jessica Livingston,
Geoff Ralston, Fred Wilson, and Qasar Younis for reading drafts of this.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
August 2010
When I went to work for Yahoo after they bought our startup in 1998, it felt
like the center of the world. It was supposed to be the next big thing. It was
supposed to be what Google turned out to be.
What went wrong? The problems that hosed Yahoo go back a long time,
practically to the beginning of the company. They were already very visible
when I got there in 1998. Yahoo had two problems Google didn't: easy money,
and ambivalence about being a technology company.
**Money**
The first time I met Jerry Yang, we thought we were meeting for different
reasons. He thought we were meeting so he could check us out in person before
buying us. I thought we were meeting so we could show him our new technology,
Revenue Loop. It was a way of sorting shopping search results. Merchants bid a
percentage of sales for traffic, but the results were sorted not by the bid
but by the bid times the average amount a user would buy. It was like the
algorithm Google uses now to sort ads, but this was in the spring of 1998,
before Google was founded.
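Concretely, the sort is simple enough to sketch in a few lines. Here is a minimal illustration in Python; the field names and numbers are invented for the example, not taken from the actual system:

```python
# A minimal sketch of the Revenue Loop ranking rule described above: order
# merchant links by expected revenue per click, i.e. the bid (a percentage
# of the sale) times the average amount a user who clicks ends up buying.
# Field names and data are illustrative, not from Yahoo's system.

def revenue_loop_sort(merchants):
    """Return merchant links ordered by expected revenue, highest first."""
    return sorted(
        merchants,
        key=lambda m: m["bid_pct"] * m["avg_purchase"],
        reverse=True,
    )

results = revenue_loop_sort([
    {"name": "A", "bid_pct": 0.05, "avg_purchase": 40.0},   # $2.00 expected
    {"name": "B", "bid_pct": 0.10, "avg_purchase": 15.0},   # $1.50 expected
    {"name": "C", "bid_pct": 0.02, "avg_purchase": 120.0},  # $2.40 expected
])
# Order: C, A, B. A merchant with a low bid can still rank first if the
# users who click through buy a lot.
```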
Revenue Loop was the optimal sort for shopping search, in the sense that it
sorted in order of how much money Yahoo would make from each link. But it
wasn't just optimal in that sense. Ranking search results by user behavior
also makes search better. Users train the search: you can start out finding
matches based on mere textual similarity, and as users buy more stuff the
search results get better and better.
Jerry didn't seem to care. I was confused. I was showing him technology that
extracted the maximum value from search traffic, and he didn't care? I
couldn't tell whether I was explaining it badly, or he was just very poker
faced.
I didn't realize the answer till later, after I went to work at Yahoo. It was
neither of my guesses. The reason Yahoo didn't care about a technique that
extracted the full value of traffic was that advertisers were already
overpaying for it. If Yahoo merely extracted the actual value, they'd have
made less.
Hard as it is to believe now, the big money then was in banner ads.
Advertisers were willing to pay ridiculous amounts for banner ads. So Yahoo's
sales force had evolved to exploit this source of revenue. Led by a large and
terrifyingly formidable man called Anil Singh, Yahoo's sales guys would fly
out to Procter & Gamble and come back with million dollar orders for banner ad
impressions.
The prices seemed cheap compared to print, which was what advertisers, for
lack of any other reference, compared them to. But they were expensive
compared to what they were worth. So these big, dumb companies were a
dangerous source of revenue to depend on. But there was another source even
more dangerous: other Internet startups.
By 1998, Yahoo was the beneficiary of a de facto Ponzi scheme. Investors were
excited about the Internet. One reason they were excited was Yahoo's revenue
growth. So they invested in new Internet startups. The startups then used the
money to buy ads on Yahoo to get traffic. Which caused yet more revenue growth
for Yahoo, and further convinced investors the Internet was worth investing
in. When I realized this one day, sitting in my cubicle, I jumped up like
Archimedes in his bathtub, except instead of "Eureka!" I was shouting "Sell!"
Both the Internet startups and the Procter & Gambles were doing brand
advertising. They didn't care about targeting. They just wanted lots of people
to see their ads. So traffic became the thing to get at Yahoo. It didn't
matter what type. [1]
It wasn't just Yahoo. All the search engines were doing it. This was why they
were trying to get people to start calling them "portals" instead of "search
engines." Despite the actual meaning of the word portal, what they meant by it
was a site where users would find what they wanted on the site itself, instead
of just passing through on their way to other destinations, as they did at a
search engine.
I remember telling David Filo in late 1998 or early 1999 that Yahoo should buy
Google, because I and most of the other programmers in the company were using
it instead of Yahoo for search. He told me that it wasn't worth worrying
about. Search was only 6% of our traffic, and we were growing at 10% a month.
It wasn't worth doing better.
I didn't say "But search traffic is worth more than other traffic!" I said
"Oh, ok." Because I didn't realize either how much search traffic was worth.
I'm not sure even Larry and Sergey did then. If they had, Google presumably
wouldn't have expended any effort on enterprise search.
If circumstances had been different, the people running Yahoo might have
realized sooner how important search was. But they had the most opaque
obstacle in the world between them and the truth: money. As long as customers
were writing big checks for banner ads, it was hard to take search seriously.
Google didn't have that to distract them.
**Hackers**
But Yahoo also had another problem that made it hard to change directions.
They'd been thrown off balance from the start by their ambivalence about being
a technology company.
One of the weirdest things about Yahoo when I went to work there was the way
they insisted on calling themselves a "media company." If you walked around
their offices, it seemed like a software company. The cubicles were full of
programmers writing code, product managers thinking about feature lists and
ship dates, support people (yes, there were actually support people) telling
users to restart their browsers, and so on, just like a software company. So
why did they call themselves a media company?
One reason was the way they made money: by selling ads. In 1995 it was hard to
imagine a technology company making money that way. Technology companies made
money by selling their software to users. Media companies sold ads. So they
must be a media company.
Another big factor was the fear of Microsoft. If anyone at Yahoo considered
the idea that they should be a technology company, the next thought would have
been that Microsoft would crush them.
It's hard for anyone much younger than me to understand the fear Microsoft
still inspired in 1995. Imagine a company with several times the power Google
has now, but way meaner. It was perfectly reasonable to be afraid of them.
Yahoo watched them crush the first hot Internet company, Netscape. It was
reasonable to worry that if they tried to be the next Netscape, they'd suffer
the same fate. How were they to know that Netscape would turn out to be
Microsoft's last victim?
It would have been a clever move to pretend to be a media company to throw
Microsoft off their scent. But unfortunately Yahoo actually tried to be one,
sort of. Project managers at Yahoo were called "producers," for example, and
the different parts of the company were called "properties." But what Yahoo
really needed to be was a technology company, and by trying to be something
else, they ended up being something that was neither here nor there. That's
why Yahoo as a company has never had a sharply defined identity.
The worst consequence of trying to be a media company was that they didn't
take programming seriously enough. Microsoft (back in the day), Google, and
Facebook have all had hacker-centric cultures. But Yahoo treated programming
as a commodity. At Yahoo, user-facing software was controlled by product
managers and designers. The job of programmers was just to take the work of
the product managers and designers the final step, by translating it into
code.
One obvious result of this practice was that when Yahoo built things, they
often weren't very good. But that wasn't the worst problem. The worst problem
was that they hired bad programmers.
Microsoft (back in the day), Google, and Facebook have all been obsessed with
hiring the best programmers. Yahoo wasn't. They preferred good programmers to
bad ones, but they didn't have the kind of single-minded, almost obnoxiously
elitist focus on hiring the smartest people that the big winners have had. And
when you consider how much competition there was for programmers when they
were hiring, during the Bubble, it's not surprising that the quality of their
programmers was uneven.
In technology, once you have bad programmers, you're doomed. I can't think of
an instance where a company has sunk into technical mediocrity and recovered.
Good programmers want to work with other good programmers. So once the quality
of programmers at your company starts to drop, you enter a death spiral from
which there is no recovery. [2]
At Yahoo this death spiral started early. If there was ever a time when Yahoo
was a Google-style talent magnet, it was over by the time I got there in 1998.
The company felt prematurely old. Most technology companies eventually get
taken over by suits and middle managers. At Yahoo it felt as if they'd
deliberately accelerated this process. They didn't want to be a bunch of
hackers. They wanted to be suits. A media company should be run by suits.
The first time I visited Google, they had about 500 people, the same number
Yahoo had when I went to work there. But boy did things seem different. It was
still very much a hacker-centric culture. I remember talking to some
programmers in the cafeteria about the problem of gaming search results (now
known as SEO), and they asked "what should we do?" Programmers at Yahoo
wouldn't have asked that. Theirs was not to reason why; theirs was to build
what product managers spec'd. I remember coming away from Google thinking
"Wow, it's still a startup."
There's not much we can learn from Yahoo's first fatal flaw. It's probably too
much to hope any company could avoid being damaged by depending on a bogus
source of revenue. But startups can learn an important lesson from the second
one. In the software business, you can't afford not to have a hacker-centric
culture.
Probably the most impressive commitment I've heard to having a hacker-centric
culture came from Mark Zuckerberg, when he spoke at Startup School in 2007. He
said that in the early days Facebook made a point of hiring programmers even
for jobs that would not ordinarily consist of programming, like HR and
marketing.
So which companies need to have a hacker-centric culture? Which companies are
"in the software business" in this respect? As Yahoo discovered, the area
covered by this rule is bigger than most people realize. The answer is: any
company that needs to have good software.
Why would great programmers want to work for a company that didn't have a
hacker-centric culture, as long as there were others that did? I can imagine
two reasons: if they were paid a huge amount, or if the domain was interesting
and none of the companies in it were hacker-centric. Otherwise you can't
attract good programmers to work in a suit-centric culture. And without good
programmers you won't get good software, no matter how many people you put on
a task, or how many procedures you establish to ensure "quality."
[Hacker culture](gba.html) often seems kind of irresponsible. That's why
people proposing to destroy it use phrases like "adult supervision." That was
the phrase they used at Yahoo. But there are worse things than seeming
irresponsible. Losing, for example.
**Notes**
[1] The closest we got to targeting when I was there was when we created
pets.yahoo.com in order to provoke a bidding war between 3 pet supply startups
for the spot as top sponsor.
[2] In theory you could beat the death spiral by buying good programmers
instead of hiring them. You can get programmers who would never have come to
you as employees by buying their startups. But so far the only companies smart
enough to do this are companies smart enough not to need to.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Geoff Ralston for
reading drafts of this.
**Want to start a startup?** Get funded by [Y
Combinator](http://ycombinator.com/apply.html).
August 2006, rev. April 2007, September 2010
In a few days it will be Demo Day, when the startups we funded this summer
present to investors. Y Combinator funds startups twice a year, in January and
June. Ten weeks later we invite all the investors we know to hear them present
what they've built so far.
Ten weeks is not much time. The average startup probably doesn't have much to
show for itself after ten weeks. But the average startup fails. When you look
at the ones that went on to do great things, you find a lot that began with
someone pounding out a prototype in a week or two of nonstop work. Startups
are a counterexample to the rule that haste makes waste.
(Too much money seems to be as bad for startups as too much time, so we don't
give them much money either.)
A week before Demo Day, we have a dress rehearsal called Rehearsal Day. At
other Y Combinator events we allow outside guests, but not at Rehearsal Day.
No one except the other founders gets to see the rehearsals.
The presentations on Rehearsal Day are often pretty rough. But this is to be
expected. We try to pick founders who are good at building things, not ones
who are slick presenters. Some of the founders are just out of college, or
even still in it, and have never spoken to a group of people they didn't
already know.
So we concentrate on the basics. On Demo Day each startup will only get ten
minutes, so we encourage them to focus on just two goals: (a) explain what
you're doing, and (b) explain why users will want it.
That might sound easy, but it's not when the speakers have no experience
presenting, and they're explaining technical matters to an audience that's
mostly non-technical.
This situation is constantly repeated when startups present to investors:
people who are bad at explaining, talking to people who are bad at
understanding. Practically every successful startup, including stars like
Google, presented at some point to investors who didn't get it and turned them
down. Was it because the founders were bad at presenting, or because the
investors were obtuse? It's probably always some of both.
At the most recent Rehearsal Day, we four Y Combinator partners found
ourselves saying a lot of the same things we said at the last two. So at
dinner afterward we collected all our tips about presenting to investors. Most
startups face similar challenges, so we hope these will be useful to a wider
audience.
**1\. Explain what you're doing.**
Investors' main question when judging a very early startup is whether you've
made a compelling product. Before they can judge whether you've built a good
x, they have to understand what kind of x you've built. They will get very
frustrated if instead of telling them what you do, you make them sit through
some kind of preamble.
Say what you're doing as soon as possible, preferably in the first sentence.
"We're Jeff and Bob and we've built an easy to use web-based database. Now
we'll show it to you and explain why people need this."
If you're a great public speaker you may be able to violate this rule. Last
year one founder spent the whole first half of his talk on a fascinating
analysis of the limits of the conventional desktop metaphor. He got away with
it, but unless you're a captivating speaker, which most hackers aren't, it's
better to play it safe.
**2\. Get rapidly to demo.**
_This section is now obsolete for YC founders presenting at Demo Day, because
Demo Day presentations are now so short that they rarely include much if any
demo. They seem to work just as well without, however, which makes me think I
was wrong to emphasize demos so much before._
A demo explains what you've made more effectively than any verbal description.
The only thing worth talking about first is the problem you're trying to solve
and why it's important. But don't spend more than a tenth of your time on
that. Then demo.
When you demo, don't run through a catalog of features. Instead start with the
problem you're solving, and then show how your product solves it. Show
features in an order driven by some kind of purpose, rather than the order in
which they happen to appear on the screen.
If you're demoing something web-based, assume that the network connection will
mysteriously die 30 seconds into your presentation, and come prepared with a
copy of the server software running on your laptop.
**3\. Better a narrow description than a vague one.**
One reason founders resist describing their projects concisely is that, at
this early stage, there are all kinds of possibilities. The most concise
descriptions seem misleadingly narrow. So for example a group that has built
an easy web-based database might resist calling their application that, because
it could be so much more. In fact, it could be anything...
The problem is, as you approach (in the calculus sense) a description of
something that could be anything, the content of your description approaches
zero. If you describe your web-based database as "a system to allow people to
collaboratively leverage the value of information," it will go in one investor
ear and out the other. They'll just discard that sentence as meaningless
boilerplate, and hope, with increasing impatience, that in the next sentence
you'll actually explain what you've made.
Your primary goal is not to describe everything your system might one day
become, but simply to convince investors you're worth talking to further. So
approach this like an algorithm that gets the right answer by successive
approximations. Begin with a description that's gripping but perhaps overly
narrow, then flesh it out to the extent you can. It's the same principle as
incremental development: start with a simple prototype, then add features, but
at every point have working code. In this case, "working code" means a working
description in the investor's head.
**4\. Don't talk and drive.**
Have one person talk while another uses the computer. If the same person does
both, they'll inevitably mumble downwards at the computer screen instead of
talking clearly at the audience.
As long as you're standing near the audience and looking at them, politeness
(and habit) compel them to pay attention to you. Once you stop looking at them
to fuss with something on your computer, their minds drift off to the errands
they have to run later.
**5\. Don't talk about secondary matters at length.**
If you only have a few minutes, spend them explaining what your product does
and why it's great. Second order issues like competitors or resumes should be
single slides you go through quickly at the end. If you have impressive
resumes, just flash them on the screen for 15 seconds and say a few words. For
competitors, list the top 3 and explain in one sentence each what they lack
that you have. And put this kind of thing at the end, after you've made it
clear what you've built.
**6\. Don't get too deeply into business models.**
It's good to talk about how you plan to make money, but mainly because it
shows you care about that and have thought about it. Don't go into detail
about your business model, because (a) that's not what smart investors care
about in a brief presentation, and (b) any business model you have at this
point is probably wrong anyway.
Recently a VC who came to speak at Y Combinator talked about a company he just
invested in. He said their business model was wrong and would probably change
three times before they got it right. The founders were experienced guys who'd
done startups before and who'd just succeeded in getting millions from one of
the top VC firms, and even their business model was crap. (And yet he invested
anyway, because he expected it to be crap at this stage.)
If you're solving an important problem, you're going to sound a lot smarter
talking about that than the business model. The business model is just a bunch
of guesses, and guesses about stuff that's probably not your area of
expertise. So don't spend your precious few minutes talking about crap when
you could be talking about solid, interesting things you know a lot about: the
problem you're solving and what you've built so far.
As well as being a bad use of time, if your business model seems spectacularly
wrong, that will push the stuff you want investors to remember out of their
heads. They'll just remember you as the company with the boneheaded plan for
making money, rather than the company that solved that important problem.
**7\. Talk slowly and clearly at the audience.**
Everyone at Rehearsal Day could see the difference between the people who'd
been out in the world for a while and had presented to groups, and those who
hadn't.
You need to use a completely different voice and manner talking to a roomful
of people than you would in conversation. Everyday life gives you no practice
in this. If you can't already do it, the best solution is to treat it as a
consciously artificial trick, like juggling.
However, that doesn't mean you should talk like some kind of announcer.
Audiences tune that out. What you need to do is talk in this artificial way,
and yet make it seem conversational. (Writing is the same. Good writing is an
elaborate effort to seem spontaneous.)
If you want to write out your whole presentation beforehand and memorize it,
that's ok. That has worked for some groups in the past. But make sure to write
something that sounds like spontaneous, informal speech, and deliver it that
way too.
Err on the side of speaking slowly. At Rehearsal Day, one of the founders
mentioned a rule actors use: if you feel you're speaking too slowly, you're
speaking at about the right speed.
**8\. Have one person talk.**
Startups often want to show that all the founders are equal partners. This is
a good instinct; investors dislike unbalanced teams. But trying to show it by
partitioning the presentation is going too far. It's distracting. You can
demonstrate your respect for one another in more subtle ways. For example,
when one of the groups presented at Demo Day, the more extroverted of the two
founders did most of the talking, but he described his co-founder as the best
hacker he'd ever met, and you could tell he meant it.
Pick the one or at most two best speakers, and have them do most of the
talking.
Exception: If one of the founders is an expert in some specific technical
field, it can be good for them to talk about that for a minute or so. This
kind of "expert witness" can add credibility, even if the audience doesn't
understand all the details. If Jobs and Wozniak had 10 minutes to present the
Apple II, it might be a good plan to have Jobs speak for 9 minutes and have
Woz speak for a minute in the middle about some of the technical feats he'd
pulled off in the design. (Though of course if it were actually those two,
Jobs would speak for the entire 10 minutes.)
**9\. Seem confident.**
Between the brief time available and their lack of technical background, many
in the audience will have a hard time evaluating what you're doing. Probably
the single biggest piece of evidence, initially, will be your own confidence
in it. You have to show you're impressed with what you've made.
And I mean show, not tell. Never say "we're passionate" or "our product is
great." People just ignore that—or worse, write you off as bullshitters. Such
messages must be implicit.
What you must not do is seem nervous and apologetic. If you've truly made
something good, you're doing investors a _favor_ by telling them about it. If
you don't genuinely believe that, perhaps you ought to change what your
company is doing. If you don't believe your startup has such promise that
you'd be doing them a favor by letting them invest, why are you investing your
time in it?
**10\. Don't try to seem more than you are.**
Don't worry if your company is just a few months old and doesn't have an
office yet, or your founders are technical people with no business experience.
Google was like that once, and they turned out ok. Smart investors can see
past such superficial flaws. They're not looking for finished, smooth
presentations. They're looking for raw talent. All you need to convince them
of is that you're smart and that you're onto something good. If you try too
hard to conceal your rawness—by trying to seem corporate, or pretending to
know about stuff you don't—you may just conceal your talent.
You can afford to be candid about what you haven't figured out yet. Don't go
out of your way to bring it up (e.g. by having a slide about what might go
wrong), but don't try to pretend either that you're further along than you
are. If you're a hacker and you're presenting to experienced investors,
they're probably better at detecting bullshit than you are at producing it.
**11\. Don't put too many words on slides.**
When there are a lot of words on a slide, people just skip reading it. So look
at your slides and ask of each word "could I cross this out?" This includes
gratuitous clip art. Try to get your slides under 20 words if you can.
Don't read your slides. They should be something in the background as you face
the audience and talk to them, not something you face and read to an audience
sitting behind you.
Cluttered sites don't do well in demos, especially when they're projected onto
a screen. At the very least, crank up the font size big enough to make all the
text legible. But cluttered sites are bad anyway, so perhaps you should use
this opportunity to make your design simpler.
**12\. Specific numbers are good.**
If you have any kind of data, however preliminary, tell the audience. Numbers
stick in people's heads. If you can claim that the median visitor generates 12
page views, that's great.
But don't give them more than four or five numbers, and only give them numbers
specific to you. You don't need to tell them the size of the market you're in.
Who cares, really, if it's 500 million or 5 billion a year? Talking about that
is like an actor at the beginning of his career telling his parents how much
Tom Hanks makes. Yeah, sure, but first you have to become Tom Hanks. The
important part is not whether he makes ten million a year or a hundred, but
how you get there.
**13\. Tell stories about users.**
The biggest fear of investors looking at early stage startups is that you've
built something based on your own a priori theories of what the world needs,
but that no one will actually want. So it's good if you can talk about
problems specific users have and how you solve them.
Greg McAdoo said one thing Sequoia looks for is the "proxy for demand." What
are people doing now, using inadequate tools, that shows they need what you're
making?
Another sign of user need is when people pay a lot for something. It's easy to
convince investors there will be demand for a cheaper alternative to something
popular, if you preserve the qualities that made it popular.
The best stories about user needs are about your own. A remarkable number of
famous startups grew out of some need the founders had: Apple, Microsoft,
Yahoo, Google. Experienced investors know that, so stories of this type will
get their attention. The next best thing is to talk about the needs of people
you know personally, like your friends or siblings.
**14\. Make a soundbite stick in their heads.**
Professional investors hear a lot of pitches. After a while they all blur
together. The first cut is simply to be one of those they remember. And the
way to ensure that is to create a descriptive phrase about yourself that
sticks in their heads.
In Hollywood, these phrases seem to be of the form "x meets y." In the
startup world, they're usually "the x of y" or "the x y." Viaweb's was "the
Microsoft Word of ecommerce."
Find one and launch it clearly (but apparently casually) in your talk,
preferably near the beginning.
It's a good exercise for you, too, to sit down and try to figure out how to
describe your startup in one compelling phrase. If you can't, your plans may
not be sufficiently focused.
August 2015
If you have a US startup called X and you don't have x.com, you should
probably change your name.
The reason is not just that people can't find you. For companies with mobile
apps, especially, having the right domain name is not as critical as it used
to be for getting users. The problem with not having the .com of your name is
that it signals weakness. Unless you're so big that your reputation precedes
you, a marginal domain suggests you're a marginal company. Whereas (as Stripe
shows) having x.com signals strength even if it has no relation to what you
do.
Even good founders can be in denial about this. Their denial derives from two
very powerful forces: identity, and lack of imagination.
X is what we _are_ , founders think. There's no other name as good. Both of
which are false.
You can fix the first by stepping back from the problem. Imagine you'd called
your company something else. If you had, surely you'd be just as attached to
that name as you are to your current one. The idea of switching to your
current name would seem repellent. [1]
There's nothing intrinsically great about your current name. Nearly all your
attachment to it comes from it being attached to you. [2]
The way to neutralize the second source of denial, your inability to think of
other potential names, is to acknowledge that you're bad at naming. Naming is
a completely separate skill from those you need to be a good founder. You can
be a great startup founder but hopeless at thinking of names for your company.
Once you acknowledge that, you stop believing there is nothing else you could
be called. There are lots of other potential names that are as good or better;
you just can't think of them.
How do you find them? One answer is the default way to solve problems you're
bad at: find someone else who can think of names. But with company names there
is another possible approach. It turns out almost any word or word pair that
is not an obviously bad name is a sufficiently good one, and the number of
such domains is so large that you can find plenty that are cheap or even
untaken. So make a list and try to buy some. That's what
[Stripe](http://www.quora.com/How-did-Stripe-come-up-with-its-name?share=1)
did. (Their search also turned up parse.com, which their friends at Parse
took.)
The reason I know that naming companies is a distinct skill orthogonal to the
others you need in a startup is that I happen to have it. Back when I was
running YC and did more office hours with startups, I would often help them
find new names. 80% of the time we could find at least one good name in a 20
minute office hour slot.
Now when I do office hours I have to focus on more important questions, like
what the company is doing. I tell them when they need to change their name.
But I know the power of the forces that have them in their grip, so I know
most won't listen. [3]
There are of course examples of startups that have succeeded without having
the .com of their name. There are startups that have succeeded despite any
number of different mistakes. But this mistake is less excusable than most.
It's something that can be fixed in a couple days if you have sufficient
discipline to acknowledge the problem.
100% of the top 20 YC companies by valuation have the .com of their name. 94%
of the top 50 do. But only 66% of companies in the current batch have the .com
of their name. Which suggests there are lessons ahead for most of the rest,
one way or another.
**Notes**
[1] Incidentally, this thought experiment works for [nationality and
religion](identity.html) too.
[2] The liking you have for a name that has become part of your identity
manifests itself not directly, which would be easy to discount, but as a
collection of specious beliefs about its intrinsic qualities. (This too is
true of nationality and religion as well.)
[3] Sometimes founders know it's a problem that they don't have the .com of
their name, but delusion strikes a step later in the belief that they'll be
able to buy it despite having no evidence it's for sale. Don't believe a
domain is for sale unless the owner has already told you an asking price.
**Thanks** to Sam Altman, Jessica Livingston, and Geoff Ralston for reading
drafts of this.
January 2020
When I was young, I thought old people had everything figured out. Now that
I'm old, I know this isn't true.
I constantly feel like a noob. It seems like I'm always talking to some
startup working in a new field I know nothing about, or reading a book about a
topic I don't understand well enough, or visiting some new country where I
don't know how things work.
It's not pleasant to feel like a noob. And the word "noob" is certainly not a
compliment. And yet today I realized something encouraging about being a noob:
the more of a noob you are locally, the less of a noob you are globally.
For example, if you stay in your home country, you'll feel less of a noob than
if you move to Farawavia, where everything works differently. And yet you'll
know more if you move. So the feeling of being a noob is inversely correlated
with actual ignorance.
But if the feeling of being a noob is good for us, why do we dislike it? What
evolutionary purpose could such an aversion serve?
I think the answer is that there are two sources of feeling like a noob: being
stupid, and doing something novel. Our dislike of feeling like a noob is our
brain telling us "Come on, come on, figure this out." Which was the right
thing to be thinking for most of human history. The life of hunter-gatherers
was complex, but it didn't change as much as life does now. They didn't
suddenly have to figure out what to do about cryptocurrency. So it made sense
to be biased toward competence at existing problems over the discovery of new
ones. It made sense for humans to dislike the feeling of being a noob, just
as, in a world where food was scarce, it made sense for them to dislike the
feeling of being hungry.
Now that too much food is more of a problem than too little, our dislike of
feeling hungry leads us astray. And I think our dislike of feeling like a noob
does too.
Though it feels unpleasant, and people will sometimes ridicule you for it, the
more you feel like a noob, the better.
April 2016
_(This is a talk I gave at an event called Opt412 in Pittsburgh. Much of it
will apply to other towns. But not all, because as I say in the talk,
Pittsburgh has some important advantages over most would-be startup hubs.)_
What would it take to make Pittsburgh into a startup hub, like Silicon Valley?
I understand Pittsburgh pretty well, because I grew up here, in Monroeville.
And I understand Silicon Valley pretty well because that's where I live now.
Could you get that kind of startup ecosystem going here?
When I agreed to speak here, I didn't think I'd be able to give a very
optimistic talk. I thought I'd be talking about what Pittsburgh could do to
become a startup hub, very much in the subjunctive. Instead I'm going to talk
about what Pittsburgh can do.
What changed my mind was an article I read in, of all places, the _New York
Times_ food section. The title was "[_Pittsburgh's Youth-Driven Food
Boom_](http://www.nytimes.com/2016/03/16/dining/pittsburgh-restaurants.html)."
To most people that might not even sound interesting, let alone something
related to startups. But it was electrifying to me to read that title. I don't
think I could pick a more promising one if I tried. And when I read the
article I got even more excited. It said "people ages 25 to 29 now make up 7.6
percent of all residents, up from 7 percent about a decade ago." Wow, I
thought, Pittsburgh could be the next Portland. It could become the cool place
all the people in their twenties want to go live.
When I got here a couple days ago, I could feel the difference. I lived here
from 1968 to 1984. I didn't realize it at the time, but during that whole
period the city was in free fall. On top of the flight to the suburbs that
happened everywhere, the steel and nuclear businesses were both dying. Boy are
things different now. It's not just that downtown seems a lot more prosperous.
There is an energy here that was not here when I was a kid.
When I was a kid, this was a place young people left. Now it's a place that
attracts them.
What does that have to do with startups? Startups are made of people, and the
average age of the people in a typical startup is right in that 25 to 29
bracket.
I've seen how powerful it is for a city to have those people. Five years ago
they shifted the center of gravity of Silicon Valley from the peninsula to San
Francisco. Google and Facebook are on the peninsula, but the next generation
of big winners are all in SF. The reason the center of gravity shifted was the
talent war, for programmers especially. Most 25 to 29 year olds want to live
in the city, not down in the boring suburbs. So whether they like it or not,
founders know they have to be in the city. I know multiple founders who would
have preferred to live down in the Valley proper, but who made themselves move
to SF because they knew otherwise they'd lose the talent war.
So being a magnet for people in their twenties is a very promising thing to
be. It's hard to imagine a place becoming a startup hub without also being
that. When I read that statistic about the increasing percentage of 25 to 29
year olds, I had exactly the same feeling of excitement I get when I see a
startup's graphs start to creep upward off the x axis.
Nationally the percentage of 25 to 29 year olds is 6.8%. That means you're .8%
ahead. The population is 306,000, so we're talking about a surplus of about
2500 people. That's the population of a small town, and that's just the
surplus. So you have a toehold. Now you just have to expand it.
And though "youth-driven food boom" may sound frivolous, it is anything but.
Restaurants and cafes are a big part of the personality of a city. Imagine
walking down a street in Paris. What are you walking past? Little restaurants
and cafes. Imagine driving through some depressing random exurb. What are you
driving past? Starbucks and McDonald's and Pizza Hut. As Gertrude Stein said,
there is no there there. You could be anywhere.
These independent restaurants and cafes are not just feeding people. They're
making there be a there here.
So here is my first concrete recommendation for turning Pittsburgh into the
next Silicon Valley: do everything you can to encourage this youth-driven food
boom. What could the city do? Treat the people starting these little
restaurants and cafes as your users, and go ask them what they want. I can
guess at least one thing they might want: a fast permit process. San Francisco
has left you a huge amount of room to beat them in that department.
I know restaurants aren't the prime mover though. The prime mover, as the
Times article said, is cheap housing. That's a big advantage. But that phrase
"cheap housing" is a bit misleading. There are plenty of places that are
cheaper. What's special about Pittsburgh is not that it's cheap, but that it's
a cheap place you'd actually want to live.
Part of that is the buildings themselves. I realized a long time ago, back
when I was a poor twenty-something myself, that the best deals were places
that had once been rich, and then became poor. If a place has always been
rich, it's nice but too expensive. If a place has always been poor, it's cheap
but grim. But if a place was once rich and then got poor, you can find palaces
for cheap. And that's what's bringing people here. When Pittsburgh was rich, a
hundred years ago, the people who lived here built big solid buildings. Not
always in the best taste, but definitely solid. So here is another piece of
advice for becoming a startup hub: don't destroy the buildings that are
bringing people here. When cities are on the way back up, like Pittsburgh is
now, developers race to tear down the old buildings. Don't let that happen.
Focus on historic preservation. Big real estate development projects are not
what's bringing the twenty-somethings here. They're the opposite of the new
restaurants and cafes; they subtract personality from the city.
The empirical evidence suggests you cannot be too strict about historic
preservation. The tougher cities are about it, the better they seem to do.
But the appeal of Pittsburgh is not just the buildings themselves. It's the
neighborhoods they're in. Like San Francisco and New York, Pittsburgh is
fortunate in being a pre-car city. It's not too spread out. Because those 25
to 29 year olds do not like driving. They prefer walking, or bicycling, or
taking public transport. If you've been to San Francisco recently you can't
help noticing the huge number of bicyclists. And this is not just a fad that
the twenty-somethings have adopted. In this respect they have discovered a
better way to live. The beards will go, but not the bikes. Cities where you
can get around without driving are just better period. So I would suggest you
do everything you can to capitalize on this. As with historic preservation, it
seems impossible to go too far.
Why not make Pittsburgh the most bicycle and pedestrian friendly city in the
country? See if you can go so far that you make San Francisco seem backward by
comparison. If you do, it's very unlikely you'll regret it. The city will seem
like a paradise to the young people you want to attract. If they do leave to
get jobs elsewhere, it will be with regret at leaving behind such a place. And
what's the downside? Can you imagine a headline "City ruined by becoming too
bicycle-friendly?" It just doesn't happen.
So suppose cool old neighborhoods and cool little restaurants make this the
next Portland. Will that be enough? It will put you in a way better position
than Portland itself, because Pittsburgh has something Portland lacks: a
first-rate research university. CMU plus little cafes means you have more than
hipsters drinking lattes. It means you have hipsters drinking lattes while
talking about distributed systems. Now you're getting really close to San
Francisco.
In fact you're better off than San Francisco in one way, because CMU is
downtown, but Stanford and Berkeley are out in the suburbs.
What can CMU do to help Pittsburgh become a startup hub? Be an even better
research university. CMU is one of the best universities in the world, but
imagine what things would be like if it were the very best, and everyone knew
it. There are a lot of ambitious people who must go to the best place,
wherever it is. If CMU were it, they would all come here. There would be kids
in Kazakhstan dreaming of one day living in Pittsburgh.
Being that kind of talent magnet is the most important contribution
universities can make toward making their city a startup hub. In fact it is
practically the only contribution they can make.
But wait, shouldn't universities be setting up programs with words like
"innovation" and "entrepreneurship" in their names? No, they should not. These
kind of things almost always turn out to be disappointments. They're pursuing
the wrong targets. The way to get innovation is not to aim for innovation but
to aim for something more specific, like better batteries or better 3D
printing. And the way to learn about entrepreneurship is to do it, which you
[_can't in school_](before.html).
I know it may disappoint some administrators to hear that the best thing a
university can do to encourage startups is to be a great university. It's like
telling people who want to lose weight that the way to do it is to eat less.
But if you want to know where startups come from, look at the empirical
evidence. Look at the histories of the most successful startups, and you'll
find they grow organically out of a couple of founders building something that
starts as an interesting side project. Universities are great at bringing
together founders, but beyond that the best thing they can do is get out of
the way. For example, by not claiming ownership of "intellectual property"
that students and faculty develop, and by having liberal rules about deferred
admission and leaves of absence.
In fact, one of the most effective things a university could do to encourage
startups is an elaborate form of getting out of the way invented by Harvard.
Harvard used to have exams for the fall semester after Christmas. At the
beginning of January they had something called "Reading Period" when you were
supposed to be studying for exams. And Microsoft and Facebook have something
in common that few people realize: they were both started during Reading
Period. It's the perfect situation for producing the sort of side projects
that turn into startups. The students are all on campus, but they don't have
to do anything because they're supposed to be studying for exams.
Harvard may have closed this window, because a few years ago they moved exams
before Christmas and shortened reading period from 11 days to 7. But if a
university really wanted to help its students start startups, the empirical
evidence, weighted by market cap, suggests the best thing they can do is
literally nothing.
The culture of Pittsburgh is another of its strengths. It seems like a city
has to be socially liberal to be a startup hub, and it's pretty clear why. A
city has to tolerate strangeness to be a home for startups, because startups
are so strange. And you can't choose to allow just the forms of strangeness
that will turn into big startups, because they're all intermingled. You have
to tolerate all strangeness.
That immediately rules out [_big chunks of the US_](http://www.nytimes.com/2016/04/06/us/gay-rights-mississippi-north-carolina.html).
I'm optimistic it doesn't rule out Pittsburgh. One of the
things I remember from growing up here, though I didn't realize at the time
that there was anything unusual about it, is how well people got along. I'm
still not sure why. Maybe one reason was that everyone felt like an immigrant.
When I was a kid in Monroeville, people didn't call themselves American. They
called themselves Italian or Serbian or Ukrainian. Just imagine what it must
have been like here a hundred years ago, when people were pouring in from
twenty different countries. Tolerance was the only option.
What I remember about the culture of Pittsburgh is that it was both tolerant
and pragmatic. That's how I'd describe the culture of Silicon Valley too. And
it's not a coincidence, because Pittsburgh was the Silicon Valley of its time.
This was a city where people built new things. And while the things people
build have changed, the spirit you need to do that kind of work is the same.
So although an influx of latte-swilling hipsters may be annoying in some ways,
I would go out of my way to encourage them. And more generally to tolerate
strangeness, even unto the degree wacko Californians do. For Pittsburgh that
is a conservative choice: it's a return to the city's roots.
Unfortunately I saved the toughest part for last. There is one more thing you
need to be a startup hub, and Pittsburgh hasn't got it: investors. Silicon
Valley has a big investor community because it's had 50 years to grow one. New
York has a big investor community because it's full of people who like money a
lot and are quick to notice new ways to get it. But Pittsburgh has neither of
these. And the cheap housing that draws other people here has no effect on
investors.
If an investor community grows up here, it will happen the same way it did in
Silicon Valley: slowly and organically. So I would not bet on having a big
investor community in the short term. But fortunately there are three trends
that make that less necessary than it used to be. One is that startups are
increasingly cheap to start, so you just don't need as much outside money as
you used to. The second is that thanks to things like Kickstarter, a startup
can get to revenue faster. You can put something on Kickstarter from anywhere.
The third is programs like Y Combinator. A startup from anywhere in the world
can go to YC for 3 months, pick up funding, and then return home if they want.
My advice is to make Pittsburgh a great place for startups, and gradually more
of them will stick. Some of those will succeed; some of their founders will
become investors; and still more startups will stick.
This is not a fast path to becoming a startup hub. But it is at least a path,
which is something few other cities have. And it's not as if you have to make
painful sacrifices in the meantime. Think about what I've suggested you should
do. Encourage local restaurants, save old buildings, take advantage of
density, make CMU the best, promote tolerance. These are the things that make
Pittsburgh good to live in now. All I'm saying is that you should do even more
of them.
And that's an encouraging thought. If Pittsburgh's path to becoming a startup
hub is to be even more itself, then it has a good chance of succeeding. In
fact it probably has the best chance of any city its size. It will take some
effort, and a lot of time, but if any city can do it, Pittsburgh can.
**Thanks** to Charlie Cheever and Jessica Livingston for reading drafts of
this, and to Meg Cheever for organizing Opt412 and inviting me to speak.
April 2008
There are some topics I save up because they'll be so much fun to write about.
This is one of them: a list of my heroes.
I'm not claiming this is a list of the _n_ most admirable people. Who could
make such a list, even if they wanted to?
Einstein isn't on the list, for example, even though he probably deserves to
be on any shortlist of admirable people. I once asked a physicist friend if
Einstein was really as smart as his fame implies, and she said that yes, he
was. So why isn't he on the list? Because I had to ask. This is a list of
people who've influenced me, not people who would have if I understood their
work.
My test was to think of someone and ask "is this person my hero?" It often
returned surprising answers. For example, it returned false for Montaigne, who
was arguably the inventor of the essay. Why? When I thought about what it
meant to call someone a hero, it meant I'd decide what to do by asking what
they'd do in the same situation. That's a stricter standard than admiration.
After I made the list, I looked to see if there was a pattern, and there was,
a very clear one. Everyone on the list had two qualities: they cared almost
excessively about their work, and they were absolutely honest. By honest I
don't mean trustworthy so much as that they never pander: they never say or do
something because that's what the audience wants. They are all fundamentally
subversive for this reason, though they conceal it to varying degrees.
**Jack Lambert**
I grew up in Pittsburgh in the 1970s. Unless you were there it's hard to
imagine how that town felt about the Steelers. Locally, all the news was bad.
The steel industry was dying. But the Steelers were the best team in football
— and moreover, in a way that seemed to reflect the personality of the city.
They didn't do anything fancy. They just got the job done.
Other players were more famous: Terry Bradshaw, Franco Harris, Lynn Swann. But
they played offense, and you always get more attention for that. It seemed to
me as a twelve year old football expert that the best of them all was [Jack
Lambert](http://en.wikipedia.org/wiki/Jack_Lambert_\(American_football_player\)).
And what made him so good was that he was utterly relentless. He didn't just
care about playing well; he cared almost too much. He seemed to regard it as a
personal insult when someone from the other team had possession of the ball on
his side of the line of scrimmage.
The suburbs of Pittsburgh in the 1970s were a pretty dull place. School was
boring. All the adults around were bored with their jobs working for big
companies. Everything that came to us through the mass media was (a) blandly
uniform and (b) produced elsewhere. Jack Lambert was the exception. He was
like nothing else I'd seen.
**Kenneth Clark**
Kenneth Clark is the best nonfiction writer I know of, on any subject. Most
people who write about art history don't really like art; you can tell from a
thousand little signs. But Clark did, and not just intellectually, but the way
one anticipates a delicious dinner.
What really makes him stand out, though, is the quality of his ideas. His
style is deceptively casual, but there is more in his books than in a library
of art monographs. Reading [_The Nude_](http://www.amazon.com/Nude-Study-Ideal-Form/dp/0691017883)
is like a ride in a Ferrari. Just as you're getting
settled, you're slammed back in your seat by the acceleration. Before you can
adjust, you're thrown sideways as the car screeches into the first turn. His
brain throws off ideas almost too fast to grasp them. Finally at the end of
the chapter you come to a halt, with your eyes wide and a big smile on your
face.
Kenneth Clark was a star in his day, thanks to the documentary series
[_Civilisation_](http://www.amazon.com/dp/B000F0UUKA). And if you read only
one book about art history,
[_Civilisation_](http://www.abebooks.com/servlet/SearchResults?an=clark&sts=t&tn=civilisation)
is the one I'd recommend. It's much better than the drab Sears Catalogs of art
that undergraduates are forced to buy for Art History 101.
**Larry Mihalko**
A lot of people have a great teacher at some point in their childhood. Larry
Mihalko was mine. When I look back it's like there's a line drawn between
third and fourth grade. After Mr. Mihalko, everything was different.
Why? First of all, he was intellectually curious. I had a few other teachers
who were smart, but I wouldn't describe them as intellectually curious. In
retrospect, he was out of place as an elementary school teacher, and I think
he knew it. That must have been hard for him, but it was wonderful for us, his
students. His class was a constant adventure. I used to like going to school
every day.
The other thing that made him different was that he liked us. Kids are good at
telling that. The other teachers were at best benevolently indifferent. But
Mr. Mihalko seemed like he actually wanted to be our friend. On the last day
of fourth grade, he got out one of the heavy school record players and played
James Taylor's "You've Got a Friend" to us. Just call out my name, and you
know wherever I am, I'll come running. He died at 59 of lung cancer. I've
never cried like I cried at his funeral.
**Leonardo**
One of the things I've learned about making things that I didn't realize when
I was a kid is that much of the best stuff isn't made for audiences, but for
oneself. You see paintings and drawings in museums and imagine they were made
for you to look at. Actually a lot of the best ones were made as a way of
exploring the world, not as a way to please other people. The best of these
explorations are sometimes more pleasing than stuff made explicitly to please.
Leonardo did a lot of things. One of his most admirable qualities was that he
did so many different things that were admirable. What people know of him now
is his paintings and his more flamboyant inventions, like flying machines.
That makes him seem like some kind of dreamer who sketched artists'
conceptions of rocket ships on the side. In fact he made a large number of far
more practical technical discoveries. He was as good an engineer as a painter.
His most impressive work, to me, is his
[drawings](https://sep.turbifycdn.com/ty/cdn/paulgraham/leonardo-skull.jpg?t=1688221954&).
They're clearly made more as a way of studying the
world than producing something beautiful. And yet they can hold their own with
any work of art ever made. No one else, before or since, was that good when no
one was looking.
**Robert Morris**
Robert Morris has a very unusual quality: he's never wrong. It might seem this
would require you to be omniscient, but actually it's surprisingly easy. Don't
say anything unless you're fairly sure of it. If you're not omniscient, you
just don't end up saying much.
More precisely, the trick is to pay careful attention to how you qualify what
you say. By using this trick, Robert has, as far as I know, managed to be
mistaken only once, and that was when he was an undergrad. When the Mac came
out, he said that little desktop computers would never be suitable for real
hacking.
It's wrong to call it a trick in his case, though. If it were a conscious
trick, he would have slipped in a moment of excitement. With Robert this
quality is wired-in. He has an almost superhuman integrity. He's not just
generally correct, but also correct about how correct he is.
You'd think it would be such a great thing never to be wrong that everyone
would do this. It doesn't seem like that much extra work to pay as much
attention to the error on an idea as to the idea itself. And yet practically
no one does. I know how hard it is, because since meeting Robert I've tried to
do in software what he seems to do in hardware.
**P. G. Wodehouse**
People are finally starting to admit that Wodehouse was a great writer. If you
want to be thought a great novelist in your own time, you have to sound
intellectual. If what you write is popular, or entertaining, or funny, you're
ipso facto suspect. That makes Wodehouse doubly impressive, because it meant
that to write as he wanted to, he had to commit to being despised in his own
lifetime.
Evelyn Waugh called him a great writer, but to most people at the time that
would have read as a chivalrous or deliberately perverse gesture. At the time
any random autobiographical novel by a recent college grad could count on more
respectful treatment from the literary establishment.
Wodehouse may have begun with simple atoms, but the way he composed them into
molecules was near faultless. His rhythm in particular. It makes me self-
conscious to write about it. I can think of only two other writers who came
near him for style: Evelyn Waugh and Nancy Mitford. Those three used the
English language like they owned it.
But Wodehouse has something neither of them did. He's at ease. Evelyn Waugh
and Nancy Mitford cared what other people thought of them: he wanted to seem
aristocratic; she was afraid she wasn't smart enough. But Wodehouse didn't
give a damn what anyone thought of him. He wrote exactly what he wanted.
**Alexander Calder**
Calder's on this list because he makes me happy. Can his work stand up to
Leonardo's? Probably not. There might not be anything from the twentieth
century that can. But what was good about Modernism, Calder had, and had in a
way that he made seem effortless.
What was good about Modernism was its freshness. Art became stuffy in the
nineteenth century. The paintings that were popular at the time were mostly
the art equivalent of McMansions—big, pretentious, and fake. Modernism meant
starting over, making things with the same earnest motives that children
might. The artists who benefited most from this were the ones who had
preserved a child's confidence, like Klee and Calder.
Klee was impressive because he could work in so many different styles. But
between the two I like Calder better, because his work seemed happier.
Ultimately the point of art is to engage the viewer. It's hard to predict what
will; often something that seems interesting at first will bore you after a
month. Calder's
[sculptures](https://www.flickr.com/photos/uergevich/7029234689/) never get
boring. They just sit there quietly radiating optimism, like a battery that
never runs out. As far as I can tell from books and photographs, the happiness
of Calder's work is his own happiness showing through.
**Jane Austen**
Everyone admires Jane Austen. Add my name to the list. To me she seems the
best novelist of all time.
I'm interested in how things work. When I read most novels, I pay as much
attention to the author's choices as to the story. But in her novels I can't
see the gears at work. Though I'd really like to know how she does what she
does, I can't figure it out, because she's so good that her stories don't seem
made up. I feel like I'm reading a description of something that actually
happened.
I used to read a lot of novels when I was younger. I can't read most anymore,
because they don't have enough information in them. Novels seem so
impoverished compared to history and biography. But reading Austen is like
reading nonfiction. She writes so well you don't even notice her.
**John McCarthy**
John McCarthy invented Lisp, the field of (or at least the term) artificial
intelligence, and was an early member of both of the top two computer science
departments, MIT and Stanford. No one would dispute that he's one of the
greats, but he's an especial hero to me because of [Lisp](rootsoflisp.html).
It's hard for us now to understand what a conceptual leap that was at the
time. Paradoxically, one of the reasons his achievement is hard to appreciate
is that it was so successful. Practically every programming language invented
in the last 20 years includes ideas from Lisp, and each year the median
language gets more Lisplike.
In 1958 these ideas were anything but obvious. In 1958 there seem to have been
two ways of thinking about programming. Some people thought of it as math, and
proved things about Turing Machines. Others thought of it as a way to get
things done, and designed languages all too influenced by the technology of
the day. McCarthy alone bridged the gap. He designed a language that was math.
But designed is not really the word; discovered is more like it.
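To make that leap a little more concrete, here is a minimal sketch of the
idea at the heart of his 1960 paper: programs represented as the language's
own data structures, so the entire evaluator is one short recursive function.
The sketch is mine, written in Python; the names and structure are my
illustration, not McCarthy's notation.

```python
# A toy evaluator for a Lisp-like language, in the spirit of McCarthy's
# eval. Programs are ordinary nested lists: code is data. This is an
# illustrative sketch, not McCarthy's original definition.

def evaluate(expr, env):
    if isinstance(expr, str):              # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):         # a literal, e.g. a number
        return expr
    op, *args = expr
    if op == "quote":                      # return code unevaluated
        return args[0]
    if op == "if":                         # conditional
        test, then, alt = args
        return evaluate(then if evaluate(test, env) else alt, env)
    if op == "lambda":                     # first-class functions (closures)
        params, body = args
        return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
    fn = evaluate(op, env)                 # otherwise: function application
    return fn(*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b}
# ((lambda (x) (+ x 1)) 41)  =>  42
print(evaluate([["lambda", ["x"], ["+", "x", 1]], 41], env))
```

Twenty lines buy you quotation, conditionals, and first-class functions. That
compactness is roughly what it means to say he designed a language that was
math.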
**The Spitfire**
As I was making this list I found myself thinking of people like [Douglas
Bader](http://en.wikipedia.org/wiki/Douglas_Bader) and [R.J.
Mitchell](http://en.wikipedia.org/wiki/R._J._Mitchell) and [Jeffrey
Quill](http://www.amazon.com/Spitfire-Pilots-Story-Crecy-Cover/dp/0947554726)
and I realized that though all of them had done many things in their lives,
there was one factor above all that connected them: the Spitfire.
This is supposed to be a list of heroes. How can a machine be on it? Because
that machine was not just a machine. It was a lens of heroes. Extraordinary
devotion went into it, and extraordinary courage came out.
It's a cliche to call World War II a contest between good and evil, but
between fighter designs, it really was. The Spitfire's original nemesis, the
ME 109, was a brutally practical plane. It was a killing machine. The Spitfire
was optimism embodied. And not just in its beautiful lines: it was at the edge
of what could be manufactured. But taking the high road worked. In the air,
beauty had the edge, just.
**Steve Jobs**
People alive when Kennedy was killed usually remember exactly where they were
when they heard about it. I remember exactly where I was when a friend asked
if I'd heard Steve Jobs had cancer. It was like the floor dropped out. A few
seconds later she told me that it was a rare operable type, and that he'd be
ok. But those seconds seemed long.
I wasn't sure whether to include Jobs on this list. A lot of people at Apple
seem to be afraid of him, which is a bad sign. But he compels admiration.
There's no name for what Steve Jobs is, because there hasn't been anyone quite
like him before. He doesn't design Apple's products himself. Historically, the
closest analogues to what he does are the great Renaissance patrons of the
arts. That makes him unique among CEOs.
Most CEOs delegate [taste](taste.html) to a subordinate. The [design
paradox](gh.html) means they're choosing more or less at random. But Steve
Jobs actually has taste himself — such good taste that he's shown the world
how much more important taste is than they realized.
**Isaac Newton**
Newton has a strange role in my pantheon of heroes: he's the one I reproach
myself with. He worked on big things, at least for part of his life. It's so
easy to get distracted working on small stuff. The questions you're answering
are pleasantly familiar. You get immediate rewards — in fact, you get bigger
rewards in your time if you work on matters of passing importance. But I'm
uncomfortably aware that this is the route to well-deserved obscurity.
To do really great things, you have to seek out questions people didn't even
realize were questions. There have probably been other people who did this as
well as Newton, for their time, but Newton is my model of this kind of
thought. I can just begin to understand what it must have felt like for him.
You only get one life. Why not do something huge? The phrase "paradigm shift"
is overused now, but Kuhn was onto something. And you know more are out there,
separated from us by what will later seem a surprisingly thin wall of laziness
and stupidity. If we work like Newton.
**Thanks** to Trevor Blackwell, Jessica Livingston, and Jackie McDonough for
reading drafts of this.