May 2021

There's one kind of opinion I'd be very afraid to express publicly.
If someone I knew to be both a domain expert and a reasonable person
proposed an idea that sounded preposterous, I'd be very reluctant
to say "That will never work."Anyone who has studied the history of ideas, and especially the
history of science, knows that's how big things start. Someone
proposes an idea that sounds crazy, most people dismiss it, then
it gradually takes over the world.Most implausible-sounding ideas are in fact bad and could be safely
dismissed. But not when they're proposed by reasonable domain
experts. If the person proposing the idea is reasonable, then they
know how implausible it sounds. And yet they're proposing it anyway.
That suggests they know something you don't. And if they have deep
domain expertise, that's probably the source of it.
[1]Such ideas are not merely unsafe to dismiss, but disproportionately
likely to be interesting. When the average person proposes an
implausible-sounding idea, its implausibility is evidence of their
incompetence. But when a reasonable domain expert does it, the
situation is reversed. There's something like an efficient market
here: on average the ideas that seem craziest will, if correct,
have the biggest effect. So if you can eliminate the theory that
the person proposing an implausible-sounding idea is incompetent,
its implausibility switches from evidence that it's boring to
evidence that it's exciting.
[2]Such ideas are not guaranteed to work. But they don't have to be.
They just have to be sufficiently good bets: to have sufficiently
high expected value. And I think on average they do. I think if you
bet on the entire set of implausible-sounding ideas proposed by
reasonable domain experts, you'd end up net ahead.The reason is that everyone is too conservative. The word "paradigm"
is overused, but this is a case where it's warranted. Everyone is
too much in the grip of the current paradigm. Even the people who
have the new ideas undervalue them initially. Which means that
before they reach the stage of proposing them publicly, they've
already subjected them to an excessively strict filter.
[3]The wise response to such an idea is not to make statements, but
to ask questions, because there's a real mystery here. Why has this
smart and reasonable person proposed an idea that seems so wrong?
Are they mistaken, or are you? One of you has to be. If you're the
one who's mistaken, that would be good to know, because it means
there's a hole in your model of the world. But even if they're
mistaken, it should be interesting to learn why. A trap that an
expert falls into is one you have to worry about too.This all seems pretty obvious. And yet there are clearly a lot of
people who don't share my fear of dismissing new ideas. Why do they
do it? Why risk looking like a jerk now and a fool later, instead
of just reserving judgement?One reason they do it is envy. If you propose a radical new idea
and it succeeds, your reputation (and perhaps also your wealth)
will increase proportionally. Some people would be envious if that
happened, and this potential envy propagates back into a conviction
that you must be wrong.Another reason people dismiss new ideas is that it's an easy way
to seem sophisticated. When a new idea first emerges, it usually
seems pretty feeble. It's a mere hatchling. Received wisdom is a
full-grown eagle by comparison. So it's easy to launch a devastating
attack on a new idea, and anyone who does will seem clever to those
who don't understand this asymmetry.This phenomenon is exacerbated by the difference between how those
working on new ideas and those attacking them are rewarded. The
rewards for working on new ideas are weighted by the value of the
outcome. So it's worth working on something that only has a 10%
chance of succeeding if it would make things more than 10x better.
Whereas the rewards for attacking new ideas are roughly constant;
such attacks seem roughly equally clever regardless of the target.People will also attack new ideas when they have a vested interest
in the old ones. It's not surprising, for example, that some of
Darwin's harshest critics were churchmen. People build whole careers
on some ideas. When someone claims they're false or obsolete, they
feel threatened.The lowest form of dismissal is mere factionalism: to automatically
dismiss any idea associated with the opposing faction. The lowest
form of all is to dismiss an idea because of who proposed it.But the main thing that leads reasonable people to dismiss new ideas
is the same thing that holds people back from proposing them: the
sheer pervasiveness of the current paradigm. It doesn't just affect
the way we think; it is the Lego blocks we build thoughts out of.
Popping out of the current paradigm is something only a few people
can do. And even they usually have to suppress their intuitions at
first, like a pilot flying through cloud who has to trust his
instruments over his sense of balance.
[4]Paradigms don't just define our present thinking. They also vacuum
up the trail of crumbs that led to them, making our standards for
new ideas impossibly high. The current paradigm seems so perfect
to us, its offspring, that we imagine it must have been accepted
completely as soon as it was discovered: that whatever the church thought
of the heliocentric model, astronomers must have been convinced as
soon as Copernicus proposed it. Far, in fact, from it. Copernicus
published the heliocentric model in 1543, but it wasn't till the
mid seventeenth century that the balance of scientific opinion
shifted in its favor.
[5]Few understand how feeble new ideas look when they first appear.
So if you want to have new ideas yourself, one of the most valuable
things you can do is to learn what they look like when they're born.
Read about how new ideas happened, and try to get yourself into the
heads of people at the time. How did things look to them, when the
new idea was only half-finished, and even the person who had it was
only half-convinced it was right?But you don't have to stop at history. You can observe big new ideas
being born all around you right now. Just look for a reasonable
domain expert proposing something that sounds wrong.If you're nice, as well as wise, you won't merely resist attacking
such people, but encourage them. Having new ideas is a lonely
business. Only those who've tried it know how lonely. These people
need your help. And if you help them, you'll probably learn something
in the process.

Notes

[1]
This domain expertise could be in another field. Indeed,
such crossovers tend to be particularly promising.[2]
I'm not claiming this principle extends much beyond math,
engineering, and the hard sciences. In politics, for example,
crazy-sounding ideas generally are as bad as they sound. Though
arguably this is not an exception, because the people who propose
them are not in fact domain experts; politicians are domain experts
in political tactics, like how to get elected and how to get
legislation passed, but not in the world that policy acts upon.
Perhaps no one could be.[3]
This sense of "paradigm" was defined by Thomas Kuhn in his
Structure of Scientific Revolutions, but I also recommend his
Copernican Revolution, where you can see him at work developing the
idea.[4]
This is one reason people with a touch of Asperger's may have
an advantage in discovering new ideas. They're always flying on
instruments.[5]
Hall, Rupert. From Galileo to Newton. Collins, 1963. This
book is particularly good at getting into contemporaries' heads.

Thanks to Trevor Blackwell, Patrick Collison, Suhail Doshi, Daniel
Gackle, Jessica Livingston, and Robert Morris for reading drafts of this.

May 2004

When people care enough about something to do it well, those who
do it best tend to be far better than everyone else. There's a
huge gap between Leonardo and second-rate contemporaries like
Borgognone. You see the same gap between Raymond Chandler and the
average writer of detective novels. A top-ranked professional chess
player could play ten thousand games against an ordinary club player
without losing once.Like chess or painting or writing novels, making money is a very
specialized skill. But for some reason we treat this skill
differently. No one complains when a few people surpass all the
rest at playing chess or writing novels, but when a few people make
more money than the rest, we get editorials saying this is wrong.Why? The pattern of variation seems no different than for any other
skill. What causes people to react so strongly when the skill is
making money?I think there are three reasons we treat making money as different:
the misleading model of wealth we learn as children; the disreputable
way in which, till recently, most fortunes were accumulated; and
the worry that great variations in income are somehow bad for
society. As far as I can tell, the first is mistaken, the second
outdated, and the third empirically false. Could it be that, in a
modern democracy, variation in income is actually a sign of health?The Daddy Model of WealthWhen I was five I thought electricity was created by electric
sockets. I didn't realize there were power plants out there
generating it. Likewise, it doesn't occur to most kids that wealth
is something that has to be generated. It seems to be something
that flows from parents.Because of the circumstances in which they encounter it, children
tend to misunderstand wealth. They confuse it with money. They
think that there is a fixed amount of it. And they think of it as
something that's distributed by authorities (and so should be
distributed equally), rather than something that has to be created
(and might be created unequally).In fact, wealth is not money. Money is just a convenient way of
trading one form of wealth for another. Wealth is the underlying
stuff: the goods and services we buy. When you travel to a
rich or poor country, you don't have to look at people's bank
accounts to tell which kind you're in. You can see
wealth: in buildings and streets, in the clothes and the health
of the people.Where does wealth come from? People make it. This was easier to
grasp when most people lived on farms, and made many of the things
they wanted with their own hands. Then you could see in the house,
the herds, and the granary the wealth that each family created. It
was obvious then too that the wealth of the world was not a fixed
quantity that had to be shared out, like slices of a pie. If you
wanted more wealth, you could make it.This is just as true today, though few of us create wealth directly
for ourselves (except for a few vestigial domestic tasks). Mostly
we create wealth for other people in exchange for money, which we
then trade for the forms of wealth we want.
[1]Because kids are unable to create wealth, whatever they have has
to be given to them. And when wealth is something you're given,
then of course it seems that it should be distributed equally.
[2]
As in most families it is. The kids see to that. "Unfair," they
cry, when one sibling gets more than another.In the real world, you can't keep living off your parents. If you
want something, you either have to make it, or do something of
equivalent value for someone else, in order to get them to give you
enough money to buy it. In the real world, wealth is (except for
a few specialists like thieves and speculators) something you have
to create, not something that's distributed by Daddy. And since
the ability and desire to create it vary from person to person,
it's not made equally.You get paid by doing or making something people want, and those
who make more money are often simply better at doing what people
want. Top actors make a lot more money than B-list actors. The
B-list actors might be almost as charismatic, but when people go
to the theater and look at the list of movies playing, they want
that extra oomph that the big stars have.Doing what people want is not the only way to get money, of course.
You could also rob banks, or solicit bribes, or establish a monopoly.
Such tricks account for some variation in wealth, and indeed for
some of the biggest individual fortunes, but they are not the root
cause of variation in income. The root cause of variation in income,
as Occam's Razor implies, is the same as the root cause of variation
in every other human skill.In the United States, the CEO of a large public company makes about
100 times as much as the average person.
[3]
Basketball players
make about 128 times as much, and baseball players 72 times as much.
Editorials quote this kind of statistic with horror. But I have
no trouble imagining that one person could be 100 times as productive
as another. In ancient Rome the price of slaves varied by
a factor of 50 depending on their skills.
[4]
And that's without
considering motivation, or the extra leverage in productivity that
you can get from modern technology.Editorials about athletes' or CEOs' salaries remind me of early
Christian writers, arguing from first principles about whether the
Earth was round, when they could just walk outside and check.
[5]
How much someone's work is worth is not a policy question. It's
something the market already determines."Are they really worth 100 of us?" editorialists ask. Depends on
what you mean by worth. If you mean worth in the sense of what
people will pay for their skills, the answer is yes, apparently.A few CEOs' incomes reflect some kind of wrongdoing. But are there
not others whose incomes really do reflect the wealth they generate?
Steve Jobs saved a company that was in a terminal decline. And not
merely in the way a turnaround specialist does, by cutting costs;
he had to decide what Apple's next products should be. Few others
could have done it. And regardless of the case with CEOs, it's
hard to see how anyone could argue that the salaries of professional
basketball players don't reflect supply and demand.It may seem unlikely in principle that one individual could really
generate so much more wealth than another. The key to this mystery
is to revisit that question, are they really worth 100 of us?
Would a basketball team trade one of their players for 100
random people? What would Apple's next product look like if you
replaced Steve Jobs with a committee of 100 random people?
[6]
These
things don't scale linearly. Perhaps the CEO or the professional
athlete has only ten times (whatever that means) the skill and
determination of an ordinary person. But it makes all the difference
that it's concentrated in one individual.When we say that one kind of work is overpaid and another underpaid,
what are we really saying? In a free market, prices are determined
by what buyers want. People like baseball more than poetry, so
baseball players make more than poets. To say that a certain kind
of work is underpaid is thus identical with saying that people want
the wrong things.Well, of course people want the wrong things. It seems odd to be
surprised by that. And it seems even odder to say that it's
unjust that certain kinds of work are underpaid.
[7]
Then
you're saying that it's unjust that people want the wrong things.
It's lamentable that people prefer reality TV and corndogs to
Shakespeare and steamed vegetables, but unjust? That seems like
saying that blue is heavy, or that up is circular.The appearance of the word "unjust" here is the unmistakable spectral
signature of the Daddy Model. Why else would this idea occur in
this odd context? Whereas if the speaker were still operating on
the Daddy Model, and saw wealth as something that flowed from a
common source and had to be shared out, rather than something
generated by doing what other people wanted, this is exactly what
you'd get on noticing that some people made much more than others.When we talk about "unequal distribution of income," we should
also ask, where does that income come from?
[8]
Who made the wealth
it represents? Because to the extent that income varies simply
according to how much wealth people create, the distribution may
be unequal, but it's hardly unjust.Stealing ItThe second reason we tend to find great disparities of wealth
alarming is that for most of human history the usual way to accumulate
a fortune was to steal it: in pastoral societies by cattle raiding;
in agricultural societies by appropriating others' estates in times
of war, and taxing them in times of peace.In conflicts, those on the winning side would receive the estates
confiscated from the losers. In England in the 1060s, when William
the Conqueror distributed the estates of the defeated Anglo-Saxon
nobles to his followers, the conflict was military. By the 1530s,
when Henry VIII distributed the estates of the monasteries to his
followers, it was mostly political.
[9]
But the principle was the
same. Indeed, the same principle is at work now in Zimbabwe.In more organized societies, like China, the ruler and his officials
used taxation instead of confiscation. But here too we see the
same principle: the way to get rich was not to create wealth, but
to serve a ruler powerful enough to appropriate it.This started to change in Europe with the rise of the middle class.
Now we think of the middle class as people who are neither rich nor
poor, but originally they were a distinct group. In a feudal
society, there are just two classes: a warrior aristocracy, and the
serfs who work their estates. The middle class were a new, third
group who lived in towns and supported themselves by manufacturing
and trade.Starting in the tenth and eleventh centuries, petty nobles and
former serfs banded together in towns that gradually became powerful
enough to ignore the local feudal lords.
[10]
Like serfs, the middle
class made a living largely by creating wealth. (In port cities
like Genoa and Pisa, they also engaged in piracy.) But unlike serfs
they had an incentive to create a lot of it. Any wealth a serf
created belonged to his master. There was not much point in making
more than you could hide. Whereas the independence of the townsmen
allowed them to keep whatever wealth they created.Once it became possible to get rich by creating wealth, society as
a whole started to get richer very rapidly. Nearly everything we
have was created by the middle class. Indeed, the other two classes
have effectively disappeared in industrial societies, and their
names been given to either end of the middle class. (In the original
sense of the word, Bill Gates is middle class.)But it was not till the Industrial Revolution that wealth creation
definitively replaced corruption as the best way to get rich. In
England, at least, corruption only became unfashionable (and in
fact only started to be called "corruption") when there started to
be other, faster ways to get rich.Seventeenth-century England was much like the third world today,
in that government office was a recognized route to wealth. The
great fortunes of that time still derived more from what we would
now call corruption than from commerce.
[11]
By the nineteenth
century that had changed. There continued to be bribes, as there
still are everywhere, but politics had by then been left to men who
were driven more by vanity than greed. Technology had made it
possible to create wealth faster than you could steal it. The
prototypical rich man of the nineteenth century was not a courtier
but an industrialist.With the rise of the middle class, wealth stopped being a zero-sum
game. Jobs and Wozniak didn't have to make us poor to make themselves
rich. Quite the opposite: they created things that made our lives
materially richer. They had to, or we wouldn't have paid for them.But since for most of the world's history the main route to wealth
was to steal it, we tend to be suspicious of rich people. Idealistic
undergraduates find their unconsciously preserved child's model of
wealth confirmed by eminent writers of the past. It is a case of
the mistaken meeting the outdated."Behind every great fortune, there is a crime," Balzac wrote. Except
he didn't. What he actually said was that a great fortune with no
apparent cause was probably due to a crime well enough executed
that it had been forgotten. If we were talking about Europe in
1000, or most of the third world today, the standard misquotation
would be spot on. But Balzac lived in nineteenth-century France,
where the Industrial Revolution was well advanced. He knew you
could make a fortune without stealing it. After all, he did himself,
as a popular novelist.
[12]Only a few countries (by no coincidence, the richest ones) have
reached this stage. In most, corruption still has the upper hand.
In most, the fastest way to get wealth is by stealing it. And so
when we see increasing differences in income in a rich country,
there is a tendency to worry that it's sliding back toward becoming
another Venezuela. I think the opposite is happening. I think
you're seeing a country a full step ahead of Venezuela.The Lever of TechnologyWill technology increase the gap between rich and poor? It will
certainly increase the gap between the productive and the unproductive.
That's the whole point of technology. With a tractor an energetic
farmer could plow six times as much land in a day as he could with
a team of horses. But only if he mastered a new kind of farming.I've seen the lever of technology grow visibly in my own time. In
high school I made money by mowing lawns and scooping ice cream at
Baskin-Robbins. This was the only kind of work available at the
time. Now high school kids could write software or design web
sites. But only some of them will; the rest will still be scooping
ice cream.I remember very vividly when in 1985 improved technology made it
possible for me to buy a computer of my own. Within months I was
using it to make money as a freelance programmer. A few years
before, I couldn't have done this. A few years before, there was
no such thing as a freelance programmer. But Apple created
wealth, in the form of powerful, inexpensive computers, and programmers
immediately set to work using it to create more.As this example suggests, the rate at which technology increases
our productive capacity is probably exponential, rather than linear.
So we should expect to see ever-increasing variation in individual
productivity as time goes on. Will that increase the gap between
rich and the poor? Depends which gap you mean.Technology should increase the gap in income, but it seems to
decrease other gaps. A hundred years ago, the rich led a different
kind of life from ordinary people. They lived in houses
full of servants, wore elaborately uncomfortable clothes, and
travelled about in carriages drawn by teams of horses which themselves
required their own houses and servants. Now, thanks to technology,
the rich live more like the average person.Cars are a good example of why. It's possible to buy expensive,
handmade cars that cost hundreds of thousands of dollars. But there
is not much point. Companies make more money by building a large
number of ordinary cars than a small number of expensive ones. So
a company making a mass-produced car can afford to spend a lot more
on its design. If you buy a custom-made car, something will always
be breaking. The only point of buying one now is to advertise that
you can.Or consider watches. Fifty years ago, by spending a lot of money
on a watch you could get better performance. When watches had
mechanical movements, expensive watches kept better time. Not any
more. Since the invention of the quartz movement, an ordinary Timex
is more accurate than a Patek Philippe costing hundreds of thousands
of dollars.
[13]
Indeed, as with expensive cars, if you're determined
to spend a lot of money on a watch, you have to put up with some
inconvenience to do it: as well as keeping worse time, mechanical
watches have to be wound.The only thing technology can't cheapen is brand. Which is precisely
why we hear ever more about it. Brand is the residue left as the
substantive differences between rich and poor evaporate. But what
label you have on your stuff is a much smaller matter than having
it versus not having it. In 1900, if you kept a carriage, no one
asked what year or brand it was. If you had one, you were rich.
And if you weren't rich, you took the omnibus or walked. Now even
the poorest Americans drive cars, and it is only because we're so
well trained by advertising that we can even recognize the especially
expensive ones.
[14]The same pattern has played out in industry after industry. If
there is enough demand for something, technology will make it cheap
enough to sell in large volumes, and the mass-produced versions
will be, if not better, at least more convenient.
[15]
And there
is nothing the rich like more than convenience. The rich people I
know drive the same cars, wear the same clothes, have the same kind
of furniture, and eat the same foods as my other friends. Their
houses are in different neighborhoods, or if in the same neighborhood
are different sizes, but within them life is similar. The houses
are made using the same construction techniques and contain much
the same objects. It's inconvenient to do something expensive and
custom.The rich spend their time more like everyone else too. Bertie
Wooster seems long gone. Now, most people who are rich enough not
to work do anyway. It's not just social pressure that makes them;
idleness is lonely and demoralizing.Nor do we have the social distinctions there were a hundred years
ago. The novels and etiquette manuals of that period read now
like descriptions of some strange tribal society. "With respect
to the continuance of friendships..." hints Mrs. Beeton's Book
of Household Management (1880), "it may be found necessary, in
some cases, for a mistress to relinquish, on assuming the responsibility
of a household, many of those commenced in the earlier part of her
life." A woman who married a rich man was expected to drop friends
who didn't. You'd seem a barbarian if you behaved that way today.
You'd also have a very boring life. People still tend to segregate
themselves somewhat, but much more on the basis of education than
wealth.
[16]Materially and socially, technology seems to be decreasing the gap
between the rich and the poor, not increasing it. If Lenin walked
around the offices of a company like Yahoo or Intel or Cisco, he'd
think communism had won. Everyone would be wearing the same clothes,
have the same kind of office (or rather, cubicle) with the same
furnishings, and address one another by their first names instead
of by honorifics. Everything would seem exactly as he'd predicted,
until he looked at their bank accounts. Oops.Is it a problem if technology increases that gap? It doesn't seem
to be so far. As it increases the gap in income, it seems to
decrease most other gaps.Alternative to an AxiomOne often hears a policy criticized on the grounds that it would
increase the income gap between rich and poor. As if it were an
axiom that this would be bad. It might be true that increased
variation in income would be bad, but I don't see how we can say
it's axiomatic.Indeed, it may even be false, in industrial democracies. In a
society of serfs and warlords, certainly, variation in income is a
sign of an underlying problem. But serfdom is not the only cause
of variation in income. A 747 pilot doesn't make 40 times as much
as a checkout clerk because he is a warlord who somehow holds her
in thrall. His skills are simply much more valuable.I'd like to propose an alternative idea: that in a modern society,
increasing variation in income is a sign of health. Technology
seems to increase the variation in productivity at faster than
linear rates. If we don't see corresponding variation in income,
there are three possible explanations: (a) that technical innovation
has stopped, (b) that the people who would create the most wealth
aren't doing it, or (c) that they aren't getting paid for it.I think we can safely say that (a) and (b) would be bad. If you
disagree, try living for a year using only the resources available
to the average Frankish nobleman in 800, and report back to us.
(I'll be generous and not send you back to the stone age.)The only option, if you're going to have an increasingly prosperous
society without increasing variation in income, seems to be (c),
that people will create a lot of wealth without being paid for it.
That Jobs and Wozniak, for example, will cheerfully work 20-hour
days to produce the Apple computer for a society that allows them,
after taxes, to keep just enough of their income to match what they
would have made working 9 to 5 at a big company.Will people create wealth if they can't get paid for it? Only if
it's fun. People will write operating systems for free. But they
won't install them, or take support calls, or train customers to
use them. And at least 90% of the work that even the highest tech
companies do is of this second, unedifying kind.All the unfun kinds of wealth creation slow dramatically in a society
that confiscates private fortunes. We can confirm this empirically.
Suppose you hear a strange noise that you think may be due to a
nearby fan. You turn the fan off, and the noise stops. You turn
the fan back on, and the noise starts again. Off, quiet. On,
noise. In the absence of other information, it would seem the noise
is caused by the fan.At various times and places in history, whether you could accumulate
a fortune by creating wealth has been turned on and off. Northern
Italy in 800, off (warlords would steal it). Northern Italy in
1100, on. Central France in 1100, off (still feudal). England in
1800, on. England in 1974, off (98% tax on investment income).
United States in 1974, on. We've even had a twin study: West
Germany, on; East Germany, off. In every case, the creation of
wealth seems to appear and disappear like the noise of a fan as you
switch on and off the prospect of keeping it.There is some momentum involved. It probably takes at least a
generation to turn people into East Germans (luckily for England).
But if it were merely a fan we were studying, without all the extra
baggage that comes from the controversial topic of wealth, no one
would have any doubt that the fan was causing the noise.If you suppress variations in income, whether by stealing private
fortunes, as feudal rulers used to do, or by taxing them away, as
some modern governments have done, the result always seems to be
the same. Society as a whole ends up poorer.If I had a choice of living in a society where I was materially
much better off than I am now, but was among the poorest, or in one
where I was the richest, but much worse off than I am now, I'd take
the first option. If I had children, it would arguably be immoral
not to. It's absolute poverty you want to avoid, not relative
poverty. If, as the evidence so far implies, you have to have one
or the other in your society, take relative poverty.You need rich people in your society not so much because in spending
their money they create jobs, but because of what they have to do
to get rich. I'm not talking about the trickle-down effect
here. I'm not saying that if you let Henry Ford get rich, he'll
hire you as a waiter at his next party. I'm saying that he'll make
you a tractor to replace your horse.

Notes

[1]
Part of the reason this subject is so contentious is that some
of those most vocal on the subject of wealth (university
students, heirs, professors, politicians, and journalists) have
the least experience creating it. (This phenomenon will be familiar
to anyone who has overheard conversations about sports in a bar.)

Students are mostly still on the parental dole, and have not stopped
to think about where that money comes from. Heirs will be on the
parental dole for life. Professors and politicians live within
socialist eddies of the economy, at one remove from the creation
of wealth, and are paid a flat rate regardless of how hard they
work. And journalists as part of their professional code segregate
themselves from the revenue-collecting half of the businesses they
work for (the ad sales department). Many of these people never
come face to face with the fact that the money they receive represents
wealth: wealth that, except in the case of journalists, someone
else created earlier. They live in a world in which income is
doled out by a central authority according to some abstract notion
of fairness (or randomly, in the case of heirs), rather than given
by other people in return for something they wanted, so it may seem
to them unfair that things don't work the same in the rest of the
economy.(Some professors do create a great deal of wealth for
society. But the money they're paid isn't a quid pro quo.
It's more in the nature of an investment.)[2]
When one reads about the origins of the Fabian Society, it
sounds like something cooked up by the high-minded Edwardian
child-heroes of Edith Nesbit's The Wouldbegoods.[3]
According to a study by the Corporate Library, the median total
compensation, including salary, bonus, stock grants, and the exercise
of stock options, of S&P 500 CEOs in 2002 was $3.65 million.
According to Sports Illustrated, the average NBA player's
salary during the 2002-03 season was $4.54 million, and the average
major league baseball player's salary at the start of the 2003
season was $2.56 million. According to the Bureau of Labor
Statistics, the mean annual wage in the US in 2002 was $35,560.[4]
In the early empire the price of an ordinary adult slave seems
to have been about 2,000 sestertii (e.g. Horace, Sat. ii.7.43).
A servant girl cost 600 (Martial vi.66), while Columella (iii.3.8)
says that a skilled vine-dresser was worth 8,000. A doctor, P.
Decimus Eros Merula, paid 50,000 sestertii for his freedom (Dessau,
Inscriptiones 7812). Seneca (Ep. xxvii.7) reports
that one Calvisius Sabinus paid 100,000 sestertii apiece for slaves
learned in the Greek classics. Pliny (Hist. Nat. vii.39)
says that the highest price paid for a slave up to his time was
700,000 sestertii, for the linguist (and presumably teacher) Daphnis,
but that this had since been exceeded by actors buying their own
freedom.Classical Athens saw a similar variation in prices. An ordinary
laborer was worth about 125 to 150 drachmae. Xenophon (Mem.
ii.5) mentions prices ranging from 50 to 6,000 drachmae (for the
manager of a silver mine).For more on the economics of ancient slavery see:Jones, A. H. M., "Slavery in the Ancient World," Economic History
Review, 2:9 (1956), 185-199, reprinted in Finley, M. I. (ed.),
Slavery in Classical Antiquity, Heffer, 1964.[5]
Eratosthenes (276-195 BC) used shadow lengths in different
cities to estimate the Earth's circumference. He was off by only
about 2%.[6]
No, and Windows, respectively.[7]
One of the biggest divergences between the Daddy Model and
reality is the valuation of hard work. In the Daddy Model, hard
work is in itself deserving. In reality, wealth is measured by
what one delivers, not how much effort it costs. If I paint someone's
house, the owner shouldn't pay me extra for doing it with a toothbrush.It will seem to someone still implicitly operating on the Daddy
Model that it is unfair when someone works hard and doesn't get
paid much. To help clarify the matter, get rid of everyone else
and put our worker on a desert island, hunting and gathering fruit.
If he's bad at it he'll work very hard and not end up with much
food. Is this unfair? Who is being unfair to him?[8]
Part of the reason for the tenacity of the Daddy Model may be
the dual meaning of "distribution." When economists talk about
"distribution of income," they mean statistical distribution. But
when you use the phrase frequently, you can't help associating it
with the other sense of the word (as in e.g. "distribution of alms"),
and thereby subconsciously seeing wealth as something that flows
from some central tap. The word "regressive" as applied to tax
rates has a similar effect, at least on me; how can anything
regressive be good?[9]
"From the beginning of the reign Thomas Lord Roos was an assiduous
courtier of the young Henry VIII and was soon to reap the rewards.
In 1525 he was made a Knight of the Garter and given the Earldom
of Rutland. In the thirties his support of the breach with Rome,
his zeal in crushing the Pilgrimage of Grace, and his readiness to
vote the death-penalty in the succession of spectacular treason
trials that punctuated Henry's erratic matrimonial progress made
him an obvious candidate for grants of monastic property."Stone, Lawrence, Family and Fortune: Studies in Aristocratic
Finance in the Sixteenth and Seventeenth Centuries, Oxford
University Press, 1973, p. 166.[10]
There is archaeological evidence for large settlements earlier,
but it's hard to say what was happening in them.Hodges, Richard and David Whitehouse, Mohammed, Charlemagne and
the Origins of Europe, Cornell University Press, 1983.[11]
William Cecil and his son Robert were each in turn the most
powerful minister of the crown, and both used their position to
amass fortunes among the largest of their times. Robert in particular
took bribery to the point of treason. "As Secretary of State and
the leading advisor to King James on foreign policy, [he] was a
special recipient of favour, being offered large bribes by the Dutch
not to make peace with Spain, and large bribes by Spain to make
peace." (Stone, op. cit., p. 17.)[12]
Though Balzac made a lot of money from writing, he was notoriously
improvident and was troubled by debts all his life.[13]
A Timex will gain or lose about .5 seconds per day. The most
accurate mechanical watch, the Patek Philippe 10 Day Tourbillon,
is rated at -1.5 to +2 seconds. Its retail price is about $220,000.[14]
If asked to choose which was more expensive, a well-preserved
1989 Lincoln Town Car ten-passenger limousine ($5,000) or a 2004
Mercedes S600 sedan ($122,000), the average Edwardian might well
guess wrong.[15]
To say anything meaningful about income trends, you have to
talk about real income, or income as measured in what it can buy.
But the usual way of calculating real income ignores much of the
growth in wealth over time, because it depends on a consumer price
index created by bolting end to end a series of numbers that are
only locally accurate, and that don't include the prices of new
inventions until they become so common that their prices stabilize.So while we might think it was very much better to live in a world
with antibiotics or air travel or an electric power grid than
without, real income statistics calculated in the usual way will
prove to us that we are only slightly richer for having these things.Another approach would be to ask, if you were going back to the
year x in a time machine, how much would you have to spend on trade
goods to make your fortune? For example, if you were going back
to 1970 it would certainly be less than $500, because the processing
power you can get for $500 today would have been worth at least
$150 million in 1970. The function goes asymptotic fairly quickly,
because for times over a hundred years or so you could get all you
needed in present-day trash. In 1800 an empty plastic drink bottle
with a screw top would have seemed a miracle of workmanship.[16]
Some will say this amounts to the same thing, because the rich
have better opportunities for education. That's a valid point. It
is still possible, to a degree, to buy your kids' way into top
colleges by sending them to private schools that in effect hack the
college admissions process.According to a 2002 report by the National Center for Education
Statistics, about 1.7% of American kids attend private, non-sectarian
schools. At Princeton, 36% of the class of 2007 came from such
schools. (Interestingly, the number at Harvard is significantly
lower, about 28%.) Obviously this is a huge loophole. It does at
least seem to be closing, not widening.Perhaps the designers of admissions processes should take a lesson
from the example of computer security, and instead of just assuming
that their system can't be hacked, measure the degree to which it
is.

December 2014

If the world were static, we could have monotonically increasing
confidence in our beliefs. The more (and more varied) experience
a belief survived, the less likely it would be false. Most people
implicitly believe something like this about their opinions. And
they're justified in doing so with opinions about things that don't
change much, like human nature. But you can't trust your opinions
in the same way about things that change, which could include
practically everything else.When experts are wrong, it's often because they're experts on an
earlier version of the world.Is it possible to avoid that? Can you protect yourself against
obsolete beliefs? To some extent, yes. I spent almost a decade
investing in early stage startups, and curiously enough protecting
yourself against obsolete beliefs is exactly what you have to do
to succeed as a startup investor. Most really good startup ideas
look like bad ideas at first, and many of those look bad specifically
because some change in the world just switched them from bad to
good. I spent a lot of time learning to recognize such ideas, and
the techniques I used may be applicable to ideas in general.The first step is to have an explicit belief in change. People who
fall victim to a monotonically increasing confidence in their
opinions are implicitly concluding the world is static. If you
consciously remind yourself it isn't, you start to look for change.Where should one look for it? Beyond the moderately useful
generalization that human nature doesn't change much, the unfortunate
fact is that change is hard to predict. This is largely a tautology
but worth remembering all the same: change that matters usually
comes from an unforeseen quarter.So I don't even try to predict it. When I get asked in interviews
to predict the future, I always have to struggle to come up with
something plausible-sounding on the fly, like a student who hasn't
prepared for an exam.
[1]
But it's not out of laziness that I haven't
prepared. It seems to me that beliefs about the future are so
rarely correct that they usually aren't worth the extra rigidity
they impose, and that the best strategy is simply to be aggressively
open-minded. Instead of trying to point yourself in the right
direction, admit you have no idea what the right direction is, and
try instead to be super sensitive to the winds of change.It's ok to have working hypotheses, even though they may constrain
you a bit, because they also motivate you. It's exciting to chase
things and exciting to try to guess answers. But you have to be
disciplined about not letting your hypotheses harden into anything
more.
[2]I believe this passive m.o. works not just for evaluating new ideas
but also for having them. The way to come up with new ideas is not
to try explicitly to, but to try to solve problems and simply not
discount weird hunches you have in the process.The winds of change originate in the unconscious minds of domain
experts. If you're sufficiently expert in a field, any weird idea
or apparently irrelevant question that occurs to you is ipso facto
worth exploring.
[3]
Within Y Combinator, when an idea is described
as crazy, it's a compliment; in fact, on average probably a
higher compliment than when an idea is described as good.Startup investors have extraordinary incentives for correcting
obsolete beliefs. If they can realize before other investors that
some apparently unpromising startup isn't, they can make a huge
amount of money. But the incentives are more than just financial.
Investors' opinions are explicitly tested: startups come to them
and they have to say yes or no, and then, fairly quickly, they learn
whether they guessed right. The investors who say no to a Google
(and there were several) will remember it for the rest of their
lives.Anyone who must in some sense bet on ideas rather than merely
commenting on them has similar incentives. Which means anyone who
wants such incentives can have them, by turning their comments into
bets: if you write about a topic in some fairly durable and public
form, you'll find you worry much more about getting things right
than most people would in a casual conversation.
[4]Another trick I've found to protect myself against obsolete beliefs
is to focus initially on people rather than ideas. Though the nature
of future discoveries is hard to predict, I've found I can predict
quite well what sort of people will make them. Good new ideas come
from earnest, energetic, independent-minded people.Betting on people over ideas saved me countless times as an investor.
We thought Airbnb was a bad idea, for example. But we could tell
the founders were earnest, energetic, and independent-minded.
(Indeed, almost pathologically so.) So we suspended disbelief and
funded them.This too seems a technique that should be generally applicable.
Surround yourself with the sort of people new ideas come from. If
you want to notice quickly when your beliefs become obsolete, you
can't do better than to be friends with the people whose discoveries
will make them so.It's hard enough already not to become the prisoner of your own
expertise, but it will only get harder, because change is accelerating.
That's not a recent trend; change has been accelerating since the
paleolithic era. Ideas beget ideas. I don't expect that to change.
But I could be wrong.
Notes

[1]
My usual trick is to talk about aspects of the present that
most people haven't noticed yet.[2]
Especially if they become well enough known that people start
to identify them with you. You have to be extra skeptical about
things you want to believe, and once a hypothesis starts to be
identified with you, it will almost certainly start to be in that
category.[3]
In practice "sufficiently expert" doesn't require one to be
recognized as an expert, which is a trailing indicator in any
case. In many fields a year of focused work plus caring a lot would
be enough.[4]
Though they are public and persist indefinitely, comments on
e.g. forums and places like Twitter seem empirically to work like
casual conversation. The threshold may be whether what you write
has a title.
Thanks to Sam Altman, Patrick Collison, and Robert Morris
for reading drafts of this.

October 2015

When I talk to a startup that's been operating for more than 8 or
9 months, the first thing I want to know is almost always the same.
Assuming their expenses remain constant and their revenue growth
is what it has been over the last several months, do they make it to
profitability on the money they have left? Or to put it more
dramatically, by default do they live or die?

The startling thing is how often the founders themselves don't know.
Half the founders I talk to don't know whether they're default alive
or default dead.

If you're among that number, Trevor Blackwell has made a handy
calculator you can use to find out.

The reason I want to know first whether a startup is default alive
or default dead is that the rest of the conversation depends on the
answer. If the company is default alive, we can talk about ambitious
new things they could do. If it's default dead, we probably need
to talk about how to save it. We know the current trajectory ends
badly. How can they get off that trajectory?

Why do so few founders know whether they're default alive or default
dead? Mainly, I think, because they're not used to asking that.
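The check itself is just arithmetic. Here is a minimal sketch of it, a rough
stand-in for the calculator mentioned above rather than the calculator itself.
It assumes what the question assumes: expenses stay constant and revenue keeps
compounding at its recent monthly growth rate. The numbers in the example are
invented.

    # Default alive or default dead: does revenue catch up with expenses
    # before the cash runs out? A minimal sketch, not Trevor Blackwell's
    # calculator; assumes flat expenses and steady monthly revenue growth.

    def default_alive(cash, monthly_expenses, monthly_revenue,
                      monthly_growth, max_months=120):
        """Return (alive, months): months to profitability if alive,
        months until the money runs out if not."""
        months = 0
        while cash > 0 and months < max_months:
            if monthly_revenue >= monthly_expenses:
                return True, months        # profitable before the cash ran out
            cash += monthly_revenue - monthly_expenses
            monthly_revenue *= 1 + monthly_growth
            months += 1
        return False, months               # ran out of money first

    # Invented numbers: $400k in the bank, $100k/month of expenses,
    # $20k/month of revenue. At 15% monthly growth the startup is
    # default dead; at 30% it reaches profitability first.
    print(default_alive(400_000, 100_000, 20_000, 0.15))   # (False, 6)
    print(default_alive(400_000, 100_000, 20_000, 0.30))   # (True, 7)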
It's not a question that makes sense to ask early on, any more than
it makes sense to ask a 3 year old how he plans to support
himself. But as the company grows older, the question switches from
meaningless to critical. That kind of switch often takes people
by surprise.I propose the following solution: instead of starting to ask too
late whether you're default alive or default dead, start asking too
early. It's hard to say precisely when the question switches
polarity. But it's probably not that dangerous to start worrying
too early that you're default dead, whereas it's very dangerous to
start worrying too late.The reason is a phenomenon I wrote about earlier: the
fatal pinch.
The fatal pinch is default dead + slow growth + not enough
time to fix it. And the way founders end up in it is by not realizing
that's where they're headed.There is another reason founders don't ask themselves whether they're
default alive or default dead: they assume it will be easy to raise
more money. But that assumption is often false, and worse still, the
more you depend on it, the falser it becomes.Maybe it will help to separate facts from hopes. Instead of thinking
of the future with vague optimism, explicitly separate the components.
Say "We're default dead, but we're counting on investors to save
us." Maybe as you say that, it will set off the same alarms in your
head that it does in mine. And if you set off the alarms sufficiently
early, you may be able to avoid the fatal pinch.It would be safe to be default dead if you could count on investors
saving you. As a rule their interest is a function of
growth. If you have steep revenue growth, say over 5x a year, you
can start to count on investors being interested even if you're not
profitable.
[1]
But investors are so fickle that you can never
do more than start to count on them. Sometimes something about your
business will spook investors even if your growth is great. So no
matter how good your growth is, you can never safely treat fundraising
as more than a plan A. You should always have a plan B as well: you
should know (as in write down) precisely what you'll need to do to
survive if you can't raise more money, and precisely when you'll
have to switch to plan B if plan A isn't working.In any case, growing fast versus operating cheaply is far from the
sharp dichotomy many founders assume it to be. In practice there
is surprisingly little connection between how much a startup spends
and how fast it grows. When a startup grows fast, it's usually
because the product hits a nerve, in the sense of hitting some big
need straight on. When a startup spends a lot, it's usually because
the product is expensive to develop or sell, or simply because
they're wasteful.If you're paying attention, you'll be asking at this point not just
how to avoid the fatal pinch, but how to avoid being default dead.
That one is easy: don't hire too fast. Hiring too fast is by far
the biggest killer of startups that raise money.
[2]Founders tell themselves they need to hire in order to grow. But
most err on the side of overestimating this need rather than
underestimating it. Why? Partly because there's so much work to
do. Naive founders think that if they can just hire enough
people, it will all get done. Partly because successful startups have
lots of employees, so it seems like that's what one does in order
to be successful. In fact the large staffs of successful startups
are probably more the effect of growth than the cause. And
partly because when founders have slow growth they don't want to
face what is usually the real reason: the product is not appealing
enough.Plus founders who've just raised money are often encouraged to
overhire by the VCs who funded them. Kill-or-cure strategies are
optimal for VCs because they're protected by the portfolio effect.
VCs want to blow you up, in one sense of the phrase or the other.
But as a founder your incentives are different. You want above all
to survive.
[3]Here's a common way startups die. They make something moderately
appealing and have decent initial growth. They raise their first
round fairly easily, because the founders seem smart and the idea
sounds plausible. But because the product is only moderately
appealing, growth is ok but not great. The founders convince
themselves that hiring a bunch of people is the way to boost growth.
Their investors agree. But (because the product is only moderately
appealing) the growth never comes. Now they're rapidly running out
of runway. They hope further investment will save them. But because
they have high expenses and slow growth, they're now unappealing
to investors. They're unable to raise more, and the company dies.What the company should have done is address the fundamental problem:
that the product is only moderately appealing. Hiring people is
rarely the way to fix that. More often than not it makes it harder.
At this early stage, the product needs to evolve more than to be
"built out," and that's usually easier with fewer people.
[4]Asking whether you're default alive or default dead may save you
from this. Maybe the alarm bells it sets off will counteract the
forces that push you to overhire. Instead you'll be compelled to
seek growth in other ways. For example, by doing
things that don't scale, or by redesigning the product in the
way only founders can.
And for many if not most startups, these paths to growth will be
the ones that actually work.

Airbnb waited 4 months after raising money at the end of Y Combinator
before they hired their first employee. In the meantime the founders
were terribly overworked. But they were overworked evolving Airbnb
into the astonishingly successful organism it is now.

Notes

[1]
Steep usage growth will also interest investors. Revenue
will ultimately be a constant multiple of usage, so x% usage growth
predicts x% revenue growth. But in practice investors discount
merely predicted revenue, so if you're measuring usage you need a
higher growth rate to impress investors.[2]
Startups that don't raise money are saved from hiring too
fast because they can't afford to. But that doesn't mean you should
avoid raising money in order to avoid this problem, any more than
that total abstinence is the only way to avoid becoming an alcoholic.[3]
I would not be surprised if VCs' tendency to push founders
to overhire is not even in their own interest. They don't know how
many of the companies that get killed by overspending might have
done well if they'd survived. My guess is a significant number.[4]
After reading a draft, Sam Altman wrote:

"I think you should make the hiring point more strongly. I think
it's roughly correct to say that YC's most successful companies
have never been the fastest to hire, and one of the marks of a great
founder is being able to resist this urge."Paul Buchheit adds:"A related problem that I see a lot is premature scalingβfounders
take a small business that isn't really working (bad unit economics,
typically) and then scale it up because they want impressive growth
numbers. This is similar to over-hiring in that it makes the business
much harder to fix once it's big, plus they are bleeding cash really
fast."
Thanks to Sam Altman, Paul Buchheit, Joe Gebbia, Jessica Livingston,
and Geoff Ralston for reading drafts of this.

January 2003

(This article is derived from a keynote talk at the fall 2002 meeting
of NEPLS.)

Visitors to this country are often surprised to find that
Americans like to begin a conversation by asking "what do you do?"
I've never liked this question. I've rarely had a
neat answer to it. But I think I have finally solved the problem.
Now, when someone asks me what I do, I look them straight
in the eye and say "I'm designing a
new dialect of Lisp."
I recommend this answer to anyone who doesn't like being asked what
they do. The conversation will turn immediately to other topics.I don't consider myself to be doing research on programming languages.
I'm just designing one, in the same way that someone might design
a building or a chair or a new typeface.
I'm not trying to discover anything new. I just want
to make a language that will be good to program in. In some ways,
this assumption makes life a lot easier.The difference between design and research seems to be a question
of new versus good. Design doesn't have to be new, but it has to
be good. Research doesn't have to be good, but it has to be new.
I think these two paths converge at the top: the best design
surpasses its predecessors by using new ideas, and the best research
solves problems that are not only new, but actually worth solving.
So ultimately we're aiming for the same destination, just approaching
it from different directions.What I'm going to talk about today is what your target looks like
from the back. What do you do differently when you treat
programming languages as a design problem instead of a research topic?The biggest difference is that you focus more on the user.
Design begins by asking, who is this
for and what do they need from it? A good architect,
for example, does not begin by creating a design that he then
imposes on the users, but by studying the intended users and figuring
out what they need.Notice I said "what they need," not "what they want." I don't mean
to give the impression that working as a designer means working as
a sort of short-order cook, making whatever the client tells you
to. This varies from field to field in the arts, but
I don't think there is any field in which the best work is done by
the people who just make exactly what the customers tell them to.The customer is always right in
the sense that the measure of good design is how well it works
for the user. If you make a novel that bores everyone, or a chair
that's horribly uncomfortable to sit in, then you've done a bad
job, period. It's no defense to say that the novel or the chair
is designed according to the most advanced theoretical principles.And yet, making what works for the user doesn't mean simply making
what the user tells you to. Users don't know what all the choices
are, and are often mistaken about what they really want.The answer to the paradox, I think, is that you have to design
for the user, but you have to design what the user needs, not simply
what he says he wants.
It's much like being a doctor. You can't just treat a patient's
symptoms. When a patient tells you his symptoms, you have to figure
out what's actually wrong with him, and treat that.This focus on the user is a kind of axiom from which most of the
practice of good design can be derived, and around which most design
issues center.If good design must do what the user needs, who is the user? When
I say that design must be for users, I don't mean to imply that good
design aims at some kind of
lowest common denominator. You can pick any group of users you
want. If you're designing a tool, for example, you can design it
for anyone from beginners to experts, and what's good design
for one group might be bad for another. The point
is, you have to pick some group of users. I don't think you can
even talk about good or bad design except with
reference to some intended user.You're most likely to get good design if the intended users include
the designer himself. When you design something
for a group that doesn't include you, it tends to be for people
you consider to be less sophisticated than you, not more sophisticated.That's a problem, because looking down on the user, however benevolently,
seems inevitably to corrupt the designer.
I suspect that very few housing
projects in the US were designed by architects who expected to live
in them. You can see the same thing
in programming languages. C, Lisp, and Smalltalk were created for
their own designers to use. Cobol, Ada, and Java were created
for other people to use.If you think you're designing something for idiots, the odds are
that you're not designing something good, even for idiots.
Even if you're designing something for the most sophisticated
users, though, you're still designing for humans. It's different
in research. In math you
don't choose abstractions because they're
easy for humans to understand; you choose whichever make the
proofs shorter. I think this is true for the sciences generally.
Scientific ideas are not meant to be ergonomic.Over in the arts, things are very different. Design is
all about people. The human body is a strange
thing, but when you're designing a chair,
that's what you're designing for, and there's no way around it.
All the arts have to pander to the interests and limitations
of humans. In painting, for example, all other things being
equal a painting with people in it will be more interesting than
one without. It is not merely an accident of history that
the great paintings of the Renaissance are all full of people.
If they hadn't been, painting as a medium wouldn't have the prestige
that it does.Like it or not, programming languages are also for people,
and I suspect the human brain is just as lumpy and idiosyncratic
as the human body. Some ideas are easy for people to grasp
and some aren't. For example, we seem to have a very limited
capacity for dealing with detail. It's this fact that makes
programming languages a good idea in the first place; if we
could handle the detail, we could just program in machine
language.Remember, too, that languages are not
primarily a form for finished programs, but something that
programs have to be developed in. Anyone in the arts could
tell you that you might want different mediums for the
two situations. Marble, for example, is a nice, durable
medium for finished ideas, but a hopelessly inflexible one
for developing new ideas.A program, like a proof,
is a pruned version of a tree that in the past has had
false starts branching off all over it. So the test of
a language is not simply how clean the finished program looks
in it, but how clean the path to the finished program was.
A design choice that gives you elegant finished programs
may not give you an elegant design process. For example,
I've written a few macro-defining macros full of nested
backquotes that look now like little gems, but writing them
took hours of the ugliest trial and error, and frankly, I'm still
not entirely sure they're correct.We often act as if the test of a language were how good
finished programs look in it.
It seems so convincing when you see the same program
written in two languages, and one version is much shorter.
When you approach the problem from the direction of the
arts, you're less likely to depend on this sort of
test. You don't want to end up with a programming
language like marble.For example, it is a huge win in developing software to
have an interactive toplevel, what in Lisp is called a
read-eval-print loop. And when you have one this has
real effects on the design of the language. It would not
work well for a language where you have to declare
variables before using them, for example. When you're
just typing expressions into the toplevel, you want to be
able to set x to some value and then start doing things
to x. You don't want to have to declare the type of x
first. You may dispute either of the premises, but if
a language has to have a toplevel to be convenient, and
mandatory type declarations are incompatible with a
toplevel, then no language that makes type declarations
mandatory could be convenient to program in.
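To make the shape of that argument concrete, here is a sketch of the kind of toplevel session it has in mind. Python's interactive prompt is used here as a stand-in (the essay's own context is Lisp, but the point is language-independent): you bind x and immediately start doing things to it, with no declarations in the way.

    >>> x = 2 + 2        # no declaration or type annotation; just bind x
    >>> x * 10           # and immediately start doing things to it
    40
    >>> x = "hello"      # rebind x to a value of a different type mid-session
    >>> x.upper()
    'HELLO'

If every one of those bindings had to be preceded by a declaration, the back-and-forth rhythm that makes a toplevel convenient would be lost.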
In practice, to get good design you have to get close, and stay close, to your users. You have to calibrate your ideas on actual
users constantly, especially in the beginning. One of the reasons
Jane Austen's novels are so good is that she read them out loud to
her family. That's why she never sinks into self-indulgently arty
descriptions of landscapes,
or pretentious philosophizing. (The philosophy's there, but it's
woven into the story instead of being pasted onto it like a label.)
If you open an average "literary" novel and imagine reading it out loud
to your friends as something you'd written, you'll feel all too
keenly what an imposition that kind of thing is upon the reader.In the software world, this idea is known as Worse is Better.
Actually, there are several ideas mixed together in the concept of
Worse is Better, which is why people are still arguing about
whether worse
is actually better or not. But one of the main ideas in that
mix is that if you're building something new, you should get a
prototype in front of users as soon as possible.The alternative approach might be called the Hail Mary strategy.
Instead of getting a prototype out quickly and gradually refining
it, you try to create the complete, finished product in one long
touchdown pass. As far as I know, this is a
recipe for disaster. Countless startups destroyed themselves this
way during the Internet bubble. I've never heard of a case
where it worked.What people outside the software world may not realize is that
Worse is Better is found throughout the arts.
In drawing, for example, the idea was discovered during the
Renaissance. Now almost every drawing teacher will tell you that
the right way to get an accurate drawing is not to
work your way slowly around the contour of an object, because errors will
accumulate and you'll find at the end that the lines don't meet.
Instead you should draw a few quick lines in roughly the right place,
and then gradually refine this initial sketch.In most fields, prototypes
have traditionally been made out of different materials.
Typefaces to be cut in metal were initially designed
with a brush on paper. Statues to be cast in bronze
were modelled in wax. Patterns to be embroidered on tapestries
were drawn on paper with ink wash. Buildings to be
constructed from stone were tested on a smaller scale in wood.What made oil paint so exciting, when it
first became popular in the fifteenth century, was that you
could actually make the finished work from the prototype.
You could make a preliminary drawing if you wanted to, but you
weren't held to it; you could work out all the details, and
even make major changes, as you finished the painting.You can do this in software too. A prototype doesn't have to
be just a model; you can refine it into the finished product.
I think you should always do this when you can. It lets you
take advantage of new insights you have along the way. But
perhaps even more important, it's good for morale.Morale is key in design. I'm surprised people
don't talk more about it. One of my first
drawing teachers told me: if you're bored when you're
drawing something, the drawing will look boring.
For example, suppose you have to draw a building, and you
decide to draw each brick individually. You can do this
if you want, but if you get bored halfway through and start
making the bricks mechanically instead of observing each one,
the drawing will look worse than if you had merely suggested
the bricks.Building something by gradually refining a prototype is good
for morale because it keeps you engaged. In software, my
rule is: always have working code. If you're writing
something that you'll be able to test in an hour, then you
have the prospect of an immediate reward to motivate you.
The same is true in the arts, and particularly in oil painting.
Most painters start with a blurry sketch and gradually
refine it.
If you work this way, then in principle
you never have to end the day with something that actually
looks unfinished. Indeed, there is even a saying among
painters: "A painting is never finished, you just stop
working on it." This idea will be familiar to anyone who
has worked on software.Morale is another reason that it's hard to design something
for an unsophisticated user. It's hard to stay interested in
something you don't like yourself. To make something
good, you have to be thinking, "wow, this is really great,"
not "what a piece of shit; those fools will love it."Design means making things for humans. But it's not just the
user who's human. The designer is human too.Notice all this time I've been talking about "the designer."
Design usually has to be under the control of a single person to
be any good. And yet it seems to be possible for several people
to collaborate on a research project. This seems to
me one of the most interesting differences between research and
design.There have been famous instances of collaboration in the arts,
but most of them seem to have been cases of molecular bonding rather
than nuclear fusion. In an opera it's common for one person to
write the libretto and another to write the music. And during the Renaissance,
journeymen from northern
Europe were often employed to do the landscapes in the
backgrounds of Italian paintings. But these aren't true collaborations.
They're more like examples of Robert Frost's
"good fences make good neighbors." You can stick instances
of good design together, but within each individual project,
one person has to be in control.I'm not saying that good design requires that one person think
of everything. There's nothing more valuable than the advice
of someone whose judgement you trust. But after the talking is
done, the decision about what to do has to rest with one person.Why is it that research can be done by collaborators and
design can't? This is an interesting question. I don't
know the answer. Perhaps,
if design and research converge, the best research is also
good design, and in fact can't be done by collaborators.
A lot of the most famous scientists seem to have worked alone.
But I don't know enough to say whether there
is a pattern here. It could be simply that many famous scientists
worked when collaboration was less common.Whatever the story is in the sciences, true collaboration
seems to be vanishingly rare in the arts. Design by committee is a
synonym for bad design. Why is that so? Is there some way to
beat this limitation?I'm inclined to think there isn't-- that good design requires
a dictator. One reason is that good design has to
be all of a piece. Design is not just for humans, but
for individual humans. If a design represents an idea that
fits in one person's head, then the idea will fit in the user's
head too. |
February 2021Before college the two main things I worked on, outside of school,
were writing and programming. I didn't write essays. I wrote what
beginning writers were supposed to write then, and probably still
are: short stories. My stories were awful. They had hardly any plot,
just characters with strong feelings, which I imagined made them
deep.The first programs I tried writing were on the IBM 1401 that our
school district used for what was then called "data processing."
This was in 9th grade, so I was 13 or 14. The school district's
1401 happened to be in the basement of our junior high school, and
my friend Rich Draves and I got permission to use it. It was like
a mini Bond villain's lair down there, with all these alien-looking
machines -- CPU, disk drives, printer, card reader -- sitting up
on a raised floor under bright fluorescent lights.The language we used was an early version of Fortran. You had to
type programs on punch cards, then stack them in the card reader
and press a button to load the program into memory and run it. The
result would ordinarily be to print something on the spectacularly
loud printer.I was puzzled by the 1401. I couldn't figure out what to do with
it. And in retrospect there's not much I could have done with it.
The only form of input to programs was data stored on punched cards,
and I didn't have any data stored on punched cards. The only other
option was to do things that didn't rely on any input, like calculate
approximations of pi, but I didn't know enough math to do anything
interesting of that type. So I'm not surprised I can't remember any
programs I wrote, because they can't have done much. My clearest
memory is of the moment I learned it was possible for programs not
to terminate, when one of mine didn't. On a machine without
time-sharing, this was a social as well as a technical error, as
the data center manager's expression made clear.With microcomputers, everything changed. Now you could have a
computer sitting right in front of you, on a desk, that could respond
to your keystrokes as it was running instead of just churning through
a stack of punch cards and then stopping.
[1]The first of my friends to get a microcomputer built it himself.
It was sold as a kit by Heathkit. I remember vividly how impressed
and envious I felt watching him sitting in front of it, typing
programs right into the computer.Computers were expensive in those days and it took me years of
nagging before I convinced my father to buy one, a TRS-80, in about
1980. The gold standard then was the Apple II, but a TRS-80 was
good enough. This was when I really started programming. I wrote
simple games, a program to predict how high my model rockets would
fly, and a word processor that my father used to write at least one
book. There was only room in memory for about 2 pages of text, so
he'd write 2 pages at a time and then print them out, but it was a
lot better than a typewriter.Though I liked programming, I didn't plan to study it in college.
In college I was going to study philosophy, which sounded much more
powerful. It seemed, to my naive high school self, to be the study
of the ultimate truths, compared to which the things studied in
other fields would be mere domain knowledge. What I discovered when
I got to college was that the other fields took up so much of the
space of ideas that there wasn't much left for these supposed
ultimate truths. All that seemed left for philosophy were edge cases
that people in other fields felt could safely be ignored.I couldn't have put this into words when I was 18. All I knew at
the time was that I kept taking philosophy courses and they kept
being boring. So I decided to switch to AI.AI was in the air in the mid 1980s, but there were two things
especially that made me want to work on it: a novel by Heinlein
called The Moon is a Harsh Mistress, which featured an intelligent
computer called Mike, and a PBS documentary that showed Terry
Winograd using SHRDLU. I haven't tried rereading The Moon is a Harsh
Mistress, so I don't know how well it has aged, but when I read it
I was drawn entirely into its world. It seemed only a matter of
time before we'd have Mike, and when I saw Winograd using SHRDLU,
it seemed like that time would be a few years at most. All you had
to do was teach SHRDLU more words.There weren't any classes in AI at Cornell then, not even graduate
classes, so I started trying to teach myself. Which meant learning
Lisp, since in those days Lisp was regarded as the language of AI.
The commonly used programming languages then were pretty primitive,
and programmers' ideas correspondingly so. The default language at
Cornell was a Pascal-like language called PL/I, and the situation
was similar elsewhere. Learning Lisp expanded my concept of a program
so fast that it was years before I started to have a sense of where
the new limits were. This was more like it; this was what I had
expected college to do. It wasn't happening in a class, like it was
supposed to, but that was ok. For the next couple years I was on a
roll. I knew what I was going to do.For my undergraduate thesis, I reverse-engineered SHRDLU. My God
did I love working on that program. It was a pleasing bit of code,
but what made it even more exciting was my belief -- hard to imagine
now, but not unique in 1985 -- that it was already climbing the
lower slopes of intelligence.I had gotten into a program at Cornell that didn't make you choose
a major. You could take whatever classes you liked, and choose
whatever you liked to put on your degree. I of course chose "Artificial
Intelligence." When I got the actual physical diploma, I was dismayed
to find that the quotes had been included, which made them read as
scare-quotes. At the time this bothered me, but now it seems amusingly
accurate, for reasons I was about to discover.I applied to 3 grad schools: MIT and Yale, which were renowned for
AI at the time, and Harvard, which I'd visited because Rich Draves
went there, and was also home to Bill Woods, who'd invented the
type of parser I used in my SHRDLU clone. Only Harvard accepted me,
so that was where I went.I don't remember the moment it happened, or if there even was a
specific moment, but during the first year of grad school I realized
that AI, as practiced at the time, was a hoax. By which I mean the
sort of AI in which a program that's told "the dog is sitting on
the chair" translates this into some formal representation and adds
it to the list of things it knows.What these programs really showed was that there's a subset of
natural language that's a formal language. But a very proper subset.
It was clear that there was an unbridgeable gap between what they
could do and actually understanding natural language. It was not,
in fact, simply a matter of teaching SHRDLU more words. That whole
way of doing AI, with explicit data structures representing concepts,
was not going to work. Its brokenness did, as so often happens,
generate a lot of opportunities to write papers about various
band-aids that could be applied to it, but it was never going to
get us Mike.So I looked around to see what I could salvage from the wreckage
of my plans, and there was Lisp. I knew from experience that Lisp
was interesting for its own sake and not just for its association
with AI, even though that was the main reason people cared about
it at the time. So I decided to focus on Lisp. In fact, I decided
to write a book about Lisp hacking. It's scary to think how little
I knew about Lisp hacking when I started writing that book. But
there's nothing like writing a book about something to help you
learn it. The book, On Lisp, wasn't published till 1993, but I wrote
much of it in grad school.Computer Science is an uneasy alliance between two halves, theory
and systems. The theory people prove things, and the systems people
build things. I wanted to build things. I had plenty of respect for
theory -- indeed, a sneaking suspicion that it was the more admirable
of the two halves -- but building things seemed so much more exciting.The problem with systems work, though, was that it didn't last.
Any program you wrote today, no matter how good, would be obsolete
in a couple decades at best. People might mention your software in
footnotes, but no one would actually use it. And indeed, it would
seem very feeble work. Only people with a sense of the history of
the field would even realize that, in its time, it had been good.There were some surplus Xerox Dandelions floating around the computer
lab at one point. Anyone who wanted one to play around with could
have one. I was briefly tempted, but they were so slow by present
standards; what was the point? No one else wanted one either, so
off they went. That was what happened to systems work.I wanted not just to build things, but to build things that would
last.In this dissatisfied state I went in 1988 to visit Rich Draves at
CMU, where he was in grad school. One day I went to visit the
Carnegie Institute, where I'd spent a lot of time as a kid. While
looking at a painting there I realized something that might seem
obvious, but was a big surprise to me. There, right on the wall,
was something you could make that would last. Paintings didn't
become obsolete. Some of the best ones were hundreds of years old.And moreover this was something you could make a living doing. Not
as easily as you could by writing software, of course, but I thought
if you were really industrious and lived really cheaply, it had to
be possible to make enough to survive. And as an artist you could
be truly independent. You wouldn't have a boss, or even need to get
research funding.I had always liked looking at paintings. Could I make them? I had
no idea. I'd never imagined it was even possible. I knew intellectually
that people made art -- that it didn't just appear spontaneously
-- but it was as if the people who made it were a different species.
They either lived long ago or were mysterious geniuses doing strange
things in profiles in Life magazine. The idea of actually being
able to make art, to put that verb before that noun, seemed almost
miraculous.That fall I started taking art classes at Harvard. Grad students
could take classes in any department, and my advisor, Tom Cheatham,
was very easy going. If he even knew about the strange classes I
was taking, he never said anything.So now I was in a PhD program in computer science, yet planning to
be an artist, yet also genuinely in love with Lisp hacking and
working away at On Lisp. In other words, like many a grad student,
I was working energetically on multiple projects that were not my
thesis.I didn't see a way out of this situation. I didn't want to drop out
of grad school, but how else was I going to get out? I remember
when my friend Robert Morris got kicked out of Cornell for writing
the internet worm of 1988, I was envious that he'd found such a
spectacular way to get out of grad school.Then one day in April 1990 a crack appeared in the wall. I ran into
professor Cheatham and he asked if I was far enough along to graduate
that June. I didn't have a word of my dissertation written, but in
what must have been the quickest bit of thinking in my life, I
decided to take a shot at writing one in the 5 weeks or so that
remained before the deadline, reusing parts of On Lisp where I
could, and I was able to respond, with no perceptible delay "Yes,
I think so. I'll give you something to read in a few days."I picked applications of continuations as the topic. In retrospect
I should have written about macros and embedded languages. There's
a whole world there that's barely been explored. But all I wanted
was to get out of grad school, and my rapidly written dissertation
sufficed, just barely.Meanwhile I was applying to art schools. I applied to two: RISD in
the US, and the Accademia di Belle Arti in Florence, which, because
it was the oldest art school, I imagined would be good. RISD accepted
me, and I never heard back from the Accademia, so off to Providence
I went.I'd applied for the BFA program at RISD, which meant in effect that
I had to go to college again. This was not as strange as it sounds,
because I was only 25, and art schools are full of people of different
ages. RISD counted me as a transfer sophomore and said I had to do
the foundation that summer. The foundation means the classes that
everyone has to take in fundamental subjects like drawing, color,
and design.Toward the end of the summer I got a big surprise: a letter from
the Accademia, which had been delayed because they'd sent it to
Cambridge England instead of Cambridge Massachusetts, inviting me
to take the entrance exam in Florence that fall. This was now only
weeks away. My nice landlady let me leave my stuff in her attic. I
had some money saved from consulting work I'd done in grad school;
there was probably enough to last a year if I lived cheaply. Now
all I had to do was learn Italian.Only stranieri (foreigners) had to take this entrance exam. In
retrospect it may well have been a way of excluding them, because
there were so many stranieri attracted by the idea of studying
art in Florence that the Italian students would otherwise have been
outnumbered. I was in decent shape at painting and drawing from the
RISD foundation that summer, but I still don't know how I managed
to pass the written exam. I remember that I answered the essay
question by writing about Cezanne, and that I cranked up the
intellectual level as high as I could to make the most of my limited
vocabulary.
[2]I'm only up to age 25 and already there are such conspicuous patterns.
Here I was, yet again about to attend some august institution in
the hopes of learning about some prestigious subject, and yet again
about to be disappointed. The students and faculty in the painting
department at the Accademia were the nicest people you could imagine,
but they had long since arrived at an arrangement whereby the
students wouldn't require the faculty to teach anything, and in
return the faculty wouldn't require the students to learn anything.
And at the same time all involved would adhere outwardly to the
conventions of a 19th century atelier. We actually had one of those
little stoves, fed with kindling, that you see in 19th century
studio paintings, and a nude model sitting as close to it as possible
without getting burned. Except hardly anyone else painted her besides
me. The rest of the students spent their time chatting or occasionally
trying to imitate things they'd seen in American art magazines.Our model turned out to live just down the street from me. She made
a living from a combination of modelling and making fakes for a
local antique dealer. She'd copy an obscure old painting out of a
book, and then he'd take the copy and maltreat it to make it look
old.
[3]While I was a student at the Accademia I started painting still
lives in my bedroom at night. These paintings were tiny, because
the room was, and because I painted them on leftover scraps of
canvas, which was all I could afford at the time. Painting still
lives is different from painting people, because the subject, as
its name suggests, can't move. People can't sit for more than about
15 minutes at a time, and when they do they don't sit very still.
So the traditional m.o. for painting people is to know how to paint
a generic person, which you then modify to match the specific person
you're painting. Whereas a still life you can, if you want, copy
pixel by pixel from what you're seeing. You don't want to stop
there, of course, or you get merely photographic accuracy, and what
makes a still life interesting is that it's been through a head.
You want to emphasize the visual cues that tell you, for example,
that the reason the color changes suddenly at a certain point is
that it's the edge of an object. By subtly emphasizing such things
you can make paintings that are more realistic than photographs not
just in some metaphorical sense, but in the strict information-theoretic
sense.
[4]I liked painting still lives because I was curious about what I was
seeing. In everyday life, we aren't consciously aware of much we're
seeing. Most visual perception is handled by low-level processes
that merely tell your brain "that's a water droplet" without telling
you details like where the lightest and darkest points are, or
"that's a bush" without telling you the shape and position of every
leaf. This is a feature of brains, not a bug. In everyday life it
would be distracting to notice every leaf on every bush. But when
you have to paint something, you have to look more closely, and
when you do there's a lot to see. You can still be noticing new
things after days of trying to paint something people usually take
for granted, just as you can after
days of trying to write an essay about something people usually
take for granted.This is not the only way to paint. I'm not 100% sure it's even a
good way to paint. But it seemed a good enough bet to be worth
trying.Our teacher, professor Ulivi, was a nice guy. He could see I worked
hard, and gave me a good grade, which he wrote down in a sort of
passport each student had. But the Accademia wasn't teaching me
anything except Italian, and my money was running out, so at the
end of the first year I went back to the US.I wanted to go back to RISD, but I was now broke and RISD was very
expensive, so I decided to get a job for a year and then return to
RISD the next fall. I got one at a company called Interleaf, which
made software for creating documents. You mean like Microsoft Word?
Exactly. That was how I learned that low end software tends to eat
high end software. But Interleaf still had a few years to live yet.
[5]Interleaf had done something pretty bold. Inspired by Emacs, they'd
added a scripting language, and even made the scripting language a
dialect of Lisp. Now they wanted a Lisp hacker to write things in
it. This was the closest thing I've had to a normal job, and I
hereby apologize to my boss and coworkers, because I was a bad
employee. Their Lisp was the thinnest icing on a giant C cake, and
since I didn't know C and didn't want to learn it, I never understood
most of the software. Plus I was terribly irresponsible. This was
back when a programming job meant showing up every day during certain
working hours. That seemed unnatural to me, and on this point the
rest of the world is coming around to my way of thinking, but at
the time it caused a lot of friction. Toward the end of the year I
spent much of my time surreptitiously working on On Lisp, which I
had by this time gotten a contract to publish.The good part was that I got paid huge amounts of money, especially
by art student standards. In Florence, after paying my part of the
rent, my budget for everything else had been $7 a day. Now I was
getting paid more than 4 times that every hour, even when I was
just sitting in a meeting. By living cheaply I not only managed to
save enough to go back to RISD, but also paid off my college loans.I learned some useful things at Interleaf, though they were mostly
about what not to do. I learned that it's better for technology
companies to be run by product people than sales people (though
sales is a real skill and people who are good at it are really good
at it), that it leads to bugs when code is edited by too many people,
that cheap office space is no bargain if it's depressing, that
planned meetings are inferior to corridor conversations, that big,
bureaucratic customers are a dangerous source of money, and that
there's not much overlap between conventional office hours and the
optimal time for hacking, or conventional offices and the optimal
place for it.But the most important thing I learned, and which I used in both
Viaweb and Y Combinator, is that the low end eats the high end:
that it's good to be the "entry level" option, even though that
will be less prestigious, because if you're not, someone else will
be, and will squash you against the ceiling. Which in turn means
that prestige is a danger sign.When I left to go back to RISD the next fall, I arranged to do
freelance work for the group that did projects for customers, and
this was how I survived for the next several years. When I came
back to visit for a project later on, someone told me about a new
thing called HTML, which was, as he described it, a derivative of
SGML. Markup language enthusiasts were an occupational hazard at
Interleaf and I ignored him, but this HTML thing later became a big
part of my life.In the fall of 1992 I moved back to Providence to continue at RISD.
The foundation had merely been intro stuff, and the Accademia had
been a (very civilized) joke. Now I was going to see what real art
school was like. But alas it was more like the Accademia than not.
Better organized, certainly, and a lot more expensive, but it was
now becoming clear that art school did not bear the same relationship
to art that medical school bore to medicine. At least not the
painting department. The textile department, which my next door
neighbor belonged to, seemed to be pretty rigorous. No doubt
illustration and architecture were too. But painting was post-rigorous.
Painting students were supposed to express themselves, which to the
more worldly ones meant to try to cook up some sort of distinctive
signature style.A signature style is the visual equivalent of what in show business
is known as a "schtick": something that immediately identifies the
work as yours and no one else's. For example, when you see a painting
that looks like a certain kind of cartoon, you know it's by Roy
Lichtenstein. So if you see a big painting of this type hanging in
the apartment of a hedge fund manager, you know he paid millions
of dollars for it. That's not always why artists have a signature
style, but it's usually why buyers pay a lot for such work.
[6]There were plenty of earnest students too: kids who "could draw"
in high school, and now had come to what was supposed to be the
best art school in the country, to learn to draw even better. They
tended to be confused and demoralized by what they found at RISD,
but they kept going, because painting was what they did. I was not
one of the kids who could draw in high school, but at RISD I was
definitely closer to their tribe than the tribe of signature style
seekers.I learned a lot in the color class I took at RISD, but otherwise I
was basically teaching myself to paint, and I could do that for
free. So in 1993 I dropped out. I hung around Providence for a bit,
and then my college friend Nancy Parmet did me a big favor. A
rent-controlled apartment in a building her mother owned in New
York was becoming vacant. Did I want it? It wasn't much more than
my current place, and New York was supposed to be where the artists
were. So yes, I wanted it!
[7]Asterix comics begin by zooming in on a tiny corner of Roman Gaul
that turns out not to be controlled by the Romans. You can do
something similar on a map of New York City: if you zoom in on the
Upper East Side, there's a tiny corner that's not rich, or at least
wasn't in 1993. It's called Yorkville, and that was my new home.
Now I was a New York artist -- in the strictly technical sense of
making paintings and living in New York.I was nervous about money, because I could sense that Interleaf was
on the way down. Freelance Lisp hacking work was very rare, and I
didn't want to have to program in another language, which in those
days would have meant C++ if I was lucky. So with my unerring nose
for financial opportunity, I decided to write another book on Lisp.
This would be a popular book, the sort of book that could be used
as a textbook. I imagined myself living frugally off the royalties
and spending all my time painting. (The painting on the cover of
this book, ANSI Common Lisp, is one that I painted around this
time.)The best thing about New York for me was the presence of Idelle and
Julian Weber. Idelle Weber was a painter, one of the early
photorealists, and I'd taken her painting class at Harvard. I've
never known a teacher more beloved by her students. Large numbers
of former students kept in touch with her, including me. After I
moved to New York I became her de facto studio assistant.She liked to paint on big, square canvases, 4 to 5 feet on a side.
One day in late 1994 as I was stretching one of these monsters there
was something on the radio about a famous fund manager. He wasn't
that much older than me, and was super rich. The thought suddenly
occurred to me: why don't I become rich? Then I'll be able to work
on whatever I want.Meanwhile I'd been hearing more and more about this new thing called
the World Wide Web. Robert Morris showed it to me when I visited
him in Cambridge, where he was now in grad school at Harvard. It
seemed to me that the web would be a big deal. I'd seen what graphical
user interfaces had done for the popularity of microcomputers. It
seemed like the web would do the same for the internet.If I wanted to get rich, here was the next train leaving the station.
I was right about that part. What I got wrong was the idea. I decided
we should start a company to put art galleries online. I can't
honestly say, after reading so many Y Combinator applications, that
this was the worst startup idea ever, but it was up there. Art
galleries didn't want to be online, and still don't, not the fancy
ones. That's not how they sell. I wrote some software to generate
web sites for galleries, and Robert wrote some to resize images and
set up an http server to serve the pages. Then we tried to sign up
galleries. To call this a difficult sale would be an understatement.
It was difficult to give away. A few galleries let us make sites
for them for free, but none paid us.Then some online stores started to appear, and I realized that
except for the order buttons they were identical to the sites we'd
been generating for galleries. This impressive-sounding thing called
an "internet storefront" was something we already knew how to build.So in the summer of 1995, after I submitted the camera-ready copy
of ANSI Common Lisp to the publishers, we started trying to write
software to build online stores. At first this was going to be
normal desktop software, which in those days meant Windows software.
That was an alarming prospect, because neither of us knew how to
write Windows software or wanted to learn. We lived in the Unix
world. But we decided we'd at least try writing a prototype store
builder on Unix. Robert wrote a shopping cart, and I wrote a new
site generator for stores -- in Lisp, of course.We were working out of Robert's apartment in Cambridge. His roommate
was away for big chunks of time, during which I got to sleep in his
room. For some reason there was no bed frame or sheets, just a
mattress on the floor. One morning as I was lying on this mattress
I had an idea that made me sit up like a capital L. What if we ran
the software on the server, and let users control it by clicking
on links? Then we'd never have to write anything to run on users'
computers. We could generate the sites on the same server we'd serve
them from. Users wouldn't need anything more than a browser.This kind of software, known as a web app, is common now, but at
the time it wasn't clear that it was even possible. To find out,
we decided to try making a version of our store builder that you
could control through the browser. A couple days later, on August
12, we had one that worked. The UI was horrible, but it proved you
could build a whole store through the browser, without any client
software or typing anything into the command line on the server.Now we felt like we were really onto something. I had visions of a
whole new generation of software working this way. You wouldn't
need versions, or ports, or any of that crap. At Interleaf there
had been a whole group called Release Engineering that seemed to
be at least as big as the group that actually wrote the software.
Now you could just update the software right on the server.We started a new company we called Viaweb, after the fact that our
software worked via the web, and we got $10,000 in seed funding
from Idelle's husband Julian. In return for that and doing the
initial legal work and giving us business advice, we gave him 10%
of the company. Ten years later this deal became the model for Y
Combinator's. We knew founders needed something like this, because
we'd needed it ourselves.At this stage I had a negative net worth, because the thousand
dollars or so I had in the bank was more than counterbalanced by
what I owed the government in taxes. (Had I diligently set aside
the proper proportion of the money I'd made consulting for Interleaf?
No, I had not.) So although Robert had his graduate student stipend,
I needed that seed funding to live on.We originally hoped to launch in September, but we got more ambitious
about the software as we worked on it. Eventually we managed to
build a WYSIWYG site builder, in the sense that as you were creating
pages, they looked exactly like the static ones that would be
generated later, except that instead of leading to static pages,
the links all referred to closures stored in a hash table on the
server.
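The mechanism is easy to sketch. Here is a minimal illustration in Python, with invented names, rather than the Lisp Viaweb was actually written in: each link the editor emits points at a closure kept in a server-side hash table, so following the link runs code instead of fetching a static page.

    import uuid

    handlers = {}  # the server-side hash table: link id -> closure

    def make_link(closure):
        # Store a closure and return the URL path that will invoke it.
        link_id = uuid.uuid4().hex
        handlers[link_id] = closure
        return "/x/" + link_id

    def handle_click(path):
        # Called for each request on a generated link: look up the
        # stored closure and run it to produce the next page.
        return handlers[path.rsplit("/", 1)[-1]]()

    # For example, a link that, when followed, renders an editing page
    # for the particular page object captured by the closure.
    page = {"title": "My Store"}
    edit_url = make_link(lambda: "<h1>Editing " + page["title"] + "</h1>")

This is only a sketch of the idea, not the actual implementation; the point is that the editor's pages can look just like the eventual static ones while their links dispatch to code on the server.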
It helped to have studied art, because the main goal of an online store builder is to make users look legit, and the key to looking
legit is high production values. If you get page layouts and fonts
and colors right, you can make a guy running a store out of his
bedroom look more legit than a big company.(If you're curious why my site looks so old-fashioned, it's because
it's still made with this software. It may look clunky today, but
in 1996 it was the last word in slick.)In September, Robert rebelled. "We've been working on this for a
month," he said, "and it's still not done." This is funny in
retrospect, because he would still be working on it almost 3 years
later. But I decided it might be prudent to recruit more programmers,
and I asked Robert who else in grad school with him was really good.
He recommended Trevor Blackwell, which surprised me at first, because
at that point I knew Trevor mainly for his plan to reduce everything
in his life to a stack of notecards, which he carried around with
him. But Rtm was right, as usual. Trevor turned out to be a
frighteningly effective hacker.It was a lot of fun working with Robert and Trevor. They're the two
most independent-minded people
I know, and in completely different
ways. If you could see inside Rtm's brain it would look like a
colonial New England church, and if you could see inside Trevor's
it would look like the worst excesses of Austrian Rococo.We opened for business, with 6 stores, in January 1996. It was just
as well we waited a few months, because although we worried we were
late, we were actually almost fatally early. There was a lot of
talk in the press then about ecommerce, but not many people actually
wanted online stores.
[8]There were three main parts to the software: the editor, which
people used to build sites and which I wrote, the shopping cart,
which Robert wrote, and the manager, which kept track of orders and
statistics, and which Trevor wrote. In its time, the editor was one
of the best general-purpose site builders. I kept the code tight
and didn't have to integrate with any other software except Robert's
and Trevor's, so it was quite fun to work on. If all I'd had to do
was work on this software, the next 3 years would have been the
easiest of my life. Unfortunately I had to do a lot more, all of
it stuff I was worse at than programming, and the next 3 years were
instead the most stressful.There were a lot of startups making ecommerce software in the second
half of the 90s. We were determined to be the Microsoft Word, not
the Interleaf. Which meant being easy to use and inexpensive. It
was lucky for us that we were poor, because that caused us to make
Viaweb even more inexpensive than we realized. We charged $100 a
month for a small store and $300 a month for a big one. This low
price was a big attraction, and a constant thorn in the sides of
competitors, but it wasn't because of some clever insight that we
set the price low. We had no idea what businesses paid for things.
$300 a month seemed like a lot of money to us.We did a lot of things right by accident like that. For example,
we did what's now called "doing things that
don't scale," although
at the time we would have described it as "being so lame that we're
driven to the most desperate measures to get users." The most common
of which was building stores for them. This seemed particularly
humiliating, since the whole raison d'etre of our software was that
people could use it to make their own stores. But anything to get
users.We learned a lot more about retail than we wanted to know. For
example, that if you could only have a small image of a man's shirt
(and all images were small then by present standards), it was better
to have a closeup of the collar than a picture of the whole shirt.
The reason I remember learning this was that it meant I had to
rescan about 30 images of men's shirts. My first set of scans were
so beautiful too.Though this felt wrong, it was exactly the right thing to be doing.
Building stores for users taught us about retail, and about how it
felt to use our software. I was initially both mystified and repelled
by "business" and thought we needed a "business person" to be in
charge of it, but once we started to get users, I was converted,
in much the same way I was converted to
fatherhood once I had kids.
Whatever users wanted, I was all theirs. Maybe one day we'd have
so many users that I couldn't scan their images for them, but in
the meantime there was nothing more important to do.Another thing I didn't get at the time is that
growth rate is the
ultimate test of a startup. Our growth rate was fine. We had about
70 stores at the end of 1996 and about 500 at the end of 1997. I
mistakenly thought the thing that mattered was the absolute number
of users. And that is the thing that matters in the sense that
that's how much money you're making, and if you're not making enough,
you might go out of business. But in the long term the growth rate
takes care of the absolute number. If we'd been a startup I was
advising at Y Combinator, I would have said: Stop being so stressed
out, because you're doing fine. You're growing 7x a year. Just don't
hire too many more people and you'll soon be profitable, and then
you'll control your own destiny.Alas I hired lots more people, partly because our investors wanted
me to, and partly because that's what startups did during the
Internet Bubble. A company with just a handful of employees would
have seemed amateurish. So we didn't reach breakeven until about
when Yahoo bought us in the summer of 1998. Which in turn meant we
were at the mercy of investors for the entire life of the company.
And since both we and our investors were noobs at startups, the
result was a mess even by startup standards.It was a huge relief when Yahoo bought us. In principle our Viaweb
stock was valuable. It was a share in a business that was profitable
and growing rapidly. But it didn't feel very valuable to me; I had
no idea how to value a business, but I was all too keenly aware of
the near-death experiences we seemed to have every few months. Nor
had I changed my grad student lifestyle significantly since we
started. So when Yahoo bought us it felt like going from rags to
riches. Since we were going to California, I bought a car, a yellow
1998 VW GTI. I remember thinking that its leather seats alone were
by far the most luxurious thing I owned.The next year, from the summer of 1998 to the summer of 1999, must
have been the least productive of my life. I didn't realize it at
the time, but I was worn out from the effort and stress of running
Viaweb. For a while after I got to California I tried to continue
my usual m.o. of programming till 3 in the morning, but fatigue
combined with Yahoo's prematurely aged
culture and grim cube farm
in Santa Clara gradually dragged me down. After a few months it
felt disconcertingly like working at Interleaf.Yahoo had given us a lot of options when they bought us. At the
time I thought Yahoo was so overvalued that they'd never be worth
anything, but to my astonishment the stock went up 5x in the next
year. I hung on till the first chunk of options vested, then in the
summer of 1999 I left. It had been so long since I'd painted anything
that I'd half forgotten why I was doing this. My brain had been
entirely full of software and men's shirts for 4 years. But I had
done this to get rich so I could paint, I reminded myself, and now
I was rich, so I should go paint.When I said I was leaving, my boss at Yahoo had a long conversation
with me about my plans. I told him all about the kinds of pictures
I wanted to paint. At the time I was touched that he took such an
interest in me. Now I realize it was because he thought I was lying.
My options at that point were worth about $2 million a month. If I
was leaving that kind of money on the table, it could only be to
go and start some new startup, and if I did, I might take people
with me. This was the height of the Internet Bubble, and Yahoo was
ground zero of it. My boss was at that moment a billionaire. Leaving
then to start a new startup must have seemed to him an insanely,
and yet also plausibly, ambitious plan.But I really was quitting to paint, and I started immediately.
There was no time to lose. I'd already burned 4 years getting rich.
Now when I talk to founders who are leaving after selling their
companies, my advice is always the same: take a vacation. That's
what I should have done, just gone off somewhere and done nothing
for a month or two, but the idea never occurred to me.So I tried to paint, but I just didn't seem to have any energy or
ambition. Part of the problem was that I didn't know many people
in California. I'd compounded this problem by buying a house up in
the Santa Cruz Mountains, with a beautiful view but miles from
anywhere. I stuck it out for a few more months, then in desperation
I went back to New York, where unless you understand about rent
control you'll be surprised to hear I still had my apartment, sealed
up like a tomb of my old life. Idelle was in New York at least, and
there were other people trying to paint there, even though I didn't
know any of them.When I got back to New York I resumed my old life, except now I was
rich. It was as weird as it sounds. I resumed all my old patterns,
except now there were doors where there hadn't been. Now when I was
tired of walking, all I had to do was raise my hand, and (unless
it was raining) a taxi would stop to pick me up. Now when I walked
past charming little restaurants I could go in and order lunch. It
was exciting for a while. Painting started to go better. I experimented
with a new kind of still life where I'd paint one painting in the
old way, then photograph it and print it, blown up, on canvas, and
then use that as the underpainting for a second still life, painted
from the same objects (which hopefully hadn't rotted yet).Meanwhile I looked for an apartment to buy. Now I could actually
choose what neighborhood to live in. Where, I asked myself and
various real estate agents, is the Cambridge of New York? Aided by
occasional visits to actual Cambridge, I gradually realized there
wasn't one. Huh.Around this time, in the spring of 2000, I had an idea. It was clear
from our experience with Viaweb that web apps were the future. Why
not build a web app for making web apps? Why not let people edit
code on our server through the browser, and then host the resulting
applications for them?
[9]
You could run all sorts of services
on the servers that these applications could use just by making an
API call: making and receiving phone calls, manipulating images,
taking credit card payments, etc.I got so excited about this idea that I couldn't think about anything
else. It seemed obvious that this was the future. I didn't particularly
want to start another company, but it was clear that this idea would
have to be embodied as one, so I decided to move to Cambridge and
start it. I hoped to lure Robert into working on it with me, but
there I ran into a hitch. Robert was now a postdoc at MIT, and
though he'd made a lot of money the last time I'd lured him into
working on one of my schemes, it had also been a huge time sink.
So while he agreed that it sounded like a plausible idea, he firmly
refused to work on it.Hmph. Well, I'd do it myself then. I recruited Dan Giffin, who had
worked for Viaweb, and two undergrads who wanted summer jobs, and
we got to work trying to build what it's now clear is about twenty
companies and several open source projects worth of software. The
language for defining applications would of course be a dialect of
Lisp. But I wasn't so naive as to assume I could spring an overt
Lisp on a general audience; we'd hide the parentheses, like Dylan
did.By then there was a name for the kind of company Viaweb was, an
"application service provider," or ASP. This name didn't last long
before it was replaced by "software as a service," but it was current
for long enough that I named this new company after it: it was going
to be called Aspra.I started working on the application builder, Dan worked on network
infrastructure, and the two undergrads worked on the first two
services (images and phone calls). But about halfway through the
summer I realized I really didn't want to run a company -- especially
not a big one, which it was looking like this would have to be. I'd
only started Viaweb because I needed the money. Now that I didn't
need money anymore, why was I doing this? If this vision had to be
realized as a company, then screw the vision. I'd build a subset
that could be done as an open source project.Much to my surprise, the time I spent working on this stuff was not
wasted after all. After we started Y Combinator, I would often
encounter startups working on parts of this new architecture, and
it was very useful to have spent so much time thinking about it and
even trying to write some of it.The subset I would build as an open source project was the new Lisp,
whose parentheses I now wouldn't even have to hide. A lot of Lisp
hackers dream of building a new Lisp, partly because one of the
distinctive features of the language is that it has dialects, and
partly, I think, because we have in our minds a Platonic form of
Lisp that all existing dialects fall short of. I certainly did. So
at the end of the summer Dan and I switched to working on this new
dialect of Lisp, which I called Arc, in a house I bought in Cambridge.The following spring, lightning struck. I was invited to give a
talk at a Lisp conference, so I gave one about how we'd used Lisp
at Viaweb. Afterward I put a postscript file of this talk online,
on paulgraham.com, which I'd created years before using Viaweb but
had never used for anything. In one day it got 30,000 page views.
What on earth had happened? The referring urls showed that someone
had posted it on Slashdot.
[10]Wow, I thought, there's an audience. If I write something and put
it on the web, anyone can read it. That may seem obvious now, but
it was surprising then. In the print era there was a narrow channel
to readers, guarded by fierce monsters known as editors. The only
way to get an audience for anything you wrote was to get it published
as a book, or in a newspaper or magazine. Now anyone could publish
anything.This had been possible in principle since 1993, but not many people
had realized it yet. I had been intimately involved with building
the infrastructure of the web for most of that time, and a writer
as well, and it had taken me 8 years to realize it. Even then it
took me several years to understand the implications. It meant there
would be a whole new generation of
essays.
[11]In the print era, the channel for publishing essays had been
vanishingly small. Except for a few officially anointed thinkers
who went to the right parties in New York, the only people allowed
to publish essays were specialists writing about their specialties.
There were so many essays that had never been written, because there
had been no way to publish them. Now they could be, and I was going
to write them.
[12]I've worked on several different things, but to the extent there
was a turning point where I figured out what to work on, it was
when I started publishing essays online. From then on I knew that
whatever else I did, I'd always write essays too.I knew that online essays would be a
marginal medium at first.
Socially they'd seem more like rants posted by nutjobs on their
GeoCities sites than the genteel and beautifully typeset compositions
published in The New Yorker. But by this point I knew enough to
find that encouraging instead of discouraging.One of the most conspicuous patterns I've noticed in my life is how
well it has worked, for me at least, to work on things that weren't
prestigious. Still life has always been the least prestigious form
of painting. Viaweb and Y Combinator both seemed lame when we started
them. I still get the glassy eye from strangers when they ask what
I'm writing, and I explain that it's an essay I'm going to publish
on my web site. Even Lisp, though prestigious intellectually in
something like the way Latin is, also seems about as hip.It's not that unprestigious types of work are good per se. But when
you find yourself drawn to some kind of work despite its current
lack of prestige, it's a sign both that there's something real to
be discovered there, and that you have the right kind of motives.
Impure motives are a big danger for the ambitious. If anything is
going to lead you astray, it will be the desire to impress people.
So while working on things that aren't prestigious doesn't guarantee
you're on the right track, it at least guarantees you're not on the
most common type of wrong one.Over the next several years I wrote lots of essays about all kinds
of different topics. O'Reilly reprinted a collection of them as a
book, called Hackers & Painters after one of the essays in it. I
also worked on spam filters, and did some more painting. I used to
have dinners for a group of friends every thursday night, which
taught me how to cook for groups. And I bought another building in
Cambridge, a former candy factory (and later, twas said, porn
studio), to use as an office.One night in October 2003 there was a big party at my house. It was
a clever idea of my friend Maria Daniels, who was one of the thursday
diners. Three separate hosts would all invite their friends to one
party. So for every guest, two thirds of the other guests would be
people they didn't know but would probably like. One of the guests
was someone I didn't know but would turn out to like a lot: a woman
called Jessica Livingston. A couple days later I asked her out.Jessica was in charge of marketing at a Boston investment bank.
This bank thought it understood startups, but over the next year,
as she met friends of mine from the startup world, she was surprised
how different reality was. And how colorful their stories were. So
she decided to compile a book of
interviews with startup founders.When the bank had financial problems and she had to fire half her
staff, she started looking for a new job. In early 2005 she interviewed
for a marketing job at a Boston VC firm. It took them weeks to make
up their minds, and during this time I started telling her about
all the things that needed to be fixed about venture capital. They
should make a larger number of smaller investments instead of a
handful of giant ones, they should be funding younger, more technical
founders instead of MBAs, they should let the founders remain as
CEO, and so on.One of my tricks for writing essays had always been to give talks.
The prospect of having to stand up in front of a group of people
and tell them something that won't waste their time is a great
spur to the imagination. When the Harvard Computer Society, the
undergrad computer club, asked me to give a talk, I decided I would
tell them how to start a startup. Maybe they'd be able to avoid the
worst of the mistakes we'd made.So I gave this talk, in the course of which I told them that the
best sources of seed funding were successful startup founders,
because then they'd be sources of advice too. Whereupon it seemed
they were all looking expectantly at me. Horrified at the prospect
of having my inbox flooded by business plans (if I'd only known),
I blurted out "But not me!" and went on with the talk. But afterward
it occurred to me that I should really stop procrastinating about
angel investing. I'd been meaning to since Yahoo bought us, and now
it was 7 years later and I still hadn't done one angel investment.Meanwhile I had been scheming with Robert and Trevor about projects
we could work on together. I missed working with them, and it seemed
like there had to be something we could collaborate on.As Jessica and I were walking home from dinner on March 11, at the
corner of Garden and Walker streets, these three threads converged.
Screw the VCs who were taking so long to make up their minds. We'd
start our own investment firm and actually implement the ideas we'd
been talking about. I'd fund it, and Jessica could quit her job and
work for it, and we'd get Robert and Trevor as partners too.
[13]Once again, ignorance worked in our favor. We had no idea how to
be angel investors, and in Boston in 2005 there were no Ron Conways
to learn from. So we just made what seemed like the obvious choices,
and some of the things we did turned out to be novel.There are multiple components to Y Combinator, and we didn't figure
them all out at once. The part we got first was to be an angel firm.
In those days, those two words didn't go together. There were VC
firms, which were organized companies with people whose job it was
to make investments, but they only did big, million dollar investments.
And there were angels, who did smaller investments, but these were
individuals who were usually focused on other things and made
investments on the side. And neither of them helped founders enough
in the beginning. We knew how helpless founders were in some respects,
because we remembered how helpless we'd been. For example, one thing
Julian had done for us that seemed to us like magic was to get us
set up as a company. We were fine writing fairly difficult software,
but actually getting incorporated, with bylaws and stock and all
that stuff, how on earth did you do that? Our plan was not only to
make seed investments, but to do for startups everything Julian had
done for us.YC was not organized as a fund. It was cheap enough to run that we
funded it with our own money. That went right by 99% of readers,
but professional investors are thinking "Wow, that means they got
all the returns." But once again, this was not due to any particular
insight on our part. We didn't know how VC firms were organized.
It never occurred to us to try to raise a fund, and if it had, we
wouldn't have known where to start.
[14]The most distinctive thing about YC is the batch model: to fund a
bunch of startups all at once, twice a year, and then to spend three
months focusing intensively on trying to help them. That part we
discovered by accident, not merely implicitly but explicitly due
to our ignorance about investing. We needed to get experience as
investors. What better way, we thought, than to fund a whole bunch
of startups at once? We knew undergrads got temporary jobs at tech
companies during the summer. Why not organize a summer program where
they'd start startups instead? We wouldn't feel guilty for being
in a sense fake investors, because they would in a similar sense
be fake founders. So while we probably wouldn't make much money out
of it, we'd at least get to practice being investors on them, and
they for their part would probably have a more interesting summer
than they would working at Microsoft.We'd use the building I owned in Cambridge as our headquarters.
We'd all have dinner there once a week β on tuesdays, since I was
already cooking for the thursday diners on thursdays β and after
dinner we'd bring in experts on startups to give talks.We knew undergrads were deciding then about summer jobs, so in a
matter of days we cooked up something we called the Summer Founders
Program, and I posted an
announcement
on my site, inviting undergrads
to apply. I had never imagined that writing essays would be a way
to get "deal flow," as investors call it, but it turned out to be
the perfect source.
[15]
We got 225 applications for the Summer
Founders Program, and we were surprised to find that a lot of them
were from people who'd already graduated, or were about to that
spring. Already this SFP thing was starting to feel more serious
than we'd intended.We invited about 20 of the 225 groups to interview in person, and
from those we picked 8 to fund. They were an impressive group. That
first batch included reddit, Justin Kan and Emmett Shear, who went
on to found Twitch, Aaron Swartz, who had already helped write the
RSS spec and would a few years later become a martyr for open access,
and Sam Altman, who would later become the second president of YC.
I don't think it was entirely luck that the first batch was so good.
You had to be pretty bold to sign up for a weird thing like the
Summer Founders Program instead of a summer job at a legit place
like Microsoft or Goldman Sachs.The deal for startups was based on a combination of the deal we did
with Julian ($10k for 10%) and what Robert said MIT grad students
got for the summer ($6k). We invested $6k per founder, which in the
typical two-founder case was $12k, in return for 6%. That had to
be fair, because it was twice as good as the deal we ourselves had
taken. Plus that first summer, which was really hot, Jessica brought
the founders free air conditioners.
[16]
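To spell out why that deal was twice as good: the number that matters
is the valuation the deal implies, which is just the investment divided
by the fraction of the company it buys. Here is a quick sketch of the
arithmetic in Common Lisp, using the numbers above (the helper function
is only for illustration):

    ;; Implied valuation = investment / fraction of the company sold.
    (defun implied-valuation (investment fraction)
      (/ investment fraction))

    (implied-valuation 10000 10/100)  ; Julian's deal with us: => 100000
    (implied-valuation 12000 6/100)   ; YC's first deal:       => 200000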
Fairly quickly I realized that we had stumbled upon the way to scale
startup funding. Funding startups in batches was more convenient
for us, because it meant we could do things for a lot of startups
at once, but being part of a batch was better for the startups too.
It solved one of the biggest problems faced by founders: the
isolation. Now you not only had colleagues, but colleagues who
understood the problems you were facing and could tell you how they
were solving them.As YC grew, we started to notice other advantages of scale. The
alumni became a tight community, dedicated to helping one another,
and especially the current batch, whose shoes they remembered being
in. We also noticed that the startups were becoming one another's
customers. We used to refer jokingly to the "YC GDP," but as YC
grows this becomes less and less of a joke. Now lots of startups
get their initial set of customers almost entirely from among their
batchmates.I had not originally intended YC to be a full-time job. I was going
to do three things: hack, write essays, and work on YC. As YC grew,
and I grew more excited about it, it started to take up a lot more
than a third of my attention. But for the first few years I was
still able to work on other things.In the summer of 2006, Robert and I started working on a new version
of Arc. This one was reasonably fast, because it was compiled into
Scheme. To test this new Arc, I wrote Hacker News in it. It was
originally meant to be a news aggregator for startup founders and
was called Startup News, but after a few months I got tired of
reading about nothing but startups. Plus it wasn't startup founders
we wanted to reach. It was future startup founders. So I changed
the name to Hacker News and the topic to whatever engaged one's
intellectual curiosity.HN was no doubt good for YC, but it was also by far the biggest
source of stress for me. If all I'd had to do was select and help
founders, life would have been so easy. And that implies that HN
was a mistake. Surely the biggest source of stress in one's work
should at least be something close to the core of the work. Whereas
I was like someone who was in pain while running a marathon not
from the exertion of running, but because I had a blister from an
ill-fitting shoe. When I was dealing with some urgent problem during
YC, there was about a 60% chance it had to do with HN, and a 40%
chance it had to do with everything else combined.
[17]As well as HN, I wrote all of YC's internal software in Arc. But
while I continued to work a good deal in Arc, I gradually stopped
working on Arc, partly because I didn't have time to, and partly
because it was a lot less attractive to mess around with the language
now that we had all this infrastructure depending on it. So now my
three projects were reduced to two: writing essays and working on
YC.YC was different from other kinds of work I've done. Instead of
deciding for myself what to work on, the problems came to me. Every
6 months there was a new batch of startups, and their problems,
whatever they were, became our problems. It was very engaging work,
because their problems were quite varied, and the good founders
were very effective. If you were trying to learn the most you could
about startups in the shortest possible time, you couldn't have
picked a better way to do it.There were parts of the job I didn't like. Disputes between cofounders,
figuring out when people were lying to us, fighting with people who
maltreated the startups, and so on. But I worked hard even at the
parts I didn't like. I was haunted by something Kevin Hale once
said about companies: "No one works harder than the boss." He meant
it both descriptively and prescriptively, and it was the second
part that scared me. I wanted YC to be good, so if how hard I worked
set the upper bound on how hard everyone else worked, I'd better
work very hard.One day in 2010, when he was visiting California for interviews,
Robert Morris did something astonishing: he offered me unsolicited
advice. I can only remember him doing that once before. One day at
Viaweb, when I was bent over double from a kidney stone, he suggested
that it would be a good idea for him to take me to the hospital.
That was what it took for Rtm to offer unsolicited advice. So I
remember his exact words very clearly. "You know," he said, "you
should make sure Y Combinator isn't the last cool thing you do."At the time I didn't understand what he meant, but gradually it
dawned on me that he was saying I should quit. This seemed strange
advice, because YC was doing great. But if there was one thing rarer
than Rtm offering advice, it was Rtm being wrong. So this set me
thinking. It was true that on my current trajectory, YC would be
the last thing I did, because it was only taking up more of my
attention. It had already eaten Arc, and was in the process of
eating essays too. Either YC was my life's work or I'd have to leave
eventually. And it wasn't, so I would.In the summer of 2012 my mother had a stroke, and the cause turned
out to be a blood clot caused by colon cancer. The stroke destroyed
her balance, and she was put in a nursing home, but she really
wanted to get out of it and back to her house, and my sister and I
were determined to help her do it. I used to fly up to Oregon to
visit her regularly, and I had a lot of time to think on those
flights. On one of them I realized I was ready to hand YC over to
someone else.I asked Jessica if she wanted to be president, but she didn't, so
we decided we'd try to recruit Sam Altman. We talked to Robert and
Trevor and we agreed to make it a complete changing of the guard.
Up till that point YC had been controlled by the original LLC we
four had started. But we wanted YC to last for a long time, and to
do that it couldn't be controlled by the founders. So if Sam said
yes, we'd let him reorganize YC. Robert and I would retire, and
Jessica and Trevor would become ordinary partners.When we asked Sam if he wanted to be president of YC, initially he
said no. He wanted to start a startup to make nuclear reactors.
But I kept at it, and in October 2013 he finally agreed. We decided
he'd take over starting with the winter 2014 batch. For the rest
of 2013 I left running YC more and more to Sam, partly so he could
learn the job, and partly because I was focused on my mother, whose
cancer had returned.She died on January 15, 2014. We knew this was coming, but it was
still hard when it did.I kept working on YC till March, to help get that batch of startups
through Demo Day, then I checked out pretty completely. (I still
talk to alumni and to new startups working on things I'm interested
in, but that only takes a few hours a week.)What should I do next? Rtm's advice hadn't included anything about
that. I wanted to do something completely different, so I decided
I'd paint. I wanted to see how good I could get if I really focused
on it. So the day after I stopped working on YC, I started painting.
I was rusty and it took a while to get back into shape, but it was
at least completely engaging.
[18]I spent most of the rest of 2014 painting. I'd never been able to
work so uninterruptedly before, and I got to be better than I had
been. Not good enough, but better. Then in November, right in the
middle of a painting, I ran out of steam. Up till that point I'd
always been curious to see how the painting I was working on would
turn out, but suddenly finishing this one seemed like a chore. So
I stopped working on it and cleaned my brushes and haven't painted
since. So far anyway.I realize that sounds rather wimpy. But attention is a zero sum
game. If you can choose what to work on, and you choose a project
that's not the best one (or at least a good one) for you, then it's
getting in the way of another project that is. And at 50 there was
some opportunity cost to screwing around.I started writing essays again, and wrote a bunch of new ones over
the next few months. I even wrote a couple that
weren't about
startups. Then in March 2015 I started working on Lisp again.The distinctive thing about Lisp is that its core is a language
defined by writing an interpreter in itself. It wasn't originally
intended as a programming language in the ordinary sense. It was
meant to be a formal model of computation, an alternative to the
Turing machine. If you want to write an interpreter for a language
in itself, what's the minimum set of predefined operators you need?
The Lisp that John McCarthy invented, or more accurately discovered,
is an answer to that question.
[19]
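To make the question concrete, here is a compressed sketch of such an
interpreter: an evaluator for a tiny Lisp, written in Lisp. It's in
Common Lisp syntax rather than McCarthy's 1960 notation, the names
tiny-eval and tiny-evcon are invented for the sketch, and most details
are omitted; the point is only how few primitives it leans on.

    ;; A sketch, not McCarthy's definition: an evaluator for a tiny Lisp,
    ;; written in Lisp, using little more than quote, atom, eq, car, cdr,
    ;; cons, and cond.
    (defun tiny-eval (e env)
      (cond ((atom e) (cdr (assoc e env)))            ; a variable: look it up
            ((eq (car e) 'quote) (cadr e))            ; (quote x) => x, unevaluated
            ((eq (car e) 'atom)  (atom (tiny-eval (cadr e) env)))
            ((eq (car e) 'eq)    (eq (tiny-eval (cadr e) env)
                                     (tiny-eval (caddr e) env)))
            ((eq (car e) 'car)   (car (tiny-eval (cadr e) env)))
            ((eq (car e) 'cdr)   (cdr (tiny-eval (cadr e) env)))
            ((eq (car e) 'cons)  (cons (tiny-eval (cadr e) env)
                                       (tiny-eval (caddr e) env)))
            ((eq (car e) 'cond)  (tiny-evcon (cdr e) env))
            ((eq (caar e) 'lambda)                    ; ((lambda (vars) body) args)
             (tiny-eval (caddar e)
                        (append (pairlis (cadar e)
                                         (mapcar (lambda (a) (tiny-eval a env))
                                                 (cdr e)))
                                env)))))

    (defun tiny-evcon (clauses env)
      (if (tiny-eval (caar clauses) env)
          (tiny-eval (cadar clauses) env)
          (tiny-evcon (cdr clauses) env)))

    ;; (tiny-eval '((lambda (x) (cons x '(b))) '(a)) '())  =>  ((A) B)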
McCarthy didn't realize this Lisp could even be used to program
computers till his grad student Steve Russell suggested it. Russell
translated McCarthy's interpreter into IBM 704 machine language,
and from that point Lisp started also to be a programming language
in the ordinary sense. But its origins as a model of computation
gave it a power and elegance that other languages couldn't match.
It was this that attracted me in college, though I didn't understand
why at the time.McCarthy's 1960 Lisp did nothing more than interpret Lisp expressions.
It was missing a lot of things you'd want in a programming language.
So these had to be added, and when they were, they weren't defined
using McCarthy's original axiomatic approach. That wouldn't have
been feasible at the time. McCarthy tested his interpreter by
hand-simulating the execution of programs. But it was already getting
close to the limit of interpreters you could test that way β indeed,
there was a bug in it that McCarthy had overlooked. To test a more
complicated interpreter, you'd have had to run it, and computers
then weren't powerful enough.Now they are, though. Now you could continue using McCarthy's
axiomatic approach till you'd defined a complete programming language.
And as long as every change you made to McCarthy's Lisp was a
discoveredness-preserving transformation, you could, in principle,
end up with a complete language that had this quality. Harder to
do than to talk about, of course, but if it was possible in principle,
why not try? So I decided to take a shot at it. It took 4 years,
from March 26, 2015 to October 12, 2019. It was fortunate that I
had a precisely defined goal, or it would have been hard to keep
at it for so long.I wrote this new Lisp, called Bel,
in itself in Arc. That may sound
like a contradiction, but it's an indication of the sort of trickery
I had to engage in to make this work. By means of an egregious
collection of hacks I managed to make something close enough to an
interpreter written in itself that could actually run. Not fast,
but fast enough to test.I had to ban myself from writing essays during most of this time,
or I'd never have finished. In late 2015 I spent 3 months writing
essays, and when I went back to working on Bel I could barely
understand the code. Not so much because it was badly written as
because the problem is so convoluted. When you're working on an
interpreter written in itself, it's hard to keep track of what's
happening at what level, and errors can be practically encrypted
by the time you get them.So I said no more essays till Bel was done. But I told few people
about Bel while I was working on it. So for years it must have
seemed that I was doing nothing, when in fact I was working harder
than I'd ever worked on anything. Occasionally after wrestling for
hours with some gruesome bug I'd check Twitter or HN and see someone
asking "Does Paul Graham still code?"Working on Bel was hard but satisfying. I worked on it so intensively
that at any given time I had a decent chunk of the code in my head
and could write more there. I remember taking the boys to the
coast on a sunny day in 2015 and figuring out how to deal with some
problem involving continuations while I watched them play in the
tide pools. It felt like I was doing life right. I remember that
because I was slightly dismayed at how novel it felt. The good news
is that I had more moments like this over the next few years.In the summer of 2016 we moved to England. We wanted our kids to
see what it was like living in another country, and since I was a
British citizen by birth, that seemed the obvious choice. We only
meant to stay for a year, but we liked it so much that we still
live there. So most of Bel was written in England.In the fall of 2019, Bel was finally finished. Like McCarthy's
original Lisp, it's a spec rather than an implementation, although
like McCarthy's Lisp it's a spec expressed as code.Now that I could write essays again, I wrote a bunch about topics
I'd had stacked up. I kept writing essays through 2020, but I also
started to think about other things I could work on. How should I
choose what to do? Well, how had I chosen what to work on in the
past? I wrote an essay for myself to answer that question, and I
was surprised how long and messy the answer turned out to be. If
this surprised me, who'd lived it, then I thought perhaps it would
be interesting to other people, and encouraging to those with
similarly messy lives. So I wrote a more detailed version for others
to read, and this is the last sentence of it.
Notes

[1]
My experience skipped a step in the evolution of computers:
time-sharing machines with interactive OSes. I went straight from
batch processing to microcomputers, which made microcomputers seem
all the more exciting.[2]
Italian words for abstract concepts can nearly always be
predicted from their English cognates (except for occasional traps
like polluzione). It's the everyday words that differ. So if you
string together a lot of abstract concepts with a few simple verbs,
you can make a little Italian go a long way.[3]
I lived at Piazza San Felice 4, so my walk to the Accademia
went straight down the spine of old Florence: past the Pitti, across
the bridge, past Orsanmichele, between the Duomo and the Baptistery,
and then up Via Ricasoli to Piazza San Marco. I saw Florence at
street level in every possible condition, from empty dark winter
evenings to sweltering summer days when the streets were packed with
tourists.[4]
You can of course paint people like still lives if you want
to, and they're willing. That sort of portrait is arguably the apex
of still life painting, though the long sitting does tend to produce
pained expressions in the sitters.[5]
Interleaf was one of many companies that had smart people and
built impressive technology, and yet got crushed by Moore's Law.
In the 1990s the exponential growth in the power of commodity (i.e.
Intel) processors rolled up high-end, special-purpose hardware and
software companies like a bulldozer.[6]
The signature style seekers at RISD weren't specifically
mercenary. In the art world, money and coolness are tightly coupled.
Anything expensive comes to be seen as cool, and anything seen as
cool will soon become equally expensive.[7]
Technically the apartment wasn't rent-controlled but
rent-stabilized, but this is a refinement only New Yorkers would
know or care about. The point is that it was really cheap, less
than half market price.[8]
Most software you can launch as soon as it's done. But when
the software is an online store builder and you're hosting the
stores, if you don't have any users yet, that fact will be painfully
obvious. So before we could launch publicly we had to launch
privately, in the sense of recruiting an initial set of users and
making sure they had decent-looking stores.[9]
We'd had a code editor in Viaweb for users to define their
own page styles. They didn't know it, but they were editing Lisp
expressions underneath. But this wasn't an app editor, because the
code ran when the merchants' sites were generated, not when shoppers
visited them.[10]
This was the first instance of what is now a familiar experience,
and so was what happened next, when I read the comments and found
they were full of angry people. How could I claim that Lisp was
better than other languages? Weren't they all Turing complete?
People who see the responses to essays I write sometimes tell me
how sorry they feel for me, but I'm not exaggerating when I reply
that it has always been like this, since the very beginning. It
comes with the territory. An essay must tell readers things they
don't already know, and some
people dislike being told such things.[11]
People put plenty of stuff on the internet in the 90s of
course, but putting something online is not the same as publishing
it online. Publishing online means you treat the online version as
the (or at least a) primary version.[12]
There is a general lesson here that our experience with Y
Combinator also teaches: Customs continue to constrain you long
after the restrictions that caused them have disappeared. Customary
VC practice had once, like the customs about publishing essays,
been based on real constraints. Startups had once been much more
expensive to start, and proportionally rare. Now they could be cheap
and common, but the VCs' customs still reflected the old world,
just as customs about writing essays still reflected the constraints
of the print era.Which in turn implies that people who are independent-minded (i.e.
less influenced by custom) will have an advantage in fields affected
by rapid change (where customs are more likely to be obsolete).Here's an interesting point, though: you can't always predict which
fields will be affected by rapid change. Obviously software and
venture capital will be, but who would have predicted that essay
writing would be?[13]
Y Combinator was not the original name. At first we were
called Cambridge Seed. But we didn't want a regional name, in case
someone copied us in Silicon Valley, so we renamed ourselves after
one of the coolest tricks in the lambda calculus, the Y combinator.I picked orange as our color partly because it's the warmest, and
partly because no VC used it. In 2005 all the VCs used staid colors
like maroon, navy blue, and forest green, because they were trying
to appeal to LPs, not founders. The YC logo itself is an inside
joke: the Viaweb logo had been a white V on a red circle, so I made
the YC logo a white Y on an orange square.[14]
YC did become a fund for a couple years starting in 2009,
because it was getting so big I could no longer afford to fund it
personally. But after Heroku got bought we had enough money to go
back to being self-funded.[15]
I've never liked the term "deal flow," because it implies
that the number of new startups at any given time is fixed. This
is not only false, but it's the purpose of YC to falsify it, by
causing startups to be founded that would not otherwise have existed.[16]
She reports that they were all different shapes and sizes,
because there was a run on air conditioners and she had to get
whatever she could, but that they were all heavier than she could
carry now.[17]
Another problem with HN was a bizarre edge case that occurs
when you both write essays and run a forum. When you run a forum,
you're assumed to see if not every conversation, at least every
conversation involving you. And when you write essays, people post
highly imaginative misinterpretations of them on forums. Individually
these two phenomena are tedious but bearable, but the combination
is disastrous. You actually have to respond to the misinterpretations,
because the assumption that you're present in the conversation means
that not responding to any sufficiently upvoted misinterpretation
reads as a tacit admission that it's correct. But that in turn
encourages more; anyone who wants to pick a fight with you senses
that now is their chance.[18]
The worst thing about leaving YC was not working with Jessica
anymore. We'd been working on YC almost the whole time we'd known
each other, and we'd neither tried nor wanted to separate it from
our personal lives, so leaving was like pulling up a deeply rooted
tree.[19]
One way to get more precise about the concept of invented vs
discovered is to talk about space aliens. Any sufficiently advanced
alien civilization would certainly know about the Pythagorean
theorem, for example. I believe, though with less certainty, that
they would also know about the Lisp in McCarthy's 1960 paper.But if so there's no reason to suppose that this is the limit of
the language that might be known to them. Presumably aliens need
numbers and errors and I/O too. So it seems likely there exists at
least one path out of McCarthy's Lisp along which discoveredness
is preserved.Thanks to Trevor Blackwell, John Collison, Patrick Collison, Daniel
Gackle, Ralph Hazell, Jessica Livingston, Robert Morris, and Harj
Taggar for reading drafts of this. |
November 2022

Since I was about 9 I've been puzzled by the apparent contradiction
between being made of matter that behaves in a predictable way, and
the feeling that I could choose to do whatever I wanted. At the
time I had a self-interested motive for exploring the question. At
that age (like most succeeding ages) I was always in trouble with
the authorities, and it seemed to me that there might possibly be
some way to get out of trouble by arguing that I wasn't responsible
for my actions. I gradually lost hope of that, but the puzzle
remained: How do you reconcile being a machine made of matter with
the feeling that you're free to choose what you do?
[1]The best way to explain the answer may be to start with a slightly
wrong version, and then fix it. The wrong version is: You can do
what you want, but you can't want what you want. Yes, you can control
what you do, but you'll do what you want, and you can't control
that.The reason this is mistaken is that people do sometimes change what
they want. People who don't want to want something β drug addicts,
for example β can sometimes make themselves stop wanting it. And
people who want to want something β who want to like classical
music, or broccoli β sometimes succeed.So we modify our initial statement: You can do what you want, but
you can't want to want what you want.That's still not quite true. It's possible to change what you want
to want. I can imagine someone saying "I decided to stop wanting
to like classical music." But we're getting closer to the truth.
It's rare for people to change what they want to want, and the more
"want to"s we add, the rarer it gets.We can get arbitrarily close to a true statement by adding more "want
to"s in much the same way we can get arbitrarily close to 1 by adding
more 9s to a string of 9s following a decimal point. In practice
three or four "want to"s must surely be enough. It's hard even to
envision what it would mean to change what you want to want to want
to want, let alone actually do it.So one way to express the correct answer is to use a regular
expression. You can do what you want, but there's some statement
of the form "you can't (want to)* want what you want" that's true.
Ultimately you get back to a want that you don't control.
[2]
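If you want the regular expression made literal, here is a toy
rendering in Common Lisp of the family of statements it describes (an
illustration only; the function name is made up):

    ;; "you can't (want to)* want what you want", with n copies of "want to".
    (defun cant-statement (n)
      (format nil "you can't ~{~a~}want what you want"
              (make-list n :initial-element "want to ")))

    ;; (cant-statement 0) => "you can't want what you want"
    ;; (cant-statement 2) => "you can't want to want to want what you want"

Picking a particular n gives the higher-order version mentioned in note [2].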
Notes

[1]
I didn't know when I was 9 that matter might behave randomly,
but I don't think it affects the problem much. Randomness destroys
the ghost in the machine as effectively as determinism.[2]
If you don't like using an expression, you can make the same
point using higher-order desires: There is some n such that you
don't control your nth-order desires.
Thanks to Trevor Blackwell,
Jessica Livingston, Robert Morris, and
Michael Nielsen for reading drafts of this. |
October 2014

(This essay is derived from a guest lecture in Sam Altman's startup class at
Stanford. It's intended for college students, but much of it is
applicable to potential founders at other ages.)One of the advantages of having kids is that when you have to give
advice, you can ask yourself "what would I tell my own kids?" My
kids are little, but I can imagine what I'd tell them about startups
if they were in college, and that's what I'm going to tell you.Startups are very counterintuitive. I'm not sure why. Maybe it's
just because knowledge about them hasn't permeated our culture yet.
But whatever the reason, starting a startup is a task where you
can't always trust your instincts.It's like skiing in that way. When you first try skiing and you
want to slow down, your instinct is to lean back. But if you lean
back on skis you fly down the hill out of control. So part of
learning to ski is learning to suppress that impulse. Eventually
you get new habits, but at first it takes a conscious effort. At
first there's a list of things you're trying to remember as you
start down the hill.Startups are as unnatural as skiing, so there's a similar list for
startups. Here I'm going to give you the first part of it β the things
to remember if you want to prepare yourself to start a startup.
Counterintuitive

The first item on it is the fact I already mentioned: that startups
are so weird that if you trust your instincts, you'll make a lot
of mistakes. If you know nothing more than this, you may at least
pause before making them.When I was running Y Combinator I used to joke that our function
was to tell founders things they would ignore. It's really true.
Batch after batch, the YC partners warn founders about mistakes
they're about to make, and the founders ignore them, and then come
back a year later and say "I wish we'd listened."Why do the founders ignore the partners' advice? Well, that's the
thing about counterintuitive ideas: they contradict your intuitions.
They seem wrong. So of course your first impulse is to disregard
them. And in fact my joking description is not merely the curse
of Y Combinator but part of its raison d'etre. If founders' instincts
already gave them the right answers, they wouldn't need us. You
only need other people to give you advice that surprises you. That's
why there are a lot of ski instructors and not many running
instructors.
[1]You can, however, trust your instincts about people. And in fact
one of the most common mistakes young founders make is not to
do that enough. They get involved with people who seem impressive,
but about whom they feel some misgivings personally. Later when
things blow up they say "I knew there was something off about him,
but I ignored it because he seemed so impressive."If you're thinking about getting involved with someone β as a
cofounder, an employee, an investor, or an acquirer β and you
have misgivings about them, trust your gut. If someone seems
slippery, or bogus, or a jerk, don't ignore it.This is one case where it pays to be self-indulgent. Work with
people you genuinely like, and you've known long enough to be sure.
Expertise

The second counterintuitive point is that it's not that important
to know a lot about startups. The way to succeed in a startup is
not to be an expert on startups, but to be an expert on your users
and the problem you're solving for them.
Mark Zuckerberg didn't succeed because he was an expert on startups.
He succeeded despite being a complete noob at startups, because he
understood his users really well.If you don't know anything about, say, how to raise an angel round,
don't feel bad on that account. That sort of thing you can learn
when you need to, and forget after you've done it.In fact, I worry it's not merely unnecessary to learn in great
detail about the mechanics of startups, but possibly somewhat
dangerous. If I met an undergrad who knew all about convertible
notes and employee agreements and (God forbid) class FF stock, I
wouldn't think "here is someone who is way ahead of their peers."
It would set off alarms. Because another of the characteristic
mistakes of young founders is to go through the motions of starting
a startup. They make up some plausible-sounding idea, raise money
at a good valuation, rent a cool office, hire a bunch of people.
From the outside that seems like what startups do. But the next
step after rent a cool office and hire a bunch of people is: gradually
realize how completely fucked they are, because while imitating all
the outward forms of a startup they have neglected the one thing
that's actually essential: making something people want.
Game

We saw this happen so often that we made up a name for it: playing
house. Eventually I realized why it was happening. The reason
young founders go through the motions of starting a startup is
because that's what they've been trained to do for their whole lives
up to that point. Think about what you have to do to get into
college, for example. Extracurricular activities, check. Even in
college classes most of the work is as artificial as running laps.I'm not attacking the educational system for being this way. There
will always be a certain amount of fakeness in the work you do when
you're being taught something, and if you measure their performance
it's inevitable that people will exploit the difference to the point
where much of what you're measuring is artifacts of the fakeness.I confess I did it myself in college. I found that in a lot of
classes there might only be 20 or 30 ideas that were the right shape
to make good exam questions. The way I studied for exams in these
classes was not (except incidentally) to master the material taught
in the class, but to make a list of potential exam questions and
work out the answers in advance. When I walked into the final, the
main thing I'd be feeling was curiosity about which of my questions
would turn up on the exam. It was like a game.It's not surprising that after being trained for their whole lives
to play such games, young founders' first impulse on starting a
startup is to try to figure out the tricks for winning at this new
game. Since fundraising appears to be the measure of success for
startups (another classic noob mistake), they always want to know what the
tricks are for convincing investors. We tell them the best way to
convince investors is to make a startup
that's actually doing well, meaning growing fast, and then simply
tell investors so. Then they want to know what the tricks are for
growing fast. And we have to tell them the best way to do that is
simply to make something people want.So many of the conversations YC partners have with young founders
begin with the founder asking "How do we..." and the partner replying
"Just..."Why do the founders always make things so complicated? The reason,
I realized, is that they're looking for the trick.So this is the third counterintuitive thing to remember about
startups: starting a startup is where gaming the system stops
working. Gaming the system may continue to work if you go to work
for a big company. Depending on how broken the company is, you can
succeed by sucking up to the right people, giving the impression
of productivity, and so on.
[2]
But that doesn't work with startups.
There is no boss to trick, only users, and all users care about is
whether your product does what they want. Startups are as impersonal
as physics. You have to make something people want, and you prosper
only to the extent you do.The dangerous thing is, faking does work to some degree on investors.
If you're super good at sounding like you know what you're talking
about, you can fool investors for at least one and perhaps even two
rounds of funding. But it's not in your interest to. The company
is ultimately doomed. All you're doing is wasting your own time
riding it down.So stop looking for the trick. There are tricks in startups, as
there are in any domain, but they are an order of magnitude less
important than solving the real problem. A founder who knows nothing
about fundraising but has made something users love will have an
easier time raising money than one who knows every trick in the
book but has a flat usage graph. And more importantly, the founder
who has made something users love is the one who will go on to
succeed after raising the money.Though in a sense it's bad news in that you're deprived of one of
your most powerful weapons, I think it's exciting that gaming the
system stops working when you start a startup. It's exciting that
there even exist parts of the world where you win by doing good
work. Imagine how depressing the world would be if it were all
like school and big companies, where you either have to spend a lot
of time on bullshit things or lose to people who do.
[3]
I would
have been delighted if I'd realized in college that there were parts
of the real world where gaming the system mattered less than others,
and a few where it hardly mattered at all. But there are, and this
variation is one of the most important things to consider when
you're thinking about your future. How do you win in each type of
work, and what would you like to win by doing?
[4]
All-Consuming

That brings us to our fourth counterintuitive point: startups are
all-consuming. If you start a startup, it will take over your life
to a degree you cannot imagine. And if your startup succeeds, it
will take over your life for a long time: for several years at the
very least, maybe for a decade, maybe for the rest of your working
life. So there is a real opportunity cost here.Larry Page may seem to have an enviable life, but there are aspects
of it that are unenviable. Basically at 25 he started running as
fast as he could and it must seem to him that he hasn't stopped to
catch his breath since. Every day new shit happens in the Google
empire that only the CEO can deal with, and he, as CEO, has to deal
with it. If he goes on vacation for even a week, a whole week's
backlog of shit accumulates. And he has to bear this uncomplainingly,
partly because as the company's daddy he can never show fear or
weakness, and partly because billionaires get less than zero sympathy
if they talk about having difficult lives. Which has the strange
side effect that the difficulty of being a successful startup founder
is concealed from almost everyone except those who've done it.Y Combinator has now funded several companies that can be called
big successes, and in every single case the founders say the same
thing. It never gets any easier. The nature of the problems changes.
You're worrying about construction delays at your London office
instead of the broken air conditioner in your studio apartment.
But the total volume of worry never decreases; if anything it
increases.Starting a successful startup is similar to having kids in that
it's like a button you push that changes your life irrevocably.
And while it's truly wonderful having kids, there are a lot of
things that are easier to do before you have them than after. Many
of which will make you a better parent when you do have kids. And
since you can delay pushing the button for a while, most people in
rich countries do.Yet when it comes to startups, a lot of people seem to think they're
supposed to start them while they're still in college. Are you
crazy? And what are the universities thinking? They go out of
their way to ensure their students are well supplied with contraceptives,
and yet they're setting up entrepreneurship programs and startup
incubators left and right.To be fair, the universities have their hand forced here. A lot
of incoming students are interested in startups. Universities are,
at least de facto, expected to prepare them for their careers. So
students who want to start startups hope universities can teach
them about startups. And whether universities can do this or not,
there's some pressure to claim they can, lest they lose applicants
to other universities that do.Can universities teach students about startups? Yes and no. They
can teach students about startups, but as I explained before, this
is not what you need to know. What you need to learn about are the
needs of your own users, and you can't do that until you actually
start the company.
[5]
So starting a startup is intrinsically
something you can only really learn by doing it. And it's impossible
to do that in college, for the reason I just explained: startups
take over your life. You can't start a startup for real as a
student, because if you start a startup for real you're not a student
anymore. You may be nominally a student for a bit, but you won't even
be that for long.
[6]Given this dichotomy, which of the two paths should you take? Be
a real student and not start a startup, or start a real startup and
not be a student? I can answer that one for you. Do not start a
startup in college. How to start a startup is just a subset of a
bigger problem you're trying to solve: how to have a good life.
And though starting a startup can be part of a good life for a lot
of ambitious people, age 20 is not the optimal time to do it.
Starting a startup is like a brutally fast depth-first search. Most
people should still be searching breadth-first at 20.You can do things in your early 20s that you can't do as well before
or after, like plunge deeply into projects on a whim and travel
super cheaply with no sense of a deadline. For unambitious people,
this sort of thing is the dreaded "failure to launch," but for the
ambitious ones it can be an incomparably valuable sort of exploration.
If you start a startup at 20 and you're sufficiently successful,
you'll never get to do it.
[7]Mark Zuckerberg will never get to bum around a foreign country. He
can do other things most people can't, like charter jets to fly him
to foreign countries. But success has taken a lot of the serendipity
out of his life. Facebook is running him as much as he's running
Facebook. And while it can be very cool to be in the grip of a
project you consider your life's work, there are advantages to
serendipity too, especially early in life. Among other things it
gives you more options to choose your life's work from.There's not even a tradeoff here. You're not sacrificing anything
if you forgo starting a startup at 20, because you're more likely
to succeed if you wait. In the unlikely case that you're 20 and
one of your side projects takes off like Facebook did, you'll face
a choice of running with it or not, and it may be reasonable to run
with it. But the usual way startups take off is for the founders
to make them take off, and it's gratuitously
stupid to do that at 20.
Try

Should you do it at any age? I realize I've made startups sound
pretty hard. If I haven't, let me try again: starting a startup
is really hard. What if it's too hard? How can you tell if you're
up to this challenge?The answer is the fifth counterintuitive point: you can't tell. Your
life so far may have given you some idea what your prospects might
be if you tried to become a mathematician, or a professional football
player. But unless you've had a very strange life you haven't done
much that was like being a startup founder.
Starting a startup will change you a lot. So what you're trying
to estimate is not just what you are, but what you could grow into,
and who can do that?For the past 9 years it was my job to predict whether people would
have what it took to start successful startups. It was easy to
tell how smart they were, and most people reading this will be over
that threshold. The hard part was predicting how tough and ambitious they would become. There
may be no one who has more experience at trying to predict that,
so I can tell you how much an expert can know about it, and the
answer is: not much. I learned to keep a completely open mind about
which of the startups in each batch would turn out to be the stars.The founders sometimes think they know. Some arrive feeling sure
they will ace Y Combinator just as they've aced every one of the (few,
artificial, easy) tests they've faced in life so far. Others arrive
wondering how they got in, and hoping YC doesn't discover whatever
mistake caused it to accept them. But there is little correlation
between founders' initial attitudes and how well their companies
do.I've read that the same is true in the military β that the
swaggering recruits are no more likely to turn out to be really
tough than the quiet ones. And probably for the same reason: that
the tests involved are so different from the ones in their previous
lives.If you're absolutely terrified of starting a startup, you probably
shouldn't do it. But if you're merely unsure whether you're up to
it, the only way to find out is to try. Just not now.
Ideas

So if you want to start a startup one day, what should you do in
college? There are only two things you need initially: an idea and
cofounders. And the m.o. for getting both is the same. Which leads
to our sixth and last counterintuitive point: that the way to get
startup ideas is not to try to think of startup ideas.I've written a whole essay on this,
so I won't repeat it all here. But the short version is that if
you make a conscious effort to think of startup ideas, the ideas
you come up with will not merely be bad, but bad and plausible-sounding,
meaning you'll waste a lot of time on them before realizing they're
bad.The way to come up with good startup ideas is to take a step back.
Instead of making a conscious effort to think of startup ideas,
turn your mind into the type that startup ideas form in without any
conscious effort. In fact, so unconsciously that you don't even
realize at first that they're startup ideas.This is not only possible, it's how Apple, Yahoo, Google, and
Facebook all got started. None of these companies were even meant
to be companies at first. They were all just side projects. The
best startups almost have to start as side projects, because great
ideas tend to be such outliers that your conscious mind would reject
them as ideas for companies.Ok, so how do you turn your mind into the type that startup ideas
form in unconsciously? (1) Learn a lot about things that matter,
then (2) work on problems that interest you (3) with people you
like and respect. The third part, incidentally, is how you get
cofounders at the same time as the idea.The first time I wrote that paragraph, instead of "learn a lot about
things that matter," I wrote "become good at some technology." But
that prescription, though sufficient, is too narrow. What was
special about Brian Chesky and Joe Gebbia was not that they were
experts in technology. They were good at design, and perhaps even
more importantly, they were good at organizing groups and making
projects happen. So you don't have to work on technology per se,
so long as you work on problems demanding enough to stretch you.What kind of problems are those? That is very hard to answer in
the general case. History is full of examples of young people who
were working on important problems that no
one else at the time thought were important, and in particular
that their parents didn't think were important. On the other hand,
history is even fuller of examples of parents who thought their
kids were wasting their time and who were right. So how do you
know when you're working on real stuff?
[8]I know how I know. Real problems are interesting, and I am
self-indulgent in the sense that I always want to work on interesting
things, even if no one else cares about them (in fact, especially
if no one else cares about them), and find it very hard to make
myself work on boring things, even if they're supposed to be
important.My life is full of case after case where I worked on something just
because it seemed interesting, and it turned out later to be useful
in some worldly way. Y
Combinator itself was something I only did because it seemed
interesting. So I seem to have some sort of internal compass that
helps me out. But I don't know what other people have in their
heads. Maybe if I think more about this I can come up with heuristics
for recognizing genuinely interesting problems, but for the moment
the best I can offer is the hopelessly question-begging advice that
if you have a taste for genuinely interesting problems, indulging
it energetically is the best way to prepare yourself for a startup.
And indeed, probably also the best way to live.
[9]But although I can't explain in the general case what counts as an
interesting problem, I can tell you about a large subset of them.
If you think of technology as something that's spreading like a
sort of fractal stain, every moving point on the edge represents
an interesting problem. So one guaranteed way to turn your mind
into the type that has good startup ideas is to get yourself to the
leading edge of some technology β to cause yourself, as Paul
Buchheit put it, to "live in the future." When you reach that point,
ideas that will seem to other people uncannily prescient will seem
obvious to you. You may not realize they're startup ideas, but
you'll know they're something that ought to exist.For example, back at Harvard in the mid 90s a fellow grad student
of my friends Robert and Trevor wrote his own voice over IP software.
He didn't mean it to be a startup, and he never tried to turn it
into one. He just wanted to talk to his girlfriend in Taiwan without
paying for long distance calls, and since he was an expert on
networks it seemed obvious to him that the way to do it was turn
the sound into packets and ship it over the Internet. He never did
any more with his software than talk to his girlfriend, but this
is exactly the way the best startups get started.So strangely enough the optimal thing to do in college if you want
to be a successful startup founder is not some sort of new, vocational
version of college focused on "entrepreneurship." It's the classic
version of college as education for its own sake. If you want to
start a startup after college, what you should do in college is
learn powerful things. And if you have genuine intellectual
curiosity, that's what you'll naturally tend to do if you just
follow your own inclinations.
[10]The component of entrepreneurship that really matters is domain
expertise. The way to become Larry Page was to become an expert
on search. And the way to become an expert on search was to be
driven by genuine curiosity, not some ulterior motive.At its best, starting a startup is merely an ulterior motive for
curiosity. And you'll do it best if you introduce the ulterior
motive toward the end of the process.So here is the ultimate advice for young would-be startup founders,
boiled down to two words: just learn.
Notes

[1]
Some founders listen more than others, and this tends to be a
predictor of success. One of the things I
remember about the Airbnbs during YC is how intently they listened.[2]
In fact, this is one of the reasons startups are possible. If
big companies weren't plagued by internal inefficiencies, they'd
be proportionately more effective, leaving less room for startups.[3]
In a startup you have to spend a lot of time on schleps, but this sort of work is merely
unglamorous, not bogus.[4]
What should you do if your true calling is gaming the system?
Management consulting.[5]
The company may not be incorporated, but if you start to get
significant numbers of users, you've started it, whether you realize
it yet or not.[6]
It shouldn't be that surprising that colleges can't teach
students how to be good startup founders, because they can't teach
them how to be good employees either.

The way universities "teach" students how to be employees is to
hand off the task to companies via internship programs. But you
couldn't do the equivalent thing for startups, because by definition
if the students did well they would never come back.

[7]
Charles Darwin was 22 when he received an invitation to travel
aboard the HMS Beagle as a naturalist. It was only because he was
otherwise unoccupied, to a degree that alarmed his family, that he
could accept it. And yet if he hadn't we probably would not know
his name.

[8]
Parents can sometimes be especially conservative in this
department. There are some whose definition of important problems
includes only those on the critical path to med school.

[9]
I did manage to think of a heuristic for detecting whether you
have a taste for interesting ideas: whether you find known boring
ideas intolerable. Could you endure studying literary theory, or
working in middle management at a large company?

[10]
In fact, if your goal is to start a startup, you can stick
even more closely to the ideal of a liberal education than past
generations have. Back when students focused mainly on getting a
job after college, they thought at least a little about how the
courses they took might look to an employer. And perhaps even
worse, they might shy away from taking a difficult class lest they
get a low grade, which would harm their all-important GPA. Good
news: users don't care what your GPA
was. And I've never heard of investors caring either. Y Combinator
certainly never asks what classes you took in college or what grades
you got in them.

Thanks to Sam Altman, Paul Buchheit, John Collison, Patrick
Collison, Jessica Livingston, Robert Morris, Geoff Ralston, and
Fred Wilson for reading drafts of this. |
May 2001
(I wrote this article to help myself understand exactly
what McCarthy discovered. You don't need to know this stuff
to program in Lisp, but it should be helpful to
anyone who wants to
understand the essence of Lisp, both in the sense of its
origins and its semantic core. The fact that it has such a core
is one of Lisp's distinguishing features, and the reason why,
unlike other languages, Lisp has dialects.)

In 1960, John
McCarthy published a remarkable paper in
which he did for programming something like what Euclid did for
geometry. He showed how, given a handful of simple
operators and a notation for functions, you can
build a whole programming language.
He called this language Lisp, for "List Processing,"
because one of his key ideas was to use a simple
data structure called a list for both
code and data.
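
(To make this concrete, here is a minimal sketch of what using lists for
both code and data looks like in practice. It is not from McCarthy's paper;
the variable name *expr* is invented for illustration, and the forms should
run in any Common Lisp.)

    ;; A quoted list is plain data: the symbol + followed by 1 and 2.
    (defparameter *expr* '(+ 1 2))

    (car *expr*)    ; => +        the operator, as a symbol
    (cdr *expr*)    ; => (1 2)    the arguments, as a list

    ;; The same list, handed to the evaluator, is code.
    (eval *expr*)   ; => 3

    ;; Because programs are ordinary lists, a program can build
    ;; another program with ordinary list operations.
    (eval (cons '* (cdr *expr*)))   ; => 2
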
It's worth understanding what McCarthy discovered, not
just as a landmark in the history of computers, but as
a model for what programming is tending to become in
our own time. It seems to me that there have been
two really clean, consistent models of programming so
far: the C model and the Lisp model.
These two seem points of high ground, with swampy lowlands
between them. As computers have grown more powerful,
the new languages being developed have been moving
steadily toward the Lisp model. A popular recipe
for new programming languages in the past 20 years
has been to take the C model of computing and add to
it, piecemeal, parts taken from the Lisp model,
like runtime typing and garbage collection.

In this article I'm going to try to explain in the
simplest possible terms what McCarthy discovered.
The point is not just to learn about an interesting
theoretical result someone figured out forty years ago,
but to show where languages are heading.

The unusual thing about Lisp (in fact, the defining
quality of Lisp) is that it can be written in
itself. To understand what McCarthy meant by this,
we're going to retrace his steps, with his mathematical
notation translated into running Common Lisp code. |
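
(A gloss on the excerpt above: McCarthy's construction starts from seven
primitive operators: quote, atom, eq, car, cdr, cons, and cond. From these
the essay goes on to build eval, which is the sense in which Lisp can be
written in itself. The sketch below shows each primitive as ordinary Common
Lisp; the example expressions are my own, not taken from the paper.)

    (quote a)                ; => a        return an expression unevaluated
    (atom 'a)                ; => t        true of atoms (anything that is not a non-empty list)
    (atom '(a b))            ; => nil
    (eq 'a 'a)               ; => t        are two atoms the same?
    (car '(a b c))           ; => a        the first element of a list
    (cdr '(a b c))           ; => (b c)    everything after the first element
    (cons 'a '(b c))         ; => (a b c)  build a list from a head and a tail
    (cond ((atom 'a) 'yes)   ; => yes      pairs of test and result expressions
          (t 'no))
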
"May 2001(This article was written as a kind of business plan for a\nnew language.\nSo it is missing(...TRUNCATED) |